From ggreif at gmail.com Wed Jan 1 15:42:03 2014 From: ggreif at gmail.com (Gabor Greif) Date: Wed, 1 Jan 2014 16:42:03 +0100 Subject: [commit: packages/base] master: Improve error messages for partial functions in Data.Data (d0b74ca) In-Reply-To: <20140101134557.0B0052406B@ghc.haskell.org> References: <20140101134557.0B0052406B@ghc.haskell.org> Message-ID: On 1/1/14, git at git.haskell.org wrote: > Repository : ssh://git at git.haskell.org/base > > On branch : master > Link : > http://ghc.haskell.org/trac/ghc/changeset/d0b74cac0b0ab5371d15b6f73c0e627b41c3a152/base > >>--------------------------------------------------------------- > > commit d0b74cac0b0ab5371d15b6f73c0e627b41c3a152 > Author: Krzysztof Langner > Date: Wed Jan 1 14:14:46 2014 +0100 > > Improve error messages for partial functions in Data.Data > > This closes: #5412 Hi Krzysztof, there are typos "an Real" --> "a Real" but this actually your commit begs for a refactoring > "something" `notAsExpected` "a Real" etc. as it would eliminate a bunch of (string) redundancy. Cheers, Gabor > > >>--------------------------------------------------------------- > > d0b74cac0b0ab5371d15b6f73c0e627b41c3a152 > Data/Data.hs | 76 > ++++++++++++++++++++++++++++++++++++++++------------------ > 1 file changed, 53 insertions(+), 23 deletions(-) > > Diff suppressed because of size. To see it, use: > > git diff-tree --root --patch-with-stat --no-color --find-copies-harder > --ignore-space-at-eol --cc d0b74cac0b0ab5371d15b6f73c0e627b41c3a152 > _______________________________________________ > ghc-commits mailing list > ghc-commits at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-commits > From aaron at frieltek.com Wed Jan 1 22:38:22 2014 From: aaron at frieltek.com (Aaron Friel) Date: Wed, 1 Jan 2014 22:38:22 +0000 Subject: LLVM and dynamic linking In-Reply-To: <0D8E2221-2F91-4DFA-836F-3AA2DB1F53BD@gmail.com> References: <877gb7ulmi.fsf@gmail.com> <52B418EC.8090308@gmail.com> <87a9fm2gfr.fsf@gmail.com> , <0D8E2221-2F91-4DFA-836F-3AA2DB1F53BD@gmail.com> Message-ID: <87c5ff3fd1264e9e9763a943718324e6@BN1PR05MB171.namprd05.prod.outlook.com> Replying to include the email list. You?re right, the llvm backend and the gmp licensing issues are orthogonal - or should be. The problem is I get build errors when trying to build GHC with LLVM and dynamic libraries. The result is that I get a few different choices when producing a platform image for development, with some uncomfortable tradeoffs: 1. LLVM-built GHC, dynamic libs - doesn?t build. 2. LLVM-built GHC, static libs - potential licensing oddities with me shipping a statically linked ghc binary that is now gpled. I am not a lawyer, but the situation makes me uncomfortable. 3. GCC/ASM-built GHC, dynamic libs - this is the *standard* for most platforms shipping ghc binaries, but it means that one of the biggest and most critical users of the LLVM backend is neglecting it. It also bifurcates development resources for GHC. Optimization work is duplicated and already devs are getting into the uncomfortable position of suggesting to users that they should trust GHC to build your programs in a particular way, but not itself. 4. GCC/ASM-built GHC, static libs - worst of all possible worlds. Because of this, the libgmp and llvm-backend issues aren?t entirely orthogonal. Trac ticket #7885 is exactly the issue I get when trying to compile #1. From: Carter Schonwald Sent: ?Monday?, ?December? ?30?, ?2013 ?1?:?05? 
?PM To: Aaron Friel Good question but you forgot to email the mailing list too :-) Using llvm has nothing to do with Gmp. Use the native code gen (it's simper) and integer-simple. That said, standard ghc dylinks to a system copy of Gmp anyways (I think ). Building ghc as a Dylib is orthogonal. -Carter On Dec 30, 2013, at 1:58 PM, Aaron Friel > wrote: Excellent research - I?m curious if this is the right thread to inquire about the status of trying to link GHC itself dynamically. I?ve been attempting to do so with various LLVM versions (3.2, 3.3, 3.4) using snapshot builds of GHC (within the past week) from git, and I hit ticket #7885 [https://ghc.haskell.org/trac/ghc/ticket/7885] every time (even the exact same error message). I?m interested in dynamically linking GHC with LLVM to avoid the entanglement with libgmp?s license. If this is the wrong thread or if I should reply instead to the trac item, please let me know. From: Carter Schonwald Sent: ?Friday?, ?December? ?27?, ?2013 ?2?:?41? ?PM To: Ben Gamari Cc: ghc-devs at haskell.org great work! :) On Fri, Dec 27, 2013 at 3:21 PM, Ben Gamari > wrote: Simon Marlow > writes: > This sounds right to me. Did you submit a patch? > > Note that dynamic linking with LLVM is likely to produce significantly > worse code that with the NCG right now, because the LLVM back end uses > dynamic references even for symbols in the same package, whereas the NCG > back-end uses direct static references for these. > Today with the help of Edward Yang I examined the code produced by the LLVM backend in light of this statement. I was surprised to find that LLVM's code appears to be no worse than the NCG with respect to intra-package references. My test case can be found here[2] and can be built with the included `build.sh` script. The test consists of two modules build into a shared library. One module, `LibTest`, exports a few simple members while the other module (`LibTest2`) defines members that consume them. Care is taken to ensure the members are not inlined. The tests were done on x86_64 running LLVM 3.4 and GHC HEAD with the patches[1] I referred to in my last message. Please let me know if I've missed something. # Evaluation ## First example ## The first member is a simple `String` (defined in `LibTest`), helloWorld :: String helloWorld = "Hello World!" The use-site is quite straightforward, testHelloWorld :: IO String testHelloWorld = return helloWorld With `-O1` the code looks reasonable in both cases. Most importantly, both backends use IP relative addressing to find the symbol. ### LLVM ### 0000000000000ef8 : ef8: 48 8b 45 00 mov 0x0(%rbp),%rax efc: 48 8d 1d cd 11 20 00 lea 0x2011cd(%rip),%rbx # 2020d0 f03: ff e0 jmpq *%rax 0000000000000f28 : f28: eb ce jmp ef8 f2a: 66 0f 1f 44 00 00 nopw 0x0(%rax,%rax,1) ### NCG ### 0000000000000d58 : d58: 48 8d 1d 71 13 20 00 lea 0x201371(%rip),%rbx # 2020d0 d5f: ff 65 00 jmpq *0x0(%rbp) 0000000000000d88 : d88: eb ce jmp d58 With `-O0` the code is substantially longer but the relocation behavior is still correct, as one would expect. Looking at the definition of `helloWorld`[3] itself it becomes clear that the LLVM backend is more likely to use PLT relocations over GOT. In general, `stg_*` primitives are called through the PLT. As far as I can tell, both of these call mechanisms will incur two memory accesses. However, in the case of the PLT the call will consist of two JMPs whereas the GOT will consist of only one. Is this a cause for concern? Could these two jumps interfere with prediction? 
In general the LLVM backend produces a few more instructions than the NCG although this doesn't appear to be related to handling of relocations. For instance, the inexplicable (to me) `mov` at the beginning of LLVM's `rKw_info`. ## Second example ## The second example demonstrates an actual call, -- Definition (in LibTest) infoRef :: Int -> Int infoRef n = n + 1 -- Call site testInfoRef :: IO Int testInfoRef = return (infoRef 2) With `-O1` this produces the following code, ### LLVM ### 0000000000000fb0 : fb0: 48 8b 45 00 mov 0x0(%rbp),%rax fb4: 48 8d 1d a5 10 20 00 lea 0x2010a5(%rip),%rbx # 202060 fbb: ff e0 jmpq *%rax 0000000000000fe0 : fe0: eb ce jmp fb0 ### NCG ### 0000000000000e10 : e10: 48 8d 1d 51 12 20 00 lea 0x201251(%rip),%rbx # 202068 e17: ff 65 00 jmpq *0x0(%rbp) 0000000000000e40 : e40: eb ce jmp e10 Again, it seems that LLVM is a bit more verbose but seems to handle intra-package calls efficiently. [1] https://github.com/bgamari/ghc/commits/llvm-dynamic [2] https://github.com/bgamari/ghc-linking-tests/tree/master/ghc-test [3] `helloWorld` definitions: LLVM: 00000000000010a8 : 10a8: 50 push %rax 10a9: 4c 8d 75 f0 lea -0x10(%rbp),%r14 10ad: 4d 39 fe cmp %r15,%r14 10b0: 73 07 jae 10b9 10b2: 49 8b 45 f0 mov -0x10(%r13),%rax 10b6: 5a pop %rdx 10b7: ff e0 jmpq *%rax 10b9: 4c 89 ef mov %r13,%rdi 10bc: 48 89 de mov %rbx,%rsi 10bf: e8 0c fd ff ff callq dd0 10c4: 48 85 c0 test %rax,%rax 10c7: 74 22 je 10eb 10c9: 48 8b 0d 18 0f 20 00 mov 0x200f18(%rip),%rcx # 201fe8 <_DYNAMIC+0x228> 10d0: 48 89 4d f0 mov %rcx,-0x10(%rbp) 10d4: 48 89 45 f8 mov %rax,-0x8(%rbp) 10d8: 48 8d 05 21 00 00 00 lea 0x21(%rip),%rax # 1100 10df: 4c 89 f5 mov %r14,%rbp 10e2: 49 89 c6 mov %rax,%r14 10e5: 58 pop %rax 10e6: e9 b5 fc ff ff jmpq da0 10eb: 48 8b 03 mov (%rbx),%rax 10ee: 5a pop %rdx 10ef: ff e0 jmpq *%rax NCG: 0000000000000ef8 : ef8: 48 8d 45 f0 lea -0x10(%rbp),%rax efc: 4c 39 f8 cmp %r15,%rax eff: 72 3f jb f40 f01: 4c 89 ef mov %r13,%rdi f04: 48 89 de mov %rbx,%rsi f07: 48 83 ec 08 sub $0x8,%rsp f0b: b8 00 00 00 00 mov $0x0,%eax f10: e8 1b fd ff ff callq c30 f15: 48 83 c4 08 add $0x8,%rsp f19: 48 85 c0 test %rax,%rax f1c: 74 20 je f3e f1e: 48 8b 1d cb 10 20 00 mov 0x2010cb(%rip),%rbx # 201ff0 <_DYNAMIC+0x238> f25: 48 89 5d f0 mov %rbx,-0x10(%rbp) f29: 48 89 45 f8 mov %rax,-0x8(%rbp) f2d: 4c 8d 35 1c 00 00 00 lea 0x1c(%rip),%r14 # f50 f34: 48 83 c5 f0 add $0xfffffffffffffff0,%rbp f38: ff 25 7a 10 20 00 jmpq *0x20107a(%rip) # 201fb8 <_DYNAMIC+0x200> f3e: ff 23 jmpq *(%rbx) f40: 41 ff 65 f0 jmpq *-0x10(%r13) _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://www.haskell.org/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Wed Jan 1 23:53:39 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Wed, 1 Jan 2014 18:53:39 -0500 Subject: LLVM and dynamic linking In-Reply-To: <87c5ff3fd1264e9e9763a943718324e6@BN1PR05MB171.namprd05.prod.outlook.com> References: <877gb7ulmi.fsf@gmail.com> <52B418EC.8090308@gmail.com> <87a9fm2gfr.fsf@gmail.com> <0D8E2221-2F91-4DFA-836F-3AA2DB1F53BD@gmail.com> <87c5ff3fd1264e9e9763a943718324e6@BN1PR05MB171.namprd05.prod.outlook.com> Message-ID: 7.8 should have working dylib support on the llvm backend. (i believe some of the relevant patches are in head already, though Ben Gamari can opine on that) why do you want ghc to be built with llvm? 
(i know i've tried myself in the past, and it should be doable with 7.8 using 7.8 soon too) On Wed, Jan 1, 2014 at 5:38 PM, Aaron Friel wrote: > Replying to include the email list. You?re right, the llvm backend and > the gmp licensing issues are orthogonal - or should be. The problem is I > get build errors when trying to build GHC with LLVM and dynamic libraries. > > The result is that I get a few different choices when producing a > platform image for development, with some uncomfortable tradeoffs: > > > 1. LLVM-built GHC, dynamic libs - doesn?t build. > 2. LLVM-built GHC, static libs - potential licensing oddities with me > shipping a statically linked ghc binary that is now gpled. I am not a > lawyer, but the situation makes me uncomfortable. > 3. GCC/ASM-built GHC, dynamic libs - this is the *standard* for most > platforms shipping ghc binaries, but it means that one of the biggest and > most critical users of the LLVM backend is neglecting it. It also > bifurcates development resources for GHC. Optimization work is duplicated > and already devs are getting into the uncomfortable position of suggesting > to users that they should trust GHC to build your programs in a particular > way, but not itself. > 4. GCC/ASM-built GHC, static libs - worst of all possible worlds. > > > Because of this, the libgmp and llvm-backend issues aren?t entirely > orthogonal. Trac ticket #7885 is exactly the issue I get when trying to > compile #1. > > *From:* Carter Schonwald > *Sent:* ?Monday?, ?December? ?30?, ?2013 ?1?:?05? ?PM > *To:* Aaron Friel > > Good question but you forgot to email the mailing list too :-) > > Using llvm has nothing to do with Gmp. Use the native code gen (it's > simper) and integer-simple. > > That said, standard ghc dylinks to a system copy of Gmp anyways (I think > ). Building ghc as a Dylib is orthogonal. > > -Carter > > On Dec 30, 2013, at 1:58 PM, Aaron Friel wrote: > > Excellent research - I?m curious if this is the right thread to inquire > about the status of trying to link GHC itself dynamically. > > I?ve been attempting to do so with various LLVM versions (3.2, 3.3, 3.4) > using snapshot builds of GHC (within the past week) from git, and I hit > ticket #7885 [https://ghc.haskell.org/trac/ghc/ticket/7885] every time > (even the exact same error message). > > I?m interested in dynamically linking GHC with LLVM to avoid the > entanglement with libgmp?s license. > > If this is the wrong thread or if I should reply instead to the trac > item, please let me know. > > *From:* Carter Schonwald > *Sent:* ?Friday?, ?December? ?27?, ?2013 ?2?:?41? ?PM > *To:* Ben Gamari > *Cc:* ghc-devs at haskell.org > > great work! :) > > > On Fri, Dec 27, 2013 at 3:21 PM, Ben Gamari wrote: > >> Simon Marlow writes: >> >> > This sounds right to me. Did you submit a patch? >> > >> > Note that dynamic linking with LLVM is likely to produce significantly >> > worse code that with the NCG right now, because the LLVM back end uses >> > dynamic references even for symbols in the same package, whereas the NCG >> > back-end uses direct static references for these. >> > >> Today with the help of Edward Yang I examined the code produced by the >> LLVM backend in light of this statement. I was surprised to find that >> LLVM's code appears to be no worse than the NCG with respect to >> intra-package references. >> >> My test case can be found here[2] and can be built with the included >> `build.sh` script. The test consists of two modules build into a shared >> library. 
One module, `LibTest`, exports a few simple members while the >> other module (`LibTest2`) defines members that consume them. Care is >> taken to ensure the members are not inlined. >> >> The tests were done on x86_64 running LLVM 3.4 and GHC HEAD with the >> patches[1] I referred to in my last message. Please let me know if I've >> missed something. >> >> >> >> # Evaluation >> >> ## First example ## >> >> The first member is a simple `String` (defined in `LibTest`), >> >> helloWorld :: String >> helloWorld = "Hello World!" >> >> The use-site is quite straightforward, >> >> testHelloWorld :: IO String >> testHelloWorld = return helloWorld >> >> With `-O1` the code looks reasonable in both cases. Most importantly, >> both backends use IP relative addressing to find the symbol. >> >> ### LLVM ### >> >> 0000000000000ef8 : >> ef8: 48 8b 45 00 mov 0x0(%rbp),%rax >> efc: 48 8d 1d cd 11 20 00 lea 0x2011cd(%rip),%rbx >> # 2020d0 >> f03: ff e0 jmpq *%rax >> >> 0000000000000f28 : >> f28: eb ce jmp ef8 >> f2a: 66 0f 1f 44 00 00 nopw 0x0(%rax,%rax,1) >> >> ### NCG ### >> >> 0000000000000d58 : >> d58: 48 8d 1d 71 13 20 00 lea 0x201371(%rip),%rbx >> # 2020d0 >> d5f: ff 65 00 jmpq *0x0(%rbp) >> >> 0000000000000d88 : >> d88: eb ce jmp d58 >> >> >> With `-O0` the code is substantially longer but the relocation behavior >> is still correct, as one would expect. >> >> Looking at the definition of `helloWorld`[3] itself it becomes clear that >> the LLVM backend is more likely to use PLT relocations over GOT. In >> general, `stg_*` primitives are called through the PLT. As far as I can >> tell, both of these call mechanisms will incur two memory >> accesses. However, in the case of the PLT the call will consist of two >> JMPs whereas the GOT will consist of only one. Is this a cause for >> concern? Could these two jumps interfere with prediction? >> >> In general the LLVM backend produces a few more instructions than the >> NCG although this doesn't appear to be related to handling of >> relocations. For instance, the inexplicable (to me) `mov` at the >> beginning of LLVM's `rKw_info`. >> >> >> ## Second example ## >> >> The second example demonstrates an actual call, >> >> -- Definition (in LibTest) >> infoRef :: Int -> Int >> infoRef n = n + 1 >> >> -- Call site >> testInfoRef :: IO Int >> testInfoRef = return (infoRef 2) >> >> With `-O1` this produces the following code, >> >> ### LLVM ### >> >> 0000000000000fb0 : >> fb0: 48 8b 45 00 mov 0x0(%rbp),%rax >> fb4: 48 8d 1d a5 10 20 00 lea 0x2010a5(%rip),%rbx >> # 202060 >> fbb: ff e0 jmpq *%rax >> >> 0000000000000fe0 : >> fe0: eb ce jmp fb0 >> >> ### NCG ### >> >> 0000000000000e10 : >> e10: 48 8d 1d 51 12 20 00 lea 0x201251(%rip),%rbx >> # 202068 >> e17: ff 65 00 jmpq *0x0(%rbp) >> >> 0000000000000e40 : >> e40: eb ce jmp e10 >> >> Again, it seems that LLVM is a bit more verbose but seems to handle >> intra-package calls efficiently. 
>> >> >> >> [1] https://github.com/bgamari/ghc/commits/llvm-dynamic >> [2] https://github.com/bgamari/ghc-linking-tests/tree/master/ghc-test >> [3] `helloWorld` definitions: >> >> LLVM: >> 00000000000010a8 : >> 10a8: 50 push %rax >> 10a9: 4c 8d 75 f0 lea -0x10(%rbp),%r14 >> 10ad: 4d 39 fe cmp %r15,%r14 >> 10b0: 73 07 jae 10b9 >> >> 10b2: 49 8b 45 f0 mov -0x10(%r13),%rax >> 10b6: 5a pop %rdx >> 10b7: ff e0 jmpq *%rax >> 10b9: 4c 89 ef mov %r13,%rdi >> 10bc: 48 89 de mov %rbx,%rsi >> 10bf: e8 0c fd ff ff callq dd0 >> 10c4: 48 85 c0 test %rax,%rax >> 10c7: 74 22 je 10eb >> >> 10c9: 48 8b 0d 18 0f 20 00 mov 0x200f18(%rip),%rcx >> # 201fe8 <_DYNAMIC+0x228> >> 10d0: 48 89 4d f0 mov %rcx,-0x10(%rbp) >> 10d4: 48 89 45 f8 mov %rax,-0x8(%rbp) >> 10d8: 48 8d 05 21 00 00 00 lea 0x21(%rip),%rax # >> 1100 >> 10df: 4c 89 f5 mov %r14,%rbp >> 10e2: 49 89 c6 mov %rax,%r14 >> 10e5: 58 pop %rax >> 10e6: e9 b5 fc ff ff jmpq da0 >> >> 10eb: 48 8b 03 mov (%rbx),%rax >> 10ee: 5a pop %rdx >> 10ef: ff e0 jmpq *%rax >> >> >> NCG: >> >> 0000000000000ef8 : >> ef8: 48 8d 45 f0 lea -0x10(%rbp),%rax >> efc: 4c 39 f8 cmp %r15,%rax >> eff: 72 3f jb f40 >> >> f01: 4c 89 ef mov %r13,%rdi >> f04: 48 89 de mov %rbx,%rsi >> f07: 48 83 ec 08 sub $0x8,%rsp >> f0b: b8 00 00 00 00 mov $0x0,%eax >> f10: e8 1b fd ff ff callq c30 >> f15: 48 83 c4 08 add $0x8,%rsp >> f19: 48 85 c0 test %rax,%rax >> f1c: 74 20 je f3e >> >> f1e: 48 8b 1d cb 10 20 00 mov 0x2010cb(%rip),%rbx >> # 201ff0 <_DYNAMIC+0x238> >> f25: 48 89 5d f0 mov %rbx,-0x10(%rbp) >> f29: 48 89 45 f8 mov %rax,-0x8(%rbp) >> f2d: 4c 8d 35 1c 00 00 00 lea 0x1c(%rip),%r14 # >> f50 >> f34: 48 83 c5 f0 add $0xfffffffffffffff0,%rbp >> f38: ff 25 7a 10 20 00 jmpq *0x20107a(%rip) # >> 201fb8 <_DYNAMIC+0x200> >> f3e: ff 23 jmpq *(%rbx) >> f40: 41 ff 65 f0 jmpq *-0x10(%r13) >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aaron at frieltek.com Thu Jan 2 03:03:10 2014 From: aaron at frieltek.com (Aaron Friel) Date: Thu, 2 Jan 2014 03:03:10 +0000 Subject: LLVM and dynamic linking In-Reply-To: References: <877gb7ulmi.fsf@gmail.com> <52B418EC.8090308@gmail.com> <87a9fm2gfr.fsf@gmail.com> <0D8E2221-2F91-4DFA-836F-3AA2DB1F53BD@gmail.com> <87c5ff3fd1264e9e9763a943718324e6@BN1PR05MB171.namprd05.prod.outlook.com>, Message-ID: Because I think it?s going to be an organizational issue and a duplication of effort if GHC is built one way but the future direction of LLVM is another. Imagine if GCC started developing a new engine and it didn?t work with one of the biggest, most regular consumers of GCC. Say, the Linux kernel, or itself. At first, the situation is optimistic - if this engine doesn?t work for the project that has the smartest, brightest GCC hackers potentially looking at it, then it should fix itself soon enough. Suppose the situation lingers though, and continues for months without fix. The new GCC backend starts to become the default, and the community around GCC advocates for end-users to use it to optimize code for their projects and it even becomes the default for some platforms, such as ARM. What I?ve described is analogous to the GHC situation - and the result is that GHC isn?t self-hosting on some platforms and the inertia that used to be behind the LLVM backend seems to have stagnated. 
Whereas LLVM used to be the ?new hotness?, I?ve noticed that issues like Trac #7787 no longer have a lot of eyes on them and externally it seems like GHC has accepted a bifurcated approach for development. I dramatize the situation above, but there?s some truth to it. The LLVM backend needs some care and attention and if the majority of GHC devs can?t build GHC with LLVM, then that means the smartest, brightest GHC hackers won?t have their attention turned toward fixing those problems. If a patch to GHC-HEAD broke compilation for every backend, it would be fixed in short order. If a new version of GCC did not work with GHC, I can imagine it would be only hours before the first patches came in resolving the issue. On OS X Mavericks, an incompatibility with GHC has led to a swift reaction and strong support for resolving platform issues. The attention to the LLVM backend is visibly smaller, but I don?t know enough about the people working on GHC to know if it is actually smaller. The way I am trying to change this is by making it easier for people to start using GHC (by putting images on Docker.io) and, in the process, learning about GHC?s build process and trying to make things work for my own projects. The Docker image allows anyone with a Linux kernel to build and play with GHC HEAD. The information about building GHC yourself is difficult to approach and I found it hard to get started, and I want to improve that too, so I?m learning and asking questions. From: Carter Schonwald Sent: ?Wednesday?, ?January? ?1?, ?2014 ?5?:?54? ?PM To: Aaron Friel Cc: ghc-devs at haskell.org 7.8 should have working dylib support on the llvm backend. (i believe some of the relevant patches are in head already, though Ben Gamari can opine on that) why do you want ghc to be built with llvm? (i know i've tried myself in the past, and it should be doable with 7.8 using 7.8 soon too) On Wed, Jan 1, 2014 at 5:38 PM, Aaron Friel > wrote: Replying to include the email list. You?re right, the llvm backend and the gmp licensing issues are orthogonal - or should be. The problem is I get build errors when trying to build GHC with LLVM and dynamic libraries. The result is that I get a few different choices when producing a platform image for development, with some uncomfortable tradeoffs: 1. LLVM-built GHC, dynamic libs - doesn?t build. 2. LLVM-built GHC, static libs - potential licensing oddities with me shipping a statically linked ghc binary that is now gpled. I am not a lawyer, but the situation makes me uncomfortable. 3. GCC/ASM-built GHC, dynamic libs - this is the *standard* for most platforms shipping ghc binaries, but it means that one of the biggest and most critical users of the LLVM backend is neglecting it. It also bifurcates development resources for GHC. Optimization work is duplicated and already devs are getting into the uncomfortable position of suggesting to users that they should trust GHC to build your programs in a particular way, but not itself. 4. GCC/ASM-built GHC, static libs - worst of all possible worlds. Because of this, the libgmp and llvm-backend issues aren?t entirely orthogonal. Trac ticket #7885 is exactly the issue I get when trying to compile #1. From: Carter Schonwald Sent: ?Monday?, ?December? ?30?, ?2013 ?1?:?05? ?PM To: Aaron Friel Good question but you forgot to email the mailing list too :-) Using llvm has nothing to do with Gmp. Use the native code gen (it's simper) and integer-simple. That said, standard ghc dylinks to a system copy of Gmp anyways (I think ). 
Building ghc as a Dylib is orthogonal. -Carter On Dec 30, 2013, at 1:58 PM, Aaron Friel > wrote: Excellent research - I?m curious if this is the right thread to inquire about the status of trying to link GHC itself dynamically. I?ve been attempting to do so with various LLVM versions (3.2, 3.3, 3.4) using snapshot builds of GHC (within the past week) from git, and I hit ticket #7885 [https://ghc.haskell.org/trac/ghc/ticket/7885] every time (even the exact same error message). I?m interested in dynamically linking GHC with LLVM to avoid the entanglement with libgmp?s license. If this is the wrong thread or if I should reply instead to the trac item, please let me know. -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Thu Jan 2 03:53:09 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Wed, 1 Jan 2014 22:53:09 -0500 Subject: LLVM and dynamic linking In-Reply-To: References: <877gb7ulmi.fsf@gmail.com> <52B418EC.8090308@gmail.com> <87a9fm2gfr.fsf@gmail.com> <0D8E2221-2F91-4DFA-836F-3AA2DB1F53BD@gmail.com> <87c5ff3fd1264e9e9763a943718324e6@BN1PR05MB171.namprd05.prod.outlook.com> Message-ID: well, please feel welcome to ask for help as much as you need! To repeat: if you use ghc HEAD, it should be doable to build GHC head (using head as the bootstrap compiler) using LLVM. Once Ben's llvm dy linking patches land, you should be able to do both dynamic and static linking with llvm. As for your Mavericks example, if you review ghc trac and the mailing lists plus irc logs, it took the effort of several folks spread over several months to make sure that once Mavericks / Xcode 5 landed, that it would be "easy" to fix. that said, theres no need to take such a polarizing tone, with speculations about the priorities of the various GHC devs. We're all volunteers (ok, theres a some who are paid volunteers) who care about making sure ghc works as well as possible for everyone, but have finite time in the day, and so many different ways to ghc can be made better. (and in many cases, have a day job that also needs attention too). please test things and holler when they don't work, and if you can debug problems and cook up good patches, great! in the case of llvm and dynamic linking, the root cause was actually pretty darn subtle, and I'm immensely grateful that Ben Gamari got to the root of it. (I'd definitely hit the problem myself, and I was absolutely stumped when I tried to investigate it.) On Wed, Jan 1, 2014 at 10:03 PM, Aaron Friel wrote: > Because I think it?s going to be an organizational issue and a > duplication of effort if GHC is built one way but the future direction of > LLVM is another. > > Imagine if GCC started developing a new engine and it didn?t work with > one of the biggest, most regular consumers of GCC. Say, the Linux kernel, > or itself. At first, the situation is optimistic - if this engine doesn?t > work for the project that has the smartest, brightest GCC hackers > potentially looking at it, then it should fix itself soon enough. Suppose > the situation lingers though, and continues for months without fix. The new > GCC backend starts to become the default, and the community around GCC > advocates for end-users to use it to optimize code for their projects and > it even becomes the default for some platforms, such as ARM. 
> > What I?ve described is analogous to the GHC situation - and the result > is that GHC isn?t self-hosting on some platforms and the inertia that used > to be behind the LLVM backend seems to have stagnated. Whereas LLVM used to > be the ?new hotness?, I?ve noticed that issues like Trac #7787 no longer > have a lot of eyes on them and externally it seems like GHC has accepted a > bifurcated approach for development. > > I dramatize the situation above, but there?s some truth to it. The LLVM > backend needs some care and attention and if the majority of GHC devs can?t > build GHC with LLVM, then that means the smartest, brightest GHC hackers > won?t have their attention turned toward fixing those problems. If a patch > to GHC-HEAD broke compilation for every backend, it would be fixed in short > order. If a new version of GCC did not work with GHC, I can > imagine it would be only hours before the first patches came in resolving > the issue. On OS X Mavericks, an incompatibility with GHC has led to a > swift reaction and strong support for resolving platform issues. The > attention to the LLVM backend is visibly smaller, but I don?t know enough > about the people working on GHC to know if it is actually smaller. > > The way I am trying to change this is by making it easier for people to > start using GHC (by putting images on Docker.io) and, in the process, > learning about GHC?s build process and trying to make things work for my > own projects. The Docker image allows anyone with a Linux kernel to > build and play with GHC HEAD. The information about building GHC yourself > is difficult to approach and I found it hard to get started, and I want to > improve that too, so I?m learning and asking questions. > > *From:* Carter Schonwald > *Sent:* ?Wednesday?, ?January? ?1?, ?2014 ?5?:?54? ?PM > *To:* Aaron Friel > *Cc:* ghc-devs at haskell.org > > 7.8 should have working dylib support on the llvm backend. (i believe > some of the relevant patches are in head already, though Ben Gamari can > opine on that) > > why do you want ghc to be built with llvm? (i know i've tried myself in > the past, and it should be doable with 7.8 using 7.8 soon too) > > > On Wed, Jan 1, 2014 at 5:38 PM, Aaron Friel wrote: > >> Replying to include the email list. You?re right, the llvm backend and >> the gmp licensing issues are orthogonal - or should be. The problem is I >> get build errors when trying to build GHC with LLVM and dynamic libraries. >> >> The result is that I get a few different choices when producing a >> platform image for development, with some uncomfortable tradeoffs: >> >> >> 1. LLVM-built GHC, dynamic libs - doesn?t build. >> 2. LLVM-built GHC, static libs - potential licensing oddities with me >> shipping a statically linked ghc binary that is now gpled. I am not a >> lawyer, but the situation makes me uncomfortable. >> 3. GCC/ASM-built GHC, dynamic libs - this is the *standard* for most >> platforms shipping ghc binaries, but it means that one of the biggest and >> most critical users of the LLVM backend is neglecting it. It also >> bifurcates development resources for GHC. Optimization work is duplicated >> and already devs are getting into the uncomfortable position of suggesting >> to users that they should trust GHC to build your programs in a particular >> way, but not itself. >> 4. GCC/ASM-built GHC, static libs - worst of all possible worlds. >> >> >> Because of this, the libgmp and llvm-backend issues aren?t entirely >> orthogonal. 
Trac ticket #7885 is exactly the issue I get when trying to >> compile #1. >> >> *From:* Carter Schonwald >> *Sent:* ?Monday?, ?December? ?30?, ?2013 ?1?:?05? ?PM >> *To:* Aaron Friel >> >> Good question but you forgot to email the mailing list too :-) >> >> Using llvm has nothing to do with Gmp. Use the native code gen (it's >> simper) and integer-simple. >> >> That said, standard ghc dylinks to a system copy of Gmp anyways (I >> think ). Building ghc as a Dylib is orthogonal. >> >> -Carter >> >> On Dec 30, 2013, at 1:58 PM, Aaron Friel wrote: >> >> Excellent research - I?m curious if this is the right thread to >> inquire about the status of trying to link GHC itself dynamically. >> >> I?ve been attempting to do so with various LLVM versions (3.2, 3.3, >> 3.4) using snapshot builds of GHC (within the past week) from git, and I >> hit ticket #7885 [https://ghc.haskell.org/trac/ghc/ticket/7885] every >> time (even the exact same error message). >> >> I?m interested in dynamically linking GHC with LLVM to avoid the >> entanglement with libgmp?s license. >> >> If this is the wrong thread or if I should reply instead to the trac >> item, please let me know. >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Thu Jan 2 07:06:48 2014 From: simonpj at microsoft.com (Simon Peyton-Jones) Date: Thu, 2 Jan 2014 07:06:48 +0000 Subject: GHC Api Message-ID: <59543203684B2244980D7E4057D5FBC148702143@DB3EX14MBXC306.europe.corp.microsoft.com> Simon and othere Happy new year! When debugging Trac #8628 I wrote the following: main = do [libdir] <- getArgs ok <- runGhc (Just libdir) $ do dflags <- getSessionDynFlags -- (1) setSessionDynFlags dflags liftIO (setUnsafeGlobalDynFlags dflags) -- (2) setContext [IIDecl (simpleImportDecl pRELUDE_NAME)] -- (3) runDecls "data X = Y Int" runStmt "print True" -- (4) return () There are several odd things here 1. Why do I have to do this "getSessionDynFlags/setSessionDynFlags" thing. Seems bizarre. I just copied it from some other tests in ghc-api/. Is it necessary? If not, can we remove it from all tests? 2. Initially I didn't have that setUnsafeGlobalDynFlags call. But then I got T8628.exe: T8628.exe: panic! (the 'impossible' happened) (GHC version 7.7.20131228 for i386-unknown-mingw32): v_unsafeGlobalDynFlags: not initialised which is a particularly unhelpful message. It arose because I was using a GHC built with assertions on, and a warnPprTrace triggered. Since this could happen to anyone, would it make sense to make this part of runGhc and setSessionDynFlags? 3. Initially I didn't have that setContext call, and got a complaint that "Int is not in scope". I was expecting the Prelude to be implicitly in scope. But I'm not sure where to fix that. Possibly part of the setup in runGhc? 4. The runStmt should print something somewhere, but it doesn't. Why not? What do you think? Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From aaron at frieltek.com Thu Jan 2 07:31:33 2014 From: aaron at frieltek.com (Aaron Friel) Date: Thu, 2 Jan 2014 07:31:33 +0000 Subject: LLVM and dynamic linking In-Reply-To: References: <877gb7ulmi.fsf@gmail.com> <52B418EC.8090308@gmail.com> <87a9fm2gfr.fsf@gmail.com> <0D8E2221-2F91-4DFA-836F-3AA2DB1F53BD@gmail.com> <87c5ff3fd1264e9e9763a943718324e6@BN1PR05MB171.namprd05.prod.outlook.com> , Message-ID: I eagerly look forward to these patches, I hope they are able to land in time for the 7.8 release as well. 
Do you have any additional information on them - or is it part of a branch I could look at? And I apologize for the polarizing tone - I?m overdramatizing the situation and I?m new to following GHC at the root (or head, whichever). Regardless, the LLVM dynamic linking issue has popped up now and again (there are a good number of trac issues) and I?m eager to see that GHC is able to be built properly with it and have it stay working. I have no doubt the issues Ben and others have been working with are subtle and complex. There are absolutely brilliant people here working on GHC, so any problem left unsolved is bound to be uniquely difficult. From: Carter Schonwald Sent: ?Wednesday?, ?January? ?1?, ?2014 ?9?:?53? ?PM To: Aaron Friel Cc: ghc-devs at haskell.org well, please feel welcome to ask for help as much as you need! To repeat: if you use ghc HEAD, it should be doable to build GHC head (using head as the bootstrap compiler) using LLVM. Once Ben's llvm dy linking patches land, you should be able to do both dynamic and static linking with llvm. As for your Mavericks example, if you review ghc trac and the mailing lists plus irc logs, it took the effort of several folks spread over several months to make sure that once Mavericks / Xcode 5 landed, that it would be "easy" to fix. that said, theres no need to take such a polarizing tone, with speculations about the priorities of the various GHC devs. We're all volunteers (ok, theres a some who are paid volunteers) who care about making sure ghc works as well as possible for everyone, but have finite time in the day, and so many different ways to ghc can be made better. (and in many cases, have a day job that also needs attention too). please test things and holler when they don't work, and if you can debug problems and cook up good patches, great! in the case of llvm and dynamic linking, the root cause was actually pretty darn subtle, and I'm immensely grateful that Ben Gamari got to the root of it. (I'd definitely hit the problem myself, and I was absolutely stumped when I tried to investigate it.) On Wed, Jan 1, 2014 at 10:03 PM, Aaron Friel > wrote: Because I think it?s going to be an organizational issue and a duplication of effort if GHC is built one way but the future direction of LLVM is another. Imagine if GCC started developing a new engine and it didn?t work with one of the biggest, most regular consumers of GCC. Say, the Linux kernel, or itself. At first, the situation is optimistic - if this engine doesn?t work for the project that has the smartest, brightest GCC hackers potentially looking at it, then it should fix itself soon enough. Suppose the situation lingers though, and continues for months without fix. The new GCC backend starts to become the default, and the community around GCC advocates for end-users to use it to optimize code for their projects and it even becomes the default for some platforms, such as ARM. What I?ve described is analogous to the GHC situation - and the result is that GHC isn?t self-hosting on some platforms and the inertia that used to be behind the LLVM backend seems to have stagnated. Whereas LLVM used to be the ?new hotness?, I?ve noticed that issues like Trac #7787 no longer have a lot of eyes on them and externally it seems like GHC has accepted a bifurcated approach for development. I dramatize the situation above, but there?s some truth to it. 
The LLVM backend needs some care and attention and if the majority of GHC devs can?t build GHC with LLVM, then that means the smartest, brightest GHC hackers won?t have their attention turned toward fixing those problems. If a patch to GHC-HEAD broke compilation for every backend, it would be fixed in short order. If a new version of GCC did not work with GHC, I can imagine it would be only hours before the first patches came in resolving the issue. On OS X Mavericks, an incompatibility with GHC has led to a swift reaction and strong support for resolving platform issues. The attention to the LLVM backend is visibly smaller, but I don?t know enough about the people working on GHC to know if it is actually smaller. The way I am trying to change this is by making it easier for people to start using GHC (by putting images on Docker.io) and, in the process, learning about GHC?s build process and trying to make things work for my own projects. The Docker image allows anyone with a Linux kernel to build and play with GHC HEAD. The information about building GHC yourself is difficult to approach and I found it hard to get started, and I want to improve that too, so I?m learning and asking questions. From: Carter Schonwald Sent: ?Wednesday?, ?January? ?1?, ?2014 ?5?:?54? ?PM To: Aaron Friel Cc: ghc-devs at haskell.org 7.8 should have working dylib support on the llvm backend. (i believe some of the relevant patches are in head already, though Ben Gamari can opine on that) why do you want ghc to be built with llvm? (i know i've tried myself in the past, and it should be doable with 7.8 using 7.8 soon too) On Wed, Jan 1, 2014 at 5:38 PM, Aaron Friel > wrote: Replying to include the email list. You?re right, the llvm backend and the gmp licensing issues are orthogonal - or should be. The problem is I get build errors when trying to build GHC with LLVM and dynamic libraries. The result is that I get a few different choices when producing a platform image for development, with some uncomfortable tradeoffs: 1. LLVM-built GHC, dynamic libs - doesn?t build. 2. LLVM-built GHC, static libs - potential licensing oddities with me shipping a statically linked ghc binary that is now gpled. I am not a lawyer, but the situation makes me uncomfortable. 3. GCC/ASM-built GHC, dynamic libs - this is the *standard* for most platforms shipping ghc binaries, but it means that one of the biggest and most critical users of the LLVM backend is neglecting it. It also bifurcates development resources for GHC. Optimization work is duplicated and already devs are getting into the uncomfortable position of suggesting to users that they should trust GHC to build your programs in a particular way, but not itself. 4. GCC/ASM-built GHC, static libs - worst of all possible worlds. Because of this, the libgmp and llvm-backend issues aren?t entirely orthogonal. Trac ticket #7885 is exactly the issue I get when trying to compile #1. From: Carter Schonwald Sent: ?Monday?, ?December? ?30?, ?2013 ?1?:?05? ?PM To: Aaron Friel Good question but you forgot to email the mailing list too :-) Using llvm has nothing to do with Gmp. Use the native code gen (it's simper) and integer-simple. That said, standard ghc dylinks to a system copy of Gmp anyways (I think ). Building ghc as a Dylib is orthogonal. -Carter On Dec 30, 2013, at 1:58 PM, Aaron Friel > wrote: Excellent research - I?m curious if this is the right thread to inquire about the status of trying to link GHC itself dynamically. 
I?ve been attempting to do so with various LLVM versions (3.2, 3.3, 3.4) using snapshot builds of GHC (within the past week) from git, and I hit ticket #7885 [https://ghc.haskell.org/trac/ghc/ticket/7885] every time (even the exact same error message). I?m interested in dynamically linking GHC with LLVM to avoid the entanglement with libgmp?s license. If this is the wrong thread or if I should reply instead to the trac item, please let me know. -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Thu Jan 2 07:40:59 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Thu, 2 Jan 2014 02:40:59 -0500 Subject: LLVM and dynamic linking In-Reply-To: References: <877gb7ulmi.fsf@gmail.com> <52B418EC.8090308@gmail.com> <87a9fm2gfr.fsf@gmail.com> <0D8E2221-2F91-4DFA-836F-3AA2DB1F53BD@gmail.com> <87c5ff3fd1264e9e9763a943718324e6@BN1PR05MB171.namprd05.prod.outlook.com> Message-ID: you can try it out yourself pretty easily, linked from the master ticket on this https://ghc.haskell.org/trac/ghc/ticket/4210#comment:27 bens ghc repo is at https://github.com/bgamari/ghc/compare/llvm-intra-package (nb: its a work in progress of his) On Thu, Jan 2, 2014 at 2:31 AM, Aaron Friel wrote: > I eagerly look forward to these patches, I hope they are able to land in > time for the 7.8 release as well. Do you have any additional information on > them - or is it part of a branch I could look at? > > And I apologize for the polarizing tone - I?m overdramatizing the > situation and I?m new to following GHC at the root (or head, whichever). > Regardless, the LLVM dynamic linking issue has popped up now and again > (there are a good number of trac issues) and I?m eager to see that GHC is > able to be built properly with it and have it stay working. > > I have no doubt the issues Ben and others have been working with are > subtle and complex. There are absolutely brilliant people here working on > GHC, so any problem left unsolved is bound to be uniquely difficult. > > *From:* Carter Schonwald > *Sent:* ?Wednesday?, ?January? ?1?, ?2014 ?9?:?53? ?PM > > *To:* Aaron Friel > *Cc:* ghc-devs at haskell.org > > well, please feel welcome to ask for help as much as you need! To > repeat: if you use ghc HEAD, it should be doable to build GHC head (using > head as the bootstrap compiler) using LLVM. Once Ben's llvm dy linking > patches land, you should be able to do both dynamic and static linking with > llvm. > > As for your Mavericks example, if you review ghc trac and the mailing > lists plus irc logs, it took the effort of several folks spread over > several months to make sure that once Mavericks / Xcode 5 landed, that it > would be "easy" to fix. > > that said, theres no need to take such a polarizing tone, with > speculations about the priorities of the various GHC devs. We're all > volunteers (ok, theres a some who are paid volunteers) who care about > making sure ghc works as well as possible for everyone, but have finite > time in the day, and so many different ways to ghc can be made better. (and > in many cases, have a day job that also needs attention too). > > please test things and holler when they don't work, and if you can debug > problems and cook up good patches, great! > > in the case of llvm and dynamic linking, the root cause was actually > pretty darn subtle, and I'm immensely grateful that Ben Gamari got to the > root of it. (I'd definitely hit the problem myself, and I was absolutely > stumped when I tried to investigate it.) 
> > > On Wed, Jan 1, 2014 at 10:03 PM, Aaron Friel wrote: > >> Because I think it?s going to be an organizational issue and a >> duplication of effort if GHC is built one way but the future direction of >> LLVM is another. >> >> Imagine if GCC started developing a new engine and it didn?t work with >> one of the biggest, most regular consumers of GCC. Say, the Linux kernel, >> or itself. At first, the situation is optimistic - if this engine doesn?t >> work for the project that has the smartest, brightest GCC hackers >> potentially looking at it, then it should fix itself soon enough. Suppose >> the situation lingers though, and continues for months without fix. The new >> GCC backend starts to become the default, and the community around GCC >> advocates for end-users to use it to optimize code for their projects and >> it even becomes the default for some platforms, such as ARM. >> >> What I?ve described is analogous to the GHC situation - and the result >> is that GHC isn?t self-hosting on some platforms and the inertia that used >> to be behind the LLVM backend seems to have stagnated. Whereas LLVM used to >> be the ?new hotness?, I?ve noticed that issues like Trac #7787 no longer >> have a lot of eyes on them and externally it seems like GHC has accepted a >> bifurcated approach for development. >> >> I dramatize the situation above, but there?s some truth to it. The LLVM >> backend needs some care and attention and if the majority of GHC devs can?t >> build GHC with LLVM, then that means the smartest, brightest GHC hackers >> won?t have their attention turned toward fixing those problems. If a patch >> to GHC-HEAD broke compilation for every backend, it would be fixed in short >> order. If a new version of GCC did not work with GHC, I can >> imagine it would be only hours before the first patches came in resolving >> the issue. On OS X Mavericks, an incompatibility with GHC has led to a >> swift reaction and strong support for resolving platform issues. The >> attention to the LLVM backend is visibly smaller, but I don?t know enough >> about the people working on GHC to know if it is actually smaller. >> >> The way I am trying to change this is by making it easier for people to >> start using GHC (by putting images on Docker.io) and, in the process, >> learning about GHC?s build process and trying to make things work for my >> own projects. The Docker image allows anyone with a Linux kernel to >> build and play with GHC HEAD. The information about building GHC yourself >> is difficult to approach and I found it hard to get started, and I want to >> improve that too, so I?m learning and asking questions. >> >> *From:* Carter Schonwald >> *Sent:* ?Wednesday?, ?January? ?1?, ?2014 ?5?:?54? ?PM >> *To:* Aaron Friel >> *Cc:* ghc-devs at haskell.org >> >> 7.8 should have working dylib support on the llvm backend. (i believe >> some of the relevant patches are in head already, though Ben Gamari can >> opine on that) >> >> why do you want ghc to be built with llvm? (i know i've tried myself in >> the past, and it should be doable with 7.8 using 7.8 soon too) >> >> >> On Wed, Jan 1, 2014 at 5:38 PM, Aaron Friel wrote: >> >>> Replying to include the email list. You?re right, the llvm backend and >>> the gmp licensing issues are orthogonal - or should be. The problem is I >>> get build errors when trying to build GHC with LLVM and dynamic libraries. 
>>> >>> The result is that I get a few different choices when producing a >>> platform image for development, with some uncomfortable tradeoffs: >>> >>> >>> 1. LLVM-built GHC, dynamic libs - doesn?t build. >>> 2. LLVM-built GHC, static libs - potential licensing oddities with >>> me shipping a statically linked ghc binary that is now gpled. I am not a >>> lawyer, but the situation makes me uncomfortable. >>> 3. GCC/ASM-built GHC, dynamic libs - this is the *standard* for most >>> platforms shipping ghc binaries, but it means that one of the biggest and >>> most critical users of the LLVM backend is neglecting it. It also >>> bifurcates development resources for GHC. Optimization work is duplicated >>> and already devs are getting into the uncomfortable position of suggesting >>> to users that they should trust GHC to build your programs in a particular >>> way, but not itself. >>> 4. GCC/ASM-built GHC, static libs - worst of all possible worlds. >>> >>> >>> Because of this, the libgmp and llvm-backend issues aren?t entirely >>> orthogonal. Trac ticket #7885 is exactly the issue I get when trying to >>> compile #1. >>> >>> *From:* Carter Schonwald >>> *Sent:* ?Monday?, ?December? ?30?, ?2013 ?1?:?05? ?PM >>> *To:* Aaron Friel >>> >>> Good question but you forgot to email the mailing list too :-) >>> >>> Using llvm has nothing to do with Gmp. Use the native code gen (it's >>> simper) and integer-simple. >>> >>> That said, standard ghc dylinks to a system copy of Gmp anyways (I >>> think ). Building ghc as a Dylib is orthogonal. >>> >>> -Carter >>> >>> On Dec 30, 2013, at 1:58 PM, Aaron Friel wrote: >>> >>> Excellent research - I?m curious if this is the right thread to >>> inquire about the status of trying to link GHC itself dynamically. >>> >>> I?ve been attempting to do so with various LLVM versions (3.2, 3.3, >>> 3.4) using snapshot builds of GHC (within the past week) from git, and I >>> hit ticket #7885 [https://ghc.haskell.org/trac/ghc/ticket/7885] every >>> time (even the exact same error message). >>> >>> I?m interested in dynamically linking GHC with LLVM to avoid the >>> entanglement with libgmp?s license. >>> >>> If this is the wrong thread or if I should reply instead to the trac >>> item, please let me know. >>> >>> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From klangner at gmail.com Thu Jan 2 09:54:04 2014 From: klangner at gmail.com (Krzysztof Langner) Date: Thu, 2 Jan 2014 10:54:04 +0100 Subject: [commit: packages/base] master: Improve error messages for partial functions in Data.Data (d0b74ca) (Gabor Greif) Message-ID: Hi Gabor, Thank you for the feedback. You are right about spelling. I'll fix it. Regarding refactoring: If I understand you correctly your suggestion is to add new function which will wrap error message so the text will be less redundant. I'm not sure about it. From my experience it is better to leave error messages as simple as possible. Since when using functions in error messages it is possible that this function will fail and then you will get confusing error message (from inside function). So as a rule of thumb I never add another functions inside error processing (except the ones which makes message more meaningful). But that's of course just my point of view. It is also possible that I didn't understand your proposition correctly :-) -- Thanks Krzysztof -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mail at joachim-breitner.de Thu Jan 2 10:19:04 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Thu, 02 Jan 2014 11:19:04 +0100 Subject: [commit: packages/base] master: Improve error messages for partial functions in Data.Data (d0b74ca) (Gabor Greif) In-Reply-To: References: Message-ID: <1388657944.2542.3.camel@kirk> Hi, Am Donnerstag, den 02.01.2014, 10:54 +0100 schrieb Krzysztof Langner: > I'm not sure about it. From my experience it is better to leave error > messages as simple as possible. Since when using functions in error > messages it is possible that this function will fail and then you will > get confusing error message (from inside function). for most pure Haskell functions, you can be quite certain that they don?t fail, by following simple rules (complete patterns, no use of partial functions like head or fromJust). So while this is might be true in other programming languages, here you can put trust in Haskell?s type system ? if it compiles, it works. Greetings, Joachim -- Joachim ?nomeata? Breitner mail at joachim-breitner.de ? http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0x4743206C Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: This is a digitally signed message part URL: From simonpj at microsoft.com Thu Jan 2 14:16:32 2014 From: simonpj at microsoft.com (Simon Peyton-Jones) Date: Thu, 2 Jan 2014 14:16:32 +0000 Subject: LLVM and dynamic linking In-Reply-To: References: <877gb7ulmi.fsf@gmail.com> <52B418EC.8090308@gmail.com> <87a9fm2gfr.fsf@gmail.com> <0D8E2221-2F91-4DFA-836F-3AA2DB1F53BD@gmail.com> <87c5ff3fd1264e9e9763a943718324e6@BN1PR05MB171.namprd05.prod.outlook.com>, Message-ID: <59543203684B2244980D7E4057D5FBC148702845@DB3EX14MBXC306.europe.corp.microsoft.com> Aaron, The LLVM backend needs some care and attention I?m sure you are right about this. Could you become one of the people offering that care and attention. Who are the GHC developers? They are simply volunteers who make time to give something back to their community, and GHC relies absolutely on their commitment and expertise. So do please join in if you can; it?s clearly something you care about, and have some knowledge of. With thanks and best wishes, Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Aaron Friel Sent: 02 January 2014 03:03 To: Carter Schonwald Cc: ghc-devs at haskell.org Subject: Re: LLVM and dynamic linking Because I think it?s going to be an organizational issue and a duplication of effort if GHC is built one way but the future direction of LLVM is another. Imagine if GCC started developing a new engine and it didn?t work with one of the biggest, most regular consumers of GCC. Say, the Linux kernel, or itself. At first, the situation is optimistic - if this engine doesn?t work for the project that has the smartest, brightest GCC hackers potentially looking at it, then it should fix itself soon enough. Suppose the situation lingers though, and continues for months without fix. The new GCC backend starts to become the default, and the community around GCC advocates for end-users to use it to optimize code for their projects and it even becomes the default for some platforms, such as ARM. 
What I?ve described is analogous to the GHC situation - and the result is that GHC isn?t self-hosting on some platforms and the inertia that used to be behind the LLVM backend seems to have stagnated. Whereas LLVM used to be the ?new hotness?, I?ve noticed that issues like Trac #7787 no longer have a lot of eyes on them and externally it seems like GHC has accepted a bifurcated approach for development. I dramatize the situation above, but there?s some truth to it. The LLVM backend needs some care and attention and if the majority of GHC devs can?t build GHC with LLVM, then that means the smartest, brightest GHC hackers won?t have their attention turned toward fixing those problems. If a patch to GHC-HEAD broke compilation for every backend, it would be fixed in short order. If a new version of GCC did not work with GHC, I can imagine it would be only hours before the first patches came in resolving the issue. On OS X Mavericks, an incompatibility with GHC has led to a swift reaction and strong support for resolving platform issues. The attention to the LLVM backend is visibly smaller, but I don?t know enough about the people working on GHC to know if it is actually smaller. The way I am trying to change this is by making it easier for people to start using GHC (by putting images on Docker.io) and, in the process, learning about GHC?s build process and trying to make things work for my own projects. The Docker image allows anyone with a Linux kernel to build and play with GHC HEAD. The information about building GHC yourself is difficult to approach and I found it hard to get started, and I want to improve that too, so I?m learning and asking questions. From: Carter Schonwald Sent: ?Wednesday?, ?January? ?1?, ?2014 ?5?:?54? ?PM To: Aaron Friel Cc: ghc-devs at haskell.org 7.8 should have working dylib support on the llvm backend. (i believe some of the relevant patches are in head already, though Ben Gamari can opine on that) why do you want ghc to be built with llvm? (i know i've tried myself in the past, and it should be doable with 7.8 using 7.8 soon too) On Wed, Jan 1, 2014 at 5:38 PM, Aaron Friel > wrote: Replying to include the email list. You?re right, the llvm backend and the gmp licensing issues are orthogonal - or should be. The problem is I get build errors when trying to build GHC with LLVM and dynamic libraries. The result is that I get a few different choices when producing a platform image for development, with some uncomfortable tradeoffs: 1. LLVM-built GHC, dynamic libs - doesn?t build. 1. LLVM-built GHC, static libs - potential licensing oddities with me shipping a statically linked ghc binary that is now gpled. I am not a lawyer, but the situation makes me uncomfortable. 1. GCC/ASM-built GHC, dynamic libs - this is the *standard* for most platforms shipping ghc binaries, but it means that one of the biggest and most critical users of the LLVM backend is neglecting it. It also bifurcates development resources for GHC. Optimization work is duplicated and already devs are getting into the uncomfortable position of suggesting to users that they should trust GHC to build your programs in a particular way, but not itself. 1. GCC/ASM-built GHC, static libs - worst of all possible worlds. Because of this, the libgmp and llvm-backend issues aren?t entirely orthogonal. Trac ticket #7885 is exactly the issue I get when trying to compile #1. From: Carter Schonwald Sent: ?Monday?, ?December? ?30?, ?2013 ?1?:?05? 
?PM To: Aaron Friel Good question but you forgot to email the mailing list too :-) Using llvm has nothing to do with Gmp. Use the native code gen (it's simper) and integer-simple. That said, standard ghc dylinks to a system copy of Gmp anyways (I think ). Building ghc as a Dylib is orthogonal. -Carter On Dec 30, 2013, at 1:58 PM, Aaron Friel > wrote: Excellent research - I?m curious if this is the right thread to inquire about the status of trying to link GHC itself dynamically. I?ve been attempting to do so with various LLVM versions (3.2, 3.3, 3.4) using snapshot builds of GHC (within the past week) from git, and I hit ticket #7885 [https://ghc.haskell.org/trac/ghc/ticket/7885] every time (even the exact same error message). I?m interested in dynamically linking GHC with LLVM to avoid the entanglement with libgmp?s license. If this is the wrong thread or if I should reply instead to the trac item, please let me know. -------------- next part -------------- An HTML attachment was scrubbed... URL: From marlowsd at gmail.com Thu Jan 2 15:10:24 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Thu, 02 Jan 2014 15:10:24 +0000 Subject: GHC Api In-Reply-To: <59543203684B2244980D7E4057D5FBC148702143@DB3EX14MBXC306.europe.corp.microsoft.com> References: <59543203684B2244980D7E4057D5FBC148702143@DB3EX14MBXC306.europe.corp.microsoft.com> Message-ID: <52C58160.6030800@gmail.com> On 02/01/14 07:06, Simon Peyton-Jones wrote: > Simon and othere > > Happy new year! > > When debugging Trac #8628 I wrote the following: > > main > > = do [libdir] <- getArgs > > ok <- runGhc (Just libdir) $ do > > dflags <- getSessionDynFlags -- (1) > > setSessionDynFlags dflags > > liftIO (setUnsafeGlobalDynFlags dflags) -- (2) > > setContext [IIDecl (simpleImportDecl pRELUDE_NAME)] -- (3) > > runDecls "data X = Y Int" > > runStmt ?print True? -- (4) > > return () > > There are several odd things here > > 1.Why do I have to do this ?getSessionDynFlags/setSessionDynFlags? > thing. Seems bizarre. I just copied it from some other tests in > ghc-api/. Is it necessary? If not, can we remove it from all tests? It's a sensible question given the naming of the functions. The API is definitely clunky here, but there is a purpose to these calls. setSessionDynFlags loads the package database and does the necessary processing to make packages available. We don't do that automatically, because the client might want to add their own package flags to the DynFlags between the calls to getSessionDynFlags and setSessionDynFlags. Incidentally you can find out some of this stuff from the Haddock docs, e.g. look at the docs for setSessionDynFlags. > 2.Initially I didn?t have that setUnsafeGlobalDynFlags call. But then I > got > > T8628.exe: T8628.exe: panic! (the 'impossible' happened) > > (GHC version 7.7.20131228 for i386-unknown-mingw32): > > v_unsafeGlobalDynFlags: not initialised > > which is a particularly unhelpful message. It arose because I was using > a GHC built with assertions on, and a warnPprTrace triggered. Since > this could happen to anyone, would it make sense to make this part of > runGhc and setSessionDynFlags? I'm not all that familiar with the unsafeGlobalDynFlags stuff (that's Ian's invention), but from looking at the code it looks like you wouldn't need to call this if you were calling parseDynamicFlags. It should be safe to call parseDynamicFlags with an empty set of flags to parse. > 3.Initially I didn?t have that setContext call, and got a complaint that > ?Int is not in scope?. 
I was expecting the Prelude to be implicitly in > scope. But I?m not sure where to fix that. Possibly part of the setup > in runGhc? I think it's sensible to require a call to setContext to bring the Prelude into scope. The client might want a different context, and setContext isn't free, so we probably don't want to initialise a default context. > 4.The runStmt should print something somewhere, but it doesn?t. Why not? I've no idea! It does look like it should print something. Cheers, Simon > What do you think? > > Simon > From aaron at frieltek.com Thu Jan 2 17:01:52 2014 From: aaron at frieltek.com (Aaron Friel) Date: Thu, 2 Jan 2014 17:01:52 +0000 Subject: LLVM and dynamic linking In-Reply-To: <59543203684B2244980D7E4057D5FBC148702845@DB3EX14MBXC306.europe.corp.microsoft.com> References: <877gb7ulmi.fsf@gmail.com> <52B418EC.8090308@gmail.com> <87a9fm2gfr.fsf@gmail.com> <0D8E2221-2F91-4DFA-836F-3AA2DB1F53BD@gmail.com> <87c5ff3fd1264e9e9763a943718324e6@BN1PR05MB171.namprd05.prod.outlook.com>, , <59543203684B2244980D7E4057D5FBC148702845@DB3EX14MBXC306.europe.corp.microsoft.com> Message-ID: <1108fa1e60ab4f4d8e7d379b6674d07e@BN1PR05MB171.namprd05.prod.outlook.com> I am eager to learn and try to work on this :) From: Simon Peyton-Jones Sent: ?Thursday?, ?January? ?2?, ?2014 ?8?:?17? ?AM To: Aaron Friel, Carter Schonwald Cc: ghc-devs at haskell.org Aaron, The LLVM backend needs some care and attention I?m sure you are right about this. Could you become one of the people offering that care and attention. Who are the GHC developers? They are simply volunteers who make time to give something back to their community, and GHC relies absolutely on their commitment and expertise. So do please join in if you can; it?s clearly something you care about, and have some knowledge of. With thanks and best wishes, Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Aaron Friel Sent: 02 January 2014 03:03 To: Carter Schonwald Cc: ghc-devs at haskell.org Subject: Re: LLVM and dynamic linking Because I think it?s going to be an organizational issue and a duplication of effort if GHC is built one way but the future direction of LLVM is another. Imagine if GCC started developing a new engine and it didn?t work with one of the biggest, most regular consumers of GCC. Say, the Linux kernel, or itself. At first, the situation is optimistic - if this engine doesn?t work for the project that has the smartest, brightest GCC hackers potentially looking at it, then it should fix itself soon enough. Suppose the situation lingers though, and continues for months without fix. The new GCC backend starts to become the default, and the community around GCC advocates for end-users to use it to optimize code for their projects and it even becomes the default for some platforms, such as ARM. What I?ve described is analogous to the GHC situation - and the result is that GHC isn?t self-hosting on some platforms and the inertia that used to be behind the LLVM backend seems to have stagnated. Whereas LLVM used to be the ?new hotness?, I?ve noticed that issues like Trac #7787 no longer have a lot of eyes on them and externally it seems like GHC has accepted a bifurcated approach for development. I dramatize the situation above, but there?s some truth to it. The LLVM backend needs some care and attention and if the majority of GHC devs can?t build GHC with LLVM, then that means the smartest, brightest GHC hackers won?t have their attention turned toward fixing those problems. 
If a patch to GHC-HEAD broke compilation for every backend, it would be fixed in short order. If a new version of GCC did not work with GHC, I can imagine it would be only hours before the first patches came in resolving the issue. On OS X Mavericks, an incompatibility with GHC has led to a swift reaction and strong support for resolving platform issues. The attention to the LLVM backend is visibly smaller, but I don?t know enough about the people working on GHC to know if it is actually smaller. The way I am trying to change this is by making it easier for people to start using GHC (by putting images on Docker.io) and, in the process, learning about GHC?s build process and trying to make things work for my own projects. The Docker image allows anyone with a Linux kernel to build and play with GHC HEAD. The information about building GHC yourself is difficult to approach and I found it hard to get started, and I want to improve that too, so I?m learning and asking questions. From: Carter Schonwald Sent: ?Wednesday?, ?January? ?1?, ?2014 ?5?:?54? ?PM To: Aaron Friel Cc: ghc-devs at haskell.org 7.8 should have working dylib support on the llvm backend. (i believe some of the relevant patches are in head already, though Ben Gamari can opine on that) why do you want ghc to be built with llvm? (i know i've tried myself in the past, and it should be doable with 7.8 using 7.8 soon too) On Wed, Jan 1, 2014 at 5:38 PM, Aaron Friel > wrote: Replying to include the email list. You?re right, the llvm backend and the gmp licensing issues are orthogonal - or should be. The problem is I get build errors when trying to build GHC with LLVM and dynamic libraries. The result is that I get a few different choices when producing a platform image for development, with some uncomfortable tradeoffs: 1. LLVM-built GHC, dynamic libs - doesn?t build. 1. LLVM-built GHC, static libs - potential licensing oddities with me shipping a statically linked ghc binary that is now gpled. I am not a lawyer, but the situation makes me uncomfortable. 1. GCC/ASM-built GHC, dynamic libs - this is the *standard* for most platforms shipping ghc binaries, but it means that one of the biggest and most critical users of the LLVM backend is neglecting it. It also bifurcates development resources for GHC. Optimization work is duplicated and already devs are getting into the uncomfortable position of suggesting to users that they should trust GHC to build your programs in a particular way, but not itself. 1. GCC/ASM-built GHC, static libs - worst of all possible worlds. Because of this, the libgmp and llvm-backend issues aren?t entirely orthogonal. Trac ticket #7885 is exactly the issue I get when trying to compile #1. From: Carter Schonwald Sent: ?Monday?, ?December? ?30?, ?2013 ?1?:?05? ?PM To: Aaron Friel Good question but you forgot to email the mailing list too :-) Using llvm has nothing to do with Gmp. Use the native code gen (it's simper) and integer-simple. That said, standard ghc dylinks to a system copy of Gmp anyways (I think ). Building ghc as a Dylib is orthogonal. -Carter On Dec 30, 2013, at 1:58 PM, Aaron Friel > wrote: Excellent research - I?m curious if this is the right thread to inquire about the status of trying to link GHC itself dynamically. I?ve been attempting to do so with various LLVM versions (3.2, 3.3, 3.4) using snapshot builds of GHC (within the past week) from git, and I hit ticket #7885 [https://ghc.haskell.org/trac/ghc/ticket/7885] every time (even the exact same error message). 
I?m interested in dynamically linking GHC with LLVM to avoid the entanglement with libgmp?s license. If this is the wrong thread or if I should reply instead to the trac item, please let me know. -------------- next part -------------- An HTML attachment was scrubbed... URL: From coreyoconnor at gmail.com Thu Jan 2 18:10:00 2014 From: coreyoconnor at gmail.com (Corey O'Connor) Date: Thu, 2 Jan 2014 10:10:00 -0800 Subject: ticket for adding ARM backend to NCG? In-Reply-To: References: <20131223130759.463580657bd05f4bca3a725c@mega-nerd.com> Message-ID: My interest is just to get involved somehow in the NCG. Starting a new backend seemed reasonable only because I couldn't break something that didn't exist. ;-) Though cleaning up the NCG would probably be more educational for me. So if that's desired then I'll get involved there. Cheers, Corey -Corey O'Connor coreyoconnor at gmail.com http://corebotllc.com/ On Sun, Dec 22, 2013 at 7:54 PM, Carter Schonwald < carter.schonwald at gmail.com> wrote: > I mean next year. I'm hoping to start hacking on it and a few other ncg > related tasks early January with about 1-2 evenings of regularly scheduled > work on it. > > So feel welcome to do a Ppc validate anyways in the mean time :-) > > > On Sunday, December 22, 2013, Erik de Castro Lopo wrote: > >> Carter Schonwald wrote: >> >> > I believe there are no current plans to add arm to the ncg. >> > >> > However, I'm hoping to spend a wee bit of time later this year cleaning >> up >> >> Dude, you have 7 days! Or did you mean next year :-). >> >> > the ncg, and one consequence of that that simon marlow remarked upon at >> > icfp is that would perhaps make it easier to add new targets to ncg. >> >> As soon as that NCG cleanup is ready for public consumption, please >> let me know so I can validate the PowerPC NCG. I think I am one of >> the few people who regularly builds GHC on PowerPC and even I haven't >> done it for two weeks because I just moved house. >> >> Cheers, >> Erik >> -- >> ---------------------------------------------------------------------- >> Erik de Castro Lopo >> http://www.mega-nerd.com/ >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From svenpanne at gmail.com Thu Jan 2 18:16:25 2014 From: svenpanne at gmail.com (Sven Panne) Date: Thu, 2 Jan 2014 19:16:25 +0100 Subject: Building GHC head with clang on Mavericks In-Reply-To: <20131129.174512.682731754748116086.kazu@iij.ad.jp> References: <20131121.155430.37558810975251514.kazu@iij.ad.jp> <529708D6.9030009@gmail.com> <20131129.174512.682731754748116086.kazu@iij.ad.jp> Message-ID: Although it is not really GHC-related, this thread is sufficiently close to the problem described in https://github.com/haskell-opengl/OpenGLRaw/issues/18: AFAICT, Mac OS X 10.9's clang doesn't really honor -traditional, so what can I do to make things work with recent Macs without breaking all other platforms? I guess the right #if in https://github.com/haskell-opengl/OpenGLRaw/blob/master/include/HsOpenGLRaw.h will do the trick, but I don't have access to a Mac. Hints are highly appreciated, the whole current Mac situation is a bit of a mystery to me... Cheers, S. 
From igloo at earth.li Thu Jan 2 18:29:16 2014 From: igloo at earth.li (Ian Lynagh) Date: Thu, 2 Jan 2014 18:29:16 +0000 Subject: GHC Api In-Reply-To: <52C58160.6030800@gmail.com> References: <59543203684B2244980D7E4057D5FBC148702143@DB3EX14MBXC306.europe.corp.microsoft.com> <52C58160.6030800@gmail.com> Message-ID: <20140102182916.GA5086@matrix.chaos.earth.li> On Thu, Jan 02, 2014 at 03:10:24PM +0000, Simon Marlow wrote: > On 02/01/14 07:06, Simon Peyton-Jones wrote: > > > >Happy new year! And to you :-) > > runStmt ?print True? -- (4) > > >4.The runStmt should print something somewhere, but it doesn?t. Why not? > > I've no idea! It does look like it should print something. Is this with a statically linked or dynamically linked GHC? Does doing runStmt "hFlush stdout" afterwards make it appear? Thanks Ian From hellertime at gmail.com Thu Jan 2 19:17:06 2014 From: hellertime at gmail.com (Chris Heller) Date: Thu, 2 Jan 2014 14:17:06 -0500 Subject: Starting GHC development. Message-ID: Hello GHC devs. It's been my New Year's resolution to stop being just a GHC user and become a GHC developer. To that end, I've submitted my first patch to GHC (trac #8475 -- just a simple documentation fix). Nothing too earth shattering, but I figured this would be a good way to familiarize myself with the GHC workflow. I believe I've followed the instructions for working on GHC correctly. Please let me know if I have strayed. Looking forward to many more commits in the future. Happy New Year. Chris Heller -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Thu Jan 2 19:22:10 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Thu, 2 Jan 2014 14:22:10 -0500 Subject: Building GHC head with clang on Mavericks In-Reply-To: References: <20131121.155430.37558810975251514.kazu@iij.ad.jp> <529708D6.9030009@gmail.com> <20131129.174512.682731754748116086.kazu@iij.ad.jp> Message-ID: Hey Sven: the simplest solution is to have users install and use GCC rather than clang. On Jan 2, 2014 1:16 PM, "Sven Panne" wrote: > Although it is not really GHC-related, this thread is sufficiently > close to the problem described in > https://github.com/haskell-opengl/OpenGLRaw/issues/18: AFAICT, Mac OS > X 10.9's clang doesn't really honor -traditional, so what can I do to > make things work with recent Macs without breaking all other > platforms? I guess the right #if in > > https://github.com/haskell-opengl/OpenGLRaw/blob/master/include/HsOpenGLRaw.h > will do the trick, but I don't have access to a Mac. Hints are highly > appreciated, the whole current Mac situation is a bit of a mystery to > me... > > Cheers, > S. > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From svenpanne at gmail.com Thu Jan 2 19:37:15 2014 From: svenpanne at gmail.com (Sven Panne) Date: Thu, 2 Jan 2014 20:37:15 +0100 Subject: Building GHC head with clang on Mavericks In-Reply-To: References: <20131121.155430.37558810975251514.kazu@iij.ad.jp> <529708D6.9030009@gmail.com> <20131129.174512.682731754748116086.kazu@iij.ad.jp> Message-ID: 2014/1/2 Carter Schonwald : > Hey Sven: the simplest solution is to have users install and use GCC rather > than clang. Hmmm, if clang is the default what people get on Mac OS X, I consider that a non-solution... 
:-) Furthermore: How will the upcoming Haskell Platform release handle these problems (clang vs. GCC, my OpenGLRaw problem)? From carter.schonwald at gmail.com Thu Jan 2 20:09:32 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Thu, 2 Jan 2014 15:09:32 -0500 Subject: Building GHC head with clang on Mavericks In-Reply-To: References: <20131121.155430.37558810975251514.kazu@iij.ad.jp> <529708D6.9030009@gmail.com> <20131129.174512.682731754748116086.kazu@iij.ad.jp> Message-ID: sven,http://www.haskell.org/platform/mac.html has a wrapper script that makes clang play nice with CPP, though a simpler alternative one can be found on manuel's page here http://justtesting.org/post/64947952690/the-glasgow-haskell-compiler-ghc-on-os-x-10-9. That said, it also doesnt work for all use case, hence why i generally recommend people also have gcc installed. there are plans post 7.8 release to give ghc a haskell native CPP analogue, but thats not happening for 7.8 HP + ghc 7.6 users have to use a clang wrapper script like the above, probably http://justtesting.org/post/64947952690/the-glasgow-haskell-compiler-ghc-on-os-x-10-9is the simplest one you an recommend, because it'll worth with haskell platform and none haskell platform systems. On Thu, Jan 2, 2014 at 2:37 PM, Sven Panne wrote: > 2014/1/2 Carter Schonwald : > > Hey Sven: the simplest solution is to have users install and use GCC > rather > > than clang. > > Hmmm, if clang is the default what people get on Mac OS X, I consider > that a non-solution... :-) Furthermore: How will the upcoming Haskell > Platform release handle these problems (clang vs. GCC, my OpenGLRaw > problem)? > -------------- next part -------------- An HTML attachment was scrubbed... URL: From svenpanne at gmail.com Thu Jan 2 21:31:47 2014 From: svenpanne at gmail.com (Sven Panne) Date: Thu, 2 Jan 2014 22:31:47 +0100 Subject: Building GHC head with clang on Mavericks In-Reply-To: References: <20131121.155430.37558810975251514.kazu@iij.ad.jp> <529708D6.9030009@gmail.com> <20131129.174512.682731754748116086.kazu@iij.ad.jp> Message-ID: 2014/1/2 Carter Schonwald : > sven,http://www.haskell.org/platform/mac.html has a wrapper script that > makes clang play nice with CPP, though a simpler alternative one can be > found on manuel's page [...] I've seen the wrappers before, but do they really solve the problem for OpenGLRaw (concatenation via /**/ and replacement in strings)? As I said, I don't have access to a Mac, but the mangled options don't look like if they have anything to do with that. Can somebody confirm that? From carter.schonwald at gmail.com Thu Jan 2 21:51:27 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Thu, 2 Jan 2014 16:51:27 -0500 Subject: Building GHC head with clang on Mavericks In-Reply-To: References: <20131121.155430.37558810975251514.kazu@iij.ad.jp> <529708D6.9030009@gmail.com> <20131129.174512.682731754748116086.kazu@iij.ad.jp> Message-ID: oh right, that is a problem with clang no matter what. lens hit this issue, https://github.com/ekmett/lens/issues/357 I think the solution is "don't do that" if you want to support clang being the C compiler GHC uses, though you'll have to look at the diffs on that ticket closely to see how they addressed it On Thu, Jan 2, 2014 at 4:31 PM, Sven Panne wrote: > 2014/1/2 Carter Schonwald : > > sven,http://www.haskell.org/platform/mac.html has a wrapper script that > > makes clang play nice with CPP, though a simpler alternative one can be > > found on manuel's page [...] 
> > I've seen the wrappers before, but do they really solve the problem > for OpenGLRaw (concatenation via /**/ and replacement in strings)? As > I said, I don't have access to a Mac, but the mangled options don't > look like if they have anything to do with that. Can somebody confirm > that? > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yo.eight at gmail.com Thu Jan 2 22:01:48 2014 From: yo.eight at gmail.com (Yorick Laupa) Date: Thu, 2 Jan 2014 23:01:48 +0100 Subject: Cannot find normal object file when compiling TH code Message-ID: Hi, Oddly I can't compile code using TH with GHC HEAD. Here's what I get: cannot find normal object file './Tuple.dyn_o' while linking an interpreted expression I'm currently working on a issue so I compile the code with ghc-stage2 for convenience. I found an old ticket related to my problem ( https://ghc.haskell.org/trac/ghc/ticket/8443) but adding -XTemplateHaskell didn't work out. The code compiles with ghc 7.6.3. Here's my setup: Archlinux (3.12.6-1) Any suggestions ? --Yorick -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Thu Jan 2 22:25:14 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Thu, 2 Jan 2014 17:25:14 -0500 Subject: Cannot find normal object file when compiling TH code In-Reply-To: References: Message-ID: Did you build ghc with both static and dynamic libs? Starting in 7.7/HEAD, ghci uses Dylib versions of libraries, and thus TH does too. What OS and architecture is this, and what commit is your ghc build from? Last but most importantly, if you don't share the code, we can't really help isolate the problem. On Thursday, January 2, 2014, Yorick Laupa wrote: > Hi, > > Oddly I can't compile code using TH with GHC HEAD. Here's what I get: > > cannot find normal object file ?./Tuple.dyn_o? > while linking an interpreted expression > > I'm currently working on a issue so I compile the code with ghc-stage2 for > convenience. > > I found an old ticket related to my problem ( > https://ghc.haskell.org/trac/ghc/ticket/8443) but adding > -XTemplateHaskell didn't work out. > > The code compiles with ghc 7.6.3. > > Here's my setup: Archlinux (3.12.6-1) > > Any suggestions ? > > --Yorick > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yo.eight at gmail.com Thu Jan 2 22:36:18 2014 From: yo.eight at gmail.com (Yorick Laupa) Date: Thu, 2 Jan 2014 23:36:18 +0100 Subject: Cannot find normal object file when compiling TH code In-Reply-To: References: Message-ID: Hi Carter, Someone figured it out on #ghc. It seems we need to compile with -dynamic when having TH code now (https://ghc.haskell.org/trac/ghc/ticket/8180) About a snippet, I working on that ticket ( https://ghc.haskell.org/trac/ghc/ticket/7021) so it's based on the given sample: -- Tuple.hs {-# LANGUAGE ConstraintKinds, TemplateHaskell #-} module Tuple where import Language.Haskell.TH type IOable a = (Show a, Read a) foo :: IOable a => a foo = undefined test :: Q Exp test = do Just fooName <- lookupValueName "foo" info <- reify fooName runIO $ print info [| \_ -> 0 |] -- -- Main.hs {-# LANGUAGE TemplateHaskell #-} module Main where import Tuple func :: a -> Int func = $(test) main :: IO () main = print "hello" -- 2014/1/2 Carter Schonwald > Did you build ghc with both static and dynamic libs? Starting in 7.7/HEAD, > ghci uses Dylib versions of libraries, and thus TH does too. 
What OS and > architecture is this, and what commit is your ghc build from? > > Last but most importantly, if you don't share the code, we can't really > help isolate the problem. > > > On Thursday, January 2, 2014, Yorick Laupa wrote: > >> Hi, >> >> Oddly I can't compile code using TH with GHC HEAD. Here's what I get: >> >> cannot find normal object file './Tuple.dyn_o' >> while linking an interpreted expression >> >> I'm currently working on a issue so I compile the code with ghc-stage2 >> for convenience. >> >> I found an old ticket related to my problem ( >> https://ghc.haskell.org/trac/ghc/ticket/8443) but adding >> -XTemplateHaskell didn't work out. >> >> The code compiles with ghc 7.6.3. >> >> Here's my setup: Archlinux (3.12.6-1) >> >> Any suggestions ? >> >> --Yorick >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Thu Jan 2 22:38:23 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Thu, 2 Jan 2014 17:38:23 -0500 Subject: Building GHC head with clang on Mavericks In-Reply-To: References: <20131121.155430.37558810975251514.kazu@iij.ad.jp> <529708D6.9030009@gmail.com> <20131129.174512.682731754748116086.kazu@iij.ad.jp> Message-ID: it looks like their work around is using ## rather than /**/ On Thu, Jan 2, 2014 at 4:51 PM, Carter Schonwald wrote: > oh right, that is a problem with clang no matter what. > > lens hit this issue, https://github.com/ekmett/lens/issues/357 > I think the solution is "don't do that" if you want to support clang being > the C compiler GHC uses, though you'll have to look at the diffs on that > ticket closely to see how they addressed it > > > On Thu, Jan 2, 2014 at 4:31 PM, Sven Panne wrote: > >> 2014/1/2 Carter Schonwald : >> > sven,http://www.haskell.org/platform/mac.html has a wrapper script >> that >> > makes clang play nice with CPP, though a simpler alternative one can be >> > found on manuel's page [...] >> >> I've seen the wrappers before, but do they really solve the problem >> for OpenGLRaw (concatenation via /**/ and replacement in strings)? As >> I said, I don't have access to a Mac, but the mangled options don't >> look like if they have anything to do with that. Can somebody confirm >> that? >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Thu Jan 2 22:38:53 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Thu, 2 Jan 2014 17:38:53 -0500 Subject: Cannot find normal object file when compiling TH code In-Reply-To: References: Message-ID: would --dynamic-too work too? On Thu, Jan 2, 2014 at 5:36 PM, Yorick Laupa wrote: > Hi Carter, > > Someone figured it out on #ghc. 
It seems we need to compile with -dynamic > when having TH code now (https://ghc.haskell.org/trac/ghc/ticket/8180) > > About a snippet, I working on that ticket ( > https://ghc.haskell.org/trac/ghc/ticket/7021) so it's based on the given > sample: > > -- Tuple.hs > {-# LANGUAGE ConstraintKinds, TemplateHaskell #-} > > module Tuple where > > import Language.Haskell.TH > > type IOable a = (Show a, Read a) > > foo :: IOable a => a > foo = undefined > > test :: Q Exp > test = do > Just fooName <- lookupValueName "foo" > info <- reify fooName > runIO $ print info > [| \_ -> 0 |] > -- > > -- Main.hs > {-# LANGUAGE TemplateHaskell #-} > module Main where > > import Tuple > > func :: a -> Int > func = $(test) > > main :: IO () > main = print "hello" > > -- > > > 2014/1/2 Carter Schonwald > >> Did you build ghc with both static and dynamic libs? Starting in >> 7.7/HEAD, ghci uses Dylib versions of libraries, and thus TH does too. >> What OS and architecture is this, and what commit is your ghc build from? >> >> Last but most importantly, if you don't share the code, we can't really >> help isolate the problem. >> >> >> On Thursday, January 2, 2014, Yorick Laupa wrote: >> >>> Hi, >>> >>> Oddly I can't compile code using TH with GHC HEAD. Here's what I get: >>> >>> cannot find normal object file ?./Tuple.dyn_o? >>> while linking an interpreted expression >>> >>> I'm currently working on a issue so I compile the code with ghc-stage2 >>> for convenience. >>> >>> I found an old ticket related to my problem ( >>> https://ghc.haskell.org/trac/ghc/ticket/8443) but adding >>> -XTemplateHaskell didn't work out. >>> >>> The code compiles with ghc 7.6.3. >>> >>> Here's my setup: Archlinux (3.12.6-1) >>> >>> Any suggestions ? >>> >>> --Yorick >>> >>> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yo.eight at gmail.com Thu Jan 2 22:44:28 2014 From: yo.eight at gmail.com (Yorick Laupa) Date: Thu, 2 Jan 2014 23:44:28 +0100 Subject: Cannot find normal object file when compiling TH code In-Reply-To: References: Message-ID: Except expected #7021 error message, it works on my machine (Archlinux x86_64) with --dynamic-too 2014/1/2 Carter Schonwald > would --dynamic-too work too? > > > On Thu, Jan 2, 2014 at 5:36 PM, Yorick Laupa wrote: > >> Hi Carter, >> >> Someone figured it out on #ghc. It seems we need to compile with -dynamic >> when having TH code now (https://ghc.haskell.org/trac/ghc/ticket/8180) >> >> About a snippet, I working on that ticket ( >> https://ghc.haskell.org/trac/ghc/ticket/7021) so it's based on the given >> sample: >> >> -- Tuple.hs >> {-# LANGUAGE ConstraintKinds, TemplateHaskell #-} >> >> module Tuple where >> >> import Language.Haskell.TH >> >> type IOable a = (Show a, Read a) >> >> foo :: IOable a => a >> foo = undefined >> >> test :: Q Exp >> test = do >> Just fooName <- lookupValueName "foo" >> info <- reify fooName >> runIO $ print info >> [| \_ -> 0 |] >> -- >> >> -- Main.hs >> {-# LANGUAGE TemplateHaskell #-} >> module Main where >> >> import Tuple >> >> func :: a -> Int >> func = $(test) >> >> main :: IO () >> main = print "hello" >> >> -- >> >> >> 2014/1/2 Carter Schonwald >> >>> Did you build ghc with both static and dynamic libs? Starting in >>> 7.7/HEAD, ghci uses Dylib versions of libraries, and thus TH does too. >>> What OS and architecture is this, and what commit is your ghc build from? >>> >>> Last but most importantly, if you don't share the code, we can't really >>> help isolate the problem. 
>>> >>> >>> On Thursday, January 2, 2014, Yorick Laupa wrote: >>> >>>> Hi, >>>> >>>> Oddly I can't compile code using TH with GHC HEAD. Here's what I get: >>>> >>>> cannot find normal object file './Tuple.dyn_o' >>>> while linking an interpreted expression >>>> >>>> I'm currently working on a issue so I compile the code with ghc-stage2 >>>> for convenience. >>>> >>>> I found an old ticket related to my problem ( >>>> https://ghc.haskell.org/trac/ghc/ticket/8443) but adding >>>> -XTemplateHaskell didn't work out. >>>> >>>> The code compiles with ghc 7.6.3. >>>> >>>> Here's my setup: Archlinux (3.12.6-1) >>>> >>>> Any suggestions ? >>>> >>>> --Yorick >>>> >>>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Fri Jan 3 01:32:57 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Thu, 2 Jan 2014 20:32:57 -0500 Subject: Cannot find normal object file when compiling TH code In-Reply-To: References: Message-ID: You should maybe delete your pre cabal 1.18 ~/.cabal/config file and run cabal update then merge back any important settings. I think cabal 1.18 by default builds everything with dynamic-too On Thursday, January 2, 2014, Yorick Laupa wrote: > Except expected #7021 error message, it works on my machine (Archlinux > x86_64) with --dynamic-too > > > 2014/1/2 Carter Schonwald 'cvml', 'carter.schonwald at gmail.com');>> > >> would --dynamic-too work too? >> >> >> On Thu, Jan 2, 2014 at 5:36 PM, Yorick Laupa >> > wrote: >> >>> Hi Carter, >>> >>> Someone figured it out on #ghc. It seems we need to compile with >>> -dynamic when having TH code now ( >>> https://ghc.haskell.org/trac/ghc/ticket/8180) >>> >>> About a snippet, I working on that ticket ( >>> https://ghc.haskell.org/trac/ghc/ticket/7021) so it's based on the >>> given sample: >>> >>> -- Tuple.hs >>> {-# LANGUAGE ConstraintKinds, TemplateHaskell #-} >>> >>> module Tuple where >>> >>> import Language.Haskell.TH >>> >>> type IOable a = (Show a, Read a) >>> >>> foo :: IOable a => a >>> foo = undefined >>> >>> test :: Q Exp >>> test = do >>> Just fooName <- lookupValueName "foo" >>> info <- reify fooName >>> runIO $ print info >>> [| \_ -> 0 |] >>> -- >>> >>> -- Main.hs >>> {-# LANGUAGE TemplateHaskell #-} >>> module Main where >>> >>> import Tuple >>> >>> func :: a -> Int >>> func = $(test) >>> >>> main :: IO () >>> main = print "hello" >>> >>> -- >>> >>> >>> 2014/1/2 Carter Schonwald >>> > >>> >>>> Did you build ghc with both static and dynamic libs? Starting in >>>> 7.7/HEAD, ghci uses Dylib versions of libraries, and thus TH does too. >>>> What OS and architecture is this, and what commit is your ghc build from? >>>> >>>> Last but most importantly, if you don't share the code, we can't really >>>> help isolate the problem. >>>> >>>> >>>> On Thursday, January 2, 2014, Yorick Laupa wrote: >>>> >>>>> Hi, >>>>> >>>>> Oddly I can't compile code using TH with GHC HEAD. Here's what I get: >>>>> >>>>> cannot find normal object file ?./Tuple.dyn_o? >>>>> while linking an interpreted expression >>>>> >>>>> I'm currently working on a issue so I compile the code with ghc-stage2 >>>>> for convenience. >>>>> >>>>> I found an old ticket related to my problem ( >>>>> https://ghc.haskell.org/trac/ghc/ticket/8443) but adding >>>>> -XTemplateHaskell didn't work out. >>>>> >>>>> The code compiles with ghc 7.6.3. >>>>> >>>>> Here's my setup: Archlinux (3.12.6-1) >>>>> >>>>> Any suggestions ? 
>>>>> >>>>> --Yorick >>>>> >>>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From juhpetersen at gmail.com Fri Jan 3 08:35:19 2014 From: juhpetersen at gmail.com (Jens Petersen) Date: Fri, 3 Jan 2014 17:35:19 +0900 Subject: ticket for adding ARM backend to NCG? In-Reply-To: References: <20131223130759.463580657bd05f4bca3a725c@mega-nerd.com> Message-ID: On 3 January 2014 03:10, Corey O'Connor wrote: > My interest is just to get involved somehow in the NCG. Starting a new > backend seemed reasonable only because I couldn't break something that > didn't exist. ;-) > Well a big +1 from me for armv7 NCG. -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Fri Jan 3 10:19:02 2014 From: simonpj at microsoft.com (Simon Peyton-Jones) Date: Fri, 3 Jan 2014 10:19:02 +0000 Subject: GHC Api In-Reply-To: <20140102182916.GA5086@matrix.chaos.earth.li> References: <59543203684B2244980D7E4057D5FBC148702143@DB3EX14MBXC306.europe.corp.microsoft.com> <52C58160.6030800@gmail.com> <20140102182916.GA5086@matrix.chaos.earth.li> Message-ID: <59543203684B2244980D7E4057D5FBC148704B22@DB3EX14MBXC306.europe.corp.microsoft.com> | Is this with a statically linked or dynamically linked GHC? I don't know. How would I find out? (It's the one built by validate.) You are asking about GHC, but I guess there's also the question of whether the test program itself is statically or dynamically linked. I don't know that either. I just said ~/5builds/HEAD-2/inplace/bin/ghc-stage2 -o T8628 T8628.hs -package ghc Why would static/dynamic linking make a difference? That's very confusing! | Does doing | runStmt "hFlush stdout" | afterwards make it appear? Yes, it does. Again, that's very confusing. Shouldn't we automatically do a hFlush, so that output is not silently discarded? Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Ian | Lynagh | Sent: 02 January 2014 18:29 | To: ghc-devs at haskell.org | Subject: Re: GHC Api | | On Thu, Jan 02, 2014 at 03:10:24PM +0000, Simon Marlow wrote: | > On 02/01/14 07:06, Simon Peyton-Jones wrote: | > > | > >Happy new year! | | And to you :-) | | > > runStmt ?print True? -- (4) | > | > >4.The runStmt should print something somewhere, but it doesn?t. Why | not? | > | > I've no idea! It does look like it should print something. | | Is this with a statically linked or dynamically linked GHC? | | Does doing | runStmt "hFlush stdout" | afterwards make it appear? | | | Thanks | Ian | | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | http://www.haskell.org/mailman/listinfo/ghc-devs From karel.gardas at centrum.cz Fri Jan 3 11:24:12 2014 From: karel.gardas at centrum.cz (Karel Gardas) Date: Fri, 03 Jan 2014 12:24:12 +0100 Subject: ticket for adding ARM backend to NCG? In-Reply-To: References: <20131223130759.463580657bd05f4bca3a725c@mega-nerd.com> Message-ID: <52C69DDC.5090009@centrum.cz> Guys, I've been tinkering with ARM NCG idea for quite some time now, but honestly I've been always in doubts if it's the best way for GHC at all. I've thought that the plan was to kind of move out of NCG to LLVM based backends and I've though that although this plan may be kind of stuck now, it's still on the table. 
Yes, I know that GHC is volunteering effort so if someone comes and asks for an ARM NCG implementation merge it'll be probably done in some time, but I'm not sure if it's what's the most welcome at the end. Just some of my doubts about it... I would really appreciate some authoritative word about the topic from more involved GHC developers... I mean especially about NCG future... Thanks! Karel On 01/ 3/14 09:35 AM, Jens Petersen wrote: > On 3 January 2014 03:10, Corey O'Connor > wrote: > > My interest is just to get involved somehow in the NCG. Starting a > new backend seemed reasonable only because I couldn't break > something that didn't exist. ;-) > > > Well a big +1 from me for armv7 NCG. > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs From svenpanne at gmail.com Fri Jan 3 11:58:53 2014 From: svenpanne at gmail.com (Sven Panne) Date: Fri, 3 Jan 2014 12:58:53 +0100 Subject: Building GHC head with clang on Mavericks In-Reply-To: References: <20131121.155430.37558810975251514.kazu@iij.ad.jp> <529708D6.9030009@gmail.com> <20131129.174512.682731754748116086.kazu@iij.ad.jp> Message-ID: 2014/1/2 Carter Schonwald : > it looks like their work around is using ## rather than /**/ Well, actually lens is bypassing the problem by using cpphs, not the C preprocessor. :-P OpenGLRaw is part of the Haskell Platform, and cpphs is not, so I can't simply depend on it. (Licensing issues IIRC?) "Don't do that" is not an option, either, at least not until the binding is auto-generated. If I see this correctly, I really have to do some preprocessor magic (slightly simplified output): ----------------------------------------------------------------------------- svenpanne at svenpanne:~$ cat preprocess.hs #define FOO(x) bla/**/x "x" #define BAR(x) bla##x #x FOO(baz) BAR(boo) svenpanne at svenpanne:~$ gcc -traditional -E -x c preprocess.hs blabaz "baz" bla##boo #boo svenpanne at svenpanne:~$ gcc -E -x c preprocess.hs bla baz "x" blaboo "boo" svenpanne at svenpanne:~$ clang -traditional -E -x c preprocess.hs bla baz "x" bla##boo #boo svenpanne at svenpanne:~$ clang -E -x c preprocess.hs bla baz "x" blaboo "boo" ----------------------------------------------------------------------------- If -traditional is not used, things are simple and consistent, and we can simply use ## and #. Alas, -traditional *is* used, so we can't use ## and # with gcc an we are out of luck with clang. This really sucks, and I consider the clang -traditional behavior a bug: How can you do concatenation/stringification with clang -traditional? One can detect clang via defined(__clang__) and the absence of -traditional via defined(__STDC__), but this doesn't really help here. Any suggestions? I am testing with a local clang 3.4 version (trunk 193323), but I am not sure if this matters. From simonpj at microsoft.com Fri Jan 3 12:37:54 2014 From: simonpj at microsoft.com (Simon Peyton-Jones) Date: Fri, 3 Jan 2014 12:37:54 +0000 Subject: ticket for adding ARM backend to NCG? In-Reply-To: <52C69DDC.5090009@centrum.cz> References: <20131223130759.463580657bd05f4bca3a725c@mega-nerd.com> <52C69DDC.5090009@centrum.cz> Message-ID: <59543203684B2244980D7E4057D5FBC148704C6A@DB3EX14MBXC306.europe.corp.microsoft.com> | I've been tinkering with ARM NCG idea for quite some time now, but | honestly I've been always in doubts if it's the best way for GHC at all. 
| I've thought that the plan was to kind of move out of NCG to LLVM based | backends and I've though that although this plan may be kind of stuck | now, it's still on the table. I have not been following the ARM and LLVM threads very closely, but here's my take: * LLVM is (I hope) very much on the table. LLVM itself is a well-resourced project, and we can expect it to continue to exist. We should piggy-back on all the hard work that is going into it. * But using LLVM has some disadvantages. a) it imposes a dependency on LLVM b) it makes compilation slower c) we play some efficiency tricks (notably "tables next to code") that LLVM can't play (yet). I think. So GHC currently aims to have a built-in NCG for popular platforms, and to rely on LLVM for more esoteric platforms and also for superior optimisation. Is this still a sensible policy? Maybe you can articulate your doubts on the ARM NCG? Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Karel | Gardas | Sent: 03 January 2014 11:24 | To: Jens Petersen | Cc: ghc-devs at haskell.org | Subject: Re: ticket for adding ARM backend to NCG? | | | Guys, | | I've been tinkering with ARM NCG idea for quite some time now, but | honestly I've been always in doubts if it's the best way for GHC at all. | I've thought that the plan was to kind of move out of NCG to LLVM based | backends and I've though that although this plan may be kind of stuck | now, it's still on the table. | | Yes, I know that GHC is volunteering effort so if someone comes and asks | for an ARM NCG implementation merge it'll be probably done in some time, | but I'm not sure if it's what's the most welcome at the end. | | Just some of my doubts about it... | | I would really appreciate some authoritative word about the topic from | more involved GHC developers... I mean especially about NCG future... | | Thanks! | Karel | | On 01/ 3/14 09:35 AM, Jens Petersen wrote: | > On 3 January 2014 03:10, Corey O'Connor > wrote: | > | > My interest is just to get involved somehow in the NCG. Starting a | > new backend seemed reasonable only because I couldn't break | > something that didn't exist. ;-) | > | > | > Well a big +1 from me for armv7 NCG. | > | > | > _______________________________________________ | > ghc-devs mailing list | > ghc-devs at haskell.org | > http://www.haskell.org/mailman/listinfo/ghc-devs | | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | http://www.haskell.org/mailman/listinfo/ghc-devs From karel.gardas at centrum.cz Fri Jan 3 12:45:25 2014 From: karel.gardas at centrum.cz (Karel Gardas) Date: Fri, 03 Jan 2014 13:45:25 +0100 Subject: ticket for adding ARM backend to NCG? In-Reply-To: <59543203684B2244980D7E4057D5FBC148704C6A@DB3EX14MBXC306.europe.corp.microsoft.com> References: <20131223130759.463580657bd05f4bca3a725c@mega-nerd.com> <52C69DDC.5090009@centrum.cz> <59543203684B2244980D7E4057D5FBC148704C6A@DB3EX14MBXC306.europe.corp.microsoft.com> Message-ID: <52C6B0E5.10106@centrum.cz> On 01/ 3/14 01:37 PM, Simon Peyton-Jones wrote: > | I've been tinkering with ARM NCG idea for quite some time now, but > | honestly I've been always in doubts if it's the best way for GHC at all. > | I've thought that the plan was to kind of move out of NCG to LLVM based > | backends and I've though that although this plan may be kind of stuck > | now, it's still on the table. 
> > I have not been following the ARM and LLVM threads very closely, but here's my take: > > * LLVM is (I hope) very much on the table. LLVM itself is a well-resourced project, > and we can expect it to continue to exist. We should piggy-back on all the > hard work that is going into it. > > * But using LLVM has some disadvantages. > a) it imposes a dependency on LLVM > b) it makes compilation slower > c) we play some efficiency tricks (notably "tables next to code") that > LLVM can't play (yet). I think. > > So GHC currently aims to have a built-in NCG for popular platforms, and to rely on LLVM for more esoteric platforms and also for superior optimisation. This sounds indeed good. > Maybe you can articulate your doubts on the ARM NCG? My main doubt was to invest a lot of time into something which will be deprecated in near future (as ARM NCG will take some time to do) assuming GHC is switching to LLVM completely and deprecating NCG. Your policy stated above clears that. Thanks! Karel From simonpj at microsoft.com Fri Jan 3 13:27:44 2014 From: simonpj at microsoft.com (Simon Peyton-Jones) Date: Fri, 3 Jan 2014 13:27:44 +0000 Subject: Starting GHC development. In-Reply-To: References: Message-ID: <59543203684B2244980D7E4057D5FBC148704D05@DB3EX14MBXC306.europe.corp.microsoft.com> Chris It's been my New Year's resolution to stop being just a GHC user and become a GHC developer. Thank you. We need lots of help! The process you've followed looks right. (Edward is right to find the patch that made the change that you reverted, and ask its author.) I see you've also worked on #8602. By changing the status to patch, Austin should get to it in due course. Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Chris Heller Sent: 02 January 2014 19:17 To: ghc-devs at haskell.org Subject: Starting GHC development. Hello GHC devs. It's been my New Year's resolution to stop being just a GHC user and become a GHC developer. To that end, I've submitted my first patch to GHC (trac #8475 -- just a simple documentation fix). Nothing too earth shattering, but I figured this would be a good way to familiarize myself with the GHC workflow. I believe I've followed the instructions for working on GHC correctly. Please let me know if I have strayed. Looking forward to many more commits in the future. Happy New Year. Chris Heller -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Fri Jan 3 13:46:20 2014 From: simonpj at microsoft.com (Simon Peyton-Jones) Date: Fri, 3 Jan 2014 13:46:20 +0000 Subject: GHC Api In-Reply-To: <52C58160.6030800@gmail.com> References: <59543203684B2244980D7E4057D5FBC148702143@DB3EX14MBXC306.europe.corp.microsoft.com> <52C58160.6030800@gmail.com> Message-ID: <59543203684B2244980D7E4057D5FBC148704D2D@DB3EX14MBXC306.europe.corp.microsoft.com> | setSessionDynFlags loads the package database and does the necessary | processing to make packages available. We don't do that automatically, | because the client might want to add their own package flags to the | DynFlags between the calls to getSessionDynFlags and setSessionDynFlags. So it would be *OK* for runGhc to call setSessionDynFlags; but it might be a bit inefficient in the case you describe where the user adds their own package flags (which is uncommon). Correct? In that case, couldn't runGhc do the package initialisation thing, and we can perhaps provide a super-efficient variant of runGhc that doesn't do so for the reason you state? 
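Concretely, I am imagining something along the lines of the sketch below. It is only a sketch (untested), and the name runGhcSimple is invented purely for illustration; it just packages up the calls we have been discussing.

import GHC
import PrelNames ( pRELUDE_NAME )

-- Like runGhc, but also initialises the package state from the default
-- DynFlags (via setSessionDynFlags, which loads the package database)
-- and brings the Prelude into scope, so that simple clients need no
-- further set-up before calling runDecls/runStmt.
runGhcSimple :: Maybe FilePath -> Ghc a -> IO a
runGhcSimple mb_libdir body
  = runGhc mb_libdir $ do
      dflags <- getSessionDynFlags
      _ <- setSessionDynFlags dflags
      setContext [IIDecl (simpleImportDecl pRELUDE_NAME)]
      body

Clients that do want to add their own package flags, or a different initial context, would simply keep calling runGhc and setSessionDynFlags themselves. 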
That would make the common case simple. | I'm not all that familiar with the unsafeGlobalDynFlags stuff (that's | Ian's invention), but from looking at the code it looks like you | wouldn't need to call this if you were calling parseDynamicFlags. It | should be safe to call parseDynamicFlags with an empty set of flags to | parse. True but weird. The point is that, instead of parsing a string, runGhc creates a fresh empty DynFlags (in inigGhcMonad actually). Since this is an alternative to parsing a string, it should set the static thing too, just as the string-parsing route does (in parseDynamicFlagsFull, as you point out). I'll do this unless you or Ian object. | I think it's sensible to require a call to setContext to bring the | Prelude into scope. The client might want a different context, and | setContext isn't free, so we probably don't want to initialise a default | context. This is very similar to the first point above. Maybe runGhc can do common thing (initialise packages, import Prelude), with a variant that doesn't? What do others think? Simon | -----Original Message----- | From: Simon Marlow [mailto:marlowsd at gmail.com] | Sent: 02 January 2014 15:10 | To: Simon Peyton-Jones | Cc: ghc-devs | Subject: Re: GHC Api | | On 02/01/14 07:06, Simon Peyton-Jones wrote: | > Simon and othere | > | > Happy new year! | > | > When debugging Trac #8628 I wrote the following: | > | > main | > | > = do [libdir] <- getArgs | > | > ok <- runGhc (Just libdir) $ do | > | > dflags <- getSessionDynFlags -- (1) | > | > setSessionDynFlags dflags | > | > liftIO (setUnsafeGlobalDynFlags dflags) -- (2) | > | > setContext [IIDecl (simpleImportDecl pRELUDE_NAME)] -- (3) | > | > runDecls "data X = Y Int" | > | > runStmt "print True" -- (4) | > | > return () | > | > There are several odd things here | > | > 1.Why do I have to do this "getSessionDynFlags/setSessionDynFlags" | > thing. Seems bizarre. I just copied it from some other tests in | > ghc-api/. Is it necessary? If not, can we remove it from all tests? | | It's a sensible question given the naming of the functions. The API is | definitely clunky here, but there is a purpose to these calls. | setSessionDynFlags loads the package database and does the necessary | processing to make packages available. We don't do that automatically, | because the client might want to add their own package flags to the | DynFlags between the calls to getSessionDynFlags and setSessionDynFlags. | Incidentally you can find out some of this stuff from the Haddock docs, | e.g. look at the docs for setSessionDynFlags. | | > 2.Initially I didn't have that setUnsafeGlobalDynFlags call. But then | > I got | > | > T8628.exe: T8628.exe: panic! (the 'impossible' happened) | > | > (GHC version 7.7.20131228 for i386-unknown-mingw32): | > | > v_unsafeGlobalDynFlags: not initialised | > | > which is a particularly unhelpful message. It arose because I was | > using a GHC built with assertions on, and a warnPprTrace triggered. | > Since this could happen to anyone, would it make sense to make this | > part of runGhc and setSessionDynFlags? | | I'm not all that familiar with the unsafeGlobalDynFlags stuff (that's | Ian's invention), but from looking at the code it looks like you | wouldn't need to call this if you were calling parseDynamicFlags. It | should be safe to call parseDynamicFlags with an empty set of flags to | parse. | | > 3.Initially I didn't have that setContext call, and got a complaint | > that "Int is not in scope". 
I was expecting the Prelude to be
| > implicitly in scope. But I'm not sure where to fix that. Possibly
| > part of the setup in runGhc?
|
| I think it's sensible to require a call to setContext to bring the
| Prelude into scope. The client might want a different context, and
| setContext isn't free, so we probably don't want to initialise a
| default context.
|
| > 4. The runStmt should print something somewhere, but it doesn't. Why not?
|
| I've no idea! It does look like it should print something.
|
| Cheers,
| Simon
|
| > What do you think?
| >
| > Simon
| >

From tkn.akio at gmail.com  Fri Jan  3 14:20:38 2014
From: tkn.akio at gmail.com (Akio Takano)
Date: Fri, 3 Jan 2014 23:20:38 +0900
Subject: Extending fold/build fusion
Message-ID: 

Hi,

I have been thinking about how foldl' can be turned into a good consumer, and I came up with something that I thought would work. So I'd like to ask the ghc devs for opinions: does this idea look good, is it a known bad idea, is there a better way to do it, etc.

The main idea is to have an extended version of foldr:

-- | A mapping between @a@ and @b@.
data Wrap a b = Wrap (a -> b) (b -> a)

foldrW :: (forall e. Wrap (f e) (e -> b -> b))
       -> (a -> b -> b) -> b -> [a] -> b
foldrW (Wrap wrap unwrap) f z0 list0 = wrap go list0 z0
  where
    go = unwrap $ \list z' -> case list of
      []   -> z'
      x:xs -> f x $ wrap go xs z'

This allows the user to apply an arbitrary "worker-wrapper" transformation to the loop. Using this, foldl' can be defined as

newtype Simple b e = Simple { runSimple :: e -> b -> b }

foldl' :: (b -> a -> b) -> b -> [a] -> b
foldl' f initial xs = foldrW (Wrap wrap unwrap) g id xs initial
  where
    wrap (Simple s) e k a = k $ s e a
    unwrap u = Simple $ \e -> u e id
    g x next acc = next $! f acc x

The wrap and unwrap functions here ensure that foldl' gets compiled into a loop that returns a value of 'b', rather than a function 'b -> b', effectively un-CPS-transforming the loop.

I put preliminary code and some more explanation on Github:
https://github.com/takano-akio/ww-fusion

Thank you,
Takano Akio

From hellertime at gmail.com  Fri Jan  3 15:56:44 2014
From: hellertime at gmail.com (Chris Heller)
Date: Fri, 3 Jan 2014 10:56:44 -0500
Subject: Starting GHC development.
In-Reply-To: <59543203684B2244980D7E4057D5FBC148704D05@DB3EX14MBXC306.europe.corp.microsoft.com>
References: <59543203684B2244980D7E4057D5FBC148704D05@DB3EX14MBXC306.europe.corp.microsoft.com>
Message-ID: 

> Thank you. We need lots of help!

I can't expect to jump right in and just become one with 20 years of development, but I do plan on whacking away at the low-hanging fruit until it all starts making sense.

-Chris

From ggreif at gmail.com  Fri Jan  3 17:33:08 2014
From: ggreif at gmail.com (Gabor Greif)
Date: Fri, 3 Jan 2014 18:33:08 +0100
Subject: Type-level reasoning ability lost for TypeLits?
Message-ID: 

Hi devs,

with recent iterations of GHC.TypeLits (HEAD) I am struggling to get something simple working. I have

> data Number nat = KnownNat nat => Number !(Proxy nat)

and want to write

> addNumbers :: Number a -> Number b -> Maybe (Number (a + b))

Unfortunately I cannot find a way to create the necessary KnownNat (a + b) constraint. Declaring the function thus

> addNumbers :: KnownNat (a + b) => Number a -> Number b -> Maybe (Number (a + b))

only dodges around the problem.

Also I am wondering where the ability to perform a type equality check went, i.e. I cannot find the relevant functionality to obtain

> sameNumber :: Number a -> Number b -> Maybe (a :~: b)

I guess there should be some TestEquality instance (for Proxy Nat?, is this possible at all), but I cannot find it. Same applies for Symbols.

Any hints?

Thanks and cheers,

Gabor

From fuuzetsu at fuuzetsu.co.uk  Fri Jan  3 18:43:33 2014
From: fuuzetsu at fuuzetsu.co.uk (Mateusz Kowalczyk)
Date: Fri, 03 Jan 2014 18:43:33 +0000
Subject: Starting GHC development.
In-Reply-To: <59543203684B2244980D7E4057D5FBC148704D05@DB3EX14MBXC306.europe.corp.microsoft.com>
References: <59543203684B2244980D7E4057D5FBC148704D05@DB3EX14MBXC306.europe.corp.microsoft.com>
Message-ID: <52C704D5.4050606@fuuzetsu.co.uk>

On 03/01/14 13:27, Simon Peyton-Jones wrote:
> [snip]
> Thank you. We need lots of help!
> [snip]

While I hate to interrupt this thread, I think this is a good chance to mention something.

I think the big issue for joining GHC development is the lack of communication on the mailing list. There are many topics where a person has a problem with the GHC tree (can't validate/build, some tests are failing), posts to ghc-devs seeking help and never gets a reply. This is very discouraging and often makes it outright impossible to contribute. 
> > An easy example is the failing tests one: unfortunately some tests are > known to fail, but they are only known to fail to existing GHC devs. A > new person tries to validate clean tree, gets test failures, asks for > help on GHC devs, doesn't get any, gives up. We should explicitly say somewhere that pinging for an answer is okay. Sometimes the key persons (for a potential answer) are out of town or too busy, and the question gets buried. Repeating the answer a few days later raises awareness and has higher chance to succeed. This is how other technical lists (e.g. LLVM's) work. Cheers, Gabor > > Is there any better way to get through than ghc-devs? Even myself I'd > love to get started but if I can't get help even getting the ?clean? > tree to a state where I'm confident it's not a problem with my machine, > how am I to write patches for anything? A more serious example is that > the work I did over summer on Haddock still hasn't been pushed in. Why? > Because neither Simon Hengel nor myself can ensure that we haven't > broken anything as neither of use gets a clean validate. I have in fact > asked for help recently with this but to no avail and I do know Simon > also sought help in the past to no avail. I have also tried to join the > development quite a few months in the past now but due to failing tests > on validate and lack of help, I had to give up on that. > > Please guys, try to increase responsiveness to posts on this list. It's > very easy to scroll down in your mail client and see just how many > threads never got a single reply. > > -- > Mateusz K. > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > From iavor.diatchki at gmail.com Fri Jan 3 18:51:35 2014 From: iavor.diatchki at gmail.com (Iavor Diatchki) Date: Fri, 3 Jan 2014 10:51:35 -0800 Subject: Type-level reasoning ability lost for TypeLits? In-Reply-To: References: Message-ID: Hi Gabor, On Fri, Jan 3, 2014 at 9:33 AM, Gabor Greif wrote: > Hi devs, > > with recent iterations of GHC.TypeLits (HEAD) I am struggling to get > something simple working. I have > > > data Number nat = KnownNat nat => Number !(Proxy nat) > > and want to write > > > addNumbers :: Number a -> Number b -> Maybe (Number (a + b)) > > Unfortunately I cannot find a way to create the necessary KnownNat (a > + b) constraint. > > Indeed, there is no way to construct `KnownNumber (a + b)` from `(Known a, Known b)`. This is not something that we lost, it was just never implemented. We could make something like it work, I think, but it'd make things a bit more complex: the representation of `KnownNumber` dictionaries would have to be expressions, rather than a simple constant. I'd be inclined to leave this as is for now---let's see what we can do with the current system, before we add more functionality. Declaring the function thus > > > addNumbers :: KnownNat (a + b) => Number a -> Number b -> Maybe (Number > (a + b)) > > only dodges the problem around. > Dodging problems is good! :-) I don't fully understand from the type what the function is supposed to do, but I'd write something like this: addNumbers :: (KnownNat a, KnownNat b) => (Integer -> Integer -> Bool) -- ^ Some constraint? Proxy a -> Proxy b -> Maybe (Proxy (a + b)) addNumber p x y = do guard (p (natVal x) (natVal y)) return Proxy > > Also I am wondering where the ability to perform a type equality check > went. I.e. 
I cannot find the relevant functionality to obtain > > > sameNumber :: Number a -> Number b -> Maybe (a :~: b) > > I guess there should be some TestEquality instance (for Proxy Nat?, is > this possible at all), but I cannot find it. Same applies for Symbols. > > Ah yes, I thought that this was supposed to be added to some other library, but I guess that never happened. It was implemented like this, if you need it right now. sameNumber :: (KnownNat a, KnownNat b) => Proxy a -> Proxy b -> Maybe (a :~: b) sameNumber x y | natVal x == natVal y = Just (unsafeCoerce Refl) | otherwise = Nothing This doesn't fit the pattern for the `TestEquality` class (due to the constraints on the parameters), so perhaps I'll add it back to GHC.TypeLits. -Iavor -------------- next part -------------- An HTML attachment was scrubbed... URL: From robstewart57 at gmail.com Fri Jan 3 19:06:57 2014 From: robstewart57 at gmail.com (Rob Stewart) Date: Fri, 3 Jan 2014 19:06:57 +0000 Subject: ticket for adding ARM backend to NCG? In-Reply-To: <59543203684B2244980D7E4057D5FBC148704C6A@DB3EX14MBXC306.europe.corp.microsoft.com> References: <20131223130759.463580657bd05f4bca3a725c@mega-nerd.com> <52C69DDC.5090009@centrum.cz> <59543203684B2244980D7E4057D5FBC148704C6A@DB3EX14MBXC306.europe.corp.microsoft.com> Message-ID: On 3 January 2014 12:37, Simon Peyton-Jones wrote: > * But using LLVM has some disadvantages. > c) we play some efficiency tricks (notably "tables next to code") that > LLVM can't play (yet). I think. In fact, this could well be implemented in the GHC 7.10, as this has been committed in LLVM on 15th September: http://www.haskell.org/pipermail/ghc-devs/2013-September/002565.html Implementing "tables next to code" in the LLVM IR generation may be something to get one's teeth into in time for 7.10 ? Carter: was this discussed further on #haskell-llvm ? -- Rob > | -----Original Message----- > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Karel > | Gardas > | Sent: 03 January 2014 11:24 > | To: Jens Petersen > | Cc: ghc-devs at haskell.org > | Subject: Re: ticket for adding ARM backend to NCG? > | > | > | Guys, > | > | I've been tinkering with ARM NCG idea for quite some time now, but > | honestly I've been always in doubts if it's the best way for GHC at all. > | I've thought that the plan was to kind of move out of NCG to LLVM based > | backends and I've though that although this plan may be kind of stuck > | now, it's still on the table. > | > | Yes, I know that GHC is volunteering effort so if someone comes and asks > | for an ARM NCG implementation merge it'll be probably done in some time, > | but I'm not sure if it's what's the most welcome at the end. > | > | Just some of my doubts about it... > | > | I would really appreciate some authoritative word about the topic from > | more involved GHC developers... I mean especially about NCG future... > | > | Thanks! > | Karel > | > | On 01/ 3/14 09:35 AM, Jens Petersen wrote: > | > On 3 January 2014 03:10, Corey O'Connor | > > wrote: > | > > | > My interest is just to get involved somehow in the NCG. Starting a > | > new backend seemed reasonable only because I couldn't break > | > something that didn't exist. ;-) > | > > | > > | > Well a big +1 from me for armv7 NCG. 
> | > > | > > | > _______________________________________________ > | > ghc-devs mailing list > | > ghc-devs at haskell.org > | > http://www.haskell.org/mailman/listinfo/ghc-devs > | > | _______________________________________________ > | ghc-devs mailing list > | ghc-devs at haskell.org > | http://www.haskell.org/mailman/listinfo/ghc-devs > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs From fuuzetsu at fuuzetsu.co.uk Fri Jan 3 19:15:01 2014 From: fuuzetsu at fuuzetsu.co.uk (Mateusz Kowalczyk) Date: Fri, 03 Jan 2014 19:15:01 +0000 Subject: Starting GHC development. In-Reply-To: References: <59543203684B2244980D7E4057D5FBC148704D05@DB3EX14MBXC306.europe.corp.microsoft.com> <52C704D5.4050606@fuuzetsu.co.uk> Message-ID: <52C70C35.7000207@fuuzetsu.co.uk> On 03/01/14 18:50, Gabor Greif wrote: > On 1/3/14, Mateusz Kowalczyk wrote: >> On 03/01/14 13:27, Simon Peyton-Jones wrote: >>> [snip] >>> Thank you. We need lots of help! >>> [snip] >> >> While I hate to interrupt this thread, I think this is a good chance to >> mention something. >> >> I think the big issue for joining GHC development is the lack of >> communication on the mailing list. There are many topics where a person >> has a problem with GHC tree (can't validate/build, some tests are >> failing), posts to GHC devs seeking help and never gets a reply. This is >> very discouraging and often makes it outright impossible to contribute. >> >> An easy example is the failing tests one: unfortunately some tests are >> known to fail, but they are only known to fail to existing GHC devs. A >> new person tries to validate clean tree, gets test failures, asks for >> help on GHC devs, doesn't get any, gives up. > > We should explicitly say somewhere that pinging for an answer is okay. > Sometimes the key persons (for a potential answer) are out of town or > too busy, and the question gets buried. > > Repeating the answer a few days later raises awareness and has higher > chance to succeed. This is how other technical lists (e.g. LLVM's) > work. > > Cheers, > > Gabor > While bumping the thread might help, I don't think people missing it is always the case. Refer to Carter's recent e-mail about something very important: when is 7.8 finally happening. It was pinged 9 days later by Kazu and still no replies! In the end he had to make another thread nearly half a month after his initial one and directly CC some people to get any output? I think it's more about ?I'm not 100% sure here so I won't say anything? which is terrible for newcomers because to them it seems like everyone ignored their thread. For a newcomer, even ?did you try make maintainer-clean? might be helpful. At least they don't feel ignored. -- Mateusz K. From igloo at earth.li Fri Jan 3 19:34:23 2014 From: igloo at earth.li (Ian Lynagh) Date: Fri, 3 Jan 2014 19:34:23 +0000 Subject: GHC Api In-Reply-To: <59543203684B2244980D7E4057D5FBC148704B22@DB3EX14MBXC306.europe.corp.microsoft.com> References: <59543203684B2244980D7E4057D5FBC148702143@DB3EX14MBXC306.europe.corp.microsoft.com> <52C58160.6030800@gmail.com> <20140102182916.GA5086@matrix.chaos.earth.li> <59543203684B2244980D7E4057D5FBC148704B22@DB3EX14MBXC306.europe.corp.microsoft.com> Message-ID: <20140103193422.GA15810@matrix.chaos.earth.li> On Fri, Jan 03, 2014 at 10:19:02AM +0000, Simon Peyton-Jones wrote: > | Is this with a statically linked or dynamically linked GHC? > > I don't know. How would I find out? 
(It's the one built by validate.) > > You are asking about GHC, but I guess there's also the question of whether the test program itself is statically or dynamically linked. Oh, yes, sorry, I was thinking this was in ghci for some reason. You're right that it's the test program we need to know about. > Why would static/dynamic linking make a difference? That's very confusing! With dynamic linking, there will be one shared copy of base, and in particular one shared stdout buffer. The runtime will flush that buffer when the program exits. With static linking, you'll be loading a second copy of base in which the statement is evalauted, and that base will have a separate stdout buffer. GHCi flushes this when appropriate by calling flushInterpBuffers. Thanks Ian From ggreif at gmail.com Fri Jan 3 20:04:43 2014 From: ggreif at gmail.com (Gabor Greif) Date: Fri, 3 Jan 2014 21:04:43 +0100 Subject: Type-level reasoning ability lost for TypeLits? In-Reply-To: References: Message-ID: On 1/3/14, Iavor Diatchki wrote: > Hi Gabor, Hi Iavor, thanks for replying promptly! > > > On Fri, Jan 3, 2014 at 9:33 AM, Gabor Greif wrote: > >> Hi devs, >> >> with recent iterations of GHC.TypeLits (HEAD) I am struggling to get >> something simple working. I have >> >> > data Number nat = KnownNat nat => Number !(Proxy nat) >> >> and want to write >> >> > addNumbers :: Number a -> Number b -> Maybe (Number (a + b)) >> >> Unfortunately I cannot find a way to create the necessary KnownNat (a >> + b) constraint. >> >> > Indeed, there is no way to construct `KnownNumber (a + b)` from `(Known a, > Known b)`. This is not something that we lost, it was just never > implemented. We could make something like it work, I think, but it'd make > things a bit more complex: the representation of `KnownNumber` dictionaries > would have to be expressions, rather than a simple constant. Edwardkmettian dictionary-coercion tricks might help, but: > instance (KnownNat a, KnownNat b) => KnownNat (a + b) testTypeNats.lhs:1:40: Illegal type synonym family application in instance: a + b In the instance declaration for 'KnownNat (a + b)' So we have more fundamental problems here. Why is this illegal? > > I'd be inclined to leave this as is for now---let's see what we can do with > the current system, before we add more functionality. Okay, I'll ponder a bit how such a thing would look like. > > > Declaring the function thus >> >> > addNumbers :: KnownNat (a + b) => Number a -> Number b -> Maybe (Number >> (a + b)) >> >> only dodges the problem around. >> > > Dodging problems is good! :-) I don't fully understand from the type what > the function is supposed to do, but I'd write something like this: > > addNumbers :: (KnownNat a, KnownNat b) > => (Integer -> Integer -> Bool) -- ^ Some constraint? > Proxy a -> Proxy b -> Maybe (Proxy (a + b)) > addNumber p x y = > do guard (p (natVal x) (natVal y)) > return Proxy I souped this up thus: > {-# LANGUAGE TypeSynonymInstances, TypeOperators, GADTs #-} > import GHC.TypeLits > import Data.Proxy > import Control.Monad > data Number nat = KnownNat nat => Number !(Proxy nat) > addNumbers :: Number a -> Number b -> Maybe (Number (a + b)) > (Number a at Proxy) `addNumbers` (Number b at Proxy) > = case addNumber (\_ _-> True) a b of Just p -> Just $ Number p > addNumber :: (KnownNat a, KnownNat b) > => (Integer -> Integer -> Bool) -- ^ Some constraint? 
> -> Proxy a -> Proxy b -> Maybe (Proxy (a + b)) > addNumber p x y = > do guard (p (natVal x) (natVal y)) > return Proxy And I get an error where I wrap the Number constructor around the resulting proxy: testTypeNats.lhs:11:60: Could not deduce (KnownNat (a + b)) arising from a use of 'Number' from the context (KnownNat a) bound by a pattern with constructor Number :: forall (nat :: Nat). KnownNat nat => Proxy nat -> Number nat, in an equation for 'addNumbers' at testTypeNats.lhs:10:4-17 or from (KnownNat b) bound by a pattern with constructor Number :: forall (nat :: Nat). KnownNat nat => Proxy nat -> Number nat, in an equation for 'addNumbers' at testTypeNats.lhs:10:34-47 In the second argument of '($)', namely 'Number p' In the expression: Just $ Number p In a case alternative: Just p -> Just $ Number p Getting the sum proxy is not the problem. > > > > >> >> Also I am wondering where the ability to perform a type equality check >> went. I.e. I cannot find the relevant functionality to obtain >> >> > sameNumber :: Number a -> Number b -> Maybe (a :~: b) >> >> I guess there should be some TestEquality instance (for Proxy Nat?, is >> this possible at all), but I cannot find it. Same applies for Symbols. >> >> > Ah yes, I thought that this was supposed to be added to some other library, > but I guess that never happened. It was implemented like this, if you need > it right now. > > sameNumber :: (KnownNat a, KnownNat b) > => Proxy a -> Proxy b -> Maybe (a :~: b) > sameNumber x y > | natVal x == natVal y = Just (unsafeCoerce Refl) > | otherwise = Nothing > > This doesn't fit the pattern for the `TestEquality` class (due to the > constraints on the parameters), so perhaps I'll add it back to > GHC.TypeLits. Yeah, this would be helpful! It does not matter whether the TestEquality interface is there, as I can define that for my Number data type. But I don't want to sprinkle my code with unsafeCoerce! (Btw, these functions should be named: sameNat, sameSymbol.) Thanks again, Gabor > > -Iavor > From howard_b_golden at yahoo.com Fri Jan 3 20:57:18 2014 From: howard_b_golden at yahoo.com (Howard B. Golden) Date: Fri, 3 Jan 2014 12:57:18 -0800 (PST) Subject: Idea for improving communication between devs and potential devs Message-ID: <1388782638.65533.YahooMailNeo@web164004.mail.gq1.yahoo.com> Hi, I'd like to get involved in developing, but I recognize the learning curve involved. To get started I'd like to improve the Trac wiki documentation. Part of this would include additional documentation of less-documented parts of the compiler and RTS. In addition, I'd like to start some sort of "what's new" that boils down the GHC Dev mailing list discussion as LWN does for the Linux kernel mailing list. I don't imagine that I can do this all by myself, but I hope this idea would resonate with others looking to get started as well. This is meant to be more frequent and more detailed than what HCAR does for GHC now, though I don't expect anyone can do it weekly. Please let me know what you think about this idea. I'm open to any suggestions for improving it also. Howard B. Golden Northridge, CA, USA From hellertime at gmail.com Fri Jan 3 21:11:40 2014 From: hellertime at gmail.com (Chris Heller) Date: Fri, 3 Jan 2014 16:11:40 -0500 Subject: Idea for improving communication between devs and potential devs Message-ID: I think a weekly summary like what LWN provides would be very valuable. 
Perhaps there is an opportunity to piggy-back this on the work of the Haskell Weekly News project ( http://contemplatecode.blogspot.com/search/label/HWN). -Chris -------------- next part -------------- An HTML attachment was scrubbed... URL: From howard_b_golden at yahoo.com Fri Jan 3 21:28:18 2014 From: howard_b_golden at yahoo.com (Howard B. Golden) Date: Fri, 3 Jan 2014 13:28:18 -0800 (PST) Subject: Idea for improving communication between devs and potential devs Message-ID: <1388784498.37611.YahooMailNeo@web164006.mail.gq1.yahoo.com> Chris, Thanks for the pointer to HWN. I wasn't aware of it before. I can certainly send things to the author which may be of interest to readers. In addition I like incorporating the updates into the GHC Devs wiki to make it easier to find them that way. We each have our preferred way of keeping up-to-date. Howard From carter.schonwald at gmail.com Fri Jan 3 21:42:02 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Fri, 3 Jan 2014 16:42:02 -0500 Subject: ticket for adding ARM backend to NCG? In-Reply-To: References: <20131223130759.463580657bd05f4bca3a725c@mega-nerd.com> <52C69DDC.5090009@centrum.cz> <59543203684B2244980D7E4057D5FBC148704C6A@DB3EX14MBXC306.europe.corp.microsoft.com> Message-ID: yes, but the conclusion was its unclear if it makes sense, also thats orthogonal to whether or not someone decides to do an arm NCG :) On Fri, Jan 3, 2014 at 2:06 PM, Rob Stewart wrote: > On 3 January 2014 12:37, Simon Peyton-Jones wrote: > > > * But using LLVM has some disadvantages. > > c) we play some efficiency tricks (notably "tables next to code") that > > LLVM can't play (yet). I think. > > In fact, this could well be implemented in the GHC 7.10, as this has > been committed in LLVM on 15th September: > http://www.haskell.org/pipermail/ghc-devs/2013-September/002565.html > > Implementing "tables next to code" in the LLVM IR generation may be > something to get one's teeth into in time for 7.10 ? > > Carter: was this discussed further on #haskell-llvm ? > > -- > Rob > > > > | -----Original Message----- > > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of > Karel > > | Gardas > > | Sent: 03 January 2014 11:24 > > | To: Jens Petersen > > | Cc: ghc-devs at haskell.org > > | Subject: Re: ticket for adding ARM backend to NCG? > > | > > | > > | Guys, > > | > > | I've been tinkering with ARM NCG idea for quite some time now, but > > | honestly I've been always in doubts if it's the best way for GHC at > all. > > | I've thought that the plan was to kind of move out of NCG to LLVM based > > | backends and I've though that although this plan may be kind of stuck > > | now, it's still on the table. > > | > > | Yes, I know that GHC is volunteering effort so if someone comes and > asks > > | for an ARM NCG implementation merge it'll be probably done in some > time, > > | but I'm not sure if it's what's the most welcome at the end. > > | > > | Just some of my doubts about it... > > | > > | I would really appreciate some authoritative word about the topic from > > | more involved GHC developers... I mean especially about NCG future... > > | > > | Thanks! > > | Karel > > | > > | On 01/ 3/14 09:35 AM, Jens Petersen wrote: > > | > On 3 January 2014 03:10, Corey O'Connor > | > > wrote: > > | > > > | > My interest is just to get involved somehow in the NCG. Starting > a > > | > new backend seemed reasonable only because I couldn't break > > | > something that didn't exist. ;-) > > | > > > | > > > | > Well a big +1 from me for armv7 NCG. 
> > | > > > | > > > | > _______________________________________________ > > | > ghc-devs mailing list > > | > ghc-devs at haskell.org > > | > http://www.haskell.org/mailman/listinfo/ghc-devs > > | > > | _______________________________________________ > > | ghc-devs mailing list > > | ghc-devs at haskell.org > > | http://www.haskell.org/mailman/listinfo/ghc-devs > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From the.dead.shall.rise at gmail.com Fri Jan 3 21:53:02 2014 From: the.dead.shall.rise at gmail.com (Mikhail Glushenkov) Date: Fri, 3 Jan 2014 22:53:02 +0100 Subject: ticket for adding ARM backend to NCG? In-Reply-To: References: <20131223130759.463580657bd05f4bca3a725c@mega-nerd.com> <52C69DDC.5090009@centrum.cz> <59543203684B2244980D7E4057D5FBC148704C6A@DB3EX14MBXC306.europe.corp.microsoft.com> Message-ID: Hi, On Fri, Jan 3, 2014 at 8:06 PM, Rob Stewart wrote: > On 3 January 2014 12:37, Simon Peyton-Jones wrote: > >> * But using LLVM has some disadvantages. >> c) we play some efficiency tricks (notably "tables next to code") that >> LLVM can't play (yet). I think. > > In fact, this could well be implemented in the GHC 7.10, as this has > been committed in LLVM on 15th September: > http://www.haskell.org/pipermail/ghc-devs/2013-September/002565.html >From my reading of the documentation for this feature it seems like for GHC to take advantage of it LLVM also needs to implement global symbol offsets [1]. I've emailed the author of the function prefix data patch, but he didn't respond. [1] http://lists.cs.uiuc.edu/pipermail/llvmdev/2013-April/061511.html -- () ascii ribbon campaign - against html e-mail /\ www.asciiribbon.org - against proprietary attachments From hvriedel at gmail.com Fri Jan 3 22:24:45 2014 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Fri, 03 Jan 2014 23:24:45 +0100 Subject: Idea for improving communication between devs and potential devs In-Reply-To: <1388782638.65533.YahooMailNeo@web164004.mail.gq1.yahoo.com> (Howard B. Golden's message of "Fri, 3 Jan 2014 12:57:18 -0800 (PST)") References: <1388782638.65533.YahooMailNeo@web164004.mail.gq1.yahoo.com> Message-ID: <87k3eg3dqq.fsf@gmail.com> On 2014-01-03 at 21:57:18 +0100, Howard B. Golden wrote: > In addition, I'd like to start some sort of "what's new" that boils > down the GHC Dev mailing list discussion as LWN does for the Linux > kernel mailing list. maybe https://ghc.haskell.org/trac/ghc/blog could be revived by that...? From yo.eight at gmail.com Fri Jan 3 23:13:40 2014 From: yo.eight at gmail.com (Yorick Laupa) Date: Sat, 4 Jan 2014 00:13:40 +0100 Subject: Tuple predicates in Template Haskell Message-ID: Hi, I try to make my way through #7021 [1]. Unfortunately, there is nothing in the ticket about what should be expected from the code given as example. I came with an implementation and I would like feedback from you guys. So, considering this snippet: -- {-# LANGUAGE ConstraintKinds #-} type IOable a = (Show a, Read a) foo :: IOable a => a foo = undefined -- This is what I got now when pretty-printing TH.Info after reify "foo" call: VarI Tuple.foo (ForallT [PlainTV a_1627398594] [TupleP 2 [AppT (ConT GHC.Show.Show) (VarT a_1627398594),AppT (ConT GHC.Read.Read) (VarT a_1627398594)]] (VarT a_1627398594)) Nothing (Fixity 9 InfixL) Does that sound right to you ? 
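For reference, the dump can be reproduced with a splice roughly like the one below (only a sketch; the module is called Tuple to match the qualified name in the output, and using show rather than TH's pprint only changes the presentation):

{-# LANGUAGE TemplateHaskell, ConstraintKinds #-}
module Tuple where

import Language.Haskell.TH

type IOable a = (Show a, Read a)

foo :: IOable a => a
foo = undefined

-- Reify 'foo' at compile time and print the raw Info value; this is
-- what the VarI dump above corresponds to.
$(do info <- reify 'foo
     runIO (print info)
     return [])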
Thanks for your time -- Yorick [1] https://ghc.haskell.org/trac/ghc/ticket/7021 -------------- next part -------------- An HTML attachment was scrubbed... URL: From howard_b_golden at yahoo.com Fri Jan 3 23:21:48 2014 From: howard_b_golden at yahoo.com (Howard B. Golden) Date: Fri, 3 Jan 2014 15:21:48 -0800 (PST) Subject: Idea for improving communication between devs and potential devs In-Reply-To: <87k3eg3dqq.fsf@gmail.com> References: <1388782638.65533.YahooMailNeo@web164004.mail.gq1.yahoo.com> <87k3eg3dqq.fsf@gmail.com> Message-ID: <1388791308.69375.YahooMailNeo@web164003.mail.gq1.yahoo.com> Herbert, A revived blog would be great as well, if the Simons and other devs?have time to write it. I certainly don't know enough to write it myself, but I can collate what others are talking about and maybe agreeing about on the mailing list. I think what I can produce would work better as wiki entries, rather than a blog, so it can have both a topical and chronological access path, but I am open to the blog approach as well if others will write content too. Howard ----- Original Message ----- From: Herbert Valerio Riedel To: Howard B. Golden Cc: "ghc-devs at haskell.org" Sent: Friday, January 3, 2014 2:24 PM Subject: Re: Idea for improving communication between devs and potential devs On 2014-01-03 at 21:57:18 +0100, Howard B. Golden wrote: > In addition, I'd like to start some sort of "what's new" that boils > down the GHC Dev mailing list discussion as LWN does for the Linux > kernel mailing list. maybe ? https://ghc.haskell.org/trac/ghc/blog could be revived by that...? From ggreif at gmail.com Fri Jan 3 23:25:42 2014 From: ggreif at gmail.com (Gabor Greif) Date: Sat, 4 Jan 2014 00:25:42 +0100 Subject: [commit: packages/base] master: Add functions to compare Nat and Symbol types for equality. (c5c8c4d) In-Reply-To: <20140103231144.D8ED52406B@ghc.haskell.org> References: <20140103231144.D8ED52406B@ghc.haskell.org> Message-ID: Iavor, this is great! Just out of curiosity, you import TestEquality but never reference it. Is this an oversight, should I nuke it? Cheers, Gabor On 1/4/14, git at git.haskell.org wrote: > Repository : ssh://git at git.haskell.org/base > > On branch : master > Link : > http://ghc.haskell.org/trac/ghc/changeset/c5c8c4dfbdc8493bcfaa804751eff2a9a41cc07a/base > >>--------------------------------------------------------------- > > commit c5c8c4dfbdc8493bcfaa804751eff2a9a41cc07a > Author: Iavor S. Diatchki > Date: Fri Jan 3 15:11:34 2014 -0800 > > Add functions to compare Nat and Symbol types for equality. > > >>--------------------------------------------------------------- > > c5c8c4dfbdc8493bcfaa804751eff2a9a41cc07a > GHC/TypeLits.hs | 23 ++++++++++++++++++++++- > 1 file changed, 22 insertions(+), 1 deletion(-) > > diff --git a/GHC/TypeLits.hs b/GHC/TypeLits.hs > index f3ba70e..129beb3 100644 > --- a/GHC/TypeLits.hs > +++ b/GHC/TypeLits.hs > @@ -26,6 +26,8 @@ module GHC.TypeLits > , KnownSymbol, symbolVal > , SomeNat(..), SomeSymbol(..) > , someNatVal, someSymbolVal > + , sameNat, sameSymbol > + > > -- * Functions on type nats > , type (<=), type (<=?), type (+), type (*), type (^), type (-) > @@ -40,7 +42,8 @@ import GHC.Read(Read(..)) > import GHC.Prim(magicDict) > import Data.Maybe(Maybe(..)) > import Data.Proxy(Proxy(..)) > -import Data.Type.Equality(type (==)) > +import Data.Type.Equality(type (==), TestEquality(..), (:~:)(Refl)) > +import Unsafe.Coerce(unsafeCoerce) > > -- | (Kind) This is the kind of type-level natural numbers. 
> data Nat > @@ -167,6 +170,23 @@ type family (m :: Nat) ^ (n :: Nat) :: Nat > type family (m :: Nat) - (n :: Nat) :: Nat > > > +-------------------------------------------------------------------------------- > + > +-- | We either get evidence that this function was instantiated with the > +-- same type-level numbers, or 'Nothing'. > +sameNat :: (KnownNat a, KnownNat b) => > + Proxy a -> Proxy b -> Maybe (a :~: b) > +sameNat x y > + | natVal x == natVal y = Just (unsafeCoerce Refl) > + | otherwise = Nothing > + > +-- | We either get evidence that this function was instantiated with the > +-- same type-level symbols, or 'Nothing'. > +sameSymbol :: (KnownSymbol a, KnownSymbol b) => > + Proxy a -> Proxy b -> Maybe (a :~: b) > +sameSymbol x y > + | symbolVal x == symbolVal y = Just (unsafeCoerce Refl) > + | otherwise = Nothing > > -------------------------------------------------------------------------------- > -- PRIVATE: > @@ -187,3 +207,4 @@ withSSymbol :: (KnownSymbol a => Proxy a -> b) > -> SSymbol a -> Proxy a -> b > withSSymbol f x y = magicDict (WrapS f) x y > > + > > _______________________________________________ > ghc-commits mailing list > ghc-commits at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-commits > From carter.schonwald at gmail.com Sat Jan 4 00:08:03 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Fri, 3 Jan 2014 19:08:03 -0500 Subject: Idea for improving communication between devs and potential devs In-Reply-To: <1388791308.69375.YahooMailNeo@web164003.mail.gq1.yahoo.com> References: <1388782638.65533.YahooMailNeo@web164004.mail.gq1.yahoo.com> <87k3eg3dqq.fsf@gmail.com> <1388791308.69375.YahooMailNeo@web164003.mail.gq1.yahoo.com> Message-ID: Great. We're open to ideas, and I think everyone will be happy to help make this work. One possible model worth emulating would be some sort of ghc and related projects analogue of this week In rust. http://cmr.github.io/blog/2013/10/28/this-week-in-rust/ eg summarizing what's been committed that week that may be interesting etc. On Friday, January 3, 2014, Howard B. Golden wrote: > Herbert, > > A revived blog would be great as well, if the Simons and other devs have > time to write it. I certainly don't know enough to write it myself, but I > can collate what others are talking about and maybe agreeing about on the > mailing list. I think what I can produce would work better as wiki entries, > rather than a blog, so it can have both a topical and chronological access > path, but I am open to the blog approach as well if others will write > content too. > > Howard > > > ----- Original Message ----- > From: Herbert Valerio Riedel > > To: Howard B. Golden > > Cc: "ghc-devs at haskell.org " > > > Sent: Friday, January 3, 2014 2:24 PM > Subject: Re: Idea for improving communication between devs and potential > devs > > On 2014-01-03 at 21:57:18 +0100, Howard B. Golden wrote: > > > > In addition, I'd like to start some sort of "what's new" that boils > > down the GHC Dev mailing list discussion as LWN does for the Linux > > kernel mailing list. > > maybe > > https://ghc.haskell.org/trac/ghc/blog > > could be revived by that...? > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mail at joachim-breitner.de Sat Jan 4 01:03:21 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Sat, 04 Jan 2014 02:03:21 +0100 Subject: Test suite regressions Message-ID: <1388797401.18630.24.camel@kirk> Hi, travis-ci reports test suite failures. Unfortunately, the builds still sometimes timeout, so I cannot pin-point the precise change, but someone pushing today broke Unexpected failures: ghci/scripts T8639 [bad stdout] (ghci) polykinds T7594 [stderr mismatch] (normal) https://s3.amazonaws.com/archive.travis-ci.org/jobs/16345412/log.txt If you pushed today, please check if you might have broken them. And please validate your changes before pushing! (I have some scripts that make clean validating mostly hassle-free, based on a dedicated build host where I push to "validate/some-name", and after lunch I come back and see if the branch was renamed to "validated/some-name" or "broken/some-name" ? I can share them if you are interested. Although I believe that we would benefit from a central, official solution.) Greetings, Joachim -- Joachim ?nomeata? Breitner mail at joachim-breitner.de ? http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0x4743206C Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: This is a digitally signed message part URL: From iavor.diatchki at gmail.com Sat Jan 4 01:44:20 2014 From: iavor.diatchki at gmail.com (Iavor Diatchki) Date: Fri, 3 Jan 2014 17:44:20 -0800 Subject: [commit: packages/base] master: Add functions to compare Nat and Symbol types for equality. (c5c8c4d) In-Reply-To: References: <20140103231144.D8ED52406B@ghc.haskell.org> Message-ID: Hi, oh yes, I was going to add the instance and then I realized it doesn't work. Please feel free to fix. Thanks! -Iavor On Fri, Jan 3, 2014 at 3:25 PM, Gabor Greif wrote: > Iavor, > > this is great! Just out of curiosity, you import TestEquality but > never reference it. Is this an oversight, should I nuke it? > > Cheers, > > Gabor > > On 1/4/14, git at git.haskell.org wrote: > > Repository : ssh://git at git.haskell.org/base > > > > On branch : master > > Link : > > > http://ghc.haskell.org/trac/ghc/changeset/c5c8c4dfbdc8493bcfaa804751eff2a9a41cc07a/base > > > >>--------------------------------------------------------------- > > > > commit c5c8c4dfbdc8493bcfaa804751eff2a9a41cc07a > > Author: Iavor S. Diatchki > > Date: Fri Jan 3 15:11:34 2014 -0800 > > > > Add functions to compare Nat and Symbol types for equality. > > > > > >>--------------------------------------------------------------- > > > > c5c8c4dfbdc8493bcfaa804751eff2a9a41cc07a > > GHC/TypeLits.hs | 23 ++++++++++++++++++++++- > > 1 file changed, 22 insertions(+), 1 deletion(-) > > > > diff --git a/GHC/TypeLits.hs b/GHC/TypeLits.hs > > index f3ba70e..129beb3 100644 > > --- a/GHC/TypeLits.hs > > +++ b/GHC/TypeLits.hs > > @@ -26,6 +26,8 @@ module GHC.TypeLits > > , KnownSymbol, symbolVal > > , SomeNat(..), SomeSymbol(..) 
> > , someNatVal, someSymbolVal > > + , sameNat, sameSymbol > > + > > > > -- * Functions on type nats > > , type (<=), type (<=?), type (+), type (*), type (^), type (-) > > @@ -40,7 +42,8 @@ import GHC.Read(Read(..)) > > import GHC.Prim(magicDict) > > import Data.Maybe(Maybe(..)) > > import Data.Proxy(Proxy(..)) > > -import Data.Type.Equality(type (==)) > > +import Data.Type.Equality(type (==), TestEquality(..), (:~:)(Refl)) > > +import Unsafe.Coerce(unsafeCoerce) > > > > -- | (Kind) This is the kind of type-level natural numbers. > > data Nat > > @@ -167,6 +170,23 @@ type family (m :: Nat) ^ (n :: Nat) :: Nat > > type family (m :: Nat) - (n :: Nat) :: Nat > > > > > > > +-------------------------------------------------------------------------------- > > + > > +-- | We either get evidence that this function was instantiated with the > > +-- same type-level numbers, or 'Nothing'. > > +sameNat :: (KnownNat a, KnownNat b) => > > + Proxy a -> Proxy b -> Maybe (a :~: b) > > +sameNat x y > > + | natVal x == natVal y = Just (unsafeCoerce Refl) > > + | otherwise = Nothing > > + > > +-- | We either get evidence that this function was instantiated with the > > +-- same type-level symbols, or 'Nothing'. > > +sameSymbol :: (KnownSymbol a, KnownSymbol b) => > > + Proxy a -> Proxy b -> Maybe (a :~: b) > > +sameSymbol x y > > + | symbolVal x == symbolVal y = Just (unsafeCoerce Refl) > > + | otherwise = Nothing > > > > > -------------------------------------------------------------------------------- > > -- PRIVATE: > > @@ -187,3 +207,4 @@ withSSymbol :: (KnownSymbol a => Proxy a -> b) > > -> SSymbol a -> Proxy a -> b > > withSSymbol f x y = magicDict (WrapS f) x y > > > > + > > > > _______________________________________________ > > ghc-commits mailing list > > ghc-commits at haskell.org > > http://www.haskell.org/mailman/listinfo/ghc-commits > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ggreif at gmail.com Sat Jan 4 03:48:50 2014 From: ggreif at gmail.com (Gabor Greif) Date: Sat, 4 Jan 2014 04:48:50 +0100 Subject: [commit: packages/base] master: Add functions to compare Nat and Symbol types for equality. (c5c8c4d) In-Reply-To: References: <20140103231144.D8ED52406B@ghc.haskell.org> Message-ID: On 1/4/14, Iavor Diatchki wrote: > Hi, > oh yes, I was going to add the instance and then I realized it doesn't > work. Please feel free to fix. > Thanks! Done: http://ghc.haskell.org/trac/ghc/changeset/b62f687e23d90c2ff4536e4e7788e5d9acb2b66c/base Cheers, Gabor > -Iavor > > > On Fri, Jan 3, 2014 at 3:25 PM, Gabor Greif wrote: > >> Iavor, >> >> this is great! Just out of curiosity, you import TestEquality but >> never reference it. Is this an oversight, should I nuke it? >> >> Cheers, >> >> Gabor >> >> On 1/4/14, git at git.haskell.org wrote: >> > Repository : ssh://git at git.haskell.org/base >> > >> > On branch : master >> > Link : >> > >> http://ghc.haskell.org/trac/ghc/changeset/c5c8c4dfbdc8493bcfaa804751eff2a9a41cc07a/base >> > >> >>--------------------------------------------------------------- >> > >> > commit c5c8c4dfbdc8493bcfaa804751eff2a9a41cc07a >> > Author: Iavor S. Diatchki >> > Date: Fri Jan 3 15:11:34 2014 -0800 >> > >> > Add functions to compare Nat and Symbol types for equality. 
>> > >> > >> >>--------------------------------------------------------------- >> > >> > c5c8c4dfbdc8493bcfaa804751eff2a9a41cc07a >> > GHC/TypeLits.hs | 23 ++++++++++++++++++++++- >> > 1 file changed, 22 insertions(+), 1 deletion(-) >> > >> > diff --git a/GHC/TypeLits.hs b/GHC/TypeLits.hs >> > index f3ba70e..129beb3 100644 >> > --- a/GHC/TypeLits.hs >> > +++ b/GHC/TypeLits.hs >> > @@ -26,6 +26,8 @@ module GHC.TypeLits >> > , KnownSymbol, symbolVal >> > , SomeNat(..), SomeSymbol(..) >> > , someNatVal, someSymbolVal >> > + , sameNat, sameSymbol >> > + >> > >> > -- * Functions on type nats >> > , type (<=), type (<=?), type (+), type (*), type (^), type (-) >> > @@ -40,7 +42,8 @@ import GHC.Read(Read(..)) >> > import GHC.Prim(magicDict) >> > import Data.Maybe(Maybe(..)) >> > import Data.Proxy(Proxy(..)) >> > -import Data.Type.Equality(type (==)) >> > +import Data.Type.Equality(type (==), TestEquality(..), (:~:)(Refl)) >> > +import Unsafe.Coerce(unsafeCoerce) >> > >> > -- | (Kind) This is the kind of type-level natural numbers. >> > data Nat >> > @@ -167,6 +170,23 @@ type family (m :: Nat) ^ (n :: Nat) :: Nat >> > type family (m :: Nat) - (n :: Nat) :: Nat >> > >> > >> > >> +-------------------------------------------------------------------------------- >> > + >> > +-- | We either get evidence that this function was instantiated with >> > the >> > +-- same type-level numbers, or 'Nothing'. >> > +sameNat :: (KnownNat a, KnownNat b) => >> > + Proxy a -> Proxy b -> Maybe (a :~: b) >> > +sameNat x y >> > + | natVal x == natVal y = Just (unsafeCoerce Refl) >> > + | otherwise = Nothing >> > + >> > +-- | We either get evidence that this function was instantiated with >> > the >> > +-- same type-level symbols, or 'Nothing'. >> > +sameSymbol :: (KnownSymbol a, KnownSymbol b) => >> > + Proxy a -> Proxy b -> Maybe (a :~: b) >> > +sameSymbol x y >> > + | symbolVal x == symbolVal y = Just (unsafeCoerce Refl) >> > + | otherwise = Nothing >> > >> > >> -------------------------------------------------------------------------------- >> > -- PRIVATE: >> > @@ -187,3 +207,4 @@ withSSymbol :: (KnownSymbol a => Proxy a -> b) >> > -> SSymbol a -> Proxy a -> b >> > withSSymbol f x y = magicDict (WrapS f) x y >> > >> > + >> > >> > _______________________________________________ >> > ghc-commits mailing list >> > ghc-commits at haskell.org >> > http://www.haskell.org/mailman/listinfo/ghc-commits >> > >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> > From andrew.gibiansky at gmail.com Sat Jan 4 04:06:06 2014 From: andrew.gibiansky at gmail.com (Andrew Gibiansky) Date: Fri, 3 Jan 2014 23:06:06 -0500 Subject: Fwd: Changing GHC Error Message Wrapping In-Reply-To: References: Message-ID: Hello, I'd like to change how the error messages from GHC get wrapped. I am using the following code: flip gcatch handler $ do runStmt "let f (x, y, z, w, e, r, d , ax, b ,c,ex ,g ,h) = (x :: Int) + y + z" RunToCompletion runStmt "f (1, 2, 3)" RunToCompletion return () where handler :: SomeException -> Ghc () handler e = liftIO $ putStrLn $ "Exception:\n" ++ show e The output I am getting looks like this: [image: Inline image 1] I would like the types to not wrap at all, or wrap at some very long length along the lines of 200-300 characters. I have seen the `pprUserLength` and `pprCols` fields in DynFlags, but they don't seem to do anything. What should I do? Thanks! 
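For concreteness, this is the sort of thing I have been trying (just a sketch, run in the same Ghc session as the code above), and it had no visible effect on the wrapping:

import GHC
import DynFlags

-- Sketch: widen both pretty-printer width fields in the session DynFlags
-- before running the statements; the fields exist, but the error output
-- above still wraps at the default width.
widenOutput :: Ghc ()
widenOutput = do
  dflags <- getSessionDynFlags
  _ <- setSessionDynFlags dflags { pprCols = 300, pprUserLength = 300 }
  return ()

That makes me suspect the wrapping I am seeing is applied somewhere that does not consult these fields.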
Andrew -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Sat Jan 4 06:05:48 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Sat, 4 Jan 2014 01:05:48 -0500 Subject: Changing GHC Error Message Wrapping In-Reply-To: References: Message-ID: hey andrew, your image link isn't working (i'm using gmail) On Fri, Jan 3, 2014 at 11:06 PM, Andrew Gibiansky < andrew.gibiansky at gmail.com> wrote: > > > Hello, > > I'd like to change how the error messages from GHC get wrapped. > > I am using the following code: > > flip gcatch handler $ do > runStmt "let f (x, y, z, w, e, r, d , ax, b ,c,ex ,g ,h) = (x :: Int) > + y + z" RunToCompletion > runStmt "f (1, 2, 3)" RunToCompletion > return () > where > handler :: SomeException -> Ghc () > handler e = > liftIO $ putStrLn $ "Exception:\n" ++ show e > > The output I am getting looks like this: > > [image: Inline image 1] > > I would like the types to not wrap at all, or wrap at some very long > length along the lines of 200-300 characters. > > I have seen the `pprUserLength` and `pprCols` fields in DynFlags, but they > don't seem to do anything. > > What should I do? > > Thanks! > Andrew > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mle+hs at mega-nerd.com Sat Jan 4 07:55:07 2014 From: mle+hs at mega-nerd.com (Erik de Castro Lopo) Date: Sat, 4 Jan 2014 18:55:07 +1100 Subject: Changing GHC Error Message Wrapping In-Reply-To: References: Message-ID: <20140104185507.5a1b9b490d052db8ca579fc3@mega-nerd.com> Carter Schonwald wrote: > hey andrew, your image link isn't working (i'm using gmail) I think the list software filters out image attachments. Erik -- ---------------------------------------------------------------------- Erik de Castro Lopo http://www.mega-nerd.com/ From marlowsd at gmail.com Sat Jan 4 09:42:46 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Sat, 04 Jan 2014 09:42:46 +0000 Subject: GHC Api In-Reply-To: <59543203684B2244980D7E4057D5FBC148704D2D@DB3EX14MBXC306.europe.corp.microsoft.com> References: <59543203684B2244980D7E4057D5FBC148702143@DB3EX14MBXC306.europe.corp.microsoft.com> <52C58160.6030800@gmail.com> <59543203684B2244980D7E4057D5FBC148704D2D@DB3EX14MBXC306.europe.corp.microsoft.com> Message-ID: <52C7D796.1060806@gmail.com> On 03/01/14 13:46, Simon Peyton-Jones wrote: > | setSessionDynFlags loads the package database and does the necessary > | processing to make packages available. We don't do that automatically, > | because the client might want to add their own package flags to the > | DynFlags between the calls to getSessionDynFlags and setSessionDynFlags. > > So it would be *OK* for runGhc to call setSessionDynFlags; but it might be a bit inefficient in the case you describe where the user adds their own package flags (which is uncommon). Correct? > > In that case, couldn't runGhc do the package initialisation thing, and we can perhaps provide a super-efficient variant of runGhc that doesn't do so for the reason you state? That would make the common case simple. > > > > | I'm not all that familiar with the unsafeGlobalDynFlags stuff (that's > | Ian's invention), but from looking at the code it looks like you > | wouldn't need to call this if you were calling parseDynamicFlags. 
It > | should be safe to call parseDynamicFlags with an empty set of flags to > | parse. > > True but weird. The point is that, instead of parsing a string, runGhc creates a fresh empty DynFlags (in inigGhcMonad actually). Since this is an alternative to parsing a string, it should set the static thing too, just as the string-parsing route does (in parseDynamicFlagsFull, as you point out). I haven't looked into this in detail, but clients that need to parse command line flags will be doing *both* runGhc and parseDynamicFlags, so it's not really an alternative to parsing a string. Still, perhaps it would be fine to call setUnsafeGlobalDynFlags twice. > I'll do this unless you or Ian object. > > | I think it's sensible to require a call to setContext to bring the > | Prelude into scope. The client might want a different context, and > | setContext isn't free, so we probably don't want to initialise a default > | context. > > This is very similar to the first point above. Maybe runGhc can do common thing (initialise packages, import Prelude), with a variant that doesn't? > > What do others think? We could certainly add another function to the API that packages up runGhc, setSessionDynFlags and setContext. It's not clear to me that this should be what runGhc does, though; it seems less likely to cause problems if we just add a new function to the API and let people migrate slowly. In the case of setContext, there are clients that don't need an interactive context at all: Haddock, for example. I do think that if we're going to package up "common" stuff we should look at what really is common, by surveying a few existing clients on Hackage. Cheers, Simon > Simon > > | -----Original Message----- > | From: Simon Marlow [mailto:marlowsd at gmail.com] > | Sent: 02 January 2014 15:10 > | To: Simon Peyton-Jones > | Cc: ghc-devs > | Subject: Re: GHC Api > | > | On 02/01/14 07:06, Simon Peyton-Jones wrote: > | > Simon and othere > | > > | > Happy new year! > | > > | > When debugging Trac #8628 I wrote the following: > | > > | > main > | > > | > = do [libdir] <- getArgs > | > > | > ok <- runGhc (Just libdir) $ do > | > > | > dflags <- getSessionDynFlags -- (1) > | > > | > setSessionDynFlags dflags > | > > | > liftIO (setUnsafeGlobalDynFlags dflags) -- (2) > | > > | > setContext [IIDecl (simpleImportDecl pRELUDE_NAME)] -- (3) > | > > | > runDecls "data X = Y Int" > | > > | > runStmt "print True" -- (4) > | > > | > return () > | > > | > There are several odd things here > | > > | > 1.Why do I have to do this "getSessionDynFlags/setSessionDynFlags" > | > thing. Seems bizarre. I just copied it from some other tests in > | > ghc-api/. Is it necessary? If not, can we remove it from all tests? > | > | It's a sensible question given the naming of the functions. The API is > | definitely clunky here, but there is a purpose to these calls. > | setSessionDynFlags loads the package database and does the necessary > | processing to make packages available. We don't do that automatically, > | because the client might want to add their own package flags to the > | DynFlags between the calls to getSessionDynFlags and setSessionDynFlags. > | Incidentally you can find out some of this stuff from the Haddock docs, > | e.g. look at the docs for setSessionDynFlags. > | > | > 2.Initially I didn't have that setUnsafeGlobalDynFlags call. But then > | > I got > | > > | > T8628.exe: T8628.exe: panic! 
(the 'impossible' happened) > | > > | > (GHC version 7.7.20131228 for i386-unknown-mingw32): > | > > | > v_unsafeGlobalDynFlags: not initialised > | > > | > which is a particularly unhelpful message. It arose because I was > | > using a GHC built with assertions on, and a warnPprTrace triggered. > | > Since this could happen to anyone, would it make sense to make this > | > part of runGhc and setSessionDynFlags? > | > | I'm not all that familiar with the unsafeGlobalDynFlags stuff (that's > | Ian's invention), but from looking at the code it looks like you > | wouldn't need to call this if you were calling parseDynamicFlags. It > | should be safe to call parseDynamicFlags with an empty set of flags to > | parse. > | > | > 3.Initially I didn't have that setContext call, and got a complaint > | > that "Int is not in scope". I was expecting the Prelude to be > | > implicitly in scope. But I'm not sure where to fix that. Possibly > | > part of the setup in runGhc? > | > | I think it's sensible to require a call to setContext to bring the > | Prelude into scope. The client might want a different context, and > | setContext isn't free, so we probably don't want to initialise a default > | context. > | > | > 4.The runStmt should print something somewhere, but it doesn't. Why > | not? > | > | I've no idea! It does look like it should print something. > | > | Cheers, > | Simon > | > | > What do you think? > | > > | > Simon > | > > From marlowsd at gmail.com Sat Jan 4 09:47:28 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Sat, 04 Jan 2014 09:47:28 +0000 Subject: ticket for adding ARM backend to NCG? In-Reply-To: <59543203684B2244980D7E4057D5FBC148704C6A@DB3EX14MBXC306.europe.corp.microsoft.com> References: <20131223130759.463580657bd05f4bca3a725c@mega-nerd.com> <52C69DDC.5090009@centrum.cz> <59543203684B2244980D7E4057D5FBC148704C6A@DB3EX14MBXC306.europe.corp.microsoft.com> Message-ID: <52C7D8B0.7030603@gmail.com> On 03/01/14 12:37, Simon Peyton-Jones wrote: > | I've been tinkering with ARM NCG idea for quite some time now, but > | honestly I've been always in doubts if it's the best way for GHC at all. > | I've thought that the plan was to kind of move out of NCG to LLVM based > | backends and I've though that although this plan may be kind of stuck > | now, it's still on the table. > > I have not been following the ARM and LLVM threads very closely, but here's my take: > > * LLVM is (I hope) very much on the table. LLVM itself is a well-resourced project, > and we can expect it to continue to exist. We should piggy-back on all the > hard work that is going into it. > > * But using LLVM has some disadvantages. > a) it imposes a dependency on LLVM > b) it makes compilation slower Correct > c) we play some efficiency tricks (notably "tables next to code") that > LLVM can't play (yet). I think. Actually we have to generate tables-next-to-code from LLVM too, because the LLVM and NCG backends must be compatible (you can choose to use LLVM on a module-by-module basis using -fllvm). So tables-next-to-code is currently done using a post-processing step on the asm generated by LLVM. Cheers, Simon > So GHC currently aims to have a built-in NCG for popular platforms, and to rely on LLVM for more esoteric platforms and also for superior optimisation. > > Is this still a sensible policy? > > Maybe you can articulate your doubts on the ARM NCG? 
> > Simon > > | -----Original Message----- > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Karel > | Gardas > | Sent: 03 January 2014 11:24 > | To: Jens Petersen > | Cc: ghc-devs at haskell.org > | Subject: Re: ticket for adding ARM backend to NCG? > | > | > | Guys, > | > | I've been tinkering with ARM NCG idea for quite some time now, but > | honestly I've been always in doubts if it's the best way for GHC at all. > | I've thought that the plan was to kind of move out of NCG to LLVM based > | backends and I've though that although this plan may be kind of stuck > | now, it's still on the table. > | > | Yes, I know that GHC is volunteering effort so if someone comes and asks > | for an ARM NCG implementation merge it'll be probably done in some time, > | but I'm not sure if it's what's the most welcome at the end. > | > | Just some of my doubts about it... > | > | I would really appreciate some authoritative word about the topic from > | more involved GHC developers... I mean especially about NCG future... > | > | Thanks! > | Karel > | > | On 01/ 3/14 09:35 AM, Jens Petersen wrote: > | > On 3 January 2014 03:10, Corey O'Connor | > > wrote: > | > > | > My interest is just to get involved somehow in the NCG. Starting a > | > new backend seemed reasonable only because I couldn't break > | > something that didn't exist. ;-) > | > > | > > | > Well a big +1 from me for armv7 NCG. > | > > | > > | > _______________________________________________ > | > ghc-devs mailing list > | > ghc-devs at haskell.org > | > http://www.haskell.org/mailman/listinfo/ghc-devs > | > | _______________________________________________ > | ghc-devs mailing list > | ghc-devs at haskell.org > | http://www.haskell.org/mailman/listinfo/ghc-devs > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > From marlowsd at gmail.com Sat Jan 4 09:54:31 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Sat, 04 Jan 2014 09:54:31 +0000 Subject: ticket for adding ARM backend to NCG? In-Reply-To: References: <20131223130759.463580657bd05f4bca3a725c@mega-nerd.com> Message-ID: <52C7DA57.7000405@gmail.com> On 03/01/14 08:35, Jens Petersen wrote: > On 3 January 2014 03:10, Corey O'Connor > wrote: > > My interest is just to get involved somehow in the NCG. Starting a > new backend seemed reasonable only because I couldn't break > something that didn't exist. ;-) > > > Well a big +1 from me for armv7 NCG. I've been thinking about doing an ARM NCG, mainly for fun and to learn ARM. But in reality I'm not likely to get around to this any time soon. To give you an idea of the work involved, it took me around a week of hacking to do the x86_64 NCG, and that was largely based on the existing x86 one. Rather than wading into an ARM NCG directly, it would pay off to first refactor the existing NCG infrastructure to make it much easier to add new targets, as Carter mentioned. We should have machine descriptions rather than the existing way that involves writing lots of special-purpose code for each target. 
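To give a flavour of what I mean by a machine description (purely an illustrative sketch, with invented names; nothing like this exists in the tree today): a per-target record of facts that the shared register-allocation and layout code consults, rather than a per-target module of special cases.

-- Purely illustrative; these names are invented for the example.
data MachDescr = MachDescr
  { mdTargetName    :: String   -- e.g. "x86_64" or "armv7"
  , mdWordBytes     :: Int      -- size of a machine word in bytes
  , mdAllocatableGP :: [Int]    -- general-purpose registers the allocator may use
  , mdAllocatableFP :: [Int]    -- floating-point registers the allocator may use
  , mdCallClobbered :: [Int]    -- registers clobbered across foreign calls
  , mdSpillSlotSize :: Int      -- bytes per stack spill slot
  }

The instruction selector would still have to be written per target, but the facts above are the kind of thing each existing NCG currently restates in its own modules.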
Cheers, Simon From marlowsd at gmail.com Sat Jan 4 09:59:26 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Sat, 04 Jan 2014 09:59:26 +0000 Subject: panic when compiling SHA In-Reply-To: <501EC3C7-E7EF-4485-879A-404FFFF22F55@ouroborus.net> References: <20131227.100716.1812997308262292710.kazu@iij.ad.jp> <501EC3C7-E7EF-4485-879A-404FFFF22F55@ouroborus.net> Message-ID: <52C7DB7E.1030408@gmail.com> On 28/12/13 03:58, Ben Lippmeier wrote: > > On 27/12/2013, at 12:07 PM, Kazu Yamamoto (????) wrote: > >> Hi, >> >> When I tried to build the SHA library with GHC head on on 32bit Linux, >> GHC head got panic. GHC 7.4.2 can build SHA on the same machine. >> >> Configuring SHA-1.6.1... >> Building SHA-1.6.1... >> Failed to install SHA-1.6.1 >> Last 10 lines of the build log ( /home/kazu/work/rpf/.cabal-sandbox/logs/SHA-1.6.1.log ): >> Preprocessing library SHA-1.6.1... >> [1 of 1] Compiling Data.Digest.Pure.SHA ( Data/Digest/Pure/SHA.hs, dist/dist-sandbox-ef3aaa11/build/Data/Digest/Pure/SHA.o ) >> ghc: panic! (the 'impossible' happened) >> (GHC version 7.7.20131202 for i386-unknown-linux): >> regSpill: out of spill slots! >> regs to spill = 1129 >> slots left = 677 > > There are only a fixed number of register spill slots, and when > they're all used the compiler can't dynamically allocate more of > them. Not true any more in 7.8+ with the linear allocator. I think it might still be true for the graph allocator, which is sadly suffering from a little bitrot and probably doesn't generate very good code with the new code generator. So, avoiding -fregs-graph should work around this with 7.8. Cheers, Simon > This SHA benchmark is pathological in that the intermediate code expands to have many variables with long, overlapping live ranges. The underlying problem is really that the inliner and/or other optimisations have gone crazy and made a huge intermediate program. We *could* give it more spill slots, to make it compile, but the generated code would be horrible. > > Try turning down the optimisation level, reduce inliner keenness, or reduce SpecConstr flags. > > Ben. > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > From marlowsd at gmail.com Sat Jan 4 10:00:21 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Sat, 04 Jan 2014 10:00:21 +0000 Subject: Normal for make install to (re?)build libraries with stage1 compiler? In-Reply-To: <1fafffe71b9a4adb91a449e8c503bb46@BN1PR05MB171.namprd05.prod.outlook.com> References: <1fafffe71b9a4adb91a449e8c503bb46@BN1PR05MB171.namprd05.prod.outlook.com> Message-ID: <52C7DBB5.9060807@gmail.com> On 24/12/13 22:13, Aaron Friel wrote: > Still working on getting my own development environment configured, I am > seeing make install perform a lot of rebuilds of libraries: > > > "inplace/bin/ghc-stage1" -hisuf hi -osuf o -hcsuf hc -static -H64m > -O0 -fasm -package-name base-4.7.0.0 -hide-all-packages -i > -ilibraries/base/. 
-ilibraries/base/dist-install/build > -ilibraries/base/dist-install/build/autogen > -Ilibraries/base/dist-install/build > -Ilibraries/base/dist-install/build/autogen -Ilibraries/base/include > -optP-DOPTIMISE_INTEGER_GCD_LCM -optP-include > -optPlibraries/base/dist-install/build/autogen/cabal_macros.h -package > ghc-prim-0.3.1.0 -package integer-gmp-0.5.1.0 -package rts-1.0 > -package-name base -XHaskell2010 -O2 -fllvm -no-user-package-db > -rtsopts -odir libraries/base/dist-install/build -hidir > libraries/base/dist-install/build -stubdir > libraries/base/dist-install/build -split-objs -c > libraries/base/./GHC/IO/Encoding/Types.hs -o > libraries/base/dist-install/build/GHC/IO/Encoding/Types.o > > However, GHC has already been built to stage 2. Why is GHC > inplace/bin/stage1 being invoked here - and hasn?t this library already > been built by make? If this happens it is a bug, please open a ticket. Cheers, Simon From marlowsd at gmail.com Sat Jan 4 10:18:42 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Sat, 04 Jan 2014 10:18:42 +0000 Subject: Interface loading and dynamic linking In-Reply-To: <8738lj4ff9.fsf@gmail.com> References: <87wqiwpz6m.fsf@gmail.com> <20131222195852.GA28658@matrix.chaos.earth.li> <87bo084ezo.fsf@gmail.com> <20131223120444.GA6405@matrix.chaos.earth.li> <8738lj4ff9.fsf@gmail.com> Message-ID: <52C7E002.3070108@gmail.com> On 23/12/13 17:59, Ben Gamari wrote: > Ian Lynagh writes: > >> You shouldn't need dynamic-by-default. It should Just Work in HEAD, both >> unregisterised and registerised. >> > Just to clarify, how does one configure GHCi to use dynamic linking now? You set DYNAMIC_GHC_PROGRAMS=YES (which is the default on supported platforms). This causes GHC itself to be built with -dynamic, which in turn causes GHCi to look for and to load the dynamic versions of packages. Cheers, Simon > Should I interpret your message to mean that it is already configured > this way? Where in the tree is this configured? > > To be perfectly clear, I want to ensure that dynamic linking is always > preferred over linking static objects with the RTS linker. Will this > happen as things stand? How does GHCi decide how to load a library? Is > this the role of GhcDynamic? > > I'm still not really sure why `DYNAMIC_BY_DEFAULT` should be causing the > problems I'm observing. It seems to me that it is functionally equivalent > to passing the `-dynamic` flag as they both simply add `WayDyn` to DynFlag's > `ways` list. Do you have any idea where they might differ? > > Cheers, > > - Ben > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > From kazu at iij.ad.jp Sat Jan 4 12:22:36 2014 From: kazu at iij.ad.jp (Kazu Yamamoto (=?iso-2022-jp?B?GyRCOzNLXE9CSScbKEI=?=)) Date: Sat, 04 Jan 2014 21:22:36 +0900 (JST) Subject: panic when compiling SHA In-Reply-To: <52C7DB7E.1030408@gmail.com> References: <20131227.100716.1812997308262292710.kazu@iij.ad.jp> <501EC3C7-E7EF-4485-879A-404FFFF22F55@ouroborus.net> <52C7DB7E.1030408@gmail.com> Message-ID: <20140104.212236.2151539280544564973.kazu@iij.ad.jp> Hi, >> There are only a fixed number of register spill slots, and when >> they're all used the compiler can't dynamically allocate more of >> them. > > Not true any more in 7.8+ with the linear allocator. I think it might > still be true for the graph allocator, which is sadly suffering from a > little bitrot and probably doesn't generate very good code with the > new code generator. 
> > So, avoiding -fregs-graph should work around this with 7.8. I confirmed that removing -fregs-graph should work around this with 7.8. --Kazu From andrew.gibiansky at gmail.com Sat Jan 4 17:30:15 2014 From: andrew.gibiansky at gmail.com (Andrew Gibiansky) Date: Sat, 4 Jan 2014 12:30:15 -0500 Subject: Changing GHC Error Message Wrapping In-Reply-To: <20140104185507.5a1b9b490d052db8ca579fc3@mega-nerd.com> References: <20140104185507.5a1b9b490d052db8ca579fc3@mega-nerd.com> Message-ID: Apologize for the broken image formatting. With the code I posted above, I get the following output: Couldn't match expected type `(GHC.Types.Int, GHC.Types.Int, GHC.Types.Int, t0, t10, t20, t30, t40, t50, t60, t70, t80, t90)' with actual type `(t1, t2, t3)' I would like the types to be on the same line, or at least wrapped to a larger number of columns. Does anyone know how to do this, or where in the GHC source this wrapping is done? Thanks! Andrew On Sat, Jan 4, 2014 at 2:55 AM, Erik de Castro Lopo wrote: > Carter Schonwald wrote: > > > hey andrew, your image link isn't working (i'm using gmail) > > I think the list software filters out image attachments. > > Erik > -- > ---------------------------------------------------------------------- > Erik de Castro Lopo > http://www.mega-nerd.com/ > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hvr at gnu.org Sat Jan 4 23:26:52 2014 From: hvr at gnu.org (Herbert Valerio Riedel) Date: Sun, 05 Jan 2014 00:26:52 +0100 Subject: High-level Cmm code and stack allocation Message-ID: <87fvp3coqr.fsf@gnu.org> Hello, According to Note [Syntax of .cmm files], | There are two ways to write .cmm code: | | (1) High-level Cmm code delegates the stack handling to GHC, and | never explicitly mentions Sp or registers. | | (2) Low-level Cmm manages the stack itself, and must know about | calling conventions. | | Whether you want high-level or low-level Cmm is indicated by the | presence of an argument list on a procedure. However, while working on integer-gmp I've been noticing in integer-gmp/cbits/gmp-wrappers.cmm that even though all Cmm procedures have been converted to high-level Cmm, they still reference the 'Sp' register, e.g. #define GMP_TAKE1_RET1(name,mp_fun) \ name (W_ ws1, P_ d1) \ { \ W_ mp_tmp1; \ W_ mp_result1; \ \ again: \ STK_CHK_GEN_N (2 * SIZEOF_MP_INT); \ MAYBE_GC(again); \ \ mp_tmp1 = Sp - 1 * SIZEOF_MP_INT; \ mp_result1 = Sp - 2 * SIZEOF_MP_INT; \ ... \ So is this valid high-level Cmm code? What's the proper way to allocate Stack (and/or Heap) memory from high-level Cmm code? Cheers, hvr From carter.schonwald at gmail.com Sun Jan 5 00:15:53 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Sat, 4 Jan 2014 19:15:53 -0500 Subject: High-level Cmm code and stack allocation In-Reply-To: <87fvp3coqr.fsf@gnu.org> References: <87fvp3coqr.fsf@gnu.org> Message-ID: hey Herbert, I generally start with looking at the primops.cmm file for examples https://github.com/ghc/ghc/blob/master/rts/PrimOps.cmm#L572-L588 otoh, the comments in cmmparse.y indicate that's not quite "kosher"? or maybe the comments are a lie? 
https://github.com/ghc/ghc/blob/master/compiler/cmm/CmmParse.y#L24-L28 On Sat, Jan 4, 2014 at 6:26 PM, Herbert Valerio Riedel wrote: > Hello, > > According to Note [Syntax of .cmm files], > > | There are two ways to write .cmm code: > | > | (1) High-level Cmm code delegates the stack handling to GHC, and > | never explicitly mentions Sp or registers. > | > | (2) Low-level Cmm manages the stack itself, and must know about > | calling conventions. > | > | Whether you want high-level or low-level Cmm is indicated by the > | presence of an argument list on a procedure. > > However, while working on integer-gmp I've been noticing in > integer-gmp/cbits/gmp-wrappers.cmm that even though all Cmm procedures > have been converted to high-level Cmm, they still reference the 'Sp' > register, e.g. > > > #define GMP_TAKE1_RET1(name,mp_fun) \ > name (W_ ws1, P_ d1) \ > { \ > W_ mp_tmp1; \ > W_ mp_result1; \ > \ > again: \ > STK_CHK_GEN_N (2 * SIZEOF_MP_INT); \ > MAYBE_GC(again); \ > \ > mp_tmp1 = Sp - 1 * SIZEOF_MP_INT; \ > mp_result1 = Sp - 2 * SIZEOF_MP_INT; \ > ... \ > > > So is this valid high-level Cmm code? What's the proper way to allocate > Stack (and/or Heap) memory from high-level Cmm code? > > Cheers, > hvr > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hvr at gnu.org Sun Jan 5 00:27:18 2014 From: hvr at gnu.org (Herbert Valerio Riedel) Date: Sun, 05 Jan 2014 01:27:18 +0100 Subject: High-level Cmm code and stack allocation In-Reply-To: <87fvp3coqr.fsf@gnu.org> (Herbert Valerio Riedel's message of "Sun, 05 Jan 2014 00:26:52 +0100") References: <87fvp3coqr.fsf@gnu.org> Message-ID: <8761pzcly1.fsf@gmail.com> On 2014-01-05 at 00:26:52 +0100, Herbert Valerio Riedel wrote: [...] > So is this valid high-level Cmm code? What's the proper way to allocate > Stack (and/or Heap) memory from high-level Cmm code? PS: ...are function calls supposed to work as advertised in https://github.com/ghc/ghc/blob/master/compiler/cmm/CmmParse.y#L76 ? I've tried using `(ret1,ret2) = call stg_fun (arg1,arg2);` in a Cmm file, but I get a parser error on `call` with GHC HEAD; only when leave out the return value assignement, i.e. when I use only `call stg_fun (arg1,arg2);`, it gets parsed succesfully. Is this a bug in the CmmParser? Cheers, hvr From carter.schonwald at gmail.com Sun Jan 5 00:32:07 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Sat, 4 Jan 2014 19:32:07 -0500 Subject: High-level Cmm code and stack allocation In-Reply-To: <8761pzcly1.fsf@gmail.com> References: <87fvp3coqr.fsf@gnu.org> <8761pzcly1.fsf@gmail.com> Message-ID: i'm inclined to assume that its a parser error. instead of (v)= call fun(args...argn);, did you try v = call fun(args1...n) ; ? On Sat, Jan 4, 2014 at 7:27 PM, Herbert Valerio Riedel wrote: > On 2014-01-05 at 00:26:52 +0100, Herbert Valerio Riedel wrote: > > [...] > > > So is this valid high-level Cmm code? What's the proper way to allocate > > Stack (and/or Heap) memory from high-level Cmm code? > > PS: ...are function calls supposed to work as advertised in > > https://github.com/ghc/ghc/blob/master/compiler/cmm/CmmParse.y#L76 > > ? > > I've tried using `(ret1,ret2) = call stg_fun (arg1,arg2);` in a Cmm > file, but I get a parser error on `call` with GHC HEAD; only when leave > out the return value assignement, i.e. 
when I use only `call stg_fun > (arg1,arg2);`, it gets parsed succesfully. Is this a bug in the > CmmParser? > > Cheers, > hvr > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From benl at ouroborus.net Sun Jan 5 10:46:46 2014 From: benl at ouroborus.net (Ben Lippmeier) Date: Sun, 5 Jan 2014 21:46:46 +1100 Subject: panic when compiling SHA In-Reply-To: <20140104.212236.2151539280544564973.kazu@iij.ad.jp> References: <20131227.100716.1812997308262292710.kazu@iij.ad.jp> <501EC3C7-E7EF-4485-879A-404FFFF22F55@ouroborus.net> <52C7DB7E.1030408@gmail.com> <20140104.212236.2151539280544564973.kazu@iij.ad.jp> Message-ID: On 04/01/2014, at 23:22 , Kazu Yamamoto (????) wrote: > Hi, > >>> There are only a fixed number of register spill slots, and when >>> they're all used the compiler can't dynamically allocate more of >>> them. >> >> Not true any more in 7.8+ with the linear allocator. I think it might >> still be true for the graph allocator, which is sadly suffering from a >> little bitrot and probably doesn't generate very good code with the >> new code generator. >> >> So, avoiding -fregs-graph should work around this with 7.8. > > I confirmed that removing -fregs-graph should work around this with > 7.8. Ok, my mistake. We originally added -fregs-graph when compiling that module because both allocators had a fixed stack size, but the graph allocator did a better job of allocation and avoided overflowing the stack. Note that removing the flag isn't a "solution" to the underlying problem of the intermediate code being awful. Switching to the linear allocator just permits compilation of core code that was worse than before. Now it needs to spill more registers when compiling the same source code. Ben. From hvriedel at gmail.com Sun Jan 5 11:46:04 2014 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Sun, 05 Jan 2014 12:46:04 +0100 Subject: High-level Cmm code and stack allocation In-Reply-To: (Carter Schonwald's message of "Sat, 4 Jan 2014 19:15:53 -0500") References: <87fvp3coqr.fsf@gnu.org> Message-ID: <87bnzqabyb.fsf@gmail.com> On 2014-01-05 at 01:15:53 +0100, Carter Schonwald wrote: > hey Herbert, > I generally start with looking at the primops.cmm file for examples > https://github.com/ghc/ghc/blob/master/rts/PrimOps.cmm#L572-L588 stg_decodeFloatzuIntzh ( F_ arg ) { W_ p, mp_tmp1, W_ mp_tmp_w; STK_CHK_GEN_N (WDS(2)); mp_tmp1 = Sp - WDS(1); mp_tmp_w = Sp - WDS(2); ccall __decodeFloat_Int(mp_tmp1 "ptr", mp_tmp_w "ptr", arg); return (W_[mp_tmp1], W_[mp_tmp_w]); } that function in particular is compiled to [stg_decodeFloatzuIntzh() // [F1] { info_tbl: [] stack_info: arg_space: 8 updfr_space: Just 8 } {offset cc: _c0::F32 = F1; goto c4; c4: if ((old + 0) - 2 * 8 < SpLim) goto c6; else goto c7; c6: I64[(young + 8)] = c5; call stg_gc_noregs() returns to c5, args: 8, res: 8, upd: 8; c5: goto c4; c7: _c2::I64 = Sp - 1 * 8; _c3::I64 = Sp - 2 * 8; _c8::I64 = __decodeFloat_Int; _c9::I64 = _c2::I64; _ca::I64 = _c3::I64; _cb::F32 = _c0::F32; call "ccall" arg hints: [PtrHint, PtrHint,] result hints: [] (_c8::I64)(_c9::I64, _ca::I64, _cb::F32); R2 = I64[_c3::I64]; R1 = I64[_c2::I64]; call (P64[(old + 8)])(R2, R1) args: 8, res: 0, upd: 8; } }] But I see no effort to adjust Sp (i.e. 
`Sp = Sp - 16`) right before the call to __decodeFloat_Int; how is it ensured that __decodeFloat_Int doesn't use the locations Sp-8 and Sp-16 for as its local stack? > otoh, the comments in cmmparse.y indicate that's not quite "kosher"? or > maybe the comments are a lie? > https://github.com/ghc/ghc/blob/master/compiler/cmm/CmmParse.y#L24-L28 From gergo at erdi.hu Sun Jan 5 12:16:28 2014 From: gergo at erdi.hu (Dr. ERDI Gergo) Date: Sun, 5 Jan 2014 20:16:28 +0800 (SGT) Subject: Pattern synonyms for 7.8? Message-ID: Hi, When I started working on pattern synonyms (#5144) back in August, it seemed the GHC 7.8 freeze was imminent, so I was planning for a first version in 7.10/8.0 (whatever it will be called). However, since not much has happened re: 7.8 since then (at least not much publicly visible), and on the other hand, my implementation of pattern synonyms is ready, I am now starting to wonder if it could be squeezed into 7.8. What are your thoughts on this? Thanks, Gergo From hvriedel at gmail.com Sun Jan 5 12:48:01 2014 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Sun, 05 Jan 2014 13:48:01 +0100 Subject: High-level Cmm code and stack allocation In-Reply-To: (Carter Schonwald's message of "Sat, 4 Jan 2014 19:32:07 -0500") References: <87fvp3coqr.fsf@gnu.org> <8761pzcly1.fsf@gmail.com> Message-ID: <877gaea932.fsf@gmail.com> On 2014-01-05 at 01:32:07 +0100, Carter Schonwald wrote: > i'm inclined to assume that its a parser error. > instead of (v)= call fun(args...argn);, did you try v = call fun(args1...n) > ; ? I've looked more closely at the parser, and the relevant productions... | 'call' expr '(' exprs0 ')' ';' { doCall $2 [] $4 } | '(' formals ')' '=' 'call' expr '(' exprs0 ')' ';' { doCall $6 $2 $8 } ...actually require the return values to be newly declared registers, therefore the following works: foo() { W_ arg1, arg2; arg1 = 1; arg2 = 2; (W_ ret1, W_ ret2) = call stg_fun (arg1,arg2); return (ret2, ret1); } Cheers, hvr From karel.gardas at centrum.cz Sun Jan 5 17:14:08 2014 From: karel.gardas at centrum.cz (Karel Gardas) Date: Sun, 5 Jan 2014 18:14:08 +0100 Subject: [PATCH] platformFromTriple: fix to recognize Solaris triple (i386-pc-solaris2.11) Message-ID: <1388942048-16010-1-git-send-email-karel.gardas@centrum.cz> --- Cabal/Distribution/System.hs | 2 +- 1 files changed, 1 insertions(+), 1 deletions(-) diff --git a/Cabal/Distribution/System.hs b/Cabal/Distribution/System.hs index a18e491..4fc76f6 100644 --- a/Cabal/Distribution/System.hs +++ b/Cabal/Distribution/System.hs @@ -89,7 +89,7 @@ osAliases Compat Windows = ["mingw32", "win32"] osAliases _ OSX = ["darwin"] osAliases _ IOS = ["ios"] osAliases Permissive FreeBSD = ["kfreebsdgnu"] -osAliases Permissive Solaris = ["solaris2"] +osAliases _ Solaris = ["solaris2"] osAliases _ _ = [] instance Text OS where -- 1.7.3.2 From hvriedel at gmail.com Sun Jan 5 17:29:18 2014 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Sun, 05 Jan 2014 18:29:18 +0100 Subject: [PATCH] platformFromTriple: fix to recognize Solaris triple (i386-pc-solaris2.11) In-Reply-To: <1388942048-16010-1-git-send-email-karel.gardas@centrum.cz> (Karel Gardas's message of "Sun, 5 Jan 2014 18:14:08 +0100") References: <1388942048-16010-1-git-send-email-karel.gardas@centrum.cz> Message-ID: <8738l29w29.fsf@gmail.com> Hello Karel, Please submit this fix at the upstream Cabal project at https://github.com/haskell/cabal/issues When it's merged upstream we can sync up GHC's in-tree copy of the Cabal library to Cabal upstream. 
Thanks, hvr On 2014-01-05 at 18:14:08 +0100, Karel Gardas wrote: > --- > Cabal/Distribution/System.hs | 2 +- > 1 files changed, 1 insertions(+), 1 deletions(-) > > diff --git a/Cabal/Distribution/System.hs b/Cabal/Distribution/System.hs > index a18e491..4fc76f6 100644 > --- a/Cabal/Distribution/System.hs > +++ b/Cabal/Distribution/System.hs > @@ -89,7 +89,7 @@ osAliases Compat Windows = ["mingw32", "win32"] > osAliases _ OSX = ["darwin"] > osAliases _ IOS = ["ios"] > osAliases Permissive FreeBSD = ["kfreebsdgnu"] > -osAliases Permissive Solaris = ["solaris2"] > +osAliases _ Solaris = ["solaris2"] > osAliases _ _ = [] > > instance Text OS where -- "Elegance is not optional" -- Richard O'Keefe From karel.gardas at centrum.cz Sun Jan 5 17:40:22 2014 From: karel.gardas at centrum.cz (Karel Gardas) Date: Sun, 05 Jan 2014 18:40:22 +0100 Subject: [PATCH] platformFromTriple: fix to recognize Solaris triple (i386-pc-solaris2.11) In-Reply-To: <8738l29w29.fsf@gmail.com> References: <1388942048-16010-1-git-send-email-karel.gardas@centrum.cz> <8738l29w29.fsf@gmail.com> Message-ID: <52C99906.1080104@centrum.cz> Herbert, thanks for the note, the issue is here: https://github.com/haskell/cabal/issues/1641 Karel On 01/ 5/14 06:29 PM, Herbert Valerio Riedel wrote: > Hello Karel, > > Please submit this fix at the upstream Cabal project > at https://github.com/haskell/cabal/issues > > When it's merged upstream we can sync up GHC's in-tree copy of the Cabal > library to Cabal upstream. > > Thanks, > hvr > > On 2014-01-05 at 18:14:08 +0100, Karel Gardas wrote: >> --- >> Cabal/Distribution/System.hs | 2 +- >> 1 files changed, 1 insertions(+), 1 deletions(-) >> >> diff --git a/Cabal/Distribution/System.hs b/Cabal/Distribution/System.hs >> index a18e491..4fc76f6 100644 >> --- a/Cabal/Distribution/System.hs >> +++ b/Cabal/Distribution/System.hs >> @@ -89,7 +89,7 @@ osAliases Compat Windows = ["mingw32", "win32"] >> osAliases _ OSX = ["darwin"] >> osAliases _ IOS = ["ios"] >> osAliases Permissive FreeBSD = ["kfreebsdgnu"] >> -osAliases Permissive Solaris = ["solaris2"] >> +osAliases _ Solaris = ["solaris2"] >> osAliases _ _ = [] >> >> instance Text OS where > From hellertime at gmail.com Mon Jan 6 02:43:46 2014 From: hellertime at gmail.com (Chris Heller) Date: Sun, 5 Jan 2014 21:43:46 -0500 Subject: Optimisation flags at -O0 Message-ID: I wanted to understand better what `-fspec-constr` does. So I compiled the User Guide example with `-O0 -fspec-constr` to isolate the effects of call-pattern specialization, and nothing else (I used ghc-core to pretty-print the resulting Core syntax). It appears I get the same output wether I use `-fspec-constr` or not. Does this mean that compiling with `-O0` even explicitly enabled optimizations are turned off? If that is the case, how does one test an isolated optimization? -Chris -------------- next part -------------- An HTML attachment was scrubbed... URL: From kazu at iij.ad.jp Mon Jan 6 03:08:34 2014 From: kazu at iij.ad.jp (Kazu Yamamoto (=?iso-2022-jp?B?GyRCOzNLXE9CSScbKEI=?=)) Date: Mon, 06 Jan 2014 12:08:34 +0900 (JST) Subject: panic when compiling SHA In-Reply-To: References: <52C7DB7E.1030408@gmail.com> <20140104.212236.2151539280544564973.kazu@iij.ad.jp> Message-ID: <20140106.120834.989663188831409811.kazu@iij.ad.jp> Ben, > Note that removing the flag isn't a "solution" to the underlying > problem of the intermediate code being awful. Switching to the > linear allocator just permits compilation of core code that was > worse than before. 
Now it needs to spill more registers when > compiling the same source code. So, would you reopen #5361 by yourself? https://ghc.haskell.org/trac/ghc/ticket/5361 --Kazu From eir at cis.upenn.edu Mon Jan 6 04:30:19 2014 From: eir at cis.upenn.edu (Richard Eisenberg) Date: Sun, 5 Jan 2014 23:30:19 -0500 Subject: Tuple predicates in Template Haskell In-Reply-To: References: Message-ID: Hello Yorick, Thanks for taking this one on! First off, this kind of question/post is appropriate for putting right into the ticket itself. Posting a comment to the ticket makes it more likely that you'll get a response and saves your thoughts for posterity. Now, on to your question: That seems somewhat reasonable, but I think your work could go a little further. It looks like you've introduced TupleP as a new constructor for Pred. This, I believe, would work. But, I think it would be better to have a way of using *any* type as a predicate in TH, as allowed by ConstraintKinds. Perhaps one way to achieve this is to make Pred a synonym of Type, or there could be a TypeP constructor for Pred. In any case, I would recommend writing a wiki page up with a proposed new TH syntax for predicates and then posting a link to the proposal on the #7021 ticket. Then, it will be easier to debate the merits of any particular approach. Once again, thanks! Richard On Jan 3, 2014, at 6:13 PM, Yorick Laupa wrote: > Hi, > > I try to make my way through #7021 [1]. Unfortunately, there is nothing in the ticket about what should be expected from the code given as example. > > I came with an implementation and I would like feedback from you guys. So, considering this snippet: > > -- > {-# LANGUAGE ConstraintKinds #-} > > type IOable a = (Show a, Read a) > > foo :: IOable a => a > foo = undefined > -- > > This is what I got now when pretty-printing TH.Info after reify "foo" call: > > VarI Tuple.foo (ForallT [PlainTV a_1627398594] [TupleP 2 [AppT (ConT GHC.Show.Show) (VarT a_1627398594),AppT (ConT GHC.Read.Read) (VarT a_1627398594)]] (VarT a_1627398594)) Nothing (Fixity 9 InfixL) > > Does that sound right to you ? > > Thanks for your time > > -- Yorick > > [1] https://ghc.haskell.org/trac/ghc/ticket/7021 > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From benl at ouroborus.net Mon Jan 6 08:20:11 2014 From: benl at ouroborus.net (Ben Lippmeier) Date: Mon, 6 Jan 2014 19:20:11 +1100 Subject: panic when compiling SHA In-Reply-To: <20140106.120834.989663188831409811.kazu@iij.ad.jp> References: <52C7DB7E.1030408@gmail.com> <20140104.212236.2151539280544564973.kazu@iij.ad.jp> <20140106.120834.989663188831409811.kazu@iij.ad.jp> Message-ID: On 06/01/2014, at 14:08 , Kazu Yamamoto (????) wrote: > Ben, > >> Note that removing the flag isn't a "solution" to the underlying >> problem of the intermediate code being awful. Switching to the >> linear allocator just permits compilation of core code that was >> worse than before. Now it needs to spill more registers when >> compiling the same source code. > > So, would you reopen #5361 by yourself? > > https://ghc.haskell.org/trac/ghc/ticket/5361 Not if we just have this one test. I'd be keen to blame excessive use of inline pragmas in the SHA library itself, or excessive optimisation flags. It's not really a bug in GHC until there are two tests that exhibit the same problem. Ben. 
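PS: to illustrate the sort of blow-up I mean — this is a contrived sketch, not code from the SHA package — forcing a non-trivial round function to inline at every call site multiplies the number of intermediate values that are live at once, which is exactly what makes life hard for the register allocator:

    import Data.Bits (rotateL, xor)
    import Data.Word (Word32)

    {-# INLINE step #-}
    step :: Word32 -> Word32 -> Word32 -> Word32
    step a b c = (a `rotateL` 5) `xor` (b + c)

    -- After inlining, every 'step' call contributes its own set of
    -- temporaries to one big expression, so the live ranges pile up.
    mix :: Word32 -> Word32 -> Word32 -> Word32
    mix x y z =
          step (step x y z) (step y z x) (step z x y)
        + step (step z y x) (step x z y) (step y x z)

Scaled up to dozens of rounds per block and a wider state, that is the sort of core that ends up wanting hundreds of spill slots.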
From simonpj at microsoft.com Mon Jan 6 08:43:00 2014 From: simonpj at microsoft.com (Simon Peyton-Jones) Date: Mon, 6 Jan 2014 08:43:00 +0000 Subject: panic when compiling SHA In-Reply-To: References: <20131227.100716.1812997308262292710.kazu@iij.ad.jp> <501EC3C7-E7EF-4485-879A-404FFFF22F55@ouroborus.net> <52C7DB7E.1030408@gmail.com> <20140104.212236.2151539280544564973.kazu@iij.ad.jp> Message-ID: <59543203684B2244980D7E4057D5FBC148707206@DB3EX14MBXC306.europe.corp.microsoft.com> | Note that removing the flag isn't a "solution" to the underlying problem | of the intermediate code being awful. Switching to the linear allocator | just permits compilation of core code that was worse than before. Now it | needs to spill more registers when compiling the same source code. In what way is the intermediate code awful? How could it be fixed? Worth opening a ticket for that issue? At the moment it's invisible because the issue appears superficially to be about register allocation. Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Ben | Lippmeier | Sent: 05 January 2014 10:47 | To: Kazu Yamamoto (????) | Cc: ghc-devs at haskell.org | Subject: Re: panic when compiling SHA | | | On 04/01/2014, at 23:22 , Kazu Yamamoto (????) | wrote: | | > Hi, | > | >>> There are only a fixed number of register spill slots, and when | >>> they're all used the compiler can't dynamically allocate more of | >>> them. | >> | >> Not true any more in 7.8+ with the linear allocator. I think it | >> might still be true for the graph allocator, which is sadly suffering | >> from a little bitrot and probably doesn't generate very good code | >> with the new code generator. | >> | >> So, avoiding -fregs-graph should work around this with 7.8. | > | > I confirmed that removing -fregs-graph should work around this with | > 7.8. | | Ok, my mistake. We originally added -fregs-graph when compiling that | module because both allocators had a fixed stack size, but the graph | allocator did a better job of allocation and avoided overflowing the | stack. | | Note that removing the flag isn't a "solution" to the underlying problem | of the intermediate code being awful. Switching to the linear allocator | just permits compilation of core code that was worse than before. Now it | needs to spill more registers when compiling the same source code. | | Ben. | | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | http://www.haskell.org/mailman/listinfo/ghc-devs From simonpj at microsoft.com Mon Jan 6 10:38:50 2014 From: simonpj at microsoft.com (Simon Peyton-Jones) Date: Mon, 6 Jan 2014 10:38:50 +0000 Subject: Starting GHC development. In-Reply-To: <52C70C35.7000207@fuuzetsu.co.uk> References: <59543203684B2244980D7E4057D5FBC148704D05@DB3EX14MBXC306.europe.corp.microsoft.com> <52C704D5.4050606@fuuzetsu.co.uk> <52C70C35.7000207@fuuzetsu.co.uk> Message-ID: <59543203684B2244980D7E4057D5FBC1487073C5@DB3EX14MBXC306.europe.corp.microsoft.com> Friends Happy new year! About GHC 7.8, I apologise for the lack of communication. I was on holiday from 10 Dec, and replied when I got back after Christmas. I had a good conversation with Austin last Friday, and I expect an email from him to ghc-devs sometime today, to say what tasks remain, and when he expects to be able to cut a release candidate. More generally, I very much sympathise with Mateusz's original observation that it's discouraging to email ghc-devs about something, and get no reply. 
It's a classic problem with a large open-source project, run entirely by volunteers. There is literally no one whose day job is to maintain GHC, except Austin, and even he is not superman. So if you email and get no reply it may just be that no one knows the answer, at least not without digging. What's the solution? Certainly ping after a few days, as Gabor says. (Sometimes people wait to see if someone knows The Answer, and a ping will then elicit some "well maybe you could try this" half-answers.) Failing that, just roll up your sleeves and start digging! Maybe you will become the expert in that area and will soon be answering questions yourself. If anyone has better ideas, do contribute them. Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of | Mateusz Kowalczyk | Sent: 03 January 2014 19:15 | Cc: ghc-devs at haskell.org | Subject: Re: Starting GHC development. | | On 03/01/14 18:50, Gabor Greif wrote: | > On 1/3/14, Mateusz Kowalczyk wrote: | >> On 03/01/14 13:27, Simon Peyton-Jones wrote: | >>> [snip] | >>> Thank you. We need lots of help! | >>> [snip] | >> | >> While I hate to interrupt this thread, I think this is a good chance | >> to mention something. | >> | >> I think the big issue for joining GHC development is the lack of | >> communication on the mailing list. There are many topics where a | >> person has a problem with GHC tree (can't validate/build, some tests | >> are failing), posts to GHC devs seeking help and never gets a reply. | >> This is very discouraging and often makes it outright impossible to | contribute. | >> | >> An easy example is the failing tests one: unfortunately some tests | >> are known to fail, but they are only known to fail to existing GHC | >> devs. A new person tries to validate clean tree, gets test failures, | >> asks for help on GHC devs, doesn't get any, gives up. | > | > We should explicitly say somewhere that pinging for an answer is okay. | > Sometimes the key persons (for a potential answer) are out of town or | > too busy, and the question gets buried. | > | > Repeating the answer a few days later raises awareness and has higher | > chance to succeed. This is how other technical lists (e.g. LLVM's) | > work. | > | > Cheers, | > | > Gabor | > | | While bumping the thread might help, I don't think people missing it is | always the case. Refer to Carter's recent e-mail about something very | important: when is 7.8 finally happening. It was pinged 9 days later by | Kazu and still no replies! In the end he had to make another thread | nearly half a month after his initial one and directly CC some people to | get any output... | | I think it's more about 'I'm not 100% sure here so I won't say anything' | which is terrible for newcomers because to them it seems like everyone | ignored their thread. For a newcomer, even 'did you try make maintainer- | clean' might be helpful. At least they don't feel ignored. | | -- | Mateusz K. 
| _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | http://www.haskell.org/mailman/listinfo/ghc-devs From simonpj at microsoft.com Mon Jan 6 10:40:12 2014 From: simonpj at microsoft.com (Simon Peyton-Jones) Date: Mon, 6 Jan 2014 10:40:12 +0000 Subject: Idea for improving communication between devs and potential devs In-Reply-To: <1388782638.65533.YahooMailNeo@web164004.mail.gq1.yahoo.com> References: <1388782638.65533.YahooMailNeo@web164004.mail.gq1.yahoo.com> Message-ID: <59543203684B2244980D7E4057D5FBC1487073E6@DB3EX14MBXC306.europe.corp.microsoft.com> Howard Thanks... improving the wiki would be a great contribution. If you point me to new material that you write, I'd be happy to review it. Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Howard | B. Golden | Sent: 03 January 2014 20:57 | To: ghc-devs at haskell.org | Subject: Idea for improving communication between devs and potential | devs | | Hi, | | I'd like to get involved in developing, but I recognize the learning | curve involved. To get started I'd like to improve the Trac wiki | documentation. Part of this would include additional documentation of | less-documented parts of the compiler and RTS. In addition, I'd like to | start some sort of "what's new" that boils down the GHC Dev mailing list | discussion as LWN does for the Linux kernel mailing list. I don't | imagine that I can do this all by myself, but I hope this idea would | resonate with others looking to get started as well. This is meant to be | more frequent and more detailed than what HCAR does for GHC now, though | I don't expect anyone can do it weekly. | | Please let me know what you think about this idea. I'm open to any | suggestions for improving it also. | | Howard B. Golden | Northridge, CA, USA | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | http://www.haskell.org/mailman/listinfo/ghc-devs From kvanberendonck at gmail.com Mon Jan 6 12:21:56 2014 From: kvanberendonck at gmail.com (Kyle Van Berendonck) Date: Mon, 6 Jan 2014 23:21:56 +1100 Subject: Starting GHC development. Message-ID: The build/test system could be scaring away potential developers too. Not to complain or anything, but I used to try building GHC (form scratch) every 2 months or so and It would (usually) completely fail to build on Windows and OSX, and when it does, there would be some problem with the makefile where it either wouldn't rebuild properly and required a dist-clean or it wouldn't handle anything above -j1 properly. I'm (hoping) that these issues have been resolved by now. The test suite is also really scary. Regards. -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Mon Jan 6 12:42:24 2014 From: simonpj at microsoft.com (Simon Peyton-Jones) Date: Mon, 6 Jan 2014 12:42:24 +0000 Subject: Pattern synonyms for 7.8? In-Reply-To: References: Message-ID: <59543203684B2244980D7E4057D5FBC148707649@DB3EX14MBXC306.europe.corp.microsoft.com> | When I started working on pattern synonyms (#5144) back in August, it | seemed the GHC 7.8 freeze was imminent, so I was planning for a first | version in 7.10/8.0 (whatever it will be called). However, since not | much has happened re: 7.8 since then (at least not much publicly | visible), and on the other hand, my implementation of pattern synonyms | is ready, I am now starting to wonder if it could be squeezed into 7.8. 
| What are your thoughts on this? I'd be interested in others' thoughts on this. Because the implementation is now pretty solid, it's mostly a non-technical question. Is it better to include a feature in the release whose design might change a bit in the light of experience, or to put it only in HEAD for a while? We could put it in the release with warnings saying "the exact details, esp of syntax, might change, but do try it". I'd be ok with that, and we've done it before. What do other people think? Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Dr. | ERDI Gergo | Sent: 05 January 2014 12:16 | To: GHC Devs | Subject: Pattern synonyms for 7.8? | | Hi, | | When I started working on pattern synonyms (#5144) back in August, it | seemed the GHC 7.8 freeze was imminent, so I was planning for a first | version in 7.10/8.0 (whatever it will be called). However, since not | much has happened re: 7.8 since then (at least not much publicly | visible), and on the other hand, my implementation of pattern synonyms | is ready, I am now starting to wonder if it could be squeezed into 7.8. | What are your thoughts on this? | | Thanks, | Gergo | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | http://www.haskell.org/mailman/listinfo/ghc-devs From simonpj at microsoft.com Mon Jan 6 12:44:58 2014 From: simonpj at microsoft.com (Simon Peyton-Jones) Date: Mon, 6 Jan 2014 12:44:58 +0000 Subject: Changing GHC Error Message Wrapping In-Reply-To: References: <20140104185507.5a1b9b490d052db8ca579fc3@mega-nerd.com> Message-ID: <59543203684B2244980D7E4057D5FBC14870765F@DB3EX14MBXC306.europe.corp.microsoft.com> I think it's line 705 in types/TypeRep.lhs pprTcApp p pp tc tys | isTupleTyCon tc && tyConArity tc == length tys = pprPromotionQuote tc <> tupleParens (tupleTyConSort tc) (sep (punctuate comma (map (pp TopPrec) tys))) If you change 'sep' to 'fsep', you'll get behaviour more akin to paragraph-filling (hence the "f"). Give it a try. You'll get validation failure from the testsuite, but you can see whether you think the result is better or worse. In general, should multi-line tuples be printed with many elements per line, or just one? Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Andrew Gibiansky Sent: 04 January 2014 17:30 To: Erik de Castro Lopo Cc: ghc-devs at haskell.org Subject: Re: Changing GHC Error Message Wrapping Apologize for the broken image formatting. With the code I posted above, I get the following output: Couldn't match expected type `(GHC.Types.Int, GHC.Types.Int, GHC.Types.Int, t0, t10, t20, t30, t40, t50, t60, t70, t80, t90)' with actual type `(t1, t2, t3)' I would like the types to be on the same line, or at least wrapped to a larger number of columns. Does anyone know how to do this, or where in the GHC source this wrapping is done? Thanks! Andrew On Sat, Jan 4, 2014 at 2:55 AM, Erik de Castro Lopo > wrote: Carter Schonwald wrote: > hey andrew, your image link isn't working (i'm using gmail) I think the list software filters out image attachments. Erik -- ---------------------------------------------------------------------- Erik de Castro Lopo http://www.mega-nerd.com/ _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://www.haskell.org/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ggreif at gmail.com Mon Jan 6 13:00:03 2014 From: ggreif at gmail.com (Gabor Greif) Date: Mon, 6 Jan 2014 14:00:03 +0100 Subject: Pattern synonyms for 7.8? In-Reply-To: <59543203684B2244980D7E4057D5FBC148707649@DB3EX14MBXC306.europe.corp.microsoft.com> References: <59543203684B2244980D7E4057D5FBC148707649@DB3EX14MBXC306.europe.corp.microsoft.com> Message-ID: On 1/6/14, Simon Peyton-Jones wrote: > | When I started working on pattern synonyms (#5144) back in August, it > | seemed the GHC 7.8 freeze was imminent, so I was planning for a first > | version in 7.10/8.0 (whatever it will be called). However, since not > | much has happened re: 7.8 since then (at least not much publicly > | visible), and on the other hand, my implementation of pattern synonyms > | is ready, I am now starting to wonder if it could be squeezed into 7.8. > | What are your thoughts on this? > > I'd be interested in others' thoughts on this. Because the implementation > is now pretty solid, it's mostly a non-technical question. Is it better to > include a feature in the release whose design might change a bit in the > light of experience, or to put it only in HEAD for a while? > > We could put it in the release with warnings saying "the exact details, esp > of syntax, might change, but do try it". I'd be ok with that, and we've > done it before. > > What do other people think? As long as the code additions dont't carry the danger of bugs in the solid areas, I'd second this motion. I'd love to start playing around with pattern synonyms. Cheers, Gabor > > Simon > > | -----Original Message----- > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Dr. > | ERDI Gergo > | Sent: 05 January 2014 12:16 > | To: GHC Devs > | Subject: Pattern synonyms for 7.8? > | > | Hi, > | > | When I started working on pattern synonyms (#5144) back in August, it > | seemed the GHC 7.8 freeze was imminent, so I was planning for a first > | version in 7.10/8.0 (whatever it will be called). However, since not > | much has happened re: 7.8 since then (at least not much publicly > | visible), and on the other hand, my implementation of pattern synonyms > | is ready, I am now starting to wonder if it could be squeezed into 7.8. > | What are your thoughts on this? > | > | Thanks, > | Gergo > | _______________________________________________ > | ghc-devs mailing list > | ghc-devs at haskell.org > | http://www.haskell.org/mailman/listinfo/ghc-devs > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > From simonpj at microsoft.com Mon Jan 6 13:42:27 2014 From: simonpj at microsoft.com (Simon Peyton-Jones) Date: Mon, 6 Jan 2014 13:42:27 +0000 Subject: Pattern synonyms for 7.8? In-Reply-To: References: <59543203684B2244980D7E4057D5FBC148707649@DB3EX14MBXC306.europe.corp.microsoft.com> Message-ID: <59543203684B2244980D7E4057D5FBC14870773B@DB3EX14MBXC306.europe.corp.microsoft.com> | > What do other people think? | | As long as the code additions dont't carry the danger of bugs in the | solid areas, I'd second this motion. I'd love to start playing around | with pattern synonyms. Significant changes *always* carry the risk of bugs in solid areas. But compiling Hackage (which I hope someone will try with the 7.8 RC) should flush most of them out. Simon | | Cheers, | | Gabor | | > | > Simon | > | > | -----Original Message----- | > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of | Dr. 
| > | ERDI Gergo | > | Sent: 05 January 2014 12:16 | > | To: GHC Devs | > | Subject: Pattern synonyms for 7.8? | > | | > | Hi, | > | | > | When I started working on pattern synonyms (#5144) back in August, | > | it seemed the GHC 7.8 freeze was imminent, so I was planning for a | > | first version in 7.10/8.0 (whatever it will be called). However, | > | since not much has happened re: 7.8 since then (at least not much | > | publicly visible), and on the other hand, my implementation of | > | pattern synonyms is ready, I am now starting to wonder if it could | be squeezed into 7.8. | > | What are your thoughts on this? | > | | > | Thanks, | > | Gergo | > | _______________________________________________ | > | ghc-devs mailing list | > | ghc-devs at haskell.org | > | http://www.haskell.org/mailman/listinfo/ghc-devs | > _______________________________________________ | > ghc-devs mailing list | > ghc-devs at haskell.org | > http://www.haskell.org/mailman/listinfo/ghc-devs | > From mail at joachim-breitner.de Mon Jan 6 14:20:17 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Mon, 06 Jan 2014 14:20:17 +0000 Subject: Test suite regressions In-Reply-To: <1388797401.18630.24.camel@kirk> References: <1388797401.18630.24.camel@kirk> Message-ID: <1389018017.2952.10.camel@kirk> Hi, I fixed those, travis is green again: https://travis-ci.org/nomeata/ghc-complete/builds Greetings, Joachim Am Samstag, den 04.01.2014, 02:03 +0100 schrieb Joachim Breitner: > Hi, > > travis-ci reports test suite failures. Unfortunately, the builds still > sometimes timeout, so I cannot pin-point the precise change, but someone > pushing today broke > > Unexpected failures: > ghci/scripts T8639 [bad stdout] (ghci) > polykinds T7594 [stderr mismatch] (normal) > https://s3.amazonaws.com/archive.travis-ci.org/jobs/16345412/log.txt > > If you pushed today, please check if you might have broken them. And > please validate your changes before pushing! > > (I have some scripts that make clean validating mostly hassle-free, > based on a dedicated build host where I push to "validate/some-name", > and after lunch I come back and see if the branch was renamed to > "validated/some-name" or "broken/some-name" ? I can share them if you > are interested. Although I believe that we would benefit from a central, > official solution.) > > Greetings, > Joachim > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs -- Joachim ?nomeata? Breitner mail at joachim-breitner.de ? http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0x4743206C Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: This is a digitally signed message part URL: From mail at joachim-breitner.de Mon Jan 6 13:17:57 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Mon, 06 Jan 2014 13:17:57 +0000 Subject: Pattern synonyms for 7.8? In-Reply-To: <59543203684B2244980D7E4057D5FBC148707649@DB3EX14MBXC306.europe.corp.microsoft.com> References: <59543203684B2244980D7E4057D5FBC148707649@DB3EX14MBXC306.europe.corp.microsoft.com> Message-ID: <1389014277.2952.9.camel@kirk> Hi, Am Montag, den 06.01.2014, 12:42 +0000 schrieb Simon Peyton-Jones: > We could put it in the release with warnings saying "the exact > details, esp of syntax, might change, but do try it". 
I'd be ok with > that, and we've done it before. > > What do other people think? This feature may be so good that people will use it in, say, released libraries, disregarding the warning. But it seems that any possible syntax change will only affect those who define pattern synonyms, and not those who use them, and hence only cause work for those disregarding the warning, I?m in favor of inclusion. Greetings, Joachim -- Joachim ?nomeata? Breitner mail at joachim-breitner.de ? http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0x4743206C Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 198 bytes Desc: This is a digitally signed message part URL: From eir at cis.upenn.edu Mon Jan 6 20:32:13 2014 From: eir at cis.upenn.edu (Richard Eisenberg) Date: Mon, 6 Jan 2014 15:32:13 -0500 Subject: Pattern synonyms for 7.8? In-Reply-To: <1389014277.2952.9.camel@kirk> References: <59543203684B2244980D7E4057D5FBC148707649@DB3EX14MBXC306.europe.corp.microsoft.com> <1389014277.2952.9.camel@kirk> Message-ID: <41B0CF1C-C66D-4DDC-8C36-A691B83CF7E0@cis.upenn.edu> +1 for inclusion. This is a nicely opt-in feature, and so (barring any regressions) only those intrepid people who want it will be affected. Richard On Jan 6, 2014, at 8:17 AM, Joachim Breitner wrote: > Hi, > > Am Montag, den 06.01.2014, 12:42 +0000 schrieb Simon Peyton-Jones: >> We could put it in the release with warnings saying "the exact >> details, esp of syntax, might change, but do try it". I'd be ok with >> that, and we've done it before. >> >> What do other people think? > > This feature may be so good that people will use it in, say, released > libraries, disregarding the warning. > > But it seems that any possible syntax change will only affect those who > define pattern synonyms, and not those who use them, and hence only > cause work for those disregarding the warning, I?m in favor of > inclusion. > > Greetings, > Joachim > > > -- > Joachim ?nomeata? Breitner > mail at joachim-breitner.de ? http://www.joachim-breitner.de/ > Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0x4743206C > Debian Developer: nomeata at debian.org > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs From carter.schonwald at gmail.com Mon Jan 6 20:43:26 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Mon, 6 Jan 2014 15:43:26 -0500 Subject: Pattern synonyms for 7.8? In-Reply-To: <41B0CF1C-C66D-4DDC-8C36-A691B83CF7E0@cis.upenn.edu> References: <59543203684B2244980D7E4057D5FBC148707649@DB3EX14MBXC306.europe.corp.microsoft.com> <1389014277.2952.9.camel@kirk> <41B0CF1C-C66D-4DDC-8C36-A691B83CF7E0@cis.upenn.edu> Message-ID: as long as we clearly communicate that there may be refinements / breaking changes subsequently, i'm all for it, unless merging it in slows down 7.8 hitting RC . (its taken long enough for RC to happen... don't want to drag it out further) On Mon, Jan 6, 2014 at 3:32 PM, Richard Eisenberg wrote: > +1 for inclusion. This is a nicely opt-in feature, and so (barring any > regressions) only those intrepid people who want it will be affected. 
> > Richard > > On Jan 6, 2014, at 8:17 AM, Joachim Breitner wrote: > > > Hi, > > > > Am Montag, den 06.01.2014, 12:42 +0000 schrieb Simon Peyton-Jones: > >> We could put it in the release with warnings saying "the exact > >> details, esp of syntax, might change, but do try it". I'd be ok with > >> that, and we've done it before. > >> > >> What do other people think? > > > > This feature may be so good that people will use it in, say, released > > libraries, disregarding the warning. > > > > But it seems that any possible syntax change will only affect those who > > define pattern synonyms, and not those who use them, and hence only > > cause work for those disregarding the warning, I?m in favor of > > inclusion. > > > > Greetings, > > Joachim > > > > > > -- > > Joachim ?nomeata? Breitner > > mail at joachim-breitner.de ? http://www.joachim-breitner.de/ > > Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0x4743206C > > Debian Developer: nomeata at debian.org > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://www.haskell.org/mailman/listinfo/ghc-devs > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ezyang at mit.edu Mon Jan 6 19:22:48 2014 From: ezyang at mit.edu (Edward Z. Yang) Date: Mon, 06 Jan 2014 11:22:48 -0800 Subject: Optimisation flags at -O0 In-Reply-To: References: Message-ID: <1389036040-sup-7831@sabre> All -f flags have a 'no' form, as in '-fno-spec-constr', so you can manually toggle a single optimization on/off. Some optimizations apply even at -O0, see optLevelFlags in compiler/main/DynFlags.hs Edward Excerpts from Chris Heller's message of 2014-01-05 18:43:46 -0800: > I wanted to understand better what `-fspec-constr` does. > > So I compiled the User Guide example with `-O0 -fspec-constr` to isolate > the effects of call-pattern specialization, and nothing else (I used > ghc-core to pretty-print the resulting Core syntax). > > It appears I get the same output wether I use `-fspec-constr` or not. > > Does this mean that compiling with `-O0` even explicitly enabled > optimizations are turned off? > > If that is the case, how does one test an isolated optimization? > > -Chris From awick at galois.com Mon Jan 6 22:26:38 2014 From: awick at galois.com (Adam Wick) Date: Mon, 6 Jan 2014 14:26:38 -0800 Subject: panic when compiling SHA In-Reply-To: References: <52C7DB7E.1030408@gmail.com> <20140104.212236.2151539280544564973.kazu@iij.ad.jp> <20140106.120834.989663188831409811.kazu@iij.ad.jp> Message-ID: <1E4F1419-8C89-4E2A-B0A4-542324AA15BC@galois.com> On Jan 6, 2014, at 12:20 AM, Ben Lippmeier wrote: > On 06/01/2014, at 14:08 , Kazu Yamamoto (????) wrote: >> Ben, >> >>> Note that removing the flag isn't a "solution" to the underlying >>> problem of the intermediate code being awful. Switching to the >>> linear allocator just permits compilation of core code that was >>> worse than before. Now it needs to spill more registers when >>> compiling the same source code. >> >> So, would you reopen #5361 by yourself? >> >> https://ghc.haskell.org/trac/ghc/ticket/5361 > > Not if we just have this one test. I'd be keen to blame excessive use of inline pragmas in the SHA library itself, or excessive optimisation flags. It's not really a bug in GHC until there are two tests that exhibit the same problem. 
The SHA library uses SPECIALIZE, INLINE, and bang patterns in fairly standard ways. There?s nothing too exotic in there, I just basically sprinkled hints in places I thought would be useful, and then backed those up with benchmarking. If GHC simply emitted rotten code in this case, I?d agree: wait for more examples, and put the onus on the developer to make it work better. However, right now, GHC crashes on valid input. Which is a bug. So I?d argue that the ticket should be re-opened. I suppose, alternatively, the documentation on SPECIALIZE, INLINE, and bang patterns could be changed to note that using them is not officially supported. If the problem is pretty fundamental, then perhaps instead of panicking and dying, GHC should instead default back to a worse register allocator. Perhaps it could print a warning when that happens, but that?s optional. That would be an easier way to fix this bug if there are deeper algorithmic problems, or if fixing it for SHA would simply move the failure line a little further down the field. (Obviously this route opens a performance regression on my end, but hey, that?s my problem.) - Adam -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 2199 bytes Desc: not available URL: From karel.gardas at centrum.cz Mon Jan 6 23:39:31 2014 From: karel.gardas at centrum.cz (Karel Gardas) Date: Tue, 7 Jan 2014 00:39:31 +0100 Subject: [PATCH] get rid of "Just" string in __GLASGOW_HASKELL_LLVM__ define for invoked GCC The patch fixes invoked GCC command line -D parameter from -D__GLASGOW_HASKELL_LLVM__=Just to correct -D__GLASGOW_HASKELL_LLVM__=, e.g. -D__GLASGOW_HASKELL_LLVM__=Just 32 fixed to -D__GLASGOW_HASKELL_LLVM__=32 for LLVM 3.2 Message-ID: <1389051571-8184-1-git-send-email-karel.gardas@centrum.cz> --- compiler/main/DriverPipeline.hs | 4 +++- 1 files changed, 3 insertions(+), 1 deletions(-) diff --git a/compiler/main/DriverPipeline.hs b/compiler/main/DriverPipeline.hs index 337778e..f789d44 100644 --- a/compiler/main/DriverPipeline.hs +++ b/compiler/main/DriverPipeline.hs @@ -2086,7 +2086,9 @@ doCpp dflags raw input_fn output_fn = do getBackendDefs :: DynFlags -> IO [String] getBackendDefs dflags | hscTarget dflags == HscLlvm = do llvmVer <- figureLlvmVersion dflags - return [ "-D__GLASGOW_HASKELL_LLVM__="++show llvmVer ] + return $ case llvmVer of + Just n -> [ "-D__GLASGOW_HASKELL_LLVM__="++show n ] + _ -> [] getBackendDefs _ = return [] -- 1.7.3.2 From andrew.gibiansky at gmail.com Tue Jan 7 03:03:46 2014 From: andrew.gibiansky at gmail.com (Andrew Gibiansky) Date: Mon, 6 Jan 2014 22:03:46 -0500 Subject: Changing GHC Error Message Wrapping In-Reply-To: <59543203684B2244980D7E4057D5FBC14870765F@DB3EX14MBXC306.europe.corp.microsoft.com> References: <20140104185507.5a1b9b490d052db8ca579fc3@mega-nerd.com> <59543203684B2244980D7E4057D5FBC14870765F@DB3EX14MBXC306.europe.corp.microsoft.com> Message-ID: Thanks Simon. In general I think multiline tuples should have many elements per line, but honestly the tuple case was a very specific example. If possible, I'd like to change the *overall* wrapping for *all* error messages - how does `sep` know when to break lines? there's clearly a numeric value for the number of columns somewhere, but where is it, and is it user-adjustable? For now I am just hacking around this by special-casing some error messages and "un-doing" the line wrapping by parsing the messages and joining lines back together. 
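In case it helps to see what I mean, the heart of the hack is roughly the following — a simplified sketch only; the real code special-cases particular messages, and the indentation cut-off here is an arbitrary guess:

    -- Glue continuation lines back onto the line they were wrapped from.
    -- A "continuation" is heuristically any line indented deeper than the
    -- cut-off; real GHC messages need a smarter test than this.
    unwrapMessage :: String -> String
    unwrapMessage = unlines . go . lines
      where
        go (l:ls) =
          let (conts, rest) = span isContinuation ls
          in unwords (l : map (dropWhile (== ' ')) conts) : go rest
        go [] = []

        isContinuation s = length (takeWhile (== ' ') s) > 4

It is obviously fragile, which is why I'd much rather adjust the wrapping width (or the choice of sep vs fsep) at the source.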
Thanks, Andrew On Mon, Jan 6, 2014 at 7:44 AM, Simon Peyton-Jones wrote: > I think it?s line 705 in types/TypeRep.lhs > > > > pprTcApp p pp tc tys > > | isTupleTyCon tc && tyConArity tc == length tys > > = pprPromotionQuote tc <> > > tupleParens (tupleTyConSort tc) (sep (punctuate comma (map (pp > TopPrec) tys))) > > > > If you change ?sep? to ?fsep?, you?ll get behaviour more akin to > paragraph-filling (hence the ?f?). Give it a try. You?ll get validation > failure from the testsuite, but you can see whether you think the result is > better or worse. In general, should multi-line tuples be printed with many > elements per line, or just one? > > > > Simon > > > > *From:* ghc-devs [mailto:ghc-devs-bounces at haskell.org] *On Behalf Of *Andrew > Gibiansky > *Sent:* 04 January 2014 17:30 > *To:* Erik de Castro Lopo > *Cc:* ghc-devs at haskell.org > *Subject:* Re: Changing GHC Error Message Wrapping > > > > Apologize for the broken image formatting. > > > > With the code I posted above, I get the following output: > > > > Couldn't match expected type `(GHC.Types.Int, > > GHC.Types.Int, > > GHC.Types.Int, > > t0, > > t10, > > t20, > > t30, > > t40, > > t50, > > t60, > > t70, > > t80, > > t90)' > > with actual type `(t1, t2, t3)' > > > > I would like the types to be on the same line, or at least wrapped to a > larger number of columns. > > > > Does anyone know how to do this, or where in the GHC source this wrapping > is done? > > > > Thanks! > > Andrew > > > > On Sat, Jan 4, 2014 at 2:55 AM, Erik de Castro Lopo > wrote: > > Carter Schonwald wrote: > > > hey andrew, your image link isn't working (i'm using gmail) > > I think the list software filters out image attachments. > > Erik > -- > ---------------------------------------------------------------------- > Erik de Castro Lopo > http://www.mega-nerd.com/ > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fuuzetsu at fuuzetsu.co.uk Tue Jan 7 03:12:38 2014 From: fuuzetsu at fuuzetsu.co.uk (Mateusz Kowalczyk) Date: Tue, 07 Jan 2014 03:12:38 +0000 Subject: Validating with Haddock In-Reply-To: <52BF0209.6020000@fuuzetsu.co.uk> References: <52BF0209.6020000@fuuzetsu.co.uk> Message-ID: <52CB70A6.90105@fuuzetsu.co.uk> On 28/12/13 16:53, Mateusz Kowalczyk wrote: > Greetings, > > I'm trying to validate HEAD and I care that Haddock is built alongside > it (so --no-haddock is not an option). I get the following errors listed > at the bottom of this e-mail. How can I validate so that it all builds? > > From what I understand, to validate I should: > * Have a stable compiler in my PATH (7.6.3) > * go to top level directory > * run ?sh validate? > > Am I missing steps? 
> > == Start post-build package check > Timestamp 2013-12-28 05:00:55 UTC for > /home/shana/.ghc/i386-linux-7.7.20131227/package.conf.d/package.cache > Timestamp 2013-12-28 05:00:55 UTC for > /home/shana/.ghc/i386-linux-7.7.20131227/package.conf.d (same as cache) > using cache: > /home/shana/.ghc/i386-linux-7.7.20131227/package.conf.d/package.cache > Timestamp 2013-12-28 05:22:27 UTC for > /home/shana/programming/ghc/inplace/lib/package.conf.d/package.cache > Timestamp 2013-12-28 05:22:27 UTC for > /home/shana/programming/ghc/inplace/lib/package.conf.d (same as cache) > using cache: > /home/shana/programming/ghc/inplace/lib/package.conf.d/package.cache > There are problems in package xhtml-3000.2.1: > dependency "base-4.7.0.0-578628bf142f9304d05ce5581b5f8d76" doesn't exist > There are problems in package ghc-paths-0.1.0.9: > dependency "base-4.7.0.0-578628bf142f9304d05ce5581b5f8d76" doesn't exist > > The following packages are broken, either because they have a problem > listed above, or because they depend on a broken package. > xhtml-3000.2.1 > ghc-paths-0.1.0.9 > Ping. I need GHC to validate. Here's what I'm trying to achieve: as you might know, I worked on Haddock over summer, rewriting the whole parser, adding tests, fixing bugs, adding features. As Haddock ships with GHC however (and is technically a GHC HQ package), we can not merge it without making sure that GHC can build and validate with the changes. This has been a problem for me and Simon Hengel for quite a while. We now have a branch with preliminary changes on https://github.com/sol/haddock/tree/new-parser . We can not even begin to try to merge the new features if the parser they are built upon is not merged. With the recent calls to push out a 7.8 release candidate, I think we're running out of time to get this in (or is it too late already?). It is not the first time we've been asking for help here! Can someone say what are the steps I should take to get an OK from the GHC HQ that we can push new-parser onto master? If we miss 7.8, the next opportunity will be 7.10, because to get a new Haddock version you also need a new compiler, which people only get during stable releases. There's still a lot of work to be done on Haddock and I think it's understandable that I don't want to do work on what effectively is an ?outdated version?. I'm fine with changes being rejected because they are deemed not good enough for some specific reason, but I'd hate the changes to not make it because I can't get a confirmation from GHC HQ that it's safe to do so. Thanks, hope to hear from someone soon. -- Mateusz K. From carter.schonwald at gmail.com Tue Jan 7 04:17:17 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Mon, 6 Jan 2014 23:17:17 -0500 Subject: Validating with Haddock In-Reply-To: <52CB70A6.90105@fuuzetsu.co.uk> References: <52BF0209.6020000@fuuzetsu.co.uk> <52CB70A6.90105@fuuzetsu.co.uk> Message-ID: Well said points. 1) perhaps opening a ticket on ghc trac for your problem is a good next step. That way folks who are better at reading trac than email can help! 2) if the pattern synonyms branch gets merged in, you'll have to upstream the associated changes to haddock too right? On Monday, January 6, 2014, Mateusz Kowalczyk wrote: > On 28/12/13 16:53, Mateusz Kowalczyk wrote: > > Greetings, > > > > I'm trying to validate HEAD and I care that Haddock is built alongside > > it (so --no-haddock is not an option). I get the following errors listed > > at the bottom of this e-mail. 
How can I validate so that it all builds? > > > > From what I understand, to validate I should: > > * Have a stable compiler in my PATH (7.6.3) > > * go to top level directory > > * run ?sh validate? > > > > Am I missing steps? > > > > == Start post-build package check > > Timestamp 2013-12-28 05:00:55 UTC for > > /home/shana/.ghc/i386-linux-7.7.20131227/package.conf.d/package.cache > > Timestamp 2013-12-28 05:00:55 UTC for > > /home/shana/.ghc/i386-linux-7.7.20131227/package.conf.d (same as cache) > > using cache: > > /home/shana/.ghc/i386-linux-7.7.20131227/package.conf.d/package.cache > > Timestamp 2013-12-28 05:22:27 UTC for > > /home/shana/programming/ghc/inplace/lib/package.conf.d/package.cache > > Timestamp 2013-12-28 05:22:27 UTC for > > /home/shana/programming/ghc/inplace/lib/package.conf.d (same as cache) > > using cache: > > /home/shana/programming/ghc/inplace/lib/package.conf.d/package.cache > > There are problems in package xhtml-3000.2.1: > > dependency "base-4.7.0.0-578628bf142f9304d05ce5581b5f8d76" doesn't > exist > > There are problems in package ghc-paths-0.1.0.9: > > dependency "base-4.7.0.0-578628bf142f9304d05ce5581b5f8d76" doesn't > exist > > > > The following packages are broken, either because they have a problem > > listed above, or because they depend on a broken package. > > xhtml-3000.2.1 > > ghc-paths-0.1.0.9 > > > > Ping. I need GHC to validate. Here's what I'm trying to achieve: as you > might know, I worked on Haddock over summer, rewriting the whole parser, > adding tests, fixing bugs, adding features. As Haddock ships with GHC > however (and is technically a GHC HQ package), we can not merge it > without making sure that GHC can build and validate with the changes. > > This has been a problem for me and Simon Hengel for quite a while. We > now have a branch with preliminary changes on > https://github.com/sol/haddock/tree/new-parser . We can not even begin > to try to merge the new features if the parser they are built upon is > not merged. With the recent calls to push out a 7.8 release candidate, I > think we're running out of time to get this in (or is it too late > already?). It is not the first time we've been asking for help here! > > Can someone say what are the steps I should take to get an OK from the > GHC HQ that we can push new-parser onto master? If we miss 7.8, the next > opportunity will be 7.10, because to get a new Haddock version you also > need a new compiler, which people only get during stable releases. > There's still a lot of work to be done on Haddock and I think it's > understandable that I don't want to do work on what effectively is an > ?outdated version?. I'm fine with changes being rejected because they > are deemed not good enough for some specific reason, but I'd hate the > changes to not make it because I can't get a confirmation from GHC HQ > that it's safe to do so. > > Thanks, hope to hear from someone soon. > > -- > Mateusz K. > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fuuzetsu at fuuzetsu.co.uk Tue Jan 7 05:23:30 2014 From: fuuzetsu at fuuzetsu.co.uk (Mateusz Kowalczyk) Date: Tue, 07 Jan 2014 05:23:30 +0000 Subject: Validating with Haddock In-Reply-To: References: <52BF0209.6020000@fuuzetsu.co.uk> <52CB70A6.90105@fuuzetsu.co.uk> Message-ID: <52CB8F52.7000502@fuuzetsu.co.uk> On 07/01/14 04:17, Carter Schonwald wrote: > Well said points. > 1) perhaps opening a ticket on ghc trac for your problem is a good next > step. That way folks who are better at reading trac than email can help! I'll do so tomorrow if I don't get any replies with tips. > 2) if the pattern synonyms branch gets merged in, you'll have to upstream > the associated changes to haddock too right? It's not a show-stopper if Haddock can't document something. In fact there are many things it can't document already (GADT type constructors are an easy one). If someone writes the pattern synonym stuff for existing Haddock it's not a problem. The proposed changes from new-parser don't touch the parts that pattern synonyms would and if they did, it'd be easy to merge. Usually GHC HQ folk patch up Haddock when they change API so that it can still compile but everything extra tends to be a ?if we can get it to document the new bleeding-edge feature, then great, if not, someone will make a ticket later?. I actually attempted to make Haddock work with some extra stuff that it currently can't document but because it so heavily depends on GHC, I need my GHC tree validating for that too. -- Mateusz K. From carter.schonwald at gmail.com Tue Jan 7 06:07:46 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Tue, 7 Jan 2014 01:07:46 -0500 Subject: Validating with Haddock In-Reply-To: <52CB8F52.7000502@fuuzetsu.co.uk> References: <52BF0209.6020000@fuuzetsu.co.uk> <52CB70A6.90105@fuuzetsu.co.uk> <52CB8F52.7000502@fuuzetsu.co.uk> Message-ID: i'd really recommend asking on #ghc and filing a ticket on trac preemptively. Different people reply better on different channels On Tue, Jan 7, 2014 at 12:23 AM, Mateusz Kowalczyk wrote: > On 07/01/14 04:17, Carter Schonwald wrote: > > Well said points. > > 1) perhaps opening a ticket on ghc trac for your problem is a good next > > step. That way folks who are better at reading trac than email can help! > > I'll do so tomorrow if I don't get any replies with tips. > > > 2) if the pattern synonyms branch gets merged in, you'll have to upstream > > the associated changes to haddock too right? > > It's not a show-stopper if Haddock can't document something. In fact > there are many things it can't document already (GADT type constructors > are an easy one). If someone writes the pattern synonym stuff for > existing Haddock it's not a problem. The proposed changes from > new-parser don't touch the parts that pattern synonyms would and if they > did, it'd be easy to merge. Usually GHC HQ folk patch up Haddock when > they change API so that it can still compile but everything extra tends > to be a ?if we can get it to document the new bleeding-edge feature, > then great, if not, someone will make a ticket later?. > > I actually attempted to make Haddock work with some extra stuff that it > currently can't document but because it so heavily depends on GHC, I > need my GHC tree validating for that too. > > -- > Mateusz K. > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fuuzetsu at fuuzetsu.co.uk Tue Jan 7 07:25:28 2014 From: fuuzetsu at fuuzetsu.co.uk (Mateusz Kowalczyk) Date: Tue, 07 Jan 2014 07:25:28 +0000 Subject: Alex unicode trick Message-ID: <52CBABE8.4040001@fuuzetsu.co.uk> Greetings, When looking at the GHC lexer (Lexer.x), there's: > $unispace = \x05 -- Trick Alex into handling Unicode. See alexGetChar. > $whitechar = [\ \n\r\f\v $unispace] > $white_no_nl = $whitechar # \n > $tab = \t Scrolling down to alexGetChar and alexGetChar', we see the comments: > -- backwards compatibility for Alex 2.x > alexGetChar :: AlexInput -> Maybe (Char,AlexInput) > > -- This version does not squash unicode characters, it is used when > -- lexing strings. > alexGetChar' :: AlexInput -> Maybe (Char,AlexInput) What's the reason for these? I was under the impression that since 3.0, Alex has natively supported unicode. Is it just dead code? Could all the hex $uni* functions be removed? If not, why not? -- Mateusz K. From carter.schonwald at gmail.com Tue Jan 7 07:36:22 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Tue, 7 Jan 2014 02:36:22 -0500 Subject: Alex unicode trick In-Reply-To: <52CBABE8.4040001@fuuzetsu.co.uk> References: <52CBABE8.4040001@fuuzetsu.co.uk> Message-ID: you're probably right, this could be regarded as dead code for ghc 7.8 (esp since alex and happy must be the recent versions to even build ghc HEAD ! ) On Tue, Jan 7, 2014 at 2:25 AM, Mateusz Kowalczyk wrote: > Greetings, > > When looking at the GHC lexer (Lexer.x), there's: > > > $unispace = \x05 -- Trick Alex into handling Unicode. See alexGetChar. > > $whitechar = [\ \n\r\f\v $unispace] > > $white_no_nl = $whitechar # \n > > $tab = \t > > Scrolling down to alexGetChar and alexGetChar', we see the comments: > > > > -- backwards compatibility for Alex 2.x > > alexGetChar :: AlexInput -> Maybe (Char,AlexInput) > > > > -- This version does not squash unicode characters, it is used when > > -- lexing strings. > > alexGetChar' :: AlexInput -> Maybe (Char,AlexInput) > > What's the reason for these? I was under the impression that since > 3.0, Alex has natively supported unicode. Is it just dead code? Could > all the hex $uni* functions be removed? If not, why not? > > -- > Mateusz K. > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kr.angelov at gmail.com Tue Jan 7 08:26:49 2014 From: kr.angelov at gmail.com (Krasimir Angelov) Date: Tue, 7 Jan 2014 09:26:49 +0100 Subject: Alex unicode trick In-Reply-To: References: <52CBABE8.4040001@fuuzetsu.co.uk> Message-ID: Hi, I was recenly looking at this code to see how the lexer decides that a character is a letter, space, etc. The problem is that with Unicode there are hundreds of thousands of characters that are declared to be alphanumeric. Even if they are compressed into a regular expression with a list of ranges there will be still ~390 ranges. The GHC lexer avoids hardcoding this ranges by calling isSpace, isAlpha, etc and then converting this result to a code. Ideally it would be nice if Alex had a predefined macroses corresponding to the Unicode categories, but for now you have to either hard code the ranges with huge regular expressions or use the workaround used in GHC. Is there any other solution? 
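As a rough illustration of the workaround Krasimir describes (a sketch, not GHC's actual Lexer.x code; only the 0x05 space code corresponds to the $unispace macro quoted earlier, the other codes are invented for the example), the classification step looks something like this:

    import Data.Char (isAlpha, isDigit, isSpace, ord)
    import Data.Word (Word8)

    -- Collapse every non-ASCII character to a single pseudo-byte per
    -- character class, so the .x file only needs a handful of codes
    -- instead of hundreds of Unicode ranges.
    classifyChar :: Char -> Word8
    classifyChar c
      | ord c < 128 = fromIntegral (ord c)  -- ASCII passes through unchanged
      | isSpace c   = 0x05                  -- Unicode whitespace (cf. $unispace)
      | isAlpha c   = 0x06                  -- other Unicode letters (invented code)
      | isDigit c   = 0x07                  -- Unicode digits (invented code)
      | otherwise   = 0x08                  -- everything else (invented code)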
Regards, Krasimir 2014/1/7 Carter Schonwald : > you're probably right, this could be regarded as dead code for ghc 7.8 (esp > since alex and happy must be the recent versions to even build ghc HEAD ! ) > > > On Tue, Jan 7, 2014 at 2:25 AM, Mateusz Kowalczyk > wrote: >> >> Greetings, >> >> When looking at the GHC lexer (Lexer.x), there's: >> >> > $unispace = \x05 -- Trick Alex into handling Unicode. See >> > alexGetChar. >> > $whitechar = [\ \n\r\f\v $unispace] >> > $white_no_nl = $whitechar # \n >> > $tab = \t >> >> Scrolling down to alexGetChar and alexGetChar', we see the comments: >> >> >> > -- backwards compatibility for Alex 2.x >> > alexGetChar :: AlexInput -> Maybe (Char,AlexInput) >> > >> > -- This version does not squash unicode characters, it is used when >> > -- lexing strings. >> > alexGetChar' :: AlexInput -> Maybe (Char,AlexInput) >> >> What's the reason for these? I was under the impression that since >> 3.0, Alex has natively supported unicode. Is it just dead code? Could >> all the hex $uni* functions be removed? If not, why not? >> >> -- >> Mateusz K. >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > From simonpj at microsoft.com Tue Jan 7 09:14:01 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 7 Jan 2014 09:14:01 +0000 Subject: Changing GHC Error Message Wrapping In-Reply-To: References: <20140104185507.5a1b9b490d052db8ca579fc3@mega-nerd.com> <59543203684B2244980D7E4057D5FBC14870765F@DB3EX14MBXC306.europe.corp.microsoft.com> Message-ID: <59543203684B2244980D7E4057D5FBC148707DFC@DB3EX14MBXC306.europe.corp.microsoft.com> -dppr-cols=N changes the width of the output page; you could try a large number there. There isn't a setting meaning "infinity", sadly. Simon From: Andrew Gibiansky [mailto:andrew.gibiansky at gmail.com] Sent: 07 January 2014 03:04 To: Simon Peyton Jones Cc: Erik de Castro Lopo; ghc-devs at haskell.org Subject: Re: Changing GHC Error Message Wrapping Thanks Simon. In general I think multiline tuples should have many elements per line, but honestly the tuple case was a very specific example. If possible, I'd like to change the *overall* wrapping for *all* error messages - how does `sep` know when to break lines? there's clearly a numeric value for the number of columns somewhere, but where is it, and is it user-adjustable? For now I am just hacking around this by special-casing some error messages and "un-doing" the line wrapping by parsing the messages and joining lines back together. Thanks, Andrew On Mon, Jan 6, 2014 at 7:44 AM, Simon Peyton-Jones > wrote: I think it's line 705 in types/TypeRep.lhs pprTcApp p pp tc tys | isTupleTyCon tc && tyConArity tc == length tys = pprPromotionQuote tc <> tupleParens (tupleTyConSort tc) (sep (punctuate comma (map (pp TopPrec) tys))) If you change 'sep' to 'fsep', you'll get behaviour more akin to paragraph-filling (hence the "f"). Give it a try. You'll get validation failure from the testsuite, but you can see whether you think the result is better or worse. In general, should multi-line tuples be printed with many elements per line, or just one? 
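For anyone who wants to see the sep/fsep difference outside a GHC build: GHC's pretty-printer is a variant of the Hughes/Peyton Jones combinators, so the standalone pretty package shows the same behaviour. A self-contained sketch (the 30-column width is arbitrary):

    import Text.PrettyPrint

    items :: [Doc]
    items = punctuate comma
              (map text ["Int", "Int", "Int", "t0", "t10", "t20", "t30"])

    -- 'sep' puts everything on one line if it fits, otherwise one item per
    -- line; 'fsep' fills each line with as many items as fit before breaking.
    main :: IO ()
    main = do
      let narrow = style { lineLength = 30 }
      putStrLn (renderStyle narrow (parens (sep  items)))
      putStrLn (renderStyle narrow (parens (fsep items)))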
Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Andrew Gibiansky Sent: 04 January 2014 17:30 To: Erik de Castro Lopo Cc: ghc-devs at haskell.org Subject: Re: Changing GHC Error Message Wrapping Apologize for the broken image formatting. With the code I posted above, I get the following output: Couldn't match expected type `(GHC.Types.Int, GHC.Types.Int, GHC.Types.Int, t0, t10, t20, t30, t40, t50, t60, t70, t80, t90)' with actual type `(t1, t2, t3)' I would like the types to be on the same line, or at least wrapped to a larger number of columns. Does anyone know how to do this, or where in the GHC source this wrapping is done? Thanks! Andrew On Sat, Jan 4, 2014 at 2:55 AM, Erik de Castro Lopo > wrote: Carter Schonwald wrote: > hey andrew, your image link isn't working (i'm using gmail) I think the list software filters out image attachments. Erik -- ---------------------------------------------------------------------- Erik de Castro Lopo http://www.mega-nerd.com/ _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://www.haskell.org/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Tue Jan 7 09:33:33 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 7 Jan 2014 09:33:33 +0000 Subject: Validating with Haddock In-Reply-To: <52CB70A6.90105@fuuzetsu.co.uk> References: <52BF0209.6020000@fuuzetsu.co.uk> <52CB70A6.90105@fuuzetsu.co.uk> Message-ID: <59543203684B2244980D7E4057D5FBC148707E57@DB3EX14MBXC306.europe.corp.microsoft.com> I get a different bunch of "post-build package check" complaints. Does anyone else have a clue what is going on? I do not. Mine are reproduced below. They appear to be non-fatal warnings. I bet it's because I have HADDOCK_DOCS=NO, but if so that should surely suppress all these warnings? It would be great if someone could figure out what the post-build package check is doing and why it isn't working for Mateusz. Simon == Start post-testsuite package check Timestamp Mon Jan 6 17:45:05 GMT 2014 for /5playpen/simonpj/HEAD-2/inplace/lib/package.conf.d/package.cache Timestamp Mon Jan 6 17:45:05 GMT 2014 for /5playpen/simonpj/HEAD-2/inplace/lib/package.conf.d (same as cache) using cache: /5playpen/simonpj/HEAD-2/inplace/lib/package.conf.d/package.cache Warning: haddock-interfaces: /5playpen/simonpj/HEAD-2/libraries/dph/dph-lifted-vseg/dist-install/doc/html/dph-lifted-vseg/dph-lifted-vseg.haddock doesn't exist or isn't a file Warning: haddock-interfaces: /5playpen/simonpj/HEAD-2/libraries/dph/dph-lifted-copy/dist-install/doc/html/dph-lifted-copy/dph-lifted-copy.haddock doesn't exist or isn't a file Warning: haddock-interfaces: /5playpen/simonpj/HEAD-2/libraries/dph/dph-lifted-boxed/dist-install/doc/html/dph-lifted-boxed/dph-lifted-boxed.haddock doesn't exist or isn't a file ...etc | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of | Mateusz Kowalczyk | Sent: 07 January 2014 03:13 | To: ghc-devs at haskell.org | Subject: Re: Validating with Haddock | | On 28/12/13 16:53, Mateusz Kowalczyk wrote: | > Greetings, | > | > I'm trying to validate HEAD and I care that Haddock is built alongside | > it (so --no-haddock is not an option). I get the following errors | > listed at the bottom of this e-mail. How can I validate so that it all | builds? 
| > | > From what I understand, to validate I should: | > * Have a stable compiler in my PATH (7.6.3) | > * go to top level directory | > * run 'sh validate' | > | > Am I missing steps? | > | > == Start post-build package check | > Timestamp 2013-12-28 05:00:55 UTC for | > /home/shana/.ghc/i386-linux-7.7.20131227/package.conf.d/package.cache | > Timestamp 2013-12-28 05:00:55 UTC for | > /home/shana/.ghc/i386-linux-7.7.20131227/package.conf.d (same as | > cache) using cache: | > /home/shana/.ghc/i386-linux-7.7.20131227/package.conf.d/package.cache | > Timestamp 2013-12-28 05:22:27 UTC for | > /home/shana/programming/ghc/inplace/lib/package.conf.d/package.cache | > Timestamp 2013-12-28 05:22:27 UTC for | > /home/shana/programming/ghc/inplace/lib/package.conf.d (same as cache) | > using cache: | > /home/shana/programming/ghc/inplace/lib/package.conf.d/package.cache | > There are problems in package xhtml-3000.2.1: | > dependency "base-4.7.0.0-578628bf142f9304d05ce5581b5f8d76" doesn't | > exist There are problems in package ghc-paths-0.1.0.9: | > dependency "base-4.7.0.0-578628bf142f9304d05ce5581b5f8d76" doesn't | > exist | > | > The following packages are broken, either because they have a problem | > listed above, or because they depend on a broken package. | > xhtml-3000.2.1 | > ghc-paths-0.1.0.9 | > | | Ping. I need GHC to validate. Here's what I'm trying to achieve: as you | might know, I worked on Haddock over summer, rewriting the whole parser, | adding tests, fixing bugs, adding features. As Haddock ships with GHC | however (and is technically a GHC HQ package), we can not merge it | without making sure that GHC can build and validate with the changes. | | This has been a problem for me and Simon Hengel for quite a while. We | now have a branch with preliminary changes on | https://github.com/sol/haddock/tree/new-parser . We can not even begin | to try to merge the new features if the parser they are built upon is | not merged. With the recent calls to push out a 7.8 release candidate, I | think we're running out of time to get this in (or is it too late | already?). It is not the first time we've been asking for help here! | | Can someone say what are the steps I should take to get an OK from the | GHC HQ that we can push new-parser onto master? If we miss 7.8, the next | opportunity will be 7.10, because to get a new Haddock version you also | need a new compiler, which people only get during stable releases. | There's still a lot of work to be done on Haddock and I think it's | understandable that I don't want to do work on what effectively is an | 'outdated version'. I'm fine with changes being rejected because they | are deemed not good enough for some specific reason, but I'd hate the | changes to not make it because I can't get a confirmation from GHC HQ | that it's safe to do so. | | Thanks, hope to hear from someone soon. | | -- | Mateusz K. | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | http://www.haskell.org/mailman/listinfo/ghc-devs From simonpj at microsoft.com Tue Jan 7 09:41:22 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 7 Jan 2014 09:41:22 +0000 Subject: Validating with Haddock In-Reply-To: <52CB70A6.90105@fuuzetsu.co.uk> References: <52BF0209.6020000@fuuzetsu.co.uk> <52CB70A6.90105@fuuzetsu.co.uk> Message-ID: <59543203684B2244980D7E4057D5FBC148707E70@DB3EX14MBXC306.europe.corp.microsoft.com> | Ping. I need GHC to validate. 
Here's what I'm trying to achieve: as you | might know, I worked on Haddock over summer, rewriting the whole parser, | adding tests, fixing bugs, adding features. As Haddock ships with GHC | however (and is technically a GHC HQ package), we can not merge it | without making sure that GHC can build and validate with the changes. | | This has been a problem for me and Simon Hengel for quite a while. We | now have a branch with preliminary changes on | https://github.com/sol/haddock/tree/new-parser . We can not even begin | to try to merge the new features if the parser they are built upon is | not merged. With the recent calls to push out a 7.8 release candidate, I | think we're running out of time to get this in (or is it too late | already?). It is not the first time we've been asking for help here! Actually I didn't know that you were asking to get something into 7.8. Haddock is maintained by David Waern and Simon Marlow, so the question of when to merge your changes into the main Haddock HEAD is up to them, not GHC HQ. We'll simply ship whatever Haddock we have when we cut the release candidate. (I know there is still some fuzz about when that will be; Austin is figuring that out now.) I'm copying David and Simon. Simon From benl at ouroborus.net Tue Jan 7 09:59:26 2014 From: benl at ouroborus.net (Ben Lippmeier) Date: Tue, 7 Jan 2014 20:59:26 +1100 Subject: panic when compiling SHA In-Reply-To: <59543203684B2244980D7E4057D5FBC148707206@DB3EX14MBXC306.europe.corp.microsoft.com> References: <20131227.100716.1812997308262292710.kazu@iij.ad.jp> <501EC3C7-E7EF-4485-879A-404FFFF22F55@ouroborus.net> <52C7DB7E.1030408@gmail.com> <20140104.212236.2151539280544564973.kazu@iij.ad.jp> <59543203684B2244980D7E4057D5FBC148707206@DB3EX14MBXC306.europe.corp.microsoft.com> Message-ID: <2E9BAE47-AE0B-4189-89BC-A01FF8DE499B@ouroborus.net> On 06/01/2014, at 19:43 , Simon Peyton-Jones wrote: > | Note that removing the flag isn't a "solution" to the underlying problem > | of the intermediate code being awful. Switching to the linear allocator > | just permits compilation of core code that was worse than before. Now it > | needs to spill more registers when compiling the same source code. > > In what way is the intermediate code awful? Because the error message from the register allocator tells us that there are over 1000 live variables at a particular point the assembly code, but the "biggest" SHA hashing algorithm (SHA-3) should only need to maintain 25 words of state (says Wikipedia). > How could it be fixed? Someone that cares enough about the SHA library would need to understand why it's producing the intermediate code it does. My gentle suggestion is that when a library developer starts adding INLINE pragmas to their program it becomes their job to understand why the intermediate code is how it is. > Worth opening a ticket for that issue? At the moment it's invisible because the issue appears superficially to be about register allocation. I'd open a ticket against the SHA library saying the choice of optimisation flags / pragmas is probably causing code explosion during compilation. If the developer then decides this is really a problem in GHC I'd want some description of what core transforms they need to happen to achieve good performance. The strategy of "inline everything and hope for the best" is understandable (I've used it!) but only gets you so far... The bug report is like someone saying "GHC can't compile my 100MB core program". 
You can either open a ticket against GHC, or ask "why have you got a 100MB core program?" Ben. From fuuzetsu at fuuzetsu.co.uk Tue Jan 7 10:08:26 2014 From: fuuzetsu at fuuzetsu.co.uk (Mateusz Kowalczyk) Date: Tue, 07 Jan 2014 10:08:26 +0000 Subject: Validating with Haddock In-Reply-To: <59543203684B2244980D7E4057D5FBC148707E70@DB3EX14MBXC306.europe.corp.microsoft.com> References: <52BF0209.6020000@fuuzetsu.co.uk> <52CB70A6.90105@fuuzetsu.co.uk> <59543203684B2244980D7E4057D5FBC148707E70@DB3EX14MBXC306.europe.corp.microsoft.com> Message-ID: <52CBD21A.1020900@fuuzetsu.co.uk> On 07/01/14 09:41, Simon Peyton Jones wrote: > | Ping. I need GHC to validate. Here's what I'm trying to achieve: as you > | might know, I worked on Haddock over summer, rewriting the whole parser, > | adding tests, fixing bugs, adding features. As Haddock ships with GHC > | however (and is technically a GHC HQ package), we can not merge it > | without making sure that GHC can build and validate with the changes. > | > | This has been a problem for me and Simon Hengel for quite a while. We > | now have a branch with preliminary changes on > | https://github.com/sol/haddock/tree/new-parser . We can not even begin > | to try to merge the new features if the parser they are built upon is > | not merged. With the recent calls to push out a 7.8 release candidate, I > | think we're running out of time to get this in (or is it too late > | already?). It is not the first time we've been asking for help here! > > Actually I didn't know that you were asking to get something into 7.8. Haddock is maintained by David Waern and Simon Marlow, so the question of when to merge your changes into the main Haddock HEAD is up to them, not GHC HQ. We'll simply ship whatever Haddock we have when we cut the release candidate. (I know there is still some fuzz about when that will be; Austin is figuring that out now.) > > I'm copying David and Simon. > > Simon > David stepped down and Simon Marlow has a long time ago too! It is now Simon Hengel who maintains it. The issue is that Simon could not get the tree to validate properly on his machine either so I'm here seeking help so that we can push up with confidence. Simon has e-mailed Austin on 22/11/2013 about it but we have been unable to verify that all is fine. Is it by now too late for 7.8? I'm afraid Simon H is away without much access to technology until the 20th. Un-CC'ing Simon M and David W and CC'ing Simon H. -- Mateusz K. From simonpj at microsoft.com Tue Jan 7 10:20:33 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 7 Jan 2014 10:20:33 +0000 Subject: Validating with Haddock In-Reply-To: <52CBD21A.1020900@fuuzetsu.co.uk> References: <52BF0209.6020000@fuuzetsu.co.uk> <52CB70A6.90105@fuuzetsu.co.uk> <59543203684B2244980D7E4057D5FBC148707E70@DB3EX14MBXC306.europe.corp.microsoft.com> <52CBD21A.1020900@fuuzetsu.co.uk> Message-ID: <59543203684B2244980D7E4057D5FBC148707F2F@DB3EX14MBXC306.europe.corp.microsoft.com> | David stepped down and Simon Marlow has a long time ago too! It is now | Simon Hengel who maintains it. OK, well perhaps you can immediately push a change to haddock.cabal to reflect this? That's how we know. | Is it by now too late for 7.8? I'm afraid Simon H is away without much | access to technology until the 20th. Realistically that would push 7.8 RC to the end of Jan. Why would that be better than pushing to head just after the 7.8 release? Will 7.8 users see a big improvement if it was in? What do others think? 
S From benl at ouroborus.net Tue Jan 7 10:27:10 2014 From: benl at ouroborus.net (Ben Lippmeier) Date: Tue, 7 Jan 2014 21:27:10 +1100 Subject: panic when compiling SHA In-Reply-To: <1E4F1419-8C89-4E2A-B0A4-542324AA15BC@galois.com> References: <52C7DB7E.1030408@gmail.com> <20140104.212236.2151539280544564973.kazu@iij.ad.jp> <20140106.120834.989663188831409811.kazu@iij.ad.jp> <1E4F1419-8C89-4E2A-B0A4-542324AA15BC@galois.com> Message-ID: On 07/01/2014, at 9:26 , Adam Wick wrote: >> Not if we just have this one test. I'd be keen to blame excessive use of inline pragmas in the SHA library itself, or excessive optimisation flags. It's not really a bug in GHC until there are two tests that exhibit the same problem. > > The SHA library uses SPECIALIZE, INLINE, and bang patterns in fairly standard ways. There?s nothing too exotic in there, I just basically sprinkled hints in places I thought would be useful, and then backed those up with benchmarking. Ahh. It's the "sprinkled hints in places I thought would be useful" which is what I'm concerned about. If you just add pragmas without understanding their effect on the core program then it'll bite further down the line. Did you compare the object code size as well as wall clock speedup? > If GHC simply emitted rotten code in this case, I?d agree: wait for more examples, and put the onus on the developer to make it work better. However, right now, GHC crashes on valid input. Which is a bug. So I?d argue that the ticket should be re-opened. I suppose, alternatively, the documentation on SPECIALIZE, INLINE, and bang patterns could be changed to note that using them is not officially supported. Sadly, "valid input" isn't a well defined concept in practice. You could write a "valid" 10GB Haskell source file that obeyed the Haskell standard grammar, but I wouldn't expect that to compile either. You could also write small (< 1k) source programs that trigger complexity problems in Hindley-Milner style type inference. You could also use compile-time meta programming (like Template Haskell) to generate intermediate code that is well formed but much too big to compile. The fact that a program obeys a published grammar is not sufficient to expect it to compile with a particular implementation (sorry to say). > If the problem is pretty fundamental, then perhaps instead of panicking and dying, GHC should instead default back to a worse register allocator. Perhaps it could print a warning when that happens, but that?s optional. That would be an easier way to fix this bug if there are deeper algorithmic problems, or if fixing it for SHA would simply move the failure line a little further down the field. (Obviously this route opens a performance regression on my end, but hey, that?s my problem.) Adding an INLINE pragma is akin to using compile-time meta programming. I suspect your meta programming is more broken than GHC in this case, but I'd be happy to be proven otherwise. Right now the panic from the register allocator is all the feedback you've got that something is wrong, and the SHA library is the only one I've seen that causes this problem. See above discussion about "valid input". Ben. 
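For readers without the SHA source to hand, the kind of annotations under discussion look like this; the fragment below is illustrative only and is not taken from the SHA package:

    {-# LANGUAGE BangPatterns #-}
    module Example (step, rounds) where

    import Data.Bits (rotateL, xor)
    import Data.Word (Word64)

    -- INLINE copies the whole right-hand side to every call site, and bang
    -- patterns force the arguments; cheap here, but on large or recursive
    -- definitions INLINE multiplies the Core that later passes (and
    -- eventually the register allocator) have to handle.
    {-# INLINE step #-}
    step :: Word64 -> Word64 -> Word64
    step !s !x = (s `rotateL` 5) `xor` x

    -- SPECIALIZE emits an extra, monomorphic copy of a polymorphic function.
    {-# SPECIALIZE rounds :: Int -> Word64 -> Word64 #-}
    rounds :: Integral n => n -> Word64 -> Word64
    rounds n s0 = foldl step s0 (map fromIntegral [1 .. toInteger n])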
From fuuzetsu at fuuzetsu.co.uk Tue Jan 7 10:29:16 2014 From: fuuzetsu at fuuzetsu.co.uk (Mateusz Kowalczyk) Date: Tue, 07 Jan 2014 10:29:16 +0000 Subject: Validating with Haddock In-Reply-To: <59543203684B2244980D7E4057D5FBC148707F2F@DB3EX14MBXC306.europe.corp.microsoft.com> References: <52BF0209.6020000@fuuzetsu.co.uk> <52CB70A6.90105@fuuzetsu.co.uk> <59543203684B2244980D7E4057D5FBC148707E70@DB3EX14MBXC306.europe.corp.microsoft.com> <52CBD21A.1020900@fuuzetsu.co.uk> <59543203684B2244980D7E4057D5FBC148707F2F@DB3EX14MBXC306.europe.corp.microsoft.com> Message-ID: <52CBD6FC.1080405@fuuzetsu.co.uk> On 07/01/14 10:20, Simon Peyton Jones wrote: > | David stepped down and Simon Marlow has a long time ago too! It is now > | Simon Hengel who maintains it. > > OK, well perhaps you can immediately push a change to haddock.cabal to reflect this? That's how we know. I will try later but I think I don't have permissions. I can at best push to Simon's branch where he would periodically push to the GHC hosted repository (or perhaps it would get pulled from, I do not know). > > | Is it by now too late for 7.8? I'm afraid Simon H is away without much > | access to technology until the 20th. > > Realistically that would push 7.8 RC to the end of Jan. Why would that be better than pushing to head just after the 7.8 release? Will 7.8 users see a big improvement if it was in? What do others think? The changes were mostly there for user benefit. The markup can now be escaped much better. If we can validate what's on Simon's new-parser branch reasonably quickly, we might even be able to push in the new features: new mark up, nested paragraphs, better lists, headers? I'm trying to push for 7.8 because Haddock ships with GHC and 7.8 is the stable release that everyone will be using in couple of months time. If the changes don't get into 7.8, we'll have to wait for the next stable release for the users to benefit. Is this incorrect? I was always under the impression that the only Haddock releases we can reasonably make are with stable GHC releases. Of course, anyone can compile HEAD and generate the docs for their own viewing but for example, Hackage will run stable compiler and all the docs will still be using old Haddock. I'd love to hear that I'm wrong about this and that Haddock releases separate from GHC are possible but I don't think that's the case. > > S > -- Mateusz K. From gergo at erdi.hu Tue Jan 7 11:50:33 2014 From: gergo at erdi.hu (Dr. ERDI Gergo) Date: Tue, 7 Jan 2014 19:50:33 +0800 (SGT) Subject: Pattern synonyms for 7.8? In-Reply-To: References: <59543203684B2244980D7E4057D5FBC148707649@DB3EX14MBXC306.europe.corp.microsoft.com> <1389014277.2952.9.camel@kirk> <41B0CF1C-C66D-4DDC-8C36-A691B83CF7E0@cis.upenn.edu> Message-ID: On Mon, 6 Jan 2014, Carter Schonwald wrote: > as long as we clearly communicate that there may be refinements / breaking changes > subsequently, i'm all for it, unless merging it in slows down 7.8 hitting RC . ?(its > taken long enough for RC to happen... don't want to drag it out further) If that helps, I've updated the version at https://github.com/gergoerdi/ghc (and the two sister repos https://github.com/gergoerdi/ghc-testsuite and https://github.com/gergoerdi/ghc-haddock) to be based on top of master as of today. Bye, Gergo -- .--= ULLA! =-----------------. \ http://gergo.erdi.hu \ `---= gergo at erdi.hu =-------' Elvis is dead and I don't feel so good either. 
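For readers who have not followed the pattern synonyms work, a small sketch of what the extension enables (syntax as it stood at the time, and subject to the refinements mentioned above; the example mirrors the usual Type/Arrow one from the design wiki):

    {-# LANGUAGE PatternSynonyms #-}
    module PatSynSketch where

    data Type = App String [Type]

    -- Bidirectional synonym: usable both as a pattern and as a constructor.
    pattern Arrow t1 t2 = App "->" [t1, t2]

    -- Unidirectional synonym (pattern-only), written with <-.
    pattern Result t <- App "->" [_, t]

    isFunTy :: Type -> Bool
    isFunTy (Arrow _ _) = True
    isFunTy _           = False

    intToBool :: Type
    intToBool = Arrow (App "Int" []) (App "Bool" [])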
From jan.stolarek at p.lodz.pl Tue Jan 7 12:11:12 2014 From: jan.stolarek at p.lodz.pl (Jan Stolarek) Date: Tue, 7 Jan 2014 13:11:12 +0100 Subject: panic when compiling SHA In-Reply-To: References: <52C7DB7E.1030408@gmail.com> <1E4F1419-8C89-4E2A-B0A4-542324AA15BC@galois.com> Message-ID: <201401071311.12056.jan.stolarek@p.lodz.pl> > GHC crashes on valid input. Which is a bug. As Ben pointed out it is conceivable that compiler will not be able handle a correct program. But as a user I would expect GHC to detect such situations (if possible) and display an error message, not crash with a panic (which clearly says this is a bug and should be reported). Janek From benno.fuenfstueck+ghc at gmail.com Tue Jan 7 13:55:44 2014 From: benno.fuenfstueck+ghc at gmail.com (=?ISO-8859-1?Q?Benno_F=FCnfst=FCck?=) Date: Tue, 7 Jan 2014 14:55:44 +0100 Subject: GHC API: Using runGhc twice or from multiple threads? Message-ID: Hello, is the following safe to do? main = do runGhc libdir $ do ... runGhc libdir $ do ... Or will this cause trouble? Is there state that is shared between the two calls? And what about this one: main = do forkIO $ runGhc libdir $ do ... forkIO $ runGhc libdir $ do ... -------------- next part -------------- An HTML attachment was scrubbed... URL: From marlowsd at gmail.com Tue Jan 7 14:38:55 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Tue, 07 Jan 2014 14:38:55 +0000 Subject: Alex unicode trick In-Reply-To: References: <52CBABE8.4040001@fuuzetsu.co.uk> Message-ID: <52CC117F.8010006@gmail.com> Krasimir is right, it would be hard to use Alex's built-in Unicode support because we have to automatically generate the character classes from the Unicode spec somehow. Probably Alex ought to include these as built-in macros, but right now it doesn't. Even if we did have access to the right regular expressions, I'm slightly concerned that the generated state machine might be enormous. Cheers, Simon On 07/01/2014 08:26, Krasimir Angelov wrote: > Hi, > > I was recenly looking at this code to see how the lexer decides that a > character is a letter, space, etc. The problem is that with Unicode > there are hundreds of thousands of characters that are declared to be > alphanumeric. Even if they are compressed into a regular expression > with a list of ranges there will be still ~390 ranges. The GHC lexer > avoids hardcoding this ranges by calling isSpace, isAlpha, etc and > then converting this result to a code. Ideally it would be nice if > Alex had a predefined macroses corresponding to the Unicode > categories, but for now you have to either hard code the ranges with > huge regular expressions or use the workaround used in GHC. Is there > any other solution? > > Regards, > Krasimir > > > 2014/1/7 Carter Schonwald : >> you're probably right, this could be regarded as dead code for ghc 7.8 (esp >> since alex and happy must be the recent versions to even build ghc HEAD ! ) >> >> >> On Tue, Jan 7, 2014 at 2:25 AM, Mateusz Kowalczyk >> wrote: >>> >>> Greetings, >>> >>> When looking at the GHC lexer (Lexer.x), there's: >>> >>>> $unispace = \x05 -- Trick Alex into handling Unicode. See >>>> alexGetChar. >>>> $whitechar = [\ \n\r\f\v $unispace] >>>> $white_no_nl = $whitechar # \n >>>> $tab = \t >>> >>> Scrolling down to alexGetChar and alexGetChar', we see the comments: >>> >>> >>>> -- backwards compatibility for Alex 2.x >>>> alexGetChar :: AlexInput -> Maybe (Char,AlexInput) >>>> >>>> -- This version does not squash unicode characters, it is used when >>>> -- lexing strings. 
>>>> alexGetChar' :: AlexInput -> Maybe (Char,AlexInput) >>> >>> What's the reason for these? I was under the impression that since >>> 3.0, Alex has natively supported unicode. Is it just dead code? Could >>> all the hex $uni* functions be removed? If not, why not? >>> >>> -- >>> Mateusz K. >>> _______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://www.haskell.org/mailman/listinfo/ghc-devs >> >> >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > From austin at well-typed.com Tue Jan 7 14:42:36 2014 From: austin at well-typed.com (Austin Seipp) Date: Tue, 7 Jan 2014 08:42:36 -0600 Subject: Validating with Haddock In-Reply-To: <52CBD6FC.1080405@fuuzetsu.co.uk> References: <52BF0209.6020000@fuuzetsu.co.uk> <52CB70A6.90105@fuuzetsu.co.uk> <59543203684B2244980D7E4057D5FBC148707E70@DB3EX14MBXC306.europe.corp.microsoft.com> <52CBD21A.1020900@fuuzetsu.co.uk> <59543203684B2244980D7E4057D5FBC148707F2F@DB3EX14MBXC306.europe.corp.microsoft.com> <52CBD6FC.1080405@fuuzetsu.co.uk> Message-ID: Hi Mateusz, I remember your email and I believe I responded with the OK at the time - my impression was that it was ready to be merged and would shortly be done after that, but I didn't hear anything back about it. I apologize for my dropping the ball. As for your actual error - ghc-paths is only used in Haddock when it's not built in the GHC tree (as per the cabal file,) so I find it very suspicious that your package check is mentioning it at all (it's not mentioned anywhere else in any GHC sources.) Can you verify that it's there with `./inplace/bin/ghc-pkg list`? I'm not even sure how it could possibly get involved. Finally, can you be more specific about exactly how you tested these changes with your modified Haddock? I presume it was something like: $ ... clone ghc source ... $ cd ghc $ ... get extra stuff with ./sync-all ... $ cd utils/haddock $ ... use git to grab your code from github ... $ cd ../.. $ sh ./validate But I'd like to make sure I know exactly what's going on. I can try testing your branch later today. On Tue, Jan 7, 2014 at 4:29 AM, Mateusz Kowalczyk wrote: > On 07/01/14 10:20, Simon Peyton Jones wrote: >> | David stepped down and Simon Marlow has a long time ago too! It is now >> | Simon Hengel who maintains it. >> >> OK, well perhaps you can immediately push a change to haddock.cabal to reflect this? That's how we know. > > I will try later but I think I don't have permissions. I can at best > push to Simon's branch where he would periodically push to the GHC > hosted repository (or perhaps it would get pulled from, I do not know). > >> >> | Is it by now too late for 7.8? I'm afraid Simon H is away without much >> | access to technology until the 20th. >> >> Realistically that would push 7.8 RC to the end of Jan. Why would that be better than pushing to head just after the 7.8 release? Will 7.8 users see a big improvement if it was in? What do others think? > > The changes were mostly there for user benefit. The markup can now be > escaped much better. If we can validate what's on Simon's new-parser > branch reasonably quickly, we might even be able to push in the new > features: new mark up, nested paragraphs, better lists, headers? 
I'm > trying to push for 7.8 because Haddock ships with GHC and 7.8 is the > stable release that everyone will be using in couple of months time. If > the changes don't get into 7.8, we'll have to wait for the next stable > release for the users to benefit. Is this incorrect? I was always under > the impression that the only Haddock releases we can reasonably make are > with stable GHC releases. Of course, anyone can compile HEAD and > generate the docs for their own viewing but for example, Hackage will > run stable compiler and all the docs will still be using old Haddock. > > I'd love to hear that I'm wrong about this and that Haddock releases > separate from GHC are possible but I don't think that's the case. > >> >> S >> > > > -- > Mateusz K. > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From marlowsd at gmail.com Tue Jan 7 16:04:52 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Tue, 07 Jan 2014 16:04:52 +0000 Subject: High-level Cmm code and stack allocation In-Reply-To: <87fvp3coqr.fsf@gnu.org> References: <87fvp3coqr.fsf@gnu.org> Message-ID: <52CC25A4.8060004@gmail.com> On 04/01/2014 23:26, Herbert Valerio Riedel wrote: > Hello, > > According to Note [Syntax of .cmm files], > > | There are two ways to write .cmm code: > | > | (1) High-level Cmm code delegates the stack handling to GHC, and > | never explicitly mentions Sp or registers. > | > | (2) Low-level Cmm manages the stack itself, and must know about > | calling conventions. > | > | Whether you want high-level or low-level Cmm is indicated by the > | presence of an argument list on a procedure. > > However, while working on integer-gmp I've been noticing in > integer-gmp/cbits/gmp-wrappers.cmm that even though all Cmm procedures > have been converted to high-level Cmm, they still reference the 'Sp' > register, e.g. > > > #define GMP_TAKE1_RET1(name,mp_fun) \ > name (W_ ws1, P_ d1) \ > { \ > W_ mp_tmp1; \ > W_ mp_result1; \ > \ > again: \ > STK_CHK_GEN_N (2 * SIZEOF_MP_INT); \ > MAYBE_GC(again); \ > \ > mp_tmp1 = Sp - 1 * SIZEOF_MP_INT; \ > mp_result1 = Sp - 2 * SIZEOF_MP_INT; \ > ... \ > > > So is this valid high-level Cmm code? What's the proper way to allocate > Stack (and/or Heap) memory from high-level Cmm code? Yes, this is technically wrong but luckily works. I'd very much like to have a better solution, preferably one that doesn't add any extra overhead. The problem here is that we need to allocate a couple of temporary words and take their address; that's an unusual thing to do in Cmm, so it only occurs in a few places (mainly interacting with gmp). Usually if you want some temporary storage you can use local variables or some heap-allocated memory. 
Cheers, Simon From marlowsd at gmail.com Tue Jan 7 16:06:03 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Tue, 07 Jan 2014 16:06:03 +0000 Subject: High-level Cmm code and stack allocation In-Reply-To: <87bnzqabyb.fsf@gmail.com> References: <87fvp3coqr.fsf@gnu.org> <87bnzqabyb.fsf@gmail.com> Message-ID: <52CC25EB.7060802@gmail.com> On 05/01/2014 11:46, Herbert Valerio Riedel wrote: > On 2014-01-05 at 01:15:53 +0100, Carter Schonwald wrote: >> hey Herbert, >> I generally start with looking at the primops.cmm file for examples >> https://github.com/ghc/ghc/blob/master/rts/PrimOps.cmm#L572-L588 > > stg_decodeFloatzuIntzh ( F_ arg ) > { > W_ p, mp_tmp1, W_ mp_tmp_w; > > STK_CHK_GEN_N (WDS(2)); > > mp_tmp1 = Sp - WDS(1); > mp_tmp_w = Sp - WDS(2); > > ccall __decodeFloat_Int(mp_tmp1 "ptr", mp_tmp_w "ptr", arg); > > return (W_[mp_tmp1], W_[mp_tmp_w]); > } > > that function in particular is compiled to > > [stg_decodeFloatzuIntzh() // [F1] > { info_tbl: [] > stack_info: arg_space: 8 updfr_space: Just 8 > } > {offset > cc: _c0::F32 = F1; > goto c4; > c4: if ((old + 0) - 2 * 8 < SpLim) goto c6; else goto c7; > c6: I64[(young + 8)] = c5; > call stg_gc_noregs() returns to c5, args: 8, res: 8, upd: 8; > c5: goto c4; > c7: _c2::I64 = Sp - 1 * 8; > _c3::I64 = Sp - 2 * 8; > _c8::I64 = __decodeFloat_Int; > _c9::I64 = _c2::I64; > _ca::I64 = _c3::I64; > _cb::F32 = _c0::F32; > call "ccall" arg hints: [PtrHint, PtrHint,] result hints: [] > (_c8::I64)(_c9::I64, _ca::I64, _cb::F32); > R2 = I64[_c3::I64]; > R1 = I64[_c2::I64]; > call (P64[(old + 8)])(R2, R1) args: 8, res: 0, upd: 8; > } > }] > > But I see no effort to adjust Sp (i.e. `Sp = Sp - 16`) right before the > call to __decodeFloat_Int; how is it ensured that __decodeFloat_Int > doesn't use the locations Sp-8 and Sp-16 for as its local stack? __decodeFloat_Int is a C function, so it will not touch the Haskell stack. Cheers, Simon From austin at well-typed.com Tue Jan 7 16:11:13 2014 From: austin at well-typed.com (Austin Seipp) Date: Tue, 7 Jan 2014 10:11:13 -0600 Subject: Pattern synonyms for 7.8? In-Reply-To: References: <59543203684B2244980D7E4057D5FBC148707649@DB3EX14MBXC306.europe.corp.microsoft.com> <1389014277.2952.9.camel@kirk> <41B0CF1C-C66D-4DDC-8C36-A691B83CF7E0@cis.upenn.edu> Message-ID: Hi Erdi, After talking with Simon today, we indeed think this can make it into 7.8. I'll go ahead and try to do it today. Thanks for rebasing all your patches - it'll make it much easier! On Tue, Jan 7, 2014 at 5:50 AM, Dr. ERDI Gergo wrote: > On Mon, 6 Jan 2014, Carter Schonwald wrote: > >> as long as we clearly communicate that there may be refinements / breaking >> changes >> subsequently, i'm all for it, unless merging it in slows down 7.8 hitting >> RC . (its >> taken long enough for RC to happen... don't want to drag it out further) > > > If that helps, I've updated the version at https://github.com/gergoerdi/ghc > (and the two sister repos https://github.com/gergoerdi/ghc-testsuite and > https://github.com/gergoerdi/ghc-haddock) to be based on top of master as of > today. > > Bye, > Gergo > > -- > > .--= ULLA! =-----------------. > \ http://gergo.erdi.hu \ > `---= gergo at erdi.hu =-------' > Elvis is dead and I don't feel so good either. 
> _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From hvr at gnu.org Tue Jan 7 16:14:56 2014 From: hvr at gnu.org (Herbert Valerio Riedel) Date: Tue, 07 Jan 2014 17:14:56 +0100 Subject: High-level Cmm code and stack allocation In-Reply-To: <52CC25A4.8060004@gmail.com> (Simon Marlow's message of "Tue, 07 Jan 2014 16:04:52 +0000") References: <87fvp3coqr.fsf@gnu.org> <52CC25A4.8060004@gmail.com> Message-ID: <87fvozixa7.fsf@gnu.org> Hello Simon, On 2014-01-07 at 17:04:52 +0100, Simon Marlow wrote: [...] > Yes, this is technically wrong but luckily works. ...but only as long as the code-generator doesn't try to push something on the stack, like e.g. when performing native 'call's which need to push the return-location on the stack...? > I'd very much like > to have a better solution, preferably one that doesn't add any extra > overhead. I see... I've noticed there's a 'push() { ... }' construct that allows to push items on the stack; couldn't we have generalized version of that, taking a size-argument, declaring that specified amount of stack-space is user-allocated/controlled within the '{ ... }' scope? Greetings, hvr From marlowsd at gmail.com Tue Jan 7 16:20:10 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Tue, 07 Jan 2014 16:20:10 +0000 Subject: High-level Cmm code and stack allocation In-Reply-To: <87fvozixa7.fsf@gnu.org> References: <87fvp3coqr.fsf@gnu.org> <52CC25A4.8060004@gmail.com> <87fvozixa7.fsf@gnu.org> Message-ID: <52CC293A.3040903@gmail.com> On 07/01/2014 16:14, Herbert Valerio Riedel wrote: > Hello Simon, > > On 2014-01-07 at 17:04:52 +0100, Simon Marlow wrote: > > [...] > >> Yes, this is technically wrong but luckily works. > > ...but only as long as the code-generator doesn't try to push something > on the stack, like e.g. when performing native 'call's which need to > push the return-location on the stack...? Right - in principle the code generator is in control of the stack so it can move the stack pointer whenever it likes, but in practice we know it only does this in certain places, like when making native calls, so these naughty functions just avoid doing that. >> I'd very much like >> to have a better solution, preferably one that doesn't add any extra >> overhead. > > I see... I've noticed there's a 'push() { ... }' construct that allows > to push items on the stack; couldn't we have generalized version of > that, taking a size-argument, declaring that specified amount of > stack-space is user-allocated/controlled within the '{ ... }' scope? We could push a stack frame, like we do for an update frame, but the problem is that we need a way to take the address of those stack locations. Taking the address of stack locations is also dodgy, because stacks move (say, during a native call). So it would still be unsafe. Also pushing a stack frame would incur an extra memory write for the info pointer, which is annoying. 
Cheers, Simon From austin at well-typed.com Tue Jan 7 16:21:13 2014 From: austin at well-typed.com (Austin Seipp) Date: Tue, 7 Jan 2014 10:21:13 -0600 Subject: Idea for improving communication between devs and potential devs In-Reply-To: <59543203684B2244980D7E4057D5FBC1487073E6@DB3EX14MBXC306.europe.corp.microsoft.com> References: <1388782638.65533.YahooMailNeo@web164004.mail.gq1.yahoo.com> <59543203684B2244980D7E4057D5FBC1487073E6@DB3EX14MBXC306.europe.corp.microsoft.com> Message-ID: Howard, Something like LWN would be neat. And improving the wiki (especially with documentation about GHC itself) would be great. IMO I also think the blog would work well for this (it's reasonably integrated into the wiki,) but I agree we should perhaps have some other people contribute and write for it if that was the case, for diversity. Of course, people also like to write on their own blogs about new developments... As a suggestion, one really easy thing to do is this: subscribe to ghc-commits at haskell.org, and just read all the commits that come in. No, you will not understand all of them immediately, and don't waste an hour per commit, but just read over them. If you follow the development closely, it will help immensely in your quest to understand 'the big picture' (this is essentially how I started - by lurking the commits list.) It'll also give you lots of great starting points for things to talk about. As always, the wiki is open and free to be edited - so please feel free to write some stuff down and send it with a summary email to the list (and be sure to include glasgow-haskell-users at haskell.org, so users can see it too.) On Mon, Jan 6, 2014 at 4:40 AM, Simon Peyton-Jones wrote: > Howard > > Thanks... improving the wiki would be a great contribution. If you point me to new material that you write, I'd be happy to review it. > > Simon > > | -----Original Message----- > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Howard > | B. Golden > | Sent: 03 January 2014 20:57 > | To: ghc-devs at haskell.org > | Subject: Idea for improving communication between devs and potential > | devs > | > | Hi, > | > | I'd like to get involved in developing, but I recognize the learning > | curve involved. To get started I'd like to improve the Trac wiki > | documentation. Part of this would include additional documentation of > | less-documented parts of the compiler and RTS. In addition, I'd like to > | start some sort of "what's new" that boils down the GHC Dev mailing list > | discussion as LWN does for the Linux kernel mailing list. I don't > | imagine that I can do this all by myself, but I hope this idea would > | resonate with others looking to get started as well. This is meant to be > | more frequent and more detailed than what HCAR does for GHC now, though > | I don't expect anyone can do it weekly. > | > | Please let me know what you think about this idea. I'm open to any > | suggestions for improving it also. > | > | Howard B. 
Golden > | Northridge, CA, USA > | _______________________________________________ > | ghc-devs mailing list > | ghc-devs at haskell.org > | http://www.haskell.org/mailman/listinfo/ghc-devs > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From andrew.gibiansky at gmail.com Tue Jan 7 16:29:53 2014 From: andrew.gibiansky at gmail.com (Andrew Gibiansky) Date: Tue, 7 Jan 2014 11:29:53 -0500 Subject: Changing GHC Error Message Wrapping In-Reply-To: <59543203684B2244980D7E4057D5FBC148707DFC@DB3EX14MBXC306.europe.corp.microsoft.com> References: <20140104185507.5a1b9b490d052db8ca579fc3@mega-nerd.com> <59543203684B2244980D7E4057D5FBC14870765F@DB3EX14MBXC306.europe.corp.microsoft.com> <59543203684B2244980D7E4057D5FBC148707DFC@DB3EX14MBXC306.europe.corp.microsoft.com> Message-ID: Simon, That's exactly what I'm looking for! But it seems that doing it dynamically in the GHC API doesn't work (as in my first email where I tried to adjust pprCols via setSessionDynFlags). I'm going to look into the source as what ppr-cols=N actually sets and probably file a bug - because this seems like buggy behaviour... Andrew On Tue, Jan 7, 2014 at 4:14 AM, Simon Peyton Jones wrote: > -dppr-cols=N changes the width of the output page; you could try a large > number there. There isn?t a setting meaning ?infinity?, sadly. > > > > Simon > > > > *From:* Andrew Gibiansky [mailto:andrew.gibiansky at gmail.com] > *Sent:* 07 January 2014 03:04 > *To:* Simon Peyton Jones > *Cc:* Erik de Castro Lopo; ghc-devs at haskell.org > > *Subject:* Re: Changing GHC Error Message Wrapping > > > > Thanks Simon. > > > > In general I think multiline tuples should have many elements per line, > but honestly the tuple case was a very specific example. If possible, I'd > like to change the *overall* wrapping for *all* error messages - how does > `sep` know when to break lines? there's clearly a numeric value for the > number of columns somewhere, but where is it, and is it user-adjustable? > > > > For now I am just hacking around this by special-casing some error > messages and "un-doing" the line wrapping by parsing the messages and > joining lines back together. > > > > Thanks, > > Andrew > > > > On Mon, Jan 6, 2014 at 7:44 AM, Simon Peyton-Jones > wrote: > > I think it?s line 705 in types/TypeRep.lhs > > > > pprTcApp p pp tc tys > > | isTupleTyCon tc && tyConArity tc == length tys > > = pprPromotionQuote tc <> > > tupleParens (tupleTyConSort tc) (sep (punctuate comma (map (pp > TopPrec) tys))) > > > > If you change ?sep? to ?fsep?, you?ll get behaviour more akin to > paragraph-filling (hence the ?f?). Give it a try. You?ll get validation > failure from the testsuite, but you can see whether you think the result is > better or worse. In general, should multi-line tuples be printed with many > elements per line, or just one? > > > > Simon > > > > *From:* ghc-devs [mailto:ghc-devs-bounces at haskell.org] *On Behalf Of *Andrew > Gibiansky > *Sent:* 04 January 2014 17:30 > *To:* Erik de Castro Lopo > *Cc:* ghc-devs at haskell.org > *Subject:* Re: Changing GHC Error Message Wrapping > > > > Apologize for the broken image formatting. 
> > > > With the code I posted above, I get the following output: > > > > Couldn't match expected type `(GHC.Types.Int, > > GHC.Types.Int, > > GHC.Types.Int, > > t0, > > t10, > > t20, > > t30, > > t40, > > t50, > > t60, > > t70, > > t80, > > t90)' > > with actual type `(t1, t2, t3)' > > > > I would like the types to be on the same line, or at least wrapped to a > larger number of columns. > > > > Does anyone know how to do this, or where in the GHC source this wrapping > is done? > > > > Thanks! > > Andrew > > > > On Sat, Jan 4, 2014 at 2:55 AM, Erik de Castro Lopo > wrote: > > Carter Schonwald wrote: > > > hey andrew, your image link isn't working (i'm using gmail) > > I think the list software filters out image attachments. > > Erik > -- > ---------------------------------------------------------------------- > Erik de Castro Lopo > http://www.mega-nerd.com/ > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicolas.frisby at gmail.com Tue Jan 7 16:42:46 2014 From: nicolas.frisby at gmail.com (Nicolas Frisby) Date: Tue, 7 Jan 2014 10:42:46 -0600 Subject: Pattern synonyms for 7.8? In-Reply-To: References: <59543203684B2244980D7E4057D5FBC148707649@DB3EX14MBXC306.europe.corp.microsoft.com> <1389014277.2952.9.camel@kirk> <41B0CF1C-C66D-4DDC-8C36-A691B83CF7E0@cis.upenn.edu> Message-ID: Thanks everyone for putting the effort in on this. I'm looking forward to this extension! ... And I acknowledge all of the "caveat emptor"-s :) On Tue, Jan 7, 2014 at 10:11 AM, Austin Seipp wrote: > Hi Erdi, > > After talking with Simon today, we indeed think this can make it into 7.8. > > I'll go ahead and try to do it today. Thanks for rebasing all your > patches - it'll make it much easier! > > > > On Tue, Jan 7, 2014 at 5:50 AM, Dr. ERDI Gergo wrote: > > On Mon, 6 Jan 2014, Carter Schonwald wrote: > > > >> as long as we clearly communicate that there may be refinements / > breaking > >> changes > >> subsequently, i'm all for it, unless merging it in slows down 7.8 > hitting > >> RC . (its > >> taken long enough for RC to happen... don't want to drag it out further) > > > > > > If that helps, I've updated the version at > https://github.com/gergoerdi/ghc > > (and the two sister repos https://github.com/gergoerdi/ghc-testsuite and > > https://github.com/gergoerdi/ghc-haddock) to be based on top of master > as of > > today. > > > > Bye, > > Gergo > > > > -- > > > > .--= ULLA! =-----------------. > > \ http://gergo.erdi.hu \ > > `---= gergo at erdi.hu =-------' > > Elvis is dead and I don't feel so good either. > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://www.haskell.org/mailman/listinfo/ghc-devs > > > > > > -- > Regards, > > Austin Seipp, Haskell Consultant > Well-Typed LLP, http://www.well-typed.com/ > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Tue Jan 7 17:19:04 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 7 Jan 2014 17:19:04 +0000 Subject: Pattern synonyms for 7.8? 
In-Reply-To: References: <59543203684B2244980D7E4057D5FBC148707649@DB3EX14MBXC306.europe.corp.microsoft.com> <1389014277.2952.9.camel@kirk> <41B0CF1C-C66D-4DDC-8C36-A691B83CF7E0@cis.upenn.edu> Message-ID: <59543203684B2244980D7E4057D5FBC148708411@DB3EX14MBXC306.europe.corp.microsoft.com> BTW, Gergo, did you write user-manual documentation? I think so, but if not we need it! S | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Austin | Seipp | Sent: 07 January 2014 16:11 | To: Dr. ERDI Gergo | Cc: Joachim Breitner; ghc-devs at haskell.org | Subject: Re: Pattern synonyms for 7.8? | | Hi Erdi, | | After talking with Simon today, we indeed think this can make it into | 7.8. | | I'll go ahead and try to do it today. Thanks for rebasing all your | patches - it'll make it much easier! | | | | On Tue, Jan 7, 2014 at 5:50 AM, Dr. ERDI Gergo wrote: | > On Mon, 6 Jan 2014, Carter Schonwald wrote: | > | >> as long as we clearly communicate that there may be refinements / | >> breaking changes subsequently, i'm all for it, unless merging it in | >> slows down 7.8 hitting RC . (its taken long enough for RC to | >> happen... don't want to drag it out further) | > | > | > If that helps, I've updated the version at | > https://github.com/gergoerdi/ghc (and the two sister repos | > https://github.com/gergoerdi/ghc-testsuite and | > https://github.com/gergoerdi/ghc-haddock) to be based on top of master | > as of today. | > | > Bye, | > Gergo | > | > -- | > | > .--= ULLA! =-----------------. | > \ http://gergo.erdi.hu \ | > `---= gergo at erdi.hu =-------' | > Elvis is dead and I don't feel so good either. | > _______________________________________________ | > ghc-devs mailing list | > ghc-devs at haskell.org | > http://www.haskell.org/mailman/listinfo/ghc-devs | > | | | | -- | Regards, | | Austin Seipp, Haskell Consultant | Well-Typed LLP, http://www.well-typed.com/ | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | http://www.haskell.org/mailman/listinfo/ghc-devs From carter.schonwald at gmail.com Tue Jan 7 17:46:48 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Tue, 7 Jan 2014 12:46:48 -0500 Subject: Pattern synonyms for 7.8? In-Reply-To: <59543203684B2244980D7E4057D5FBC148708411@DB3EX14MBXC306.europe.corp.microsoft.com> References: <59543203684B2244980D7E4057D5FBC148707649@DB3EX14MBXC306.europe.corp.microsoft.com> <1389014277.2952.9.camel@kirk> <41B0CF1C-C66D-4DDC-8C36-A691B83CF7E0@cis.upenn.edu> <59543203684B2244980D7E4057D5FBC148708411@DB3EX14MBXC306.europe.corp.microsoft.com> Message-ID: Btw if new haddock gets merged in, the pattern synonym support has to get upstreamed right? On Tuesday, January 7, 2014, Simon Peyton Jones wrote: > BTW, Gergo, did you write user-manual documentation? I think so, but if > not we need it! > > S > > | -----Original Message----- > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org ] On > Behalf Of Austin > | Seipp > | Sent: 07 January 2014 16:11 > | To: Dr. ERDI Gergo > | Cc: Joachim Breitner; ghc-devs at haskell.org > | Subject: Re: Pattern synonyms for 7.8? > | > | Hi Erdi, > | > | After talking with Simon today, we indeed think this can make it into > | 7.8. > | > | I'll go ahead and try to do it today. Thanks for rebasing all your > | patches - it'll make it much easier! > | > | > | > | On Tue, Jan 7, 2014 at 5:50 AM, Dr. 
ERDI Gergo > > wrote: > | > On Mon, 6 Jan 2014, Carter Schonwald wrote: > | > > | >> as long as we clearly communicate that there may be refinements / > | >> breaking changes subsequently, i'm all for it, unless merging it in > | >> slows down 7.8 hitting RC . (its taken long enough for RC to > | >> happen... don't want to drag it out further) > | > > | > > | > If that helps, I've updated the version at > | > https://github.com/gergoerdi/ghc (and the two sister repos > | > https://github.com/gergoerdi/ghc-testsuite and > | > https://github.com/gergoerdi/ghc-haddock) to be based on top of master > | > as of today. > | > > | > Bye, > | > Gergo > | > > | > -- > | > > | > .--= ULLA! =-----------------. > | > \ http://gergo.erdi.hu \ > | > `---= gergo at erdi.hu =-------' > | > Elvis is dead and I don't feel so good either. > | > _______________________________________________ > | > ghc-devs mailing list > | > ghc-devs at haskell.org > | > http://www.haskell.org/mailman/listinfo/ghc-devs > | > > | > | > | > | -- > | Regards, > | > | Austin Seipp, Haskell Consultant > | Well-Typed LLP, http://www.well-typed.com/ > | _______________________________________________ > | ghc-devs mailing list > | ghc-devs at haskell.org > | http://www.haskell.org/mailman/listinfo/ghc-devs > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fuuzetsu at fuuzetsu.co.uk Tue Jan 7 18:06:31 2014 From: fuuzetsu at fuuzetsu.co.uk (Mateusz Kowalczyk) Date: Tue, 07 Jan 2014 18:06:31 +0000 Subject: Validating with Haddock In-Reply-To: References: Message-ID: <52CC4227.30204@fuuzetsu.co.uk> On 07/01/14 13:57, Simon Hengel wrote: > Hey! > Sorry for not being of much help with this right now. Regarding Haddock releases I think we updated the version used for Hackage independently of ghc before. Cheers. > Oh, if that's the case then I no longer feel that it's urgent that we get it into 7.8 considering Hackage is where the bulk of the docs are. -- Mateusz K. From fuuzetsu at fuuzetsu.co.uk Tue Jan 7 18:13:26 2014 From: fuuzetsu at fuuzetsu.co.uk (Mateusz Kowalczyk) Date: Tue, 07 Jan 2014 18:13:26 +0000 Subject: Validating with Haddock In-Reply-To: References: <52BF0209.6020000@fuuzetsu.co.uk> <52CB70A6.90105@fuuzetsu.co.uk> <59543203684B2244980D7E4057D5FBC148707E70@DB3EX14MBXC306.europe.corp.microsoft.com> <52CBD21A.1020900@fuuzetsu.co.uk> <59543203684B2244980D7E4057D5FBC148707F2F@DB3EX14MBXC306.europe.corp.microsoft.com> <52CBD6FC.1080405@fuuzetsu.co.uk> Message-ID: <52CC43C6.9020701@fuuzetsu.co.uk> On 07/01/14 14:42, Austin Seipp wrote: > Hi Mateusz, > > I remember your email and I believe I responded with the OK at the > time - my impression was that it was ready to be merged and would > shortly be done after that, but I didn't hear anything back about it. > I apologize for my dropping the ball. We contacted you because we thought it wouldn't be this much trouble to get an OK from the validate process. The code was technically more or less ready months ago, although Simon has been making some changes here and there. > As for your actual error - ghc-paths is only used in Haddock when it's > not built in the GHC tree (as per the cabal file,) so I find it very > suspicious that your package check is mentioning it at all (it's not > mentioned anywhere else in any GHC sources.) Can you verify that it's > there with `./inplace/bin/ghc-pkg list`? 
I'm not even sure how it
> could possibly get involved.
>
> Finally, can you be more specific about exactly how you tested these
> changes with your modified Haddock? I presume it was something like:

I ran it on an as-is tree so that I could compare the results from before
and after I put my changes in. I had just run validate yesterday again
(after sync-all) and I no longer get this package failure!

> $ ... clone ghc source ...
> $ cd ghc
> $ ... get extra stuff with ./sync-all ...
> $ cd utils/haddock
> $ ... use git to grab your code from github ...
> $ cd ../..
> $ sh ./validate

As I mentioned, it was on an unchanged tree, but this is how I'll do it
when testing the changes.

> But I'd like to make sure I know exactly what's going on. I can try
> testing your branch later today.

I think the original issue is now gone. I do get 8 unexpected failures
and about 11000+ skipped tests! Is this normal? Should I be filing bugs?
Should I create a separate thread? Can someone look at my log? You can
download it from [1]; it's about 8MB. If GHC itself writes the test
information out to some files you'd prefer, please let me know - I simply
redirected all output from the validate script into a log.

[1]: http://fuuzetsu.co.uk/misc/validatelog

--
Mateusz K.

From fuuzetsu at fuuzetsu.co.uk Tue Jan 7 18:18:33 2014
From: fuuzetsu at fuuzetsu.co.uk (Mateusz Kowalczyk)
Date: Tue, 07 Jan 2014 18:18:33 +0000
Subject: Alex unicode trick
In-Reply-To: <52CC117F.8010006@gmail.com>
References: <52CBABE8.4040001@fuuzetsu.co.uk> <52CC117F.8010006@gmail.com>
Message-ID: <52CC44F9.6010201@fuuzetsu.co.uk>

On 07/01/14 14:38, Simon Marlow wrote:
> Krasimir is right, it would be hard to use Alex's built-in Unicode
> support because we have to automatically generate the character classes
> from the Unicode spec somehow. Probably Alex ought to include these as
> built-in macros, but right now it doesn't.
>
> Even if we did have access to the right regular expressions, I'm
> slightly concerned that the generated state machine might be enormous.
>
> Cheers,
> Simon
>
> On 07/01/2014 08:26, Krasimir Angelov wrote:
>> Hi,
>>
>> I was recently looking at this code to see how the lexer decides that a
>> character is a letter, space, etc. The problem is that with Unicode
>> there are hundreds of thousands of characters that are declared to be
>> alphanumeric. Even if they are compressed into a regular expression
>> with a list of ranges, there will still be ~390 ranges. The GHC lexer
>> avoids hardcoding these ranges by calling isSpace, isAlpha, etc. and
>> then converting the result to a code. Ideally it would be nice if Alex
>> had predefined macros corresponding to the Unicode categories, but for
>> now you have to either hard-code the ranges with huge regular
>> expressions or use the workaround used in GHC. Is there any other
>> solution?
>>
>> Regards,
>> Krasimir
>>

Ah, I think I understand now. If this is the case, at least the
'alexGetChar' could be removed, right? Is Alex 2.x compatibility
necessary for any reason whatsoever?

--
Mateusz K.
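
(The workaround Krasimir and Simon describe - classify each character with
Data.Char and hand the lexer a small surrogate "class byte" instead of
teaching it every Unicode range - can be sketched roughly as follows. The
particular codes are invented for illustration; GHC's Lexer.x uses its own
encoding.)

    import Data.Char (isAlpha, isSpace, ord)
    import Data.Word (Word8)

    -- Squash the full Unicode character set down to a handful of byte
    -- codes, so a Unicode-unaware lexer only has to match on those.
    classifyChar :: Char -> Word8
    classifyChar c
      | c <= '\x7f' = fromIntegral (ord c)  -- plain ASCII passes through untouched
      | isAlpha c   = 0xF8                  -- any other alphabetic character
      | isSpace c   = 0xF9                  -- any other whitespace
      | otherwise   = 0xFA                  -- everything else

    main :: IO ()
    main = print (map classifyChar "aA7 \955!")
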
From austin at well-typed.com Tue Jan 7 18:21:11 2014 From: austin at well-typed.com (Austin Seipp) Date: Tue, 7 Jan 2014 12:21:11 -0600 Subject: Validating with Haddock In-Reply-To: <52CC43C6.9020701@fuuzetsu.co.uk> References: <52BF0209.6020000@fuuzetsu.co.uk> <52CB70A6.90105@fuuzetsu.co.uk> <59543203684B2244980D7E4057D5FBC148707E70@DB3EX14MBXC306.europe.corp.microsoft.com> <52CBD21A.1020900@fuuzetsu.co.uk> <59543203684B2244980D7E4057D5FBC148707F2F@DB3EX14MBXC306.europe.corp.microsoft.com> <52CBD6FC.1080405@fuuzetsu.co.uk> <52CC43C6.9020701@fuuzetsu.co.uk> Message-ID: Yes, the skipped tests are normal. The testsuite has a concept of tests being built a certain 'way' - for example, you might test a piece of code by making sure it works compiled with -threaded, non-threaded, profiling, the LLVM backend, or any combination of those, etc. So a single *test* gives rise to multiple *test cases*. When you run validate, it runs it in a 'fast' mode by default as opposed to the slow mode. The fast mode only runs a subset of the overall test cases - it runs the most basic tests per file, which generally gives a pretty good indication as to what is going on. Also, the performance failures you're seeing are (I speculate) due to out of date performance numbers. Sometimes these numbers go up or down just due to code churn, but they're sometimes finnicky, because they may depend on the exact time a major GC happens or something. So a small wibble can cause them to sometimes occasionally fail. In any case, these results seem to indicate your branch looks quite OK, so I can try to merge this soon, if you think it is actually complete and ready. On Tue, Jan 7, 2014 at 12:13 PM, Mateusz Kowalczyk wrote: > On 07/01/14 14:42, Austin Seipp wrote: >> Hi Mateusz, >> >> I remember your email and I believe I responded with the OK at the >> time - my impression was that it was ready to be merged and would >> shortly be done after that, but I didn't hear anything back about it. >> I apologize for my dropping the ball. > > We contacted you because we thought it wouldn't be this much trouble > to get an OK from the validate process. The code was technically more > or less ready months ago, although Simon has been making some changes > here and there. > >> As for your actual error - ghc-paths is only used in Haddock when it's >> not built in the GHC tree (as per the cabal file,) so I find it very >> suspicious that your package check is mentioning it at all (it's not >> mentioned anywhere else in any GHC sources.) Can you verify that it's >> there with `./inplace/bin/ghc-pkg list`? I'm not even sure how it >> could possibly get involved. >> >> Finally, can you be more specific about exactly how you tested these >> changes with your modified Haddock? I presume it was something like: > > I had ran it on a as-is tree so that I could compare the results from > before and after I put my changes in. I had just ran validate > yesterday again (after sync-all) and I no longer get this package failure! > >> $ ... clone ghc source ... >> $ cd ghc >> $ ... get extra stuff with ./sync-all ... >> $ cd utils/haddock >> $ ... use git to grab your code from github ... >> $ cd ../.. >> $ sh ./validate > > As I mention, it was on unchanged tree but this is how I'll do it when > testing the changes. > >> But I'd like to make sure I know exactly what's going on. I can try >> testing your branch later today. > > I think the original issue is now gone. I do get 8 unexpected failures > and about 11000+ skipped tests! Is this normal? 
Should I be filing > bugs? Should I create a separate thread? Can someone look at my log? > You can download it from [1], it's about 8MB. If GHC itself compiles > the test information into some files you'd prefer, please let me know, > I simply redirected all output from the validate script into a log. > > [1]: http://fuuzetsu.co.uk/misc/validatelog > > -- > Mateusz K. > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From fuuzetsu at fuuzetsu.co.uk Tue Jan 7 18:39:36 2014 From: fuuzetsu at fuuzetsu.co.uk (Mateusz Kowalczyk) Date: Tue, 07 Jan 2014 18:39:36 +0000 Subject: Validating with Haddock In-Reply-To: References: <52BF0209.6020000@fuuzetsu.co.uk> <52CB70A6.90105@fuuzetsu.co.uk> <59543203684B2244980D7E4057D5FBC148707E70@DB3EX14MBXC306.europe.corp.microsoft.com> <52CBD21A.1020900@fuuzetsu.co.uk> <59543203684B2244980D7E4057D5FBC148707F2F@DB3EX14MBXC306.europe.corp.microsoft.com> <52CBD6FC.1080405@fuuzetsu.co.uk> <52CC43C6.9020701@fuuzetsu.co.uk> Message-ID: <52CC49E8.4040407@fuuzetsu.co.uk> On 07/01/14 18:21, Austin Seipp wrote: > Yes, the skipped tests are normal. The testsuite has a concept of > tests being built a certain 'way' - for example, you might test a > piece of code by making sure it works compiled with -threaded, > non-threaded, profiling, the LLVM backend, or any combination of > those, etc. So a single *test* gives rise to multiple *test cases*. > > When you run validate, it runs it in a 'fast' mode by default as > opposed to the slow mode. The fast mode only runs a subset of the > overall test cases - it runs the most basic tests per file, which > generally gives a pretty good indication as to what is going on. > > Also, the performance failures you're seeing are (I speculate) due to > out of date performance numbers. Sometimes these numbers go up or down > just due to code churn, but they're sometimes finnicky, because they > may depend on the exact time a major GC happens or something. So a > small wibble can cause them to sometimes occasionally fail. > > In any case, these results seem to indicate your branch looks quite > OK, so I can try to merge this soon, if you think it is actually > complete and ready. > These are the numbers from the clean tree. I will now merge in my changes, validate again, run Haddock test suite and let you know how it went. If I see similar results, I'll assume it's fine. I greatly appreciate the help I've been getting on this thread. @Simon H. Do you think that the new features could be merged fairly soon too, if the basic parser stuff checks out? Does anything extra need doing? -- Mateusz K. 
From austin at well-typed.com Tue Jan 7 19:27:30 2014 From: austin at well-typed.com (Austin Seipp) Date: Tue, 7 Jan 2014 13:27:30 -0600 Subject: Validating with Haddock In-Reply-To: <52CC49E8.4040407@fuuzetsu.co.uk> References: <52BF0209.6020000@fuuzetsu.co.uk> <52CB70A6.90105@fuuzetsu.co.uk> <59543203684B2244980D7E4057D5FBC148707E70@DB3EX14MBXC306.europe.corp.microsoft.com> <52CBD21A.1020900@fuuzetsu.co.uk> <59543203684B2244980D7E4057D5FBC148707F2F@DB3EX14MBXC306.europe.corp.microsoft.com> <52CBD6FC.1080405@fuuzetsu.co.uk> <52CC43C6.9020701@fuuzetsu.co.uk> <52CC49E8.4040407@fuuzetsu.co.uk> Message-ID: It's worth mentioning that Gergo's Pattern Synonym work lightly touches Haddock as well, so perhaps it's worth ensuring nothing conflicts there as well: https://github.com/gergoerdi/ghc-haddock - I'm not sure which should be merged first (Gergo's patch has some validate failures that need to be fixed up, so I imagine yours might make it first.) On Tue, Jan 7, 2014 at 12:39 PM, Mateusz Kowalczyk wrote: > On 07/01/14 18:21, Austin Seipp wrote: >> Yes, the skipped tests are normal. The testsuite has a concept of >> tests being built a certain 'way' - for example, you might test a >> piece of code by making sure it works compiled with -threaded, >> non-threaded, profiling, the LLVM backend, or any combination of >> those, etc. So a single *test* gives rise to multiple *test cases*. >> >> When you run validate, it runs it in a 'fast' mode by default as >> opposed to the slow mode. The fast mode only runs a subset of the >> overall test cases - it runs the most basic tests per file, which >> generally gives a pretty good indication as to what is going on. >> >> Also, the performance failures you're seeing are (I speculate) due to >> out of date performance numbers. Sometimes these numbers go up or down >> just due to code churn, but they're sometimes finnicky, because they >> may depend on the exact time a major GC happens or something. So a >> small wibble can cause them to sometimes occasionally fail. >> >> In any case, these results seem to indicate your branch looks quite >> OK, so I can try to merge this soon, if you think it is actually >> complete and ready. >> > > These are the numbers from the clean tree. I will now merge in my > changes, validate again, run Haddock test suite and let you know how it > went. If I see similar results, I'll assume it's fine. > > I greatly appreciate the help I've been getting on this thread. > > @Simon H. > Do you think that the new features could be merged fairly soon too, if > the basic parser stuff checks out? Does anything extra need doing? > > > -- > Mateusz K. > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From austin at well-typed.com Tue Jan 7 20:11:06 2014 From: austin at well-typed.com (Austin Seipp) Date: Tue, 7 Jan 2014 14:11:06 -0600 Subject: [PATCH] get rid of "Just" string in __GLASGOW_HASKELL_LLVM__ define for invoked GCC The patch fixes invoked GCC command line -D parameter from -D__GLASGOW_HASKELL_LLVM__=Just to correct -D__GLASGOW_HASKELL_LLVM__=, e.g. -D__GLASGOW_HASKELL_LLVM__=Just 32 fixed to -D__GLASGOW_HASKELL_LLVM__=32 for LLVM 3.2 In-Reply-To: <1389051571-8184-1-git-send-email-karel.gardas@centrum.cz> References: <1389051571-8184-1-git-send-email-karel.gardas@centrum.cz> Message-ID: Hi Karel, I merged this earlier. Thanks. 
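
(For anyone wondering why the old define was a problem: a macro value like
"Just 32" cannot be compared in a CPP conditional, while a bare "32" can.
Below is a hedged sketch of how code typically consumes the define; the
module name and the version bound are made up for illustration, and
__GLASGOW_HASKELL_LLVM__ is only defined when compiling via the LLVM
backend, as the patch below shows.)

    {-# LANGUAGE CPP #-}
    -- With the old value (-D__GLASGOW_HASKELL_LLVM__=Just 32) the #if below
    -- would not even get past cpp; with the fixed value (=32) it works as
    -- intended.
    module LlvmVersionNote where

    llvmNote :: String
    #if defined(__GLASGOW_HASKELL_LLVM__) && __GLASGOW_HASKELL_LLVM__ >= 32
    llvmNote = "built with the LLVM backend against LLVM >= 3.2"
    #else
    llvmNote = "not built with the LLVM backend (or an older LLVM)"
    #endif
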
On Mon, Jan 6, 2014 at 5:39 PM, Karel Gardas wrote: > --- > compiler/main/DriverPipeline.hs | 4 +++- > 1 files changed, 3 insertions(+), 1 deletions(-) > > diff --git a/compiler/main/DriverPipeline.hs b/compiler/main/DriverPipeline.hs > index 337778e..f789d44 100644 > --- a/compiler/main/DriverPipeline.hs > +++ b/compiler/main/DriverPipeline.hs > @@ -2086,7 +2086,9 @@ doCpp dflags raw input_fn output_fn = do > getBackendDefs :: DynFlags -> IO [String] > getBackendDefs dflags | hscTarget dflags == HscLlvm = do > llvmVer <- figureLlvmVersion dflags > - return [ "-D__GLASGOW_HASKELL_LLVM__="++show llvmVer ] > + return $ case llvmVer of > + Just n -> [ "-D__GLASGOW_HASKELL_LLVM__="++show n ] > + _ -> [] > > getBackendDefs _ = > return [] > -- > 1.7.3.2 > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From igloo at earth.li Tue Jan 7 20:15:47 2014 From: igloo at earth.li (Ian Lynagh) Date: Tue, 7 Jan 2014 20:15:47 +0000 Subject: Validating with Haddock In-Reply-To: <52CC49E8.4040407@fuuzetsu.co.uk> References: <52BF0209.6020000@fuuzetsu.co.uk> <52CB70A6.90105@fuuzetsu.co.uk> <59543203684B2244980D7E4057D5FBC148707E70@DB3EX14MBXC306.europe.corp.microsoft.com> <52CBD21A.1020900@fuuzetsu.co.uk> <59543203684B2244980D7E4057D5FBC148707F2F@DB3EX14MBXC306.europe.corp.microsoft.com> <52CBD6FC.1080405@fuuzetsu.co.uk> <52CC43C6.9020701@fuuzetsu.co.uk> <52CC49E8.4040407@fuuzetsu.co.uk> Message-ID: <20140107201547.GA15588@matrix.chaos.earth.li> On Tue, Jan 07, 2014 at 06:39:36PM +0000, Mateusz Kowalczyk wrote: > On 07/01/14 18:21, Austin Seipp wrote: > > > > Also, the performance failures you're seeing are (I speculate) due to > > out of date performance numbers. Sometimes these numbers go up or down > > just due to code churn, but they're sometimes finnicky, because they > > may depend on the exact time a major GC happens or something. So a > > small wibble can cause them to sometimes occasionally fail. > > These are the numbers from the clean tree. The haddock perf numbers look pretty bad, especially the peak_megabytes_allocated: =====> haddock.base(normal) 429 of 3855 [0, 0, 0] peak_megabytes_allocated value is too high: Expected peak_megabytes_allocated: 139 +/-1% Actual peak_megabytes_allocated: 180 =====> haddock.Cabal(normal) 430 of 3855 [0, 1, 0] peak_megabytes_allocated value is too high: Expected peak_megabytes_allocated: 89 +/-1% Actual peak_megabytes_allocated: 150 =====> haddock.compiler(normal) 431 of 3855 [0, 2, 0] max_bytes_used value is too high: Expected peak_megabytes_allocated: 663 +/-1% Actual peak_megabytes_allocated: 794 I think it would be worth working out what's going on before merging more haddock changes. Thanks Ian From austin at well-typed.com Tue Jan 7 20:54:32 2014 From: austin at well-typed.com (Austin Seipp) Date: Tue, 7 Jan 2014 14:54:32 -0600 Subject: LLVM and dynamic linking In-Reply-To: References: <877gb7ulmi.fsf@gmail.com> <52B418EC.8090308@gmail.com> <87a9fm2gfr.fsf@gmail.com> <0D8E2221-2F91-4DFA-836F-3AA2DB1F53BD@gmail.com> <87c5ff3fd1264e9e9763a943718324e6@BN1PR05MB171.namprd05.prod.outlook.com> Message-ID: Hi all, Apologies for the late reply. First off, one thing to note wrt GMP: GMP is an LGPL library which we link against. 
Technically, we need to allow relinking to be compliant and free of the
LGPL for our own executables, but this should be reasonably possible - on
systems where there is a system-wide GMP installed, we use that copy (this
occurs mostly on OSX and Linux.) And so do executables compiled by GHC.
Even when GHC uses static linking or dynamic linking for Haskell code in
this case, it will still always dynamically link to libgmp - meaning
replacing the shared object should be possible. This is just the way
modern Linux/OSX systems distribute system-wide C libraries, as you expect.

In the case where we don't have this, we build our own copy of libgmp
inside the source tree and use that instead. That said, there are other
reasons why we might want to be free of GMP entirely, but that's neither
here nor there. In any case, the issue is pretty orthogonal to LLVM,
dynamic Haskell linking, etc. - on a Linux system, you should reasonably
be able to swap out a `libgmp.so` for another modified copy[1], and your
Haskell programs should be compliant in this regard.[2]

Now, as for LLVM.

For one, LLVM actually is a 'relatively' cheap backend to have around. I
say LLVM is 'relatively' cheap because All External Dependencies Have A
Cost. The code is reasonably small, and in any case GHC still does most of
the heavy lifting - the LLVM backend and native code generator share a
very large amount of code. We don't really duplicate optimizations
ourselves, for example, and some optimizations we do perform on our IR
can't be done by LLVM anyway (it doesn't have enough information.)

But LLVM has some very notable costs for GHC developers:

* It's slower to compile with, because it tries to re-optimize the code we
  give it, but it mostly accomplishes nothing beyond advanced
  optimizations like vectorization/scalar evolution.
* We support a wide range of LLVM versions (a nightmare IMO), which means
  pinning down specific versions and supporting them all is rather
  difficult. Combined with e.g. distro maintainers who may patch bugs
  themselves, the things you're depending on in the wild (or what users
  might report bugs with) aren't as solid or well understood.
* LLVM is extremely large, extremely complex, and the people who can
  sensibly work on both GHC and LLVM are few and far between. So fixing
  these issues is time-consuming, difficult, and mostly tedious grunt work.

All this basically sums up to the fact that dealing with LLVM comes with
complications all of its own that make it a different kind of beast to
handle.

So, the LLVM backend definitely needs some love. All of these things are
solvable (and I have some ideas for solving most of them), but none of
them will quite come for free. But there are some real improvements that
can be made here, I think, to make LLVM much more smoothly supported in
GHC itself. If you'd like to help, it'd be really appreciated - I'd like
to see LLVM get more love, but it's a lot of work, of course!

(Finally, in reference to the last point: I am in the obvious minority,
but I am in favor of having the native code generator around, even if it's
a bit old and crufty these days - at least it's small, fast and simple
enough to be grokked and hacked on, and I don't think it fragments
development all that much. By comparison, LLVM is a mammoth beast of
incredible size with a sizeable entry barrier, IMO. I think there's merit
to having both a simple, 'obviously working' option in addition to the
heavy-duty one.)
[1] Relevant tool: http://nixos.org/patchelf.html [2] Of course, IANAL, but there you go. On Wed, Jan 1, 2014 at 9:03 PM, Aaron Friel wrote: > Because I think it?s going to be an organizational issue and a duplication > of effort if GHC is built one way but the future direction of LLVM is > another. > > Imagine if GCC started developing a new engine and it didn?t work with one > of the biggest, most regular consumers of GCC. Say, the Linux kernel, or > itself. At first, the situation is optimistic - if this engine doesn?t work > for the project that has the smartest, brightest GCC hackers potentially > looking at it, then it should fix itself soon enough. Suppose the situation > lingers though, and continues for months without fix. The new GCC backend > starts to become the default, and the community around GCC advocates for > end-users to use it to optimize code for their projects and it even becomes > the default for some platforms, such as ARM. > > What I?ve described is analogous to the GHC situation - and the result is > that GHC isn?t self-hosting on some platforms and the inertia that used to > be behind the LLVM backend seems to have stagnated. Whereas LLVM used to be > the ?new hotness?, I?ve noticed that issues like Trac #7787 no longer have a > lot of eyes on them and externally it seems like GHC has accepted a > bifurcated approach for development. > > I dramatize the situation above, but there?s some truth to it. The LLVM > backend needs some care and attention and if the majority of GHC devs can?t > build GHC with LLVM, then that means the smartest, brightest GHC hackers > won?t have their attention turned toward fixing those problems. If a patch > to GHC-HEAD broke compilation for every backend, it would be fixed in short > order. If a new version of GCC did not work with GHC, I can imagine it would > be only hours before the first patches came in resolving the issue. On OS X > Mavericks, an incompatibility with GHC has led to a swift reaction and > strong support for resolving platform issues. The attention to the LLVM > backend is visibly smaller, but I don?t know enough about the people working > on GHC to know if it is actually smaller. > > The way I am trying to change this is by making it easier for people to > start using GHC (by putting images on Docker.io) and, in the process, > learning about GHC?s build process and trying to make things work for my own > projects. The Docker image allows anyone with a Linux kernel to build and > play with GHC HEAD. The information about building GHC yourself is difficult > to approach and I found it hard to get started, and I want to improve that > too, so I?m learning and asking questions. > > From: Carter Schonwald > Sent: ?Wednesday?, ?January? ?1?, ?2014 ?5?:?54? ?PM > To: Aaron Friel > Cc: ghc-devs at haskell.org > > 7.8 should have working dylib support on the llvm backend. (i believe some > of the relevant patches are in head already, though Ben Gamari can opine on > that) > > why do you want ghc to be built with llvm? (i know i've tried myself in the > past, and it should be doable with 7.8 using 7.8 soon too) > > > On Wed, Jan 1, 2014 at 5:38 PM, Aaron Friel wrote: >> >> Replying to include the email list. You?re right, the llvm backend and the >> gmp licensing issues are orthogonal - or should be. The problem is I get >> build errors when trying to build GHC with LLVM and dynamic libraries. 
>> >> The result is that I get a few different choices when producing a platform >> image for development, with some uncomfortable tradeoffs: >> >> LLVM-built GHC, dynamic libs - doesn?t build. >> LLVM-built GHC, static libs - potential licensing oddities with me >> shipping a statically linked ghc binary that is now gpled. I am not a >> lawyer, but the situation makes me uncomfortable. >> GCC/ASM-built GHC, dynamic libs - this is the *standard* for most >> platforms shipping ghc binaries, but it means that one of the biggest and >> most critical users of the LLVM backend is neglecting it. It also bifurcates >> development resources for GHC. Optimization work is duplicated and already >> devs are getting into the uncomfortable position of suggesting to users that >> they should trust GHC to build your programs in a particular way, but not >> itself. >> GCC/ASM-built GHC, static libs - worst of all possible worlds. >> >> >> Because of this, the libgmp and llvm-backend issues aren?t entirely >> orthogonal. Trac ticket #7885 is exactly the issue I get when trying to >> compile #1. >> >> From: Carter Schonwald >> Sent: ?Monday?, ?December? ?30?, ?2013 ?1?:?05? ?PM >> To: Aaron Friel >> >> Good question but you forgot to email the mailing list too :-) >> >> Using llvm has nothing to do with Gmp. Use the native code gen (it's >> simper) and integer-simple. >> >> That said, standard ghc dylinks to a system copy of Gmp anyways (I think >> ). Building ghc as a Dylib is orthogonal. >> >> -Carter >> >> On Dec 30, 2013, at 1:58 PM, Aaron Friel wrote: >> >> Excellent research - I?m curious if this is the right thread to inquire >> about the status of trying to link GHC itself dynamically. >> >> I?ve been attempting to do so with various LLVM versions (3.2, 3.3, 3.4) >> using snapshot builds of GHC (within the past week) from git, and I hit >> ticket #7885 [https://ghc.haskell.org/trac/ghc/ticket/7885] every time (even >> the exact same error message). >> >> I?m interested in dynamically linking GHC with LLVM to avoid the >> entanglement with libgmp?s license. >> >> If this is the wrong thread or if I should reply instead to the trac item, >> please let me know. > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From fuuzetsu at fuuzetsu.co.uk Tue Jan 7 21:07:32 2014 From: fuuzetsu at fuuzetsu.co.uk (Mateusz Kowalczyk) Date: Tue, 07 Jan 2014 21:07:32 +0000 Subject: Validating with Haddock In-Reply-To: <20140107201547.GA15588@matrix.chaos.earth.li> References: <52BF0209.6020000@fuuzetsu.co.uk> <52CB70A6.90105@fuuzetsu.co.uk> <59543203684B2244980D7E4057D5FBC148707E70@DB3EX14MBXC306.europe.corp.microsoft.com> <52CBD21A.1020900@fuuzetsu.co.uk> <59543203684B2244980D7E4057D5FBC148707F2F@DB3EX14MBXC306.europe.corp.microsoft.com> <52CBD6FC.1080405@fuuzetsu.co.uk> <52CC43C6.9020701@fuuzetsu.co.uk> <52CC49E8.4040407@fuuzetsu.co.uk> <20140107201547.GA15588@matrix.chaos.earth.li> Message-ID: <52CC6C94.1080800@fuuzetsu.co.uk> On 07/01/14 20:15, Ian Lynagh wrote: > On Tue, Jan 07, 2014 at 06:39:36PM +0000, Mateusz Kowalczyk wrote: >> On 07/01/14 18:21, Austin Seipp wrote: >>> >>> Also, the performance failures you're seeing are (I speculate) due to >>> out of date performance numbers. 
Sometimes these numbers go up or down >>> just due to code churn, but they're sometimes finnicky, because they >>> may depend on the exact time a major GC happens or something. So a >>> small wibble can cause them to sometimes occasionally fail. >> >> These are the numbers from the clean tree. > > The haddock perf numbers look pretty bad, especially the > peak_megabytes_allocated: > > =====> haddock.base(normal) 429 of 3855 [0, 0, 0] > peak_megabytes_allocated value is too high: > Expected peak_megabytes_allocated: 139 +/-1% > Actual peak_megabytes_allocated: 180 > > =====> haddock.Cabal(normal) 430 of 3855 [0, 1, 0] > peak_megabytes_allocated value is too high: > Expected peak_megabytes_allocated: 89 +/-1% > Actual peak_megabytes_allocated: 150 > > =====> haddock.compiler(normal) 431 of 3855 [0, 2, 0] > max_bytes_used value is too high: > Expected peak_megabytes_allocated: 663 +/-1% > Actual peak_megabytes_allocated: 794 > > I think it would be worth working out what's going on before merging > more haddock changes. > > > Thanks > Ian > Hi Ian, Is there any guidance on how these tests are performed? More importantly, is there any log of how the performance changed over time? Is it Haddock's fault that it has become slower or is it the cause of GHC changes? PS: If there's no performance over time log, it might be worth introducing something! -- Mateusz K. From george.colpitts at gmail.com Tue Jan 7 21:07:05 2014 From: george.colpitts at gmail.com (George Colpitts) Date: Tue, 7 Jan 2014 17:07:05 -0400 Subject: LLVM and dynamic linking In-Reply-To: References: <877gb7ulmi.fsf@gmail.com> <52B418EC.8090308@gmail.com> <87a9fm2gfr.fsf@gmail.com> <0D8E2221-2F91-4DFA-836F-3AA2DB1F53BD@gmail.com> <87c5ff3fd1264e9e9763a943718324e6@BN1PR05MB171.namprd05.prod.outlook.com> Message-ID: wrt We support a wide range of LLVM versions Why can't we stop doing that and only support one or two, e.g. GHC 7.8 would only support llvm 3.3 and perhaps 3.4? On Tue, Jan 7, 2014 at 4:54 PM, Austin Seipp wrote: > Hi all, > > Apologies for the late reply. > > First off, one thing to note wrt GMP: GMP is an LGPL library which we > link against. Technically, we need to allow relinking to be compliant > and free of of the LGPL for our own executables, but this should be > reasonably possible - on systems where there is a system-wide GMP > installed, we use that copy (this occurs mostly on OSX and Linux.) And > so do executables compiled by GHC. Even when GHC uses static linking > or dynamic linking for haskell code in this case, it will still always > dynamically link to libgmp - meaning replacing the shared object > should be possible. This is just the way modern Linux/OSX systems > distribute system-wide C libraries, as you expect. > > In the case where we don't have this, we build our own copy of libgmp > inside the source tree and use that instead. That said there are other > reasons why we might want to be free of GMP entirely, but that's > neither here nor there. In any case, the issue is pretty orthogonal to > LLVM, dynamic haskell linking, etc - on a Linux system, you should > reasonably be able to swap out a `libgmp.so` for another modified > copy[1], and your Haskell programs should be compliant in this > regard.[2] > > Now, as for LLVM. > > For one, LLVM actually is a 'relatively' cheap backend to have around. > I say LLVM is 'relatively' cheap because All External Dependencies > Have A Cost. 
The code is reasonably small, and in any case GHC still > does most of the heavy lifting - the LLVM backend and native code > generator share a very large amount of code. We don't really duplicate > optimizations ourselves, for example, and some optimizations we do > perform on our IR can't be done by LLVM anyway (it doesn't have enough > information.) > > But LLVM has some very notable costs for GHC developers: > > * It's slower to compile with, because it tries to re-optimize the > code we give it, but it mostly accomplishes nothing beyond advanced > optimizations like vectorization/scalar evolution. > * We support a wide range of LLVM versions (a nightmare IMO) which > means pinning down specific versions and supporting them all is rather > difficult. Combined with e.g. distro maintainers who may patch bugs > themselves, and the things you're depending on in the wild (or what > users might report bugs with) aren't as solid or well understood. > * LLVM is extremely large, extremely complex, and the amount of > people who can sensibly work on both GHC and LLVM are few and far > inbetween. So fixing these issues is time consuming, difficult, and > mostly tedious grunt work. > > All this basically sums up to the fact that dealing with LLVM comes > with complications all on its own that makes it a different kind of > beast to handle. > > So, the LLVM backend definitely needs some love. All of these things > are solveable (and I have some ideas for solving most of them,) but > none of them will quite come for free. But there are some real > improvements that can be made here I think, and make LLVM much more > smoothly supported for GHC itself. If you'd like to help it'd be > really appreciated - I'd like to see LLVM have more love put forth, > but it's a lot of work of course!. > > (Finally, in reference to the last point: I am in the obvious > minority, but I am favorable to having the native code generator > around, even if it's a bit old and crufty these days - at least it's > small, fast and simple enough to be grokked and hacked on, and I don't > think it fragments development all that much. By comparison, LLVM is a > mammoth beast of incredible size with a sizeable entry barrier IMO. I > think there's merit to having both a simple, 'obviously working' > option in addition to the heavy duty one.) > > [1] Relevant tool: http://nixos.org/patchelf.html > [2] Of course, IANAL, but there you go. > > On Wed, Jan 1, 2014 at 9:03 PM, Aaron Friel wrote: > > Because I think it?s going to be an organizational issue and a > duplication > > of effort if GHC is built one way but the future direction of LLVM is > > another. > > > > Imagine if GCC started developing a new engine and it didn?t work with > one > > of the biggest, most regular consumers of GCC. Say, the Linux kernel, or > > itself. At first, the situation is optimistic - if this engine doesn?t > work > > for the project that has the smartest, brightest GCC hackers potentially > > looking at it, then it should fix itself soon enough. Suppose the > situation > > lingers though, and continues for months without fix. The new GCC backend > > starts to become the default, and the community around GCC advocates for > > end-users to use it to optimize code for their projects and it even > becomes > > the default for some platforms, such as ARM. 
> > > > What I?ve described is analogous to the GHC situation - and the result is > > that GHC isn?t self-hosting on some platforms and the inertia that used > to > > be behind the LLVM backend seems to have stagnated. Whereas LLVM used to > be > > the ?new hotness?, I?ve noticed that issues like Trac #7787 no longer > have a > > lot of eyes on them and externally it seems like GHC has accepted a > > bifurcated approach for development. > > > > I dramatize the situation above, but there?s some truth to it. The LLVM > > backend needs some care and attention and if the majority of GHC devs > can?t > > build GHC with LLVM, then that means the smartest, brightest GHC hackers > > won?t have their attention turned toward fixing those problems. If a > patch > > to GHC-HEAD broke compilation for every backend, it would be fixed in > short > > order. If a new version of GCC did not work with GHC, I can imagine it > would > > be only hours before the first patches came in resolving the issue. On > OS X > > Mavericks, an incompatibility with GHC has led to a swift reaction and > > strong support for resolving platform issues. The attention to the LLVM > > backend is visibly smaller, but I don?t know enough about the people > working > > on GHC to know if it is actually smaller. > > > > The way I am trying to change this is by making it easier for people to > > start using GHC (by putting images on Docker.io) and, in the process, > > learning about GHC?s build process and trying to make things work for my > own > > projects. The Docker image allows anyone with a Linux kernel to build and > > play with GHC HEAD. The information about building GHC yourself is > difficult > > to approach and I found it hard to get started, and I want to improve > that > > too, so I?m learning and asking questions. > > > > From: Carter Schonwald > > Sent: Wednesday, January 1, 2014 5:54 PM > > To: Aaron Friel > > Cc: ghc-devs at haskell.org > > > > 7.8 should have working dylib support on the llvm backend. (i believe > some > > of the relevant patches are in head already, though Ben Gamari can opine > on > > that) > > > > why do you want ghc to be built with llvm? (i know i've tried myself in > the > > past, and it should be doable with 7.8 using 7.8 soon too) > > > > > > On Wed, Jan 1, 2014 at 5:38 PM, Aaron Friel wrote: > >> > >> Replying to include the email list. You?re right, the llvm backend and > the > >> gmp licensing issues are orthogonal - or should be. The problem is I get > >> build errors when trying to build GHC with LLVM and dynamic libraries. > >> > >> The result is that I get a few different choices when producing a > platform > >> image for development, with some uncomfortable tradeoffs: > >> > >> LLVM-built GHC, dynamic libs - doesn?t build. > >> LLVM-built GHC, static libs - potential licensing oddities with me > >> shipping a statically linked ghc binary that is now gpled. I am not a > >> lawyer, but the situation makes me uncomfortable. > >> GCC/ASM-built GHC, dynamic libs - this is the *standard* for most > >> platforms shipping ghc binaries, but it means that one of the biggest > and > >> most critical users of the LLVM backend is neglecting it. It also > bifurcates > >> development resources for GHC. Optimization work is duplicated and > already > >> devs are getting into the uncomfortable position of suggesting to users > that > >> they should trust GHC to build your programs in a particular way, but > not > >> itself. > >> GCC/ASM-built GHC, static libs - worst of all possible worlds. 
> >> > >> > >> Because of this, the libgmp and llvm-backend issues aren?t entirely > >> orthogonal. Trac ticket #7885 is exactly the issue I get when trying to > >> compile #1. > >> > >> From: Carter Schonwald > >> Sent: Monday, December 30, 2013 1:05 PM > >> To: Aaron Friel > >> > >> Good question but you forgot to email the mailing list too :-) > >> > >> Using llvm has nothing to do with Gmp. Use the native code gen (it's > >> simper) and integer-simple. > >> > >> That said, standard ghc dylinks to a system copy of Gmp anyways (I think > >> ). Building ghc as a Dylib is orthogonal. > >> > >> -Carter > >> > >> On Dec 30, 2013, at 1:58 PM, Aaron Friel wrote: > >> > >> Excellent research - I?m curious if this is the right thread to inquire > >> about the status of trying to link GHC itself dynamically. > >> > >> I?ve been attempting to do so with various LLVM versions (3.2, 3.3, 3.4) > >> using snapshot builds of GHC (within the past week) from git, and I hit > >> ticket #7885 [https://ghc.haskell.org/trac/ghc/ticket/7885] every time > (even > >> the exact same error message). > >> > >> I?m interested in dynamically linking GHC with LLVM to avoid the > >> entanglement with libgmp?s license. > >> > >> If this is the wrong thread or if I should reply instead to the trac > item, > >> please let me know. > > > > > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://www.haskell.org/mailman/listinfo/ghc-devs > > > > > > -- > Regards, > > Austin Seipp, Haskell Consultant > Well-Typed LLP, http://www.well-typed.com/ > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From austin at well-typed.com Tue Jan 7 21:20:24 2014 From: austin at well-typed.com (Austin Seipp) Date: Tue, 7 Jan 2014 15:20:24 -0600 Subject: Validating with Haddock In-Reply-To: <52CC6C94.1080800@fuuzetsu.co.uk> References: <52BF0209.6020000@fuuzetsu.co.uk> <52CB70A6.90105@fuuzetsu.co.uk> <59543203684B2244980D7E4057D5FBC148707E70@DB3EX14MBXC306.europe.corp.microsoft.com> <52CBD21A.1020900@fuuzetsu.co.uk> <59543203684B2244980D7E4057D5FBC148707F2F@DB3EX14MBXC306.europe.corp.microsoft.com> <52CBD6FC.1080405@fuuzetsu.co.uk> <52CC43C6.9020701@fuuzetsu.co.uk> <52CC49E8.4040407@fuuzetsu.co.uk> <20140107201547.GA15588@matrix.chaos.earth.li> <52CC6C94.1080800@fuuzetsu.co.uk> Message-ID: For the record and other people reading - after a quick discussion on IRC, it simply looks like the 32-bit peak_megabytes_allocated numbers for those tests probably weren't updated at the same time as the 64bit ones, leaving them out of date. On Tue, Jan 7, 2014 at 3:07 PM, Mateusz Kowalczyk wrote: > On 07/01/14 20:15, Ian Lynagh wrote: >> On Tue, Jan 07, 2014 at 06:39:36PM +0000, Mateusz Kowalczyk wrote: >>> On 07/01/14 18:21, Austin Seipp wrote: >>>> >>>> Also, the performance failures you're seeing are (I speculate) due to >>>> out of date performance numbers. Sometimes these numbers go up or down >>>> just due to code churn, but they're sometimes finnicky, because they >>>> may depend on the exact time a major GC happens or something. So a >>>> small wibble can cause them to sometimes occasionally fail. >>> >>> These are the numbers from the clean tree. 
>> >> The haddock perf numbers look pretty bad, especially the >> peak_megabytes_allocated: >> >> =====> haddock.base(normal) 429 of 3855 [0, 0, 0] >> peak_megabytes_allocated value is too high: >> Expected peak_megabytes_allocated: 139 +/-1% >> Actual peak_megabytes_allocated: 180 >> >> =====> haddock.Cabal(normal) 430 of 3855 [0, 1, 0] >> peak_megabytes_allocated value is too high: >> Expected peak_megabytes_allocated: 89 +/-1% >> Actual peak_megabytes_allocated: 150 >> >> =====> haddock.compiler(normal) 431 of 3855 [0, 2, 0] >> max_bytes_used value is too high: >> Expected peak_megabytes_allocated: 663 +/-1% >> Actual peak_megabytes_allocated: 794 >> >> I think it would be worth working out what's going on before merging >> more haddock changes. >> >> >> Thanks >> Ian >> > > Hi Ian, > > Is there any guidance on how these tests are performed? More > importantly, is there any log of how the performance changed over time? > Is it Haddock's fault that it has become slower or is it the cause of > GHC changes? > > PS: If there's no performance over time log, it might be worth > introducing something! > -- > Mateusz K. > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From simonpj at microsoft.com Tue Jan 7 22:53:38 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 7 Jan 2014 22:53:38 +0000 Subject: High-level Cmm code and stack allocation In-Reply-To: <52CC25A4.8060004@gmail.com> References: <87fvp3coqr.fsf@gnu.org> <52CC25A4.8060004@gmail.com> Message-ID: <59543203684B2244980D7E4057D5FBC148708D03@DB3EX14MBXC306.europe.corp.microsoft.com> | Yes, this is technically wrong but luckily works. I'd very much like | to | have a better solution, preferably one that doesn't add any extra | overhead. | __decodeFloat_Int is a C function, so it will not touch the Haskell | stack. This all seems terribly fragile to me. At least it ought to be surrounded with massive comments pointing out how terribly fragile it is, breaking all the rules that we carefully document elsewhere. Can't we just allocate a Cmm "area"? The address of an area is a perfectly well-defined Cmm value. Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Simon | Marlow | Sent: 07 January 2014 16:05 | To: Herbert Valerio Riedel; ghc-devs at haskell.org | Subject: Re: High-level Cmm code and stack allocation | | On 04/01/2014 23:26, Herbert Valerio Riedel wrote: | > Hello, | > | > According to Note [Syntax of .cmm files], | > | > | There are two ways to write .cmm code: | > | | > | (1) High-level Cmm code delegates the stack handling to GHC, and | > | never explicitly mentions Sp or registers. | > | | > | (2) Low-level Cmm manages the stack itself, and must know about | > | calling conventions. | > | | > | Whether you want high-level or low-level Cmm is indicated by the | > | presence of an argument list on a procedure. | > | > However, while working on integer-gmp I've been noticing in | > integer-gmp/cbits/gmp-wrappers.cmm that even though all Cmm | procedures | > have been converted to high-level Cmm, they still reference the 'Sp' | > register, e.g. 
| > | > | > #define GMP_TAKE1_RET1(name,mp_fun) \ | > name (W_ ws1, P_ d1) \ | > { \ | > W_ mp_tmp1; \ | > W_ mp_result1; \ | > \ | > again: \ | > STK_CHK_GEN_N (2 * SIZEOF_MP_INT); \ | > MAYBE_GC(again); \ | > \ | > mp_tmp1 = Sp - 1 * SIZEOF_MP_INT; \ | > mp_result1 = Sp - 2 * SIZEOF_MP_INT; \ | > ... \ | > | > | > So is this valid high-level Cmm code? What's the proper way to | allocate | > Stack (and/or Heap) memory from high-level Cmm code? | | Yes, this is technically wrong but luckily works. I'd very much like | to | have a better solution, preferably one that doesn't add any extra | overhead. | | The problem here is that we need to allocate a couple of temporary | words | and take their address; that's an unusual thing to do in Cmm, so it | only | occurs in a few places (mainly interacting with gmp). Usually if you | want some temporary storage you can use local variables or some | heap-allocated memory. | | Cheers, | Simon | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | http://www.haskell.org/mailman/listinfo/ghc-devs From gergo at erdi.hu Tue Jan 7 23:05:23 2014 From: gergo at erdi.hu (=?UTF-8?B?RHIuIMOJUkRJIEdlcmfFkQ==?=) Date: Wed, 8 Jan 2014 07:05:23 +0800 Subject: Pattern synonyms for 7.8? In-Reply-To: References: <59543203684B2244980D7E4057D5FBC148707649@DB3EX14MBXC306.europe.corp.microsoft.com> <1389014277.2952.9.camel@kirk> <41B0CF1C-C66D-4DDC-8C36-A691B83CF7E0@cis.upenn.edu> Message-ID: Hi, Wow, so, I thought there would be some back-and-forth, then a decision, then I would go and walk the last mile and then formally submit the patch for review - and now I see that in <2 days all that has passed... Of course I'll make validate pass, I just didn't even know about it. Likewise, I needed the carrot of 7.8 inclusion dangling before me to start writing the user docs. One problem, though, is that I'll be on holiday from tomorrow, so I'll only have time to look into this tonight before next weekend. I'll try my best to fix up validate tonight, and I'll write the docs (which I hope will mostly be an editing job on the wiki) next week. How does that sound? Thanks, Gergo On Jan 8, 2014 3:41 AM, "Austin Seipp" wrote: > Hi Gergo, > > Thanks for rebasing your changes. Unfortunately, they do not compile > cleanly with ./validate, which we really need to have working for all > incoming patches. > > In particular, ./validate enables -Werror and a slew of warnings that > you won't normally see during development, which greatly aids in > keeping the code clean. One, for example, is that some of your commits > introduce tabs - we ban tabs and validate errors on them! > > Another: the problem is that in > > https://github.com/gergoerdi/ghc/commit/afefa7ac948b1d7801d622824fbdd75ade2ada3f > , > you added a Monoid instance for UniqSet - but this doesn't work > correctly. The problem is that UniqSet is just an alias for UniqFM > (type UniqSet a = UniqFM a), so the instance is technically seen as an > orphan. Orphan instances cause -Werror failures with ./validate > (unless you disable them for that module, but here we really > shouldn't.) > > The fix is to write the Monoid instance for UniqFM directly in > UniqFM.hs instead. 
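
(A sketch of the non-orphan instance Austin is suggesting: it would live
next to the UniqFM type in compiler/utils/UniqFM.hs rather than on the
UniqSet synonym. This assumes the usual emptyUFM/plusUFM combinators and
is illustrative, not a drop-in patch.)

    -- In compiler/utils/UniqFM.hs, so the instance travels with the type
    -- and is also visible at type UniqSet a (UniqSet is a synonym for UniqFM).
    instance Monoid (UniqFM a) where
      mempty  = emptyUFM
      mappend = plusUFM
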
> > Likewise, here's a real bug that -Werror found in your patch in the > renamer (by building with ./validate): > > compiler/rename/RnBinds.lhs:744:1: Warning: > Pattern match(es) are non-exhaustive > In an equation for `renameSig': > Patterns not matched: _ (PatSynSig _ _ _ _ _) > > Indeed, renameSig in RnBinds doesn't check the PatSynSig case! The > missing instance looks straightforward to implement, but this could > have been a nasty bug waiting. > > If you could please take the time to clean up the ./validate failures, > I'd really appreciate it. I imagine it'll take very little time, and > it will make merging much easier for me. An easy way to do it is just > to check out your pattern-synonyms branches, then say: > > $ CPUS=X sh ./validate > > where 'X' is the number of cores, similar to 'make -jX' > > If it fails, you can make a change, and keep going with: > > $ CPUS=X sh ./validate --no-clean > > and rinse and repeat until it's done. > > Note the --no-clean is required, since `./validate` will immediately > run `make distclean` by default if you do not specify it. > > On Tue, Jan 7, 2014 at 5:50 AM, Dr. ERDI Gergo wrote: > > On Mon, 6 Jan 2014, Carter Schonwald wrote: > > > >> as long as we clearly communicate that there may be refinements / > breaking > >> changes > >> subsequently, i'm all for it, unless merging it in slows down 7.8 > hitting > >> RC . (its > >> taken long enough for RC to happen... don't want to drag it out further) > > > > > > If that helps, I've updated the version at > https://github.com/gergoerdi/ghc > > (and the two sister repos https://github.com/gergoerdi/ghc-testsuite and > > https://github.com/gergoerdi/ghc-haddock) to be based on top of master > as of > > today. > > > > Bye, > > Gergo > > > > -- > > > > .--= ULLA! =-----------------. > > \ http://gergo.erdi.hu \ > > `---= gergo at erdi.hu =-------' > > Elvis is dead and I don't feel so good either. > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://www.haskell.org/mailman/listinfo/ghc-devs > > > > > > -- > Regards, > Austin - PGP: 4096R/0x91384671 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fuuzetsu at fuuzetsu.co.uk Tue Jan 7 23:08:39 2014 From: fuuzetsu at fuuzetsu.co.uk (Mateusz Kowalczyk) Date: Tue, 07 Jan 2014 23:08:39 +0000 Subject: Pattern synonyms for 7.8? In-Reply-To: References: <59543203684B2244980D7E4057D5FBC148707649@DB3EX14MBXC306.europe.corp.microsoft.com> <1389014277.2952.9.camel@kirk> <41B0CF1C-C66D-4DDC-8C36-A691B83CF7E0@cis.upenn.edu> Message-ID: <52CC88F7.1010703@fuuzetsu.co.uk> On 07/01/14 23:05, Dr. ?RDI Gerg? wrote: > Hi, > > Wow, so, I thought there would be some back-and-forth, then a decision, > then I would go and walk the last mile and then formally submit the patch > for review - and now I see that in <2 days all that has passed... > > Of course I'll make validate pass, I just didn't even know about it. > Likewise, I needed the carrot of 7.8 inclusion dangling before me to start > writing the user docs. > > One problem, though, is that I'll be on holiday from tomorrow, so I'll only > have time to look into this tonight before next weekend. I'll try my best > to fix up validate tonight, and I'll write the docs (which I hope will > mostly be an editing job on the wiki) next week. How does that sound? 
> > Thanks, > Gergo Hi Erdi, I'm hoping to push in some stuff for Haddock in few hours (or rather, have someone do it for me) but I know you have changed a few things in it for the pattern synonyms stuff. I looked at the changes and they weren't big and shouldn't clash. Is it fine with you to push the changes on our side and then have you merge on top of that or would you prefer to have it done another way? Thanks -- Mateusz K. From gergo at erdi.hu Tue Jan 7 23:10:30 2014 From: gergo at erdi.hu (=?UTF-8?B?RHIuIMOJUkRJIEdlcmfFkQ==?=) Date: Wed, 8 Jan 2014 07:10:30 +0800 Subject: Pattern synonyms for 7.8? In-Reply-To: <52CC88F7.1010703@fuuzetsu.co.uk> References: <59543203684B2244980D7E4057D5FBC148707649@DB3EX14MBXC306.europe.corp.microsoft.com> <1389014277.2952.9.camel@kirk> <41B0CF1C-C66D-4DDC-8C36-A691B83CF7E0@cis.upenn.edu> <52CC88F7.1010703@fuuzetsu.co.uk> Message-ID: Of course. That's how I've been keeping up with GHC proper all along. On Jan 8, 2014 7:09 AM, "Mateusz Kowalczyk" wrote: > On 07/01/14 23:05, Dr. ?RDI Gerg? wrote: > > Hi, > > > > Wow, so, I thought there would be some back-and-forth, then a decision, > > then I would go and walk the last mile and then formally submit the patch > > for review - and now I see that in <2 days all that has passed... > > > > Of course I'll make validate pass, I just didn't even know about it. > > Likewise, I needed the carrot of 7.8 inclusion dangling before me to > start > > writing the user docs. > > > > One problem, though, is that I'll be on holiday from tomorrow, so I'll > only > > have time to look into this tonight before next weekend. I'll try my best > > to fix up validate tonight, and I'll write the docs (which I hope will > > mostly be an editing job on the wiki) next week. How does that sound? > > > > Thanks, > > Gergo > > Hi Erdi, > > I'm hoping to push in some stuff for Haddock in few hours (or rather, > have someone do it for me) but I know you have changed a few things in > it for the pattern synonyms stuff. I looked at the changes and they > weren't big and shouldn't clash. Is it fine with you to push the changes > on our side and then have you merge on top of that or would you prefer > to have it done another way? > > Thanks > > > -- > Mateusz K. > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From awick at galois.com Tue Jan 7 23:43:01 2014 From: awick at galois.com (Adam Wick) Date: Tue, 7 Jan 2014 15:43:01 -0800 Subject: panic when compiling SHA In-Reply-To: <201401071311.12056.jan.stolarek@p.lodz.pl> References: <52C7DB7E.1030408@gmail.com> <1E4F1419-8C89-4E2A-B0A4-542324AA15BC@galois.com> <201401071311.12056.jan.stolarek@p.lodz.pl> Message-ID: <7EF66CD7-1D46-4ACF-A850-A64DEE3CFF3E@galois.com> On Jan 7, 2014, at 4:11 AM, Jan Stolarek wrote: >> GHC crashes on valid input. Which is a bug. > As Ben pointed out it is conceivable that compiler will not be able handle a correct program. Personally, I find this view extremely disappointing. If my SHA library failed to work on a valid input, I would consider that a bug. Why is GHC special? Keep in mind that I?m not saying that this bug needs to be highest priority and fixed immediately, but instead I?m merely arguing that it should be acknowledged as a bug. 
> But as a user I would expect GHC to detect such situations (if possible) and display an error > message, not crash with a panic (which clearly says this is a bug and should be reported). Personally, I?d find this an acceptable, if a bit disappointing, solution. Essentially you?re redefining "valid input." It just seems a shame to be doing so because of an implementation weakness rather than an actual, fundamental problem. - Adam -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 2199 bytes Desc: not available URL: From awick at galois.com Tue Jan 7 23:57:09 2014 From: awick at galois.com (Adam Wick) Date: Tue, 7 Jan 2014 15:57:09 -0800 Subject: panic when compiling SHA In-Reply-To: References: <52C7DB7E.1030408@gmail.com> <20140104.212236.2151539280544564973.kazu@iij.ad.jp> <20140106.120834.989663188831409811.kazu@iij.ad.jp> <1E4F1419-8C89-4E2A-B0A4-542324AA15BC@galois.com> Message-ID: On Jan 7, 2014, at 2:27 AM, Ben Lippmeier wrote: > On 07/01/2014, at 9:26 , Adam Wick wrote: > >>> Not if we just have this one test. I'd be keen to blame excessive use of inline pragmas in the SHA library itself, or excessive optimisation flags. It's not really a bug in GHC until there are two tests that exhibit the same problem. >> >> The SHA library uses SPECIALIZE, INLINE, and bang patterns in fairly standard ways. There?s nothing too exotic in there, I just basically sprinkled hints in places I thought would be useful, and then backed those up with benchmarking. > > Ahh. It's the "sprinkled hints in places I thought would be useful" which is what I'm concerned about. If you just add pragmas without understanding their effect on the core program then it'll bite further down the line. Did you compare the object code size as well as wall clock speedup? I understand the pragmas and what they do with my code. I use SPECIALIZE twice for two functions. In both functions, it was clearer to write the function as (a -> a -> a -> a), but I wanted specialized versions for the two versions that were going to be used, in which (a == Word32) or (a == Word64). This benchmarked as faster while maintaining code clarity and concision. I use INLINE in five places, each of them a SHA step function, with the understanding that it would generate ideal code for a compiler for the performance-critical parts of the algorithm: straight line, single-block code with no conditionals. When I did my original performance work, several versions of GHC ago, I did indeed consider compile time, runtime performance, and space usage. I picked what I thought was a reasonable balance at the time. I also just performed an experiment in which I took the SHA library, deleted all instances of INLINE and SPECIALIZE, and compiled it with HEAD on 32-bit Linux. You get the same crash. So my usage of SPECIALIZE and INLINE is beside the point. > Sadly, "valid input" isn't a well defined concept in practice. You could write a "valid" 10GB Haskell source file that obeyed the Haskell standard grammar, but I wouldn't expect that to compile either. I would. I?m a little disappointed that ghc-devs does not. I wouldn?t expect it to compile quickly, but I would expect it to run. > You could also write small (< 1k) source programs that trigger complexity problems in Hindley-Milner style type inference. You could also use compile-time meta programming (like Template Haskell) to generate intermediate code that is well formed but much too big to compile. 
The fact that a program obeys a published grammar is not sufficient to expect it to compile with a particular implementation (sorry to say). If I write a broken Template Haskell macro, then yes, I agree. This is not the case in this example. > Adding an INLINE pragma is akin to using compile-time meta programming. Is it? I find that a strange point of view. Isn?t INLINE just a strong hint to the compiler that this function should be inlined? How is using INLINE any different from simply manually inserting the code at every call site? - Adam -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 2199 bytes Desc: not available URL: From andrew.gibiansky at gmail.com Wed Jan 8 00:09:27 2014 From: andrew.gibiansky at gmail.com (Andrew Gibiansky) Date: Tue, 7 Jan 2014 19:09:27 -0500 Subject: Changing GHC Error Message Wrapping In-Reply-To: References: <20140104185507.5a1b9b490d052db8ca579fc3@mega-nerd.com> <59543203684B2244980D7E4057D5FBC14870765F@DB3EX14MBXC306.europe.corp.microsoft.com> <59543203684B2244980D7E4057D5FBC148707DFC@DB3EX14MBXC306.europe.corp.microsoft.com> Message-ID: Hello all, I figured out that this isn't quite a bug and figured out how to do what I wanted. It turns out that the `Show` instance for SourceError does not respect `pprCols` - I don't know if that's a reasonable expectation (although it's what I expected). I ended up using the following code to print these messages: flip gcatch handler $ do runStmt "let f (x, y, z, w, e, r, d , ax, b ,c,ex ,g ,h) = (x :: Int) + y + z" RunToCompletion runStmt "f (1, 2, 3)" RunToCompletion return () where handler :: SourceError -> Ghc () handler srcerr = do let msgs = bagToList $ srcErrorMessages srcerr forM_ msgs $ \msg -> do s <- doc $ errMsgShortDoc msg liftIO $ putStrLn s doc :: GhcMonad m => SDoc -> m String doc sdoc = do flags <- getSessionDynFlags let cols = pprCols flags d = runSDoc sdoc (initSDocContext flags defaultUserStyle) return $ Pretty.fullRender Pretty.PageMode cols 1.5 string_txt "" d where string_txt :: Pretty.TextDetails -> String -> String string_txt (Pretty.Chr c) s = c:s string_txt (Pretty.Str s1) s2 = s1 ++ s2 string_txt (Pretty.PStr s1) s2 = unpackFS s1 ++ s2 string_txt (Pretty.LStr s1 _) s2 = unpackLitString s1 ++ s2 As far as I can tell, there is no simpler way, every function in `Pretty` except for `fullRender` just assumes a default of 100-char lines. -- Andrew On Tue, Jan 7, 2014 at 11:29 AM, Andrew Gibiansky < andrew.gibiansky at gmail.com> wrote: > Simon, > > That's exactly what I'm looking for! But it seems that doing it > dynamically in the GHC API doesn't work (as in my first email where I tried > to adjust pprCols via setSessionDynFlags). > > I'm going to look into the source as what ppr-cols=N actually sets and > probably file a bug - because this seems like buggy behaviour... > > Andrew > > > On Tue, Jan 7, 2014 at 4:14 AM, Simon Peyton Jones wrote: > >> -dppr-cols=N changes the width of the output page; you could try a >> large number there. There isn?t a setting meaning ?infinity?, sadly. >> >> >> >> Simon >> >> >> >> *From:* Andrew Gibiansky [mailto:andrew.gibiansky at gmail.com] >> *Sent:* 07 January 2014 03:04 >> *To:* Simon Peyton Jones >> *Cc:* Erik de Castro Lopo; ghc-devs at haskell.org >> >> *Subject:* Re: Changing GHC Error Message Wrapping >> >> >> >> Thanks Simon. >> >> >> >> In general I think multiline tuples should have many elements per line, >> but honestly the tuple case was a very specific example. 
If possible, I'd >> like to change the *overall* wrapping for *all* error messages - how does >> `sep` know when to break lines? there's clearly a numeric value for the >> number of columns somewhere, but where is it, and is it user-adjustable? >> >> >> >> For now I am just hacking around this by special-casing some error >> messages and "un-doing" the line wrapping by parsing the messages and >> joining lines back together. >> >> >> >> Thanks, >> >> Andrew >> >> >> >> On Mon, Jan 6, 2014 at 7:44 AM, Simon Peyton-Jones >> wrote: >> >> I think it?s line 705 in types/TypeRep.lhs >> >> >> >> pprTcApp p pp tc tys >> >> | isTupleTyCon tc && tyConArity tc == length tys >> >> = pprPromotionQuote tc <> >> >> tupleParens (tupleTyConSort tc) (sep (punctuate comma (map (pp >> TopPrec) tys))) >> >> >> >> If you change ?sep? to ?fsep?, you?ll get behaviour more akin to >> paragraph-filling (hence the ?f?). Give it a try. You?ll get validation >> failure from the testsuite, but you can see whether you think the result is >> better or worse. In general, should multi-line tuples be printed with many >> elements per line, or just one? >> >> >> >> Simon >> >> >> >> *From:* ghc-devs [mailto:ghc-devs-bounces at haskell.org] *On Behalf Of *Andrew >> Gibiansky >> *Sent:* 04 January 2014 17:30 >> *To:* Erik de Castro Lopo >> *Cc:* ghc-devs at haskell.org >> *Subject:* Re: Changing GHC Error Message Wrapping >> >> >> >> Apologize for the broken image formatting. >> >> >> >> With the code I posted above, I get the following output: >> >> >> >> Couldn't match expected type `(GHC.Types.Int, >> >> GHC.Types.Int, >> >> GHC.Types.Int, >> >> t0, >> >> t10, >> >> t20, >> >> t30, >> >> t40, >> >> t50, >> >> t60, >> >> t70, >> >> t80, >> >> t90)' >> >> with actual type `(t1, t2, t3)' >> >> >> >> I would like the types to be on the same line, or at least wrapped to a >> larger number of columns. >> >> >> >> Does anyone know how to do this, or where in the GHC source this >> wrapping is done? >> >> >> >> Thanks! >> >> Andrew >> >> >> >> On Sat, Jan 4, 2014 at 2:55 AM, Erik de Castro Lopo >> wrote: >> >> Carter Schonwald wrote: >> >> > hey andrew, your image link isn't working (i'm using gmail) >> >> I think the list software filters out image attachments. >> >> Erik >> -- >> ---------------------------------------------------------------------- >> Erik de Castro Lopo >> http://www.mega-nerd.com/ >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> >> >> >> >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fuuzetsu at fuuzetsu.co.uk Wed Jan 8 00:20:40 2014 From: fuuzetsu at fuuzetsu.co.uk (Mateusz Kowalczyk) Date: Wed, 08 Jan 2014 00:20:40 +0000 Subject: Validating with Haddock In-Reply-To: References: <52BF0209.6020000@fuuzetsu.co.uk> <52CB70A6.90105@fuuzetsu.co.uk> <59543203684B2244980D7E4057D5FBC148707E70@DB3EX14MBXC306.europe.corp.microsoft.com> <52CBD21A.1020900@fuuzetsu.co.uk> <59543203684B2244980D7E4057D5FBC148707F2F@DB3EX14MBXC306.europe.corp.microsoft.com> <52CBD6FC.1080405@fuuzetsu.co.uk> <52CC43C6.9020701@fuuzetsu.co.uk> <52CC49E8.4040407@fuuzetsu.co.uk> <20140107201547.GA15588@matrix.chaos.earth.li> <52CC6C94.1080800@fuuzetsu.co.uk> Message-ID: <52CC99D8.9040508@fuuzetsu.co.uk> On 07/01/14 21:20, Austin Seipp wrote: > For the record and other people reading - after a quick discussion on > IRC, it simply looks like the 32-bit peak_megabytes_allocated numbers > for those tests probably weren't updated at the same time as the 64bit > ones, leaving them out of date. > I have now validated GHC with the new Haddock stuff in place. You can see the new log at [1]. The end result is the same as validation on a tree without changes: same 8 tests failing. I have also built and ran Haddock's own tests with HEAD and they now all check out. The branch at [2] should now be ready to be merged into upstream Haddock. If someone could merge that in, that'd be great. This is the new parser which contains few bug fixes. We have more changes than this which include user-visible features and new documentation. I'll prepare and validate those for you tomorrow and bother you some more. Let me know if anything needs changing. Thanks! [1]: http://fuuzetsu.co.uk/misc/validateloghaddock [2]: https://github.com/sol/haddock/tree/new-parser -- Mateusz K. From benl at ouroborus.net Wed Jan 8 04:30:05 2014 From: benl at ouroborus.net (Ben Lippmeier) Date: Wed, 8 Jan 2014 15:30:05 +1100 Subject: panic when compiling SHA In-Reply-To: References: <52C7DB7E.1030408@gmail.com> <20140104.212236.2151539280544564973.kazu@iij.ad.jp> <20140106.120834.989663188831409811.kazu@iij.ad.jp> <1E4F1419-8C89-4E2A-B0A4-542324AA15BC@galois.com> Message-ID: <21E35601-53DD-409A-86A2-F806C39639F9@ouroborus.net> On 08/01/2014, at 10:57 , Adam Wick wrote: > I also just performed an experiment in which I took the SHA library, deleted all instances of INLINE and SPECIALIZE, and compiled it with HEAD on 32-bit Linux. You get the same crash. So my usage of SPECIALIZE and INLINE is beside the point. Ok, then maybe the default inliner heuristics are a bit too eager for this program. Whether that's a bug is open for debate. The standard way of setting such heuristics is to compile a "representative" set of benchmarks (eg, nofib) and choose some settings that give good average performance. I don't think this is an ideal approach, but it's the typical one for compiler engineering. >> Sadly, "valid input" isn't a well defined concept in practice. You could write a "valid" 10GB Haskell source file that obeyed the Haskell standard grammar, but I wouldn't expect that to compile either. > > I would. I?m a little disappointed that ghc-devs does not. I wouldn?t expect it to compile quickly, but I would expect it to run. To satisfy such a demand GHC would need to have linear space usage with respect to the input program size. This implies it must also be linear with respect to the number of top-level declarations, number of variables, number of quantifiers in type sigs, and any other countable thing in the input program. 
It would also need to be linear for other finite resources that might run out, like symbol table entries. If you had 1Gig top-level foreign exported declarations in the source program I suspect the ELF linker would freak out. I'm not trying to be difficult or argumentative -- I think limits like these come naturally with a concrete implementation. I agree it's sad that client programmers can't just enable -O2 and expect every program to work. It'd be nice to have optimisation levels that give resource or complexity guarantees, like "enabling this won't make the code-size non-linear in the input size", but that's not how it works at the moment. I'm not aware of any compiler for a "high level" language that gives such guarantees, but there may be some. I'd be interested to know of any. >> Adding an INLINE pragma is akin to using compile-time meta programming. > > Is it? I find that a strange point of view. Isn?t INLINE just a strong hint to the compiler that this function should be inlined? How is using INLINE any different from simply manually inserting the code at every call site? It's not a "hint" -- it *forces* inlining at every call site like you said. It'll make a new copy of the function body for every call site, and not back-out if the program gets "too big". Suppose: f x = g x ... g x' ... g x'' g y = h y ... h y' ... h y'' h z = i z ... i z' ... i z'' ... now force inlining for all of f g h etc. I'd expect to see at least 3*3*3=27 copies of the body of 'i' in the core program, and even more if SpecConstr and the LiberateCase transform are turned on. We had (and have) big problems like this with DPH. It took too long for the DPH team to unlearn the dogma that "inlining and call pattern specialisation make the program better". Ben. From carter.schonwald at gmail.com Wed Jan 8 06:11:07 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Wed, 8 Jan 2014 01:11:07 -0500 Subject: panic when compiling SHA In-Reply-To: References: <52C7DB7E.1030408@gmail.com> <20140104.212236.2151539280544564973.kazu@iij.ad.jp> <20140106.120834.989663188831409811.kazu@iij.ad.jp> <1E4F1419-8C89-4E2A-B0A4-542324AA15BC@galois.com> Message-ID: Adam, I agree that it should be considered a misfeature (or at the very least a good stress test that currently breaks the register allocator). That said, INLINE / INLINEABLE are only needed for intermodule optimization, have you tried using the special "inline" primop selectively, or using INLINEABLE plus selective inline? I think inline should work in the defining module even if you don't provide an INLINE or INLINEABLE. question 1: does the code compile well when you use -fllvm? (seems like the discussion so far has been NCG focused). how does the generated assembly fair that way vs the workaroudn path on NCG? On Tue, Jan 7, 2014 at 6:57 PM, Adam Wick wrote: > On Jan 7, 2014, at 2:27 AM, Ben Lippmeier wrote: > > On 07/01/2014, at 9:26 , Adam Wick wrote: > > > >>> Not if we just have this one test. I'd be keen to blame excessive use > of inline pragmas in the SHA library itself, or excessive optimisation > flags. It's not really a bug in GHC until there are two tests that exhibit > the same problem. > >> > >> The SHA library uses SPECIALIZE, INLINE, and bang patterns in fairly > standard ways. There?s nothing too exotic in there, I just basically > sprinkled hints in places I thought would be useful, and then backed those > up with benchmarking. > > > > Ahh. It's the "sprinkled hints in places I thought would be useful" > which is what I'm concerned about. 
If you just add pragmas without > understanding their effect on the core program then it'll bite further down > the line. Did you compare the object code size as well as wall clock > speedup? > > I understand the pragmas and what they do with my code. I use SPECIALIZE > twice for two functions. In both functions, it was clearer to write the > function as (a -> a -> a -> a), but I wanted specialized versions for the > two versions that were going to be used, in which (a == Word32) or (a == > Word64). This benchmarked as faster while maintaining code clarity and > concision. I use INLINE in five places, each of them a SHA step function, > with the understanding that it would generate ideal code for a compiler for > the performance-critical parts of the algorithm: straight line, > single-block code with no conditionals. > > When I did my original performance work, several versions of GHC ago, I > did indeed consider compile time, runtime performance, and space usage. I > picked what I thought was a reasonable balance at the time. > > I also just performed an experiment in which I took the SHA library, > deleted all instances of INLINE and SPECIALIZE, and compiled it with HEAD > on 32-bit Linux. You get the same crash. So my usage of SPECIALIZE and > INLINE is beside the point. > > > Sadly, "valid input" isn't a well defined concept in practice. You could > write a "valid" 10GB Haskell source file that obeyed the Haskell standard > grammar, but I wouldn't expect that to compile either. > > I would. I?m a little disappointed that ghc-devs does not. I wouldn?t > expect it to compile quickly, but I would expect it to run. > > > You could also write small (< 1k) source programs that trigger > complexity problems in Hindley-Milner style type inference. You could also > use compile-time meta programming (like Template Haskell) to generate > intermediate code that is well formed but much too big to compile. The fact > that a program obeys a published grammar is not sufficient to expect it to > compile with a particular implementation (sorry to say). > > If I write a broken Template Haskell macro, then yes, I agree. This is not > the case in this example. > > > Adding an INLINE pragma is akin to using compile-time meta programming. > > Is it? I find that a strange point of view. Isn?t INLINE just a strong > hint to the compiler that this function should be inlined? How is using > INLINE any different from simply manually inserting the code at every call > site? > > > - Adam > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jan.stolarek at p.lodz.pl Wed Jan 8 07:07:42 2014 From: jan.stolarek at p.lodz.pl (Jan Stolarek) Date: Wed, 8 Jan 2014 08:07:42 +0100 Subject: panic when compiling SHA In-Reply-To: <21E35601-53DD-409A-86A2-F806C39639F9@ouroborus.net> References: <52C7DB7E.1030408@gmail.com> <21E35601-53DD-409A-86A2-F806C39639F9@ouroborus.net> Message-ID: <201401080807.42680.jan.stolarek@p.lodz.pl> > It's not a "hint" -- it *forces* inlining at every call site like you said. There are exceptions: function must be fully applied to be inlined and there are loop-breakers (e.g. a self-recursive function will not be inlined). 
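A tiny illustration of both exceptions (example code for this point only, not taken from the thread):

    {-# INLINE addOne #-}
    addOne :: Int -> Int
    addOne x = x + 1

    -- Unsaturated call site: addOne is passed to map without arguments,
    -- so the INLINE unfolding does not fire here.
    bump :: [Int] -> [Int]
    bump = map addOne

    {-# INLINE countDown #-}
    -- Self-recursive, hence its own loop-breaker: GHC will not inline it
    -- into its recursive call site despite the pragma.
    countDown :: Int -> Int
    countDown 0 = 0
    countDown n = 1 + countDown (n - 1)
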
Janek From iavor.diatchki at gmail.com Wed Jan 8 07:14:29 2014 From: iavor.diatchki at gmail.com (Iavor Diatchki) Date: Tue, 7 Jan 2014 23:14:29 -0800 Subject: panic when compiling SHA In-Reply-To: References: <52C7DB7E.1030408@gmail.com> <20140104.212236.2151539280544564973.kazu@iij.ad.jp> <20140106.120834.989663188831409811.kazu@iij.ad.jp> <1E4F1419-8C89-4E2A-B0A4-542324AA15BC@galois.com> Message-ID: Hello, I find it a bit perplexing (and not at all constructive) that we are arguing over semantics here. We have a program (1 module, ~1000 lines of "no fancy extension Haskell"), which causes GHC to panic. This is a bug. An invariant that we were assuming did not actually hold. Hence the message that the "impossible" happened. If GHC decides to refuse to compile a program, it should not panic but, rather, explain what happened and maybe suggest a workaround. I am not familiar with GHC's back-end, but it seems that there might be something interesting that's going on here. The SHA library works fine with 7.6.3, and it compiles (admittedly very slowly) using GHC head on my 64-bit machine. So something has changed, and it'd be nice if we understood what's causing the problem. Ben suggested that the issue might be the INLINE pragmas, but clearly that's not the problem, as Adam reproduced the same behavior without those pragmas. If the issue is indeed with the built-in inline heuristics, it sounds like we either should fix the heuristics, or come up with some suggestions about what to avoid in user programs. Or, perhaps, the issue something completely unrelated (e.g., a bug in the register allocator). Either way, I think this deserves a ticket. -Iavor On Tue, Jan 7, 2014 at 10:11 PM, Carter Schonwald < carter.schonwald at gmail.com> wrote: > Adam, > I agree that it should be considered a misfeature (or at the very least a > good stress test that currently breaks the register allocator). That said, > INLINE / INLINEABLE are only needed for intermodule optimization, have you > tried using the special "inline" primop selectively, or using INLINEABLE > plus selective inline? I think inline should work in the defining module > even if you don't provide an INLINE or INLINEABLE. > > question 1: does the code compile well when you use -fllvm? (seems like > the discussion so far has been NCG focused). > how does the generated assembly fair that way vs the workaroudn path on > NCG? > > > > > On Tue, Jan 7, 2014 at 6:57 PM, Adam Wick wrote: > >> On Jan 7, 2014, at 2:27 AM, Ben Lippmeier wrote: >> > On 07/01/2014, at 9:26 , Adam Wick wrote: >> > >> >>> Not if we just have this one test. I'd be keen to blame excessive use >> of inline pragmas in the SHA library itself, or excessive optimisation >> flags. It's not really a bug in GHC until there are two tests that exhibit >> the same problem. >> >> >> >> The SHA library uses SPECIALIZE, INLINE, and bang patterns in fairly >> standard ways. There?s nothing too exotic in there, I just basically >> sprinkled hints in places I thought would be useful, and then backed those >> up with benchmarking. >> > >> > Ahh. It's the "sprinkled hints in places I thought would be useful" >> which is what I'm concerned about. If you just add pragmas without >> understanding their effect on the core program then it'll bite further down >> the line. Did you compare the object code size as well as wall clock >> speedup? >> >> I understand the pragmas and what they do with my code. I use SPECIALIZE >> twice for two functions. 
In both functions, it was clearer to write the >> function as (a -> a -> a -> a), but I wanted specialized versions for the >> two versions that were going to be used, in which (a == Word32) or (a == >> Word64). This benchmarked as faster while maintaining code clarity and >> concision. I use INLINE in five places, each of them a SHA step function, >> with the understanding that it would generate ideal code for a compiler for >> the performance-critical parts of the algorithm: straight line, >> single-block code with no conditionals. >> >> When I did my original performance work, several versions of GHC ago, I >> did indeed consider compile time, runtime performance, and space usage. I >> picked what I thought was a reasonable balance at the time. >> >> I also just performed an experiment in which I took the SHA library, >> deleted all instances of INLINE and SPECIALIZE, and compiled it with HEAD >> on 32-bit Linux. You get the same crash. So my usage of SPECIALIZE and >> INLINE is beside the point. >> >> > Sadly, "valid input" isn't a well defined concept in practice. You >> could write a "valid" 10GB Haskell source file that obeyed the Haskell >> standard grammar, but I wouldn't expect that to compile either. >> >> I would. I?m a little disappointed that ghc-devs does not. I wouldn?t >> expect it to compile quickly, but I would expect it to run. >> >> > You could also write small (< 1k) source programs that trigger >> complexity problems in Hindley-Milner style type inference. You could also >> use compile-time meta programming (like Template Haskell) to generate >> intermediate code that is well formed but much too big to compile. The fact >> that a program obeys a published grammar is not sufficient to expect it to >> compile with a particular implementation (sorry to say). >> >> If I write a broken Template Haskell macro, then yes, I agree. This is >> not the case in this example. >> >> > Adding an INLINE pragma is akin to using compile-time meta programming. >> >> Is it? I find that a strange point of view. Isn?t INLINE just a strong >> hint to the compiler that this function should be inlined? How is using >> INLINE any different from simply manually inserting the code at every call >> site? >> >> >> - Adam >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> >> > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Wed Jan 8 07:35:05 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Wed, 8 Jan 2014 02:35:05 -0500 Subject: panic when compiling SHA In-Reply-To: References: <52C7DB7E.1030408@gmail.com> <20140104.212236.2151539280544564973.kazu@iij.ad.jp> <20140106.120834.989663188831409811.kazu@iij.ad.jp> <1E4F1419-8C89-4E2A-B0A4-542324AA15BC@galois.com> Message-ID: well said iavor. It perhaps hints at the register allocators needing some love? I hope to dig deep into those myself later this year, but maybe it needs some wibbles to clean up for 7.8 right now? On Wed, Jan 8, 2014 at 2:14 AM, Iavor Diatchki wrote: > Hello, > > I find it a bit perplexing (and not at all constructive) that we are > arguing over semantics here. We have a program (1 module, ~1000 lines of > "no fancy extension Haskell"), which causes GHC to panic. This is a bug. 
> An invariant that we were assuming did not actually hold. Hence the > message that the "impossible" happened. If GHC decides to refuse to > compile a program, it should not panic but, rather, explain what happened > and maybe suggest a workaround. > > I am not familiar with GHC's back-end, but it seems that there might be > something interesting that's going on here. The SHA library works fine > with 7.6.3, and it compiles (admittedly very slowly) using GHC head on my > 64-bit machine. So something has changed, and it'd be nice if we > understood what's causing the problem. > > Ben suggested that the issue might be the INLINE pragmas, but clearly > that's not the problem, as Adam reproduced the same behavior without those > pragmas. If the issue is indeed with the built-in inline heuristics, it > sounds like we either should fix the heuristics, or come up with some > suggestions about what to avoid in user programs. Or, perhaps, the issue > something completely unrelated (e.g., a bug in the register allocator). > Either way, I think this deserves a ticket. > > -Iavor > > > > > > > > > On Tue, Jan 7, 2014 at 10:11 PM, Carter Schonwald < > carter.schonwald at gmail.com> wrote: > >> Adam, >> I agree that it should be considered a misfeature (or at the very least a >> good stress test that currently breaks the register allocator). That said, >> INLINE / INLINEABLE are only needed for intermodule optimization, have >> you tried using the special "inline" primop selectively, or using >> INLINEABLE plus selective inline? I think inline should work in the >> defining module even if you don't provide an INLINE or INLINEABLE. >> >> question 1: does the code compile well when you use -fllvm? (seems like >> the discussion so far has been NCG focused). >> how does the generated assembly fair that way vs the workaroudn path on >> NCG? >> >> >> >> >> On Tue, Jan 7, 2014 at 6:57 PM, Adam Wick wrote: >> >>> On Jan 7, 2014, at 2:27 AM, Ben Lippmeier wrote: >>> > On 07/01/2014, at 9:26 , Adam Wick wrote: >>> > >>> >>> Not if we just have this one test. I'd be keen to blame excessive >>> use of inline pragmas in the SHA library itself, or excessive optimisation >>> flags. It's not really a bug in GHC until there are two tests that exhibit >>> the same problem. >>> >> >>> >> The SHA library uses SPECIALIZE, INLINE, and bang patterns in fairly >>> standard ways. There?s nothing too exotic in there, I just basically >>> sprinkled hints in places I thought would be useful, and then backed those >>> up with benchmarking. >>> > >>> > Ahh. It's the "sprinkled hints in places I thought would be useful" >>> which is what I'm concerned about. If you just add pragmas without >>> understanding their effect on the core program then it'll bite further down >>> the line. Did you compare the object code size as well as wall clock >>> speedup? >>> >>> I understand the pragmas and what they do with my code. I use SPECIALIZE >>> twice for two functions. In both functions, it was clearer to write the >>> function as (a -> a -> a -> a), but I wanted specialized versions for the >>> two versions that were going to be used, in which (a == Word32) or (a == >>> Word64). This benchmarked as faster while maintaining code clarity and >>> concision. I use INLINE in five places, each of them a SHA step function, >>> with the understanding that it would generate ideal code for a compiler for >>> the performance-critical parts of the algorithm: straight line, >>> single-block code with no conditionals. 
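As a stand-in illustration of that shape (not the SHA package's actual source): a single overloaded definition kept readable, with SPECIALIZE pragmas for the two types that are really used.

    import Data.Bits (Bits, complement, xor, (.&.))
    import Data.Word (Word32, Word64)

    -- A SHA-style "choose" step, written once over any Bits instance.
    ch :: Bits a => a -> a -> a -> a
    ch x y z = (x .&. y) `xor` (complement x .&. z)
    {-# SPECIALIZE ch :: Word32 -> Word32 -> Word32 -> Word32 #-}
    {-# SPECIALIZE ch :: Word64 -> Word64 -> Word64 -> Word64 #-}
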
>>> >>> When I did my original performance work, several versions of GHC ago, I >>> did indeed consider compile time, runtime performance, and space usage. I >>> picked what I thought was a reasonable balance at the time. >>> >>> I also just performed an experiment in which I took the SHA library, >>> deleted all instances of INLINE and SPECIALIZE, and compiled it with HEAD >>> on 32-bit Linux. You get the same crash. So my usage of SPECIALIZE and >>> INLINE is beside the point. >>> >>> > Sadly, "valid input" isn't a well defined concept in practice. You >>> could write a "valid" 10GB Haskell source file that obeyed the Haskell >>> standard grammar, but I wouldn't expect that to compile either. >>> >>> I would. I?m a little disappointed that ghc-devs does not. I wouldn?t >>> expect it to compile quickly, but I would expect it to run. >>> >>> > You could also write small (< 1k) source programs that trigger >>> complexity problems in Hindley-Milner style type inference. You could also >>> use compile-time meta programming (like Template Haskell) to generate >>> intermediate code that is well formed but much too big to compile. The fact >>> that a program obeys a published grammar is not sufficient to expect it to >>> compile with a particular implementation (sorry to say). >>> >>> If I write a broken Template Haskell macro, then yes, I agree. This is >>> not the case in this example. >>> >>> > Adding an INLINE pragma is akin to using compile-time meta programming. >>> >>> Is it? I find that a strange point of view. Isn?t INLINE just a strong >>> hint to the compiler that this function should be inlined? How is using >>> INLINE any different from simply manually inserting the code at every call >>> site? >>> >>> >>> - Adam >>> _______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://www.haskell.org/mailman/listinfo/ghc-devs >>> >>> >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From austin at well-typed.com Wed Jan 8 07:43:15 2014 From: austin at well-typed.com (Austin Seipp) Date: Wed, 8 Jan 2014 01:43:15 -0600 Subject: panic when compiling SHA In-Reply-To: References: <52C7DB7E.1030408@gmail.com> <20140104.212236.2151539280544564973.kazu@iij.ad.jp> <20140106.120834.989663188831409811.kazu@iij.ad.jp> <1E4F1419-8C89-4E2A-B0A4-542324AA15BC@galois.com> Message-ID: Just a guess, but I imagine it works for you because on amd64 the NCG will have more registers available to satisfy all the live ranges. On 32bit the situation is much worse because that's just how x86 is. In any case, I'm inclined to also agree this is a bug, and merits an open ticket. And it's not like there is no precedent for these things: plenty of libraries (vector-algorithms & others) stress the simplifier e.g. exhausting the simplifier ticks, and we tend to try and work towards fixing these. While INLINE is crucial in vector-algorithms, it probably isn't for the SHA package, and based on what Adam said I do think this perhaps merits some more investigation. I doubt it will get 'fixed' properly in time for 7.8 though - a workaround or something will probably be needed for 32bit builds in some way (perhaps just a few careful refactorings with uses of NOINLINE, although I can't estimate the performance impact.) 
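One possible shape for such a 32-bit-only workaround (purely a sketch: `step` is an illustrative stand-in rather than a function from the SHA package, the word-size test uses the MachDeps.h macro that GHC installs, and whether this actually helps the register allocator would need measuring):

    {-# LANGUAGE CPP, BangPatterns #-}
    #include "MachDeps.h"

    import Data.Bits (xor)
    import Data.Word (Word64)

    -- Keep the forced inlining on 64-bit targets, but back off to NOINLINE
    -- on 32-bit ones, where register pressure is the problem.
    #if WORD_SIZE_IN_BITS < 64
    {-# NOINLINE step #-}
    #else
    {-# INLINE step #-}
    #endif
    step :: Word64 -> Word64 -> Word64
    step !a !b = a `xor` (b + a)
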
On Wed, Jan 8, 2014 at 1:14 AM, Iavor Diatchki wrote: > Hello, > > I find it a bit perplexing (and not at all constructive) that we are arguing > over semantics here. We have a program (1 module, ~1000 lines of "no fancy > extension Haskell"), which causes GHC to panic. This is a bug. An > invariant that we were assuming did not actually hold. Hence the message > that the "impossible" happened. If GHC decides to refuse to compile a > program, it should not panic but, rather, explain what happened and maybe > suggest a workaround. > > I am not familiar with GHC's back-end, but it seems that there might be > something interesting that's going on here. The SHA library works fine > with 7.6.3, and it compiles (admittedly very slowly) using GHC head on my > 64-bit machine. So something has changed, and it'd be nice if we > understood what's causing the problem. > > Ben suggested that the issue might be the INLINE pragmas, but clearly that's > not the problem, as Adam reproduced the same behavior without those pragmas. > If the issue is indeed with the built-in inline heuristics, it sounds like > we either should fix the heuristics, or come up with some suggestions about > what to avoid in user programs. Or, perhaps, the issue something completely > unrelated (e.g., a bug in the register allocator). Either way, I think > this deserves a ticket. > > -Iavor > > > > > > > > > On Tue, Jan 7, 2014 at 10:11 PM, Carter Schonwald > wrote: >> >> Adam, >> I agree that it should be considered a misfeature (or at the very least a >> good stress test that currently breaks the register allocator). That said, >> INLINE / INLINEABLE are only needed for intermodule optimization, have you >> tried using the special "inline" primop selectively, or using INLINEABLE >> plus selective inline? I think inline should work in the defining module >> even if you don't provide an INLINE or INLINEABLE. >> >> question 1: does the code compile well when you use -fllvm? (seems like >> the discussion so far has been NCG focused). >> how does the generated assembly fair that way vs the workaroudn path on >> NCG? >> >> >> >> >> On Tue, Jan 7, 2014 at 6:57 PM, Adam Wick wrote: >>> >>> On Jan 7, 2014, at 2:27 AM, Ben Lippmeier wrote: >>> > On 07/01/2014, at 9:26 , Adam Wick wrote: >>> > >>> >>> Not if we just have this one test. I'd be keen to blame excessive use >>> >>> of inline pragmas in the SHA library itself, or excessive optimisation >>> >>> flags. It's not really a bug in GHC until there are two tests that exhibit >>> >>> the same problem. >>> >> >>> >> The SHA library uses SPECIALIZE, INLINE, and bang patterns in fairly >>> >> standard ways. There?s nothing too exotic in there, I just basically >>> >> sprinkled hints in places I thought would be useful, and then backed those >>> >> up with benchmarking. >>> > >>> > Ahh. It's the "sprinkled hints in places I thought would be useful" >>> > which is what I'm concerned about. If you just add pragmas without >>> > understanding their effect on the core program then it'll bite further down >>> > the line. Did you compare the object code size as well as wall clock >>> > speedup? >>> >>> I understand the pragmas and what they do with my code. I use SPECIALIZE >>> twice for two functions. In both functions, it was clearer to write the >>> function as (a -> a -> a -> a), but I wanted specialized versions for the >>> two versions that were going to be used, in which (a == Word32) or (a == >>> Word64). This benchmarked as faster while maintaining code clarity and >>> concision. 
I use INLINE in five places, each of them a SHA step function, >>> with the understanding that it would generate ideal code for a compiler for >>> the performance-critical parts of the algorithm: straight line, single-block >>> code with no conditionals. >>> >>> When I did my original performance work, several versions of GHC ago, I >>> did indeed consider compile time, runtime performance, and space usage. I >>> picked what I thought was a reasonable balance at the time. >>> >>> I also just performed an experiment in which I took the SHA library, >>> deleted all instances of INLINE and SPECIALIZE, and compiled it with HEAD on >>> 32-bit Linux. You get the same crash. So my usage of SPECIALIZE and INLINE >>> is beside the point. >>> >>> > Sadly, "valid input" isn't a well defined concept in practice. You >>> > could write a "valid" 10GB Haskell source file that obeyed the Haskell >>> > standard grammar, but I wouldn't expect that to compile either. >>> >>> I would. I?m a little disappointed that ghc-devs does not. I wouldn?t >>> expect it to compile quickly, but I would expect it to run. >>> >>> > You could also write small (< 1k) source programs that trigger >>> > complexity problems in Hindley-Milner style type inference. You could also >>> > use compile-time meta programming (like Template Haskell) to generate >>> > intermediate code that is well formed but much too big to compile. The fact >>> > that a program obeys a published grammar is not sufficient to expect it to >>> > compile with a particular implementation (sorry to say). >>> >>> If I write a broken Template Haskell macro, then yes, I agree. This is >>> not the case in this example. >>> >>> > Adding an INLINE pragma is akin to using compile-time meta programming. >>> >>> Is it? I find that a strange point of view. Isn?t INLINE just a strong >>> hint to the compiler that this function should be inlined? How is using >>> INLINE any different from simply manually inserting the code at every call >>> site? >>> >>> >>> - Adam >>> _______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://www.haskell.org/mailman/listinfo/ghc-devs >>> >> >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From austin at well-typed.com Wed Jan 8 07:46:35 2014 From: austin at well-typed.com (Austin Seipp) Date: Wed, 8 Jan 2014 01:46:35 -0600 Subject: Validating with Haddock In-Reply-To: <52CC99D8.9040508@fuuzetsu.co.uk> References: <52BF0209.6020000@fuuzetsu.co.uk> <52CB70A6.90105@fuuzetsu.co.uk> <59543203684B2244980D7E4057D5FBC148707E70@DB3EX14MBXC306.europe.corp.microsoft.com> <52CBD21A.1020900@fuuzetsu.co.uk> <59543203684B2244980D7E4057D5FBC148707F2F@DB3EX14MBXC306.europe.corp.microsoft.com> <52CBD6FC.1080405@fuuzetsu.co.uk> <52CC43C6.9020701@fuuzetsu.co.uk> <52CC49E8.4040407@fuuzetsu.co.uk> <20140107201547.GA15588@matrix.chaos.earth.li> <52CC6C94.1080800@fuuzetsu.co.uk> <52CC99D8.9040508@fuuzetsu.co.uk> Message-ID: Excellent, thank you. We should really fix the 32bit performance numbers too I think, based on what we discussed on IRC earlier. Would you like to submit a patch for that too please? 
You can find the numbers in testsuite/tests/perf/haddock/all.T. Also, is there any new documentation we should need for this? Is all the new stuff properly documented somewhere? Etc. On Tue, Jan 7, 2014 at 6:20 PM, Mateusz Kowalczyk wrote: > On 07/01/14 21:20, Austin Seipp wrote: >> For the record and other people reading - after a quick discussion on >> IRC, it simply looks like the 32-bit peak_megabytes_allocated numbers >> for those tests probably weren't updated at the same time as the 64bit >> ones, leaving them out of date. >> > > I have now validated GHC with the new Haddock stuff in place. You can > see the new log at [1]. The end result is the same as validation on a > tree without changes: same 8 tests failing. > > I have also built and ran Haddock's own tests with HEAD and they now all > check out. The branch at [2] should now be ready to be merged into > upstream Haddock. If someone could merge that in, that'd be great. This > is the new parser which contains few bug fixes. We have more changes > than this which include user-visible features and new documentation. > > I'll prepare and validate those for you tomorrow and bother you some more. > > Let me know if anything needs changing. > > Thanks! > > [1]: http://fuuzetsu.co.uk/misc/validateloghaddock > [2]: https://github.com/sol/haddock/tree/new-parser > > -- > Mateusz K. > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From austin at well-typed.com Wed Jan 8 08:01:20 2014 From: austin at well-typed.com (Austin Seipp) Date: Wed, 8 Jan 2014 02:01:20 -0600 Subject: LLVM and dynamic linking In-Reply-To: References: <877gb7ulmi.fsf@gmail.com> <52B418EC.8090308@gmail.com> <87a9fm2gfr.fsf@gmail.com> <0D8E2221-2F91-4DFA-836F-3AA2DB1F53BD@gmail.com> <87c5ff3fd1264e9e9763a943718324e6@BN1PR05MB171.namprd05.prod.outlook.com> Message-ID: Personally I'd be in favor of that to keep it easy, but there hasn't really been any poll about what to do. For the most part it tends to work fine, but I think it's the wrong thing to do in any case. IMO the truly 'correct' thing to do, is not to rely on the system LLVM at all, but a version specifically tested with and distributed with GHC. This can be a private binary only we use. We already do this with MinGW on Windows actually, because in practice, relying on versions 'in the wild' is somewhat troublesome. In our case, we really just need the bitcode compiler and optimizer, which are pretty small pieces. Relying on a moving target like the system install or whatever possible random XYZ install from SVN or (some derivative forked toolchain!) is problematic for developers, and users invariably want to try new combinations, which can break in subtle or odd ways. I think it's more sensible and straightforward - for the vast majority of users and use-cases - for us to pick version that is tested, reliably works and optimizes code well, and ship that. Then users just know '-fasm is faster for compiling, -fllvm will optimize better for some code.' That's all they really need to know. If LLVM is to be considered 'stable' for Tier 1 GHC platforms, I'm sympathetic to Aaron's argument, and I'd say it should be held to the same standards as the NCG. That means it should be considered a reliable option and we should vet it to reasonable standards, even if it's a bit more work. It's just really hard to do that right now. But I think implementing this wouldn't be difficult, it just has some sticky bits about how to do it. 
We can of course upgrade it over time - but I think trying to hit moving targets in the wild is a bad long-term solution. On Tue, Jan 7, 2014 at 3:07 PM, George Colpitts wrote: > wrt > > We support a wide range of LLVM versions > > Why can't we stop doing that and only support one or two, e.g. GHC 7.8 would > only support llvm 3.3 and perhaps 3.4? > > > > > > On Tue, Jan 7, 2014 at 4:54 PM, Austin Seipp wrote: >> >> Hi all, >> >> Apologies for the late reply. >> >> First off, one thing to note wrt GMP: GMP is an LGPL library which we >> link against. Technically, we need to allow relinking to be compliant >> and free of of the LGPL for our own executables, but this should be >> reasonably possible - on systems where there is a system-wide GMP >> installed, we use that copy (this occurs mostly on OSX and Linux.) And >> so do executables compiled by GHC. Even when GHC uses static linking >> or dynamic linking for haskell code in this case, it will still always >> dynamically link to libgmp - meaning replacing the shared object >> should be possible. This is just the way modern Linux/OSX systems >> distribute system-wide C libraries, as you expect. >> >> In the case where we don't have this, we build our own copy of libgmp >> inside the source tree and use that instead. That said there are other >> reasons why we might want to be free of GMP entirely, but that's >> neither here nor there. In any case, the issue is pretty orthogonal to >> LLVM, dynamic haskell linking, etc - on a Linux system, you should >> reasonably be able to swap out a `libgmp.so` for another modified >> copy[1], and your Haskell programs should be compliant in this >> regard.[2] >> >> Now, as for LLVM. >> >> For one, LLVM actually is a 'relatively' cheap backend to have around. >> I say LLVM is 'relatively' cheap because All External Dependencies >> Have A Cost. The code is reasonably small, and in any case GHC still >> does most of the heavy lifting - the LLVM backend and native code >> generator share a very large amount of code. We don't really duplicate >> optimizations ourselves, for example, and some optimizations we do >> perform on our IR can't be done by LLVM anyway (it doesn't have enough >> information.) >> >> But LLVM has some very notable costs for GHC developers: >> >> * It's slower to compile with, because it tries to re-optimize the >> code we give it, but it mostly accomplishes nothing beyond advanced >> optimizations like vectorization/scalar evolution. >> * We support a wide range of LLVM versions (a nightmare IMO) which >> means pinning down specific versions and supporting them all is rather >> difficult. Combined with e.g. distro maintainers who may patch bugs >> themselves, and the things you're depending on in the wild (or what >> users might report bugs with) aren't as solid or well understood. >> * LLVM is extremely large, extremely complex, and the amount of >> people who can sensibly work on both GHC and LLVM are few and far >> inbetween. So fixing these issues is time consuming, difficult, and >> mostly tedious grunt work. >> >> All this basically sums up to the fact that dealing with LLVM comes >> with complications all on its own that makes it a different kind of >> beast to handle. >> >> So, the LLVM backend definitely needs some love. All of these things >> are solveable (and I have some ideas for solving most of them,) but >> none of them will quite come for free. 
But there are some real >> improvements that can be made here I think, and make LLVM much more >> smoothly supported for GHC itself. If you'd like to help it'd be >> really appreciated - I'd like to see LLVM have more love put forth, >> but it's a lot of work of course!. >> >> (Finally, in reference to the last point: I am in the obvious >> minority, but I am favorable to having the native code generator >> around, even if it's a bit old and crufty these days - at least it's >> small, fast and simple enough to be grokked and hacked on, and I don't >> think it fragments development all that much. By comparison, LLVM is a >> mammoth beast of incredible size with a sizeable entry barrier IMO. I >> think there's merit to having both a simple, 'obviously working' >> option in addition to the heavy duty one.) >> >> [1] Relevant tool: http://nixos.org/patchelf.html >> [2] Of course, IANAL, but there you go. >> >> On Wed, Jan 1, 2014 at 9:03 PM, Aaron Friel wrote: >> > Because I think it?s going to be an organizational issue and a >> > duplication >> > of effort if GHC is built one way but the future direction of LLVM is >> > another. >> > >> > Imagine if GCC started developing a new engine and it didn?t work with >> > one >> > of the biggest, most regular consumers of GCC. Say, the Linux kernel, or >> > itself. At first, the situation is optimistic - if this engine doesn?t >> > work >> > for the project that has the smartest, brightest GCC hackers potentially >> > looking at it, then it should fix itself soon enough. Suppose the >> > situation >> > lingers though, and continues for months without fix. The new GCC >> > backend >> > starts to become the default, and the community around GCC advocates for >> > end-users to use it to optimize code for their projects and it even >> > becomes >> > the default for some platforms, such as ARM. >> > >> > What I?ve described is analogous to the GHC situation - and the result >> > is >> > that GHC isn?t self-hosting on some platforms and the inertia that used >> > to >> > be behind the LLVM backend seems to have stagnated. Whereas LLVM used to >> > be >> > the ?new hotness?, I?ve noticed that issues like Trac #7787 no longer >> > have a >> > lot of eyes on them and externally it seems like GHC has accepted a >> > bifurcated approach for development. >> > >> > I dramatize the situation above, but there?s some truth to it. The LLVM >> > backend needs some care and attention and if the majority of GHC devs >> > can?t >> > build GHC with LLVM, then that means the smartest, brightest GHC hackers >> > won?t have their attention turned toward fixing those problems. If a >> > patch >> > to GHC-HEAD broke compilation for every backend, it would be fixed in >> > short >> > order. If a new version of GCC did not work with GHC, I can imagine it >> > would >> > be only hours before the first patches came in resolving the issue. On >> > OS X >> > Mavericks, an incompatibility with GHC has led to a swift reaction and >> > strong support for resolving platform issues. The attention to the LLVM >> > backend is visibly smaller, but I don?t know enough about the people >> > working >> > on GHC to know if it is actually smaller. >> > >> > The way I am trying to change this is by making it easier for people to >> > start using GHC (by putting images on Docker.io) and, in the process, >> > learning about GHC?s build process and trying to make things work for my >> > own >> > projects. The Docker image allows anyone with a Linux kernel to build >> > and >> > play with GHC HEAD. 
The information about building GHC yourself is >> > difficult >> > to approach and I found it hard to get started, and I want to improve >> > that >> > too, so I?m learning and asking questions. >> > >> > From: Carter Schonwald >> > Sent: Wednesday, January 1, 2014 5:54 PM >> >> > To: Aaron Friel >> > Cc: ghc-devs at haskell.org >> > >> > 7.8 should have working dylib support on the llvm backend. (i believe >> > some >> > of the relevant patches are in head already, though Ben Gamari can opine >> > on >> > that) >> > >> > why do you want ghc to be built with llvm? (i know i've tried myself in >> > the >> > past, and it should be doable with 7.8 using 7.8 soon too) >> > >> > >> > On Wed, Jan 1, 2014 at 5:38 PM, Aaron Friel wrote: >> >> >> >> Replying to include the email list. You?re right, the llvm backend and >> >> the >> >> gmp licensing issues are orthogonal - or should be. The problem is I >> >> get >> >> build errors when trying to build GHC with LLVM and dynamic libraries. >> >> >> >> The result is that I get a few different choices when producing a >> >> platform >> >> image for development, with some uncomfortable tradeoffs: >> >> >> >> LLVM-built GHC, dynamic libs - doesn?t build. >> >> LLVM-built GHC, static libs - potential licensing oddities with me >> >> shipping a statically linked ghc binary that is now gpled. I am not a >> >> lawyer, but the situation makes me uncomfortable. >> >> GCC/ASM-built GHC, dynamic libs - this is the *standard* for most >> >> platforms shipping ghc binaries, but it means that one of the biggest >> >> and >> >> most critical users of the LLVM backend is neglecting it. It also >> >> bifurcates >> >> development resources for GHC. Optimization work is duplicated and >> >> already >> >> devs are getting into the uncomfortable position of suggesting to users >> >> that >> >> they should trust GHC to build your programs in a particular way, but >> >> not >> >> itself. >> >> GCC/ASM-built GHC, static libs - worst of all possible worlds. >> >> >> >> >> >> Because of this, the libgmp and llvm-backend issues aren?t entirely >> >> orthogonal. Trac ticket #7885 is exactly the issue I get when trying to >> >> compile #1. >> >> >> >> From: Carter Schonwald >> >> Sent: Monday, December 30, 2013 1:05 PM >> >> >> To: Aaron Friel >> >> >> >> Good question but you forgot to email the mailing list too :-) >> >> >> >> Using llvm has nothing to do with Gmp. Use the native code gen (it's >> >> simper) and integer-simple. >> >> >> >> That said, standard ghc dylinks to a system copy of Gmp anyways (I >> >> think >> >> ). Building ghc as a Dylib is orthogonal. >> >> >> >> -Carter >> >> >> >> On Dec 30, 2013, at 1:58 PM, Aaron Friel wrote: >> >> >> >> Excellent research - I?m curious if this is the right thread to inquire >> >> about the status of trying to link GHC itself dynamically. >> >> >> >> I?ve been attempting to do so with various LLVM versions (3.2, 3.3, >> >> 3.4) >> >> using snapshot builds of GHC (within the past week) from git, and I hit >> >> ticket #7885 [https://ghc.haskell.org/trac/ghc/ticket/7885] every time >> >> (even >> >> the exact same error message). >> >> >> >> I?m interested in dynamically linking GHC with LLVM to avoid the >> >> entanglement with libgmp?s license. >> >> >> >> If this is the wrong thread or if I should reply instead to the trac >> >> item, >> >> please let me know. 
>> > >> > >> > _______________________________________________ >> > ghc-devs mailing list >> > ghc-devs at haskell.org >> > http://www.haskell.org/mailman/listinfo/ghc-devs >> > >> >> >> >> -- >> Regards, >> >> Austin Seipp, Haskell Consultant >> Well-Typed LLP, http://www.well-typed.com/ >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs > > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From carter.schonwald at gmail.com Wed Jan 8 08:09:41 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Wed, 8 Jan 2014 03:09:41 -0500 Subject: panic when compiling SHA In-Reply-To: References: <52C7DB7E.1030408@gmail.com> <20140104.212236.2151539280544564973.kazu@iij.ad.jp> <20140106.120834.989663188831409811.kazu@iij.ad.jp> <1E4F1419-8C89-4E2A-B0A4-542324AA15BC@galois.com> Message-ID: one approach for the sha near term fix is to add some CPP such that certain inlines are suppressed on x86_32 perhaps? On Wed, Jan 8, 2014 at 2:43 AM, Austin Seipp wrote: > Just a guess, but I imagine it works for you because on amd64 the NCG > will have more registers available to satisfy all the live ranges. On > 32bit the situation is much worse because that's just how x86 is. > > In any case, I'm inclined to also agree this is a bug, and merits an > open ticket. And it's not like there is no precedent for these things: > plenty of libraries (vector-algorithms & others) stress the simplifier > e.g. exhausting the simplifier ticks, and we tend to try and work > towards fixing these. > > While INLINE is crucial in vector-algorithms, it probably isn't for > the SHA package, and based on what Adam said I do think this perhaps > merits some more investigation. I doubt it will get 'fixed' properly > in time for 7.8 though - a workaround or something will probably be > needed for 32bit builds in some way (perhaps just a few careful > refactorings with uses of NOINLINE, although I can't estimate the > performance impact.) > > On Wed, Jan 8, 2014 at 1:14 AM, Iavor Diatchki > wrote: > > Hello, > > > > I find it a bit perplexing (and not at all constructive) that we are > arguing > > over semantics here. We have a program (1 module, ~1000 lines of "no > fancy > > extension Haskell"), which causes GHC to panic. This is a bug. An > > invariant that we were assuming did not actually hold. Hence the message > > that the "impossible" happened. If GHC decides to refuse to compile a > > program, it should not panic but, rather, explain what happened and maybe > > suggest a workaround. > > > > I am not familiar with GHC's back-end, but it seems that there might be > > something interesting that's going on here. The SHA library works fine > > with 7.6.3, and it compiles (admittedly very slowly) using GHC head on my > > 64-bit machine. So something has changed, and it'd be nice if we > > understood what's causing the problem. > > > > Ben suggested that the issue might be the INLINE pragmas, but clearly > that's > > not the problem, as Adam reproduced the same behavior without those > pragmas. > > If the issue is indeed with the built-in inline heuristics, it sounds > like > > we either should fix the heuristics, or come up with some suggestions > about > > what to avoid in user programs. Or, perhaps, the issue something > completely > > unrelated (e.g., a bug in the register allocator). Either way, I think > > this deserves a ticket. 
> > > > -Iavor > > > > > > > > > > > > > > > > > > On Tue, Jan 7, 2014 at 10:11 PM, Carter Schonwald > > wrote: > >> > >> Adam, > >> I agree that it should be considered a misfeature (or at the very least > a > >> good stress test that currently breaks the register allocator). That > said, > >> INLINE / INLINEABLE are only needed for intermodule optimization, have > you > >> tried using the special "inline" primop selectively, or using INLINEABLE > >> plus selective inline? I think inline should work in the defining module > >> even if you don't provide an INLINE or INLINEABLE. > >> > >> question 1: does the code compile well when you use -fllvm? (seems like > >> the discussion so far has been NCG focused). > >> how does the generated assembly fair that way vs the workaroudn path on > >> NCG? > >> > >> > >> > >> > >> On Tue, Jan 7, 2014 at 6:57 PM, Adam Wick wrote: > >>> > >>> On Jan 7, 2014, at 2:27 AM, Ben Lippmeier wrote: > >>> > On 07/01/2014, at 9:26 , Adam Wick wrote: > >>> > > >>> >>> Not if we just have this one test. I'd be keen to blame excessive > use > >>> >>> of inline pragmas in the SHA library itself, or excessive > optimisation > >>> >>> flags. It's not really a bug in GHC until there are two tests that > exhibit > >>> >>> the same problem. > >>> >> > >>> >> The SHA library uses SPECIALIZE, INLINE, and bang patterns in fairly > >>> >> standard ways. There?s nothing too exotic in there, I just basically > >>> >> sprinkled hints in places I thought would be useful, and then > backed those > >>> >> up with benchmarking. > >>> > > >>> > Ahh. It's the "sprinkled hints in places I thought would be useful" > >>> > which is what I'm concerned about. If you just add pragmas without > >>> > understanding their effect on the core program then it'll bite > further down > >>> > the line. Did you compare the object code size as well as wall clock > >>> > speedup? > >>> > >>> I understand the pragmas and what they do with my code. I use > SPECIALIZE > >>> twice for two functions. In both functions, it was clearer to write the > >>> function as (a -> a -> a -> a), but I wanted specialized versions for > the > >>> two versions that were going to be used, in which (a == Word32) or (a > == > >>> Word64). This benchmarked as faster while maintaining code clarity and > >>> concision. I use INLINE in five places, each of them a SHA step > function, > >>> with the understanding that it would generate ideal code for a > compiler for > >>> the performance-critical parts of the algorithm: straight line, > single-block > >>> code with no conditionals. > >>> > >>> When I did my original performance work, several versions of GHC ago, I > >>> did indeed consider compile time, runtime performance, and space > usage. I > >>> picked what I thought was a reasonable balance at the time. > >>> > >>> I also just performed an experiment in which I took the SHA library, > >>> deleted all instances of INLINE and SPECIALIZE, and compiled it with > HEAD on > >>> 32-bit Linux. You get the same crash. So my usage of SPECIALIZE and > INLINE > >>> is beside the point. > >>> > >>> > Sadly, "valid input" isn't a well defined concept in practice. You > >>> > could write a "valid" 10GB Haskell source file that obeyed the > Haskell > >>> > standard grammar, but I wouldn't expect that to compile either. > >>> > >>> I would. I?m a little disappointed that ghc-devs does not. I wouldn?t > >>> expect it to compile quickly, but I would expect it to run. 
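For readers following the pragma discussion in this thread, the usage being described, together with the CPP guard and the GHC.Exts.inline call-site primop suggested above, boils down to something like the following minimal sketch. The module name, function names and bodies are made up for illustration; this is not the actual SHA source.

    {-# LANGUAGE CPP #-}
    #include "MachDeps.h"

    module StepSketch (ch, roundStep) where

    import Data.Bits (Bits, complement, xor, (.&.))
    import Data.Word (Word32, Word64)
    import GHC.Exts (inline)

    -- Written once, polymorphically, then specialised to the two word
    -- sizes that are actually used.
    ch :: Bits a => a -> a -> a -> a
    ch x y z = (x .&. y) `xor` (complement x .&. z)
    {-# SPECIALIZE ch :: Word32 -> Word32 -> Word32 -> Word32 #-}
    {-# SPECIALIZE ch :: Word64 -> Word64 -> Word64 -> Word64 #-}

    -- The near-term CPP idea mentioned above: only request inlining
    -- where register pressure is manageable; on 32-bit targets leave
    -- the call out of line.
    #if WORD_SIZE_IN_BITS >= 64
    {-# INLINE roundStep #-}
    #else
    {-# NOINLINE roundStep #-}
    #endif
    roundStep :: Word64 -> Word64 -> Word64 -> Word64
    roundStep a b c =
      -- GHC.Exts.inline asks for inlining of ch at this particular
      -- call site, without an INLINE pragma on the definition itself.
      inline ch a b c
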
> >>> > >>> > You could also write small (< 1k) source programs that trigger > >>> > complexity problems in Hindley-Milner style type inference. You > could also > >>> > use compile-time meta programming (like Template Haskell) to generate > >>> > intermediate code that is well formed but much too big to compile. > The fact > >>> > that a program obeys a published grammar is not sufficient to expect > it to > >>> > compile with a particular implementation (sorry to say). > >>> > >>> If I write a broken Template Haskell macro, then yes, I agree. This is > >>> not the case in this example. > >>> > >>> > Adding an INLINE pragma is akin to using compile-time meta > programming. > >>> > >>> Is it? I find that a strange point of view. Isn?t INLINE just a strong > >>> hint to the compiler that this function should be inlined? How is using > >>> INLINE any different from simply manually inserting the code at every > call > >>> site? > >>> > >>> > >>> - Adam > >>> _______________________________________________ > >>> ghc-devs mailing list > >>> ghc-devs at haskell.org > >>> http://www.haskell.org/mailman/listinfo/ghc-devs > >>> > >> > >> > >> _______________________________________________ > >> ghc-devs mailing list > >> ghc-devs at haskell.org > >> http://www.haskell.org/mailman/listinfo/ghc-devs > >> > > > > > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://www.haskell.org/mailman/listinfo/ghc-devs > > > > > > -- > Regards, > > Austin Seipp, Haskell Consultant > Well-Typed LLP, http://www.well-typed.com/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Wed Jan 8 08:12:32 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Wed, 8 Jan 2014 03:12:32 -0500 Subject: LLVM and dynamic linking In-Reply-To: References: <877gb7ulmi.fsf@gmail.com> <52B418EC.8090308@gmail.com> <87a9fm2gfr.fsf@gmail.com> <0D8E2221-2F91-4DFA-836F-3AA2DB1F53BD@gmail.com> <87c5ff3fd1264e9e9763a943718324e6@BN1PR05MB171.namprd05.prod.outlook.com> Message-ID: well said points. Theres a lot we can do, and i think I many of those active in GHC have discussed various ideas to explore in this area for after the ghc 7.8 release. I believe someone did an experiment with llvm-general as an alternative ghc backend a few months back, who was it who did that? (llvm-general only makes sense for stage-2 ghc, but it does provide the advantage of statically linking LLVM as a haskell lib.) On Wed, Jan 8, 2014 at 3:01 AM, Austin Seipp wrote: > Personally I'd be in favor of that to keep it easy, but there hasn't > really been any poll about what to do. For the most part it tends to > work fine, but I think it's the wrong thing to do in any case. > > IMO the truly 'correct' thing to do, is not to rely on the system LLVM > at all, but a version specifically tested with and distributed with > GHC. This can be a private binary only we use. We already do this with > MinGW on Windows actually, because in practice, relying on versions > 'in the wild' is somewhat troublesome. In our case, we really just > need the bitcode compiler and optimizer, which are pretty small > pieces. > > Relying on a moving target like the system install or whatever > possible random XYZ install from SVN or (some derivative forked > toolchain!) is problematic for developers, and users invariably want > to try new combinations, which can break in subtle or odd ways. 
> > I think it's more sensible and straightforward - for the vast majority > of users and use-cases - for us to pick version that is tested, > reliably works and optimizes code well, and ship that. Then users just > know '-fasm is faster for compiling, -fllvm will optimize better for > some code.' That's all they really need to know. > > If LLVM is to be considered 'stable' for Tier 1 GHC platforms, I'm > sympathetic to Aaron's argument, and I'd say it should be held to the > same standards as the NCG. That means it should be considered a > reliable option and we should vet it to reasonable standards, even if > it's a bit more work. > > It's just really hard to do that right now. But I think implementing > this wouldn't be difficult, it just has some sticky bits about how to > do it. > > We can of course upgrade it over time - but I think trying to hit > moving targets in the wild is a bad long-term solution. > > > > On Tue, Jan 7, 2014 at 3:07 PM, George Colpitts > wrote: > > wrt > > > > We support a wide range of LLVM versions > > > > Why can't we stop doing that and only support one or two, e.g. GHC 7.8 > would > > only support llvm 3.3 and perhaps 3.4? > > > > > > > > > > > > On Tue, Jan 7, 2014 at 4:54 PM, Austin Seipp > wrote: > >> > >> Hi all, > >> > >> Apologies for the late reply. > >> > >> First off, one thing to note wrt GMP: GMP is an LGPL library which we > >> link against. Technically, we need to allow relinking to be compliant > >> and free of of the LGPL for our own executables, but this should be > >> reasonably possible - on systems where there is a system-wide GMP > >> installed, we use that copy (this occurs mostly on OSX and Linux.) And > >> so do executables compiled by GHC. Even when GHC uses static linking > >> or dynamic linking for haskell code in this case, it will still always > >> dynamically link to libgmp - meaning replacing the shared object > >> should be possible. This is just the way modern Linux/OSX systems > >> distribute system-wide C libraries, as you expect. > >> > >> In the case where we don't have this, we build our own copy of libgmp > >> inside the source tree and use that instead. That said there are other > >> reasons why we might want to be free of GMP entirely, but that's > >> neither here nor there. In any case, the issue is pretty orthogonal to > >> LLVM, dynamic haskell linking, etc - on a Linux system, you should > >> reasonably be able to swap out a `libgmp.so` for another modified > >> copy[1], and your Haskell programs should be compliant in this > >> regard.[2] > >> > >> Now, as for LLVM. > >> > >> For one, LLVM actually is a 'relatively' cheap backend to have around. > >> I say LLVM is 'relatively' cheap because All External Dependencies > >> Have A Cost. The code is reasonably small, and in any case GHC still > >> does most of the heavy lifting - the LLVM backend and native code > >> generator share a very large amount of code. We don't really duplicate > >> optimizations ourselves, for example, and some optimizations we do > >> perform on our IR can't be done by LLVM anyway (it doesn't have enough > >> information.) > >> > >> But LLVM has some very notable costs for GHC developers: > >> > >> * It's slower to compile with, because it tries to re-optimize the > >> code we give it, but it mostly accomplishes nothing beyond advanced > >> optimizations like vectorization/scalar evolution. 
> >> * We support a wide range of LLVM versions (a nightmare IMO) which > >> means pinning down specific versions and supporting them all is rather > >> difficult. Combined with e.g. distro maintainers who may patch bugs > >> themselves, and the things you're depending on in the wild (or what > >> users might report bugs with) aren't as solid or well understood. > >> * LLVM is extremely large, extremely complex, and the amount of > >> people who can sensibly work on both GHC and LLVM are few and far > >> inbetween. So fixing these issues is time consuming, difficult, and > >> mostly tedious grunt work. > >> > >> All this basically sums up to the fact that dealing with LLVM comes > >> with complications all on its own that makes it a different kind of > >> beast to handle. > >> > >> So, the LLVM backend definitely needs some love. All of these things > >> are solveable (and I have some ideas for solving most of them,) but > >> none of them will quite come for free. But there are some real > >> improvements that can be made here I think, and make LLVM much more > >> smoothly supported for GHC itself. If you'd like to help it'd be > >> really appreciated - I'd like to see LLVM have more love put forth, > >> but it's a lot of work of course!. > >> > >> (Finally, in reference to the last point: I am in the obvious > >> minority, but I am favorable to having the native code generator > >> around, even if it's a bit old and crufty these days - at least it's > >> small, fast and simple enough to be grokked and hacked on, and I don't > >> think it fragments development all that much. By comparison, LLVM is a > >> mammoth beast of incredible size with a sizeable entry barrier IMO. I > >> think there's merit to having both a simple, 'obviously working' > >> option in addition to the heavy duty one.) > >> > >> [1] Relevant tool: http://nixos.org/patchelf.html > >> [2] Of course, IANAL, but there you go. > >> > >> On Wed, Jan 1, 2014 at 9:03 PM, Aaron Friel wrote: > >> > Because I think it?s going to be an organizational issue and a > >> > duplication > >> > of effort if GHC is built one way but the future direction of LLVM is > >> > another. > >> > > >> > Imagine if GCC started developing a new engine and it didn?t work with > >> > one > >> > of the biggest, most regular consumers of GCC. Say, the Linux kernel, > or > >> > itself. At first, the situation is optimistic - if this engine doesn?t > >> > work > >> > for the project that has the smartest, brightest GCC hackers > potentially > >> > looking at it, then it should fix itself soon enough. Suppose the > >> > situation > >> > lingers though, and continues for months without fix. The new GCC > >> > backend > >> > starts to become the default, and the community around GCC advocates > for > >> > end-users to use it to optimize code for their projects and it even > >> > becomes > >> > the default for some platforms, such as ARM. > >> > > >> > What I?ve described is analogous to the GHC situation - and the result > >> > is > >> > that GHC isn?t self-hosting on some platforms and the inertia that > used > >> > to > >> > be behind the LLVM backend seems to have stagnated. Whereas LLVM used > to > >> > be > >> > the ?new hotness?, I?ve noticed that issues like Trac #7787 no longer > >> > have a > >> > lot of eyes on them and externally it seems like GHC has accepted a > >> > bifurcated approach for development. > >> > > >> > I dramatize the situation above, but there?s some truth to it. 
The > LLVM > >> > backend needs some care and attention and if the majority of GHC devs > >> > can?t > >> > build GHC with LLVM, then that means the smartest, brightest GHC > hackers > >> > won?t have their attention turned toward fixing those problems. If a > >> > patch > >> > to GHC-HEAD broke compilation for every backend, it would be fixed in > >> > short > >> > order. If a new version of GCC did not work with GHC, I can imagine it > >> > would > >> > be only hours before the first patches came in resolving the issue. On > >> > OS X > >> > Mavericks, an incompatibility with GHC has led to a swift reaction and > >> > strong support for resolving platform issues. The attention to the > LLVM > >> > backend is visibly smaller, but I don?t know enough about the people > >> > working > >> > on GHC to know if it is actually smaller. > >> > > >> > The way I am trying to change this is by making it easier for people > to > >> > start using GHC (by putting images on Docker.io) and, in the process, > >> > learning about GHC?s build process and trying to make things work for > my > >> > own > >> > projects. The Docker image allows anyone with a Linux kernel to build > >> > and > >> > play with GHC HEAD. The information about building GHC yourself is > >> > difficult > >> > to approach and I found it hard to get started, and I want to improve > >> > that > >> > too, so I?m learning and asking questions. > >> > > >> > From: Carter Schonwald > >> > Sent: Wednesday, January 1, 2014 5:54 PM > >> > >> > To: Aaron Friel > >> > Cc: ghc-devs at haskell.org > >> > > >> > 7.8 should have working dylib support on the llvm backend. (i believe > >> > some > >> > of the relevant patches are in head already, though Ben Gamari can > opine > >> > on > >> > that) > >> > > >> > why do you want ghc to be built with llvm? (i know i've tried myself > in > >> > the > >> > past, and it should be doable with 7.8 using 7.8 soon too) > >> > > >> > > >> > On Wed, Jan 1, 2014 at 5:38 PM, Aaron Friel > wrote: > >> >> > >> >> Replying to include the email list. You?re right, the llvm backend > and > >> >> the > >> >> gmp licensing issues are orthogonal - or should be. The problem is I > >> >> get > >> >> build errors when trying to build GHC with LLVM and dynamic > libraries. > >> >> > >> >> The result is that I get a few different choices when producing a > >> >> platform > >> >> image for development, with some uncomfortable tradeoffs: > >> >> > >> >> LLVM-built GHC, dynamic libs - doesn?t build. > >> >> LLVM-built GHC, static libs - potential licensing oddities with me > >> >> shipping a statically linked ghc binary that is now gpled. I am not a > >> >> lawyer, but the situation makes me uncomfortable. > >> >> GCC/ASM-built GHC, dynamic libs - this is the *standard* for most > >> >> platforms shipping ghc binaries, but it means that one of the biggest > >> >> and > >> >> most critical users of the LLVM backend is neglecting it. It also > >> >> bifurcates > >> >> development resources for GHC. Optimization work is duplicated and > >> >> already > >> >> devs are getting into the uncomfortable position of suggesting to > users > >> >> that > >> >> they should trust GHC to build your programs in a particular way, but > >> >> not > >> >> itself. > >> >> GCC/ASM-built GHC, static libs - worst of all possible worlds. > >> >> > >> >> > >> >> Because of this, the libgmp and llvm-backend issues aren?t entirely > >> >> orthogonal. Trac ticket #7885 is exactly the issue I get when trying > to > >> >> compile #1. 
> >> >> > >> >> From: Carter Schonwald > >> >> Sent: Monday, December 30, 2013 1:05 PM > >> > >> >> To: Aaron Friel > >> >> > >> >> Good question but you forgot to email the mailing list too :-) > >> >> > >> >> Using llvm has nothing to do with Gmp. Use the native code gen (it's > >> >> simper) and integer-simple. > >> >> > >> >> That said, standard ghc dylinks to a system copy of Gmp anyways (I > >> >> think > >> >> ). Building ghc as a Dylib is orthogonal. > >> >> > >> >> -Carter > >> >> > >> >> On Dec 30, 2013, at 1:58 PM, Aaron Friel wrote: > >> >> > >> >> Excellent research - I?m curious if this is the right thread to > inquire > >> >> about the status of trying to link GHC itself dynamically. > >> >> > >> >> I?ve been attempting to do so with various LLVM versions (3.2, 3.3, > >> >> 3.4) > >> >> using snapshot builds of GHC (within the past week) from git, and I > hit > >> >> ticket #7885 [https://ghc.haskell.org/trac/ghc/ticket/7885] every > time > >> >> (even > >> >> the exact same error message). > >> >> > >> >> I?m interested in dynamically linking GHC with LLVM to avoid the > >> >> entanglement with libgmp?s license. > >> >> > >> >> If this is the wrong thread or if I should reply instead to the trac > >> >> item, > >> >> please let me know. > >> > > >> > > >> > _______________________________________________ > >> > ghc-devs mailing list > >> > ghc-devs at haskell.org > >> > http://www.haskell.org/mailman/listinfo/ghc-devs > >> > > >> > >> > >> > >> -- > >> Regards, > >> > >> Austin Seipp, Haskell Consultant > >> Well-Typed LLP, http://www.well-typed.com/ > >> _______________________________________________ > >> ghc-devs mailing list > >> ghc-devs at haskell.org > >> http://www.haskell.org/mailman/listinfo/ghc-devs > > > > > > > > -- > Regards, > > Austin Seipp, Haskell Consultant > Well-Typed LLP, http://www.well-typed.com/ > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fuuzetsu at fuuzetsu.co.uk Wed Jan 8 08:17:03 2014 From: fuuzetsu at fuuzetsu.co.uk (Mateusz Kowalczyk) Date: Wed, 08 Jan 2014 08:17:03 +0000 Subject: Validating with Haddock In-Reply-To: References: <52BF0209.6020000@fuuzetsu.co.uk> <52CB70A6.90105@fuuzetsu.co.uk> <59543203684B2244980D7E4057D5FBC148707E70@DB3EX14MBXC306.europe.corp.microsoft.com> <52CBD21A.1020900@fuuzetsu.co.uk> <59543203684B2244980D7E4057D5FBC148707F2F@DB3EX14MBXC306.europe.corp.microsoft.com> <52CBD6FC.1080405@fuuzetsu.co.uk> <52CC43C6.9020701@fuuzetsu.co.uk> <52CC49E8.4040407@fuuzetsu.co.uk> <20140107201547.GA15588@matrix.chaos.earth.li> <52CC6C94.1080800@fuuzetsu.co.uk> <52CC99D8.9040508@fuuzetsu.co.uk> Message-ID: <52CD097F.2080300@fuuzetsu.co.uk> On 08/01/14 07:46, Austin Seipp wrote: > Excellent, thank you. We should really fix the 32bit performance > numbers too I think, based on what we discussed on IRC earlier. Would > you like to submit a patch for that too please? You can find the > numbers in testsuite/tests/perf/haddock/all.T. I have no idea how to determine the new numbers. I could probably do it with some guidance. Is there a wiki link or something of the sort? Is there a special set up I need? I suppose you want it in a Trac ticket. > Also, is there any new documentation we should need for this? Is all > the new stuff properly documented somewhere? Etc. There are some slight semantics changes between the old and new parser. 
A good example is the ability to now escape things properly. In the past, ?/foo\/bar/? would actually only treat ?foo? as italics. As this should have been the original behaviour to begin with, the documentation is now actually correct. We were very careful to not make changes which would compromise old documentation. In fact, the first few commits include a parser which pretty much replicates the behaviour of the old one, with all the bugs and such! For the features (not in the new-parser branch, I will prepare the appropriate branch after I can validate it and give it a final look; the perf tests should probably be update after this is merged rather than before), I have updated Haddock's own documentation. It will be included in the branch. Anything that's no longer correct/relevant has been changed. Both Simon H and myself have access to the the Haddock documentation hosted on haskell.org so we can update that after the merges. I also hope to create a (web) tool which would allow one to write/paste in some Haskell to allow the user to compare between the Haddock versions as well as probably writing up a quick ?migration? guide but it's nothing that needs to be in the repository itself. After the features are merged in, it's all done from our side and anyone is free to hack on top of the changes, which is great. Perhaps I can get back to filling feature requests and bug fixes. Regarding PatternSynonyms, Erdi seems to be happy to make sure his changes to Haddock merge on top of what we send so we don't have to worry about that. So for now, hold tight until I can prepare the feature branch. I imagine you should be hearing about me with a link in about 12-16 hours. Thanks -- Mateusz K. From simonpj at microsoft.com Wed Jan 8 08:23:51 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 8 Jan 2014 08:23:51 +0000 Subject: panic when compiling SHA In-Reply-To: References: <52C7DB7E.1030408@gmail.com> <20140104.212236.2151539280544564973.kazu@iij.ad.jp> <20140106.120834.989663188831409811.kazu@iij.ad.jp> <1E4F1419-8C89-4E2A-B0A4-542324AA15BC@galois.com> Message-ID: <59543203684B2244980D7E4057D5FBC148709398@DB3EX14MBXC306.europe.corp.microsoft.com> Any time GHC simply falls over, it's a bug. Even if you give GHC a 100 Gbyte input program and try to compile it on a 1 Gbyte machine, it's a bug if GHC simply falls over with "heap exhausted". It would be better if it chopped the program into pieces and compiled them one at a time; or failed with a civilised message like "I'm afraid this input program is too big for me to compile". But some bugs are more pressing than others, and with slender resources we have to concentrate on the ones that affect most people most seriously. There seems to be consensus that this one falls into the "does not affect many people" category, and it has a workaround. So I'm fine with leaving it open, but I think the priority is probably low. Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Adam | Wick | Sent: 07 January 2014 23:57 | To: Ben Lippmeier | Cc: ghc-devs at haskell.org | Subject: Re: panic when compiling SHA | | On Jan 7, 2014, at 2:27 AM, Ben Lippmeier wrote: | > On 07/01/2014, at 9:26 , Adam Wick wrote: | > | >>> Not if we just have this one test. I'd be keen to blame excessive | use of inline pragmas in the SHA library itself, or excessive | optimisation flags. It's not really a bug in GHC until there are two | tests that exhibit the same problem. 
| >> | >> The SHA library uses SPECIALIZE, INLINE, and bang patterns in fairly | standard ways. There's nothing too exotic in there, I just basically | sprinkled hints in places I thought would be useful, and then backed | those up with benchmarking. | > | > Ahh. It's the "sprinkled hints in places I thought would be useful" | which is what I'm concerned about. If you just add pragmas without | understanding their effect on the core program then it'll bite further | down the line. Did you compare the object code size as well as wall | clock speedup? | | I understand the pragmas and what they do with my code. I use SPECIALIZE | twice for two functions. In both functions, it was clearer to write the | function as (a -> a -> a -> a), but I wanted specialized versions for | the two versions that were going to be used, in which (a == Word32) or | (a == Word64). This benchmarked as faster while maintaining code clarity | and concision. I use INLINE in five places, each of them a SHA step | function, with the understanding that it would generate ideal code for a | compiler for the performance-critical parts of the algorithm: straight | line, single-block code with no conditionals. | | When I did my original performance work, several versions of GHC ago, I | did indeed consider compile time, runtime performance, and space usage. | I picked what I thought was a reasonable balance at the time. | | I also just performed an experiment in which I took the SHA library, | deleted all instances of INLINE and SPECIALIZE, and compiled it with | HEAD on 32-bit Linux. You get the same crash. So my usage of SPECIALIZE | and INLINE is beside the point. | | > Sadly, "valid input" isn't a well defined concept in practice. You | could write a "valid" 10GB Haskell source file that obeyed the Haskell | standard grammar, but I wouldn't expect that to compile either. | | I would. I'm a little disappointed that ghc-devs does not. I wouldn't | expect it to compile quickly, but I would expect it to run. | | > You could also write small (< 1k) source programs that trigger | complexity problems in Hindley-Milner style type inference. You could | also use compile-time meta programming (like Template Haskell) to | generate intermediate code that is well formed but much too big to | compile. The fact that a program obeys a published grammar is not | sufficient to expect it to compile with a particular implementation | (sorry to say). | | If I write a broken Template Haskell macro, then yes, I agree. This is | not the case in this example. | | > Adding an INLINE pragma is akin to using compile-time meta | programming. | | Is it? I find that a strange point of view. Isn't INLINE just a strong | hint to the compiler that this function should be inlined? How is using | INLINE any different from simply manually inserting the code at every | call site? | | | - Adam From simonpj at microsoft.com Wed Jan 8 08:42:22 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 8 Jan 2014 08:42:22 +0000 Subject: GHC API: Using runGhc twice or from multiple threads? In-Reply-To: References: Message-ID: <59543203684B2244980D7E4057D5FBC148709407@DB3EX14MBXC306.europe.corp.microsoft.com> Benno I think that both ought to be ok, but I'd welcome other input. GHC.runGHC calls initGhcMonad, which allocates an entirely new session (in newHscEnv). So the two will work entirely independently. Unfortunately that's not 100% true. 
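To make the question concrete, the pattern being asked about looks roughly like the sketch below (written against the 7.x-era GHC API; GHC.Paths comes from the separate ghc-paths package, and the target file names are made up):

    import GHC
    import GHC.Paths (libdir)   -- from the ghc-paths package

    -- One self-contained session: allocate a fresh session, set up its
    -- DynFlags and load a single target.
    runSession :: FilePath -> IO ()
    runSession file =
      runGhc (Just libdir) $ do
        dflags <- getSessionDynFlags
        _ <- setSessionDynFlags dflags
        t <- guessTarget file Nothing
        setTargets [t]
        _ <- load LoadAllTargets
        return ()

    main :: IO ()
    main = do
      runSession "A.hs"   -- first session
      runSession "B.hs"   -- second, independent session

Each call to runGhc allocates its own HscEnv, so the only state the two runs can observe from one another is the handful of process-wide GLOBAL_VARs discussed next (top-level IORefs created with unsafePerformIO, hence per-process rather than per-session).
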
If you search for GLOBAL_VAR you'll see a handful of disgusting global state variables, and they *will* be shared between GHC sessions. There really aren't many. I'd love someone to eliminate them! (NB ghc-devs) Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Benno F?nfst?ck Sent: 07 January 2014 13:56 To: ghc-devs Subject: GHC API: Using runGhc twice or from multiple threads? Hello, is the following safe to do? main = do runGhc libdir $ do ... runGhc libdir $ do ... Or will this cause trouble? Is there state that is shared between the two calls? And what about this one: main = do forkIO $ runGhc libdir $ do ... forkIO $ runGhc libdir $ do ... -------------- next part -------------- An HTML attachment was scrubbed... URL: From marlowsd at gmail.com Wed Jan 8 09:26:05 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Wed, 08 Jan 2014 09:26:05 +0000 Subject: High-level Cmm code and stack allocation In-Reply-To: <59543203684B2244980D7E4057D5FBC148708D03@DB3EX14MBXC306.europe.corp.microsoft.com> References: <87fvp3coqr.fsf@gnu.org> <52CC25A4.8060004@gmail.com> <59543203684B2244980D7E4057D5FBC148708D03@DB3EX14MBXC306.europe.corp.microsoft.com> Message-ID: <52CD19AD.7030503@gmail.com> On 07/01/14 22:53, Simon Peyton Jones wrote: > | Yes, this is technically wrong but luckily works. I'd very much like > | to > | have a better solution, preferably one that doesn't add any extra > | overhead. > > | __decodeFloat_Int is a C function, so it will not touch the Haskell > | stack. > > This all seems terribly fragile to me. At least it ought to be surrounded with massive comments pointing out how terribly fragile it is, breaking all the rules that we carefully document elsewhere. > > Can't we just allocate a Cmm "area"? The address of an area is a perfectly well-defined Cmm value. It is fragile, yes. We can't use static memory because it needs to be thread-local. This particular hack has gone through several iterations over the years: first we had static memory, which broke when we did the parallel runtime, then we had special storage in the Capability, which we gave up when GMP was split out into a separate library, because it didn't seem right to have magic fields in the Capability for one library. I'm looking into whether we can do temporary allocation on the heap for this instead. Cheers, Simon > Simon > > | -----Original Message----- > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Simon > | Marlow > | Sent: 07 January 2014 16:05 > | To: Herbert Valerio Riedel; ghc-devs at haskell.org > | Subject: Re: High-level Cmm code and stack allocation > | > | On 04/01/2014 23:26, Herbert Valerio Riedel wrote: > | > Hello, > | > > | > According to Note [Syntax of .cmm files], > | > > | > | There are two ways to write .cmm code: > | > | > | > | (1) High-level Cmm code delegates the stack handling to GHC, and > | > | never explicitly mentions Sp or registers. > | > | > | > | (2) Low-level Cmm manages the stack itself, and must know about > | > | calling conventions. > | > | > | > | Whether you want high-level or low-level Cmm is indicated by the > | > | presence of an argument list on a procedure. > | > > | > However, while working on integer-gmp I've been noticing in > | > integer-gmp/cbits/gmp-wrappers.cmm that even though all Cmm > | procedures > | > have been converted to high-level Cmm, they still reference the 'Sp' > | > register, e.g. 
> | > > | > > | > #define GMP_TAKE1_RET1(name,mp_fun) \ > | > name (W_ ws1, P_ d1) \ > | > { \ > | > W_ mp_tmp1; \ > | > W_ mp_result1; \ > | > \ > | > again: \ > | > STK_CHK_GEN_N (2 * SIZEOF_MP_INT); \ > | > MAYBE_GC(again); \ > | > \ > | > mp_tmp1 = Sp - 1 * SIZEOF_MP_INT; \ > | > mp_result1 = Sp - 2 * SIZEOF_MP_INT; \ > | > ... \ > | > > | > > | > So is this valid high-level Cmm code? What's the proper way to > | allocate > | > Stack (and/or Heap) memory from high-level Cmm code? > | > | Yes, this is technically wrong but luckily works. I'd very much like > | to > | have a better solution, preferably one that doesn't add any extra > | overhead. > | > | The problem here is that we need to allocate a couple of temporary > | words > | and take their address; that's an unusual thing to do in Cmm, so it > | only > | occurs in a few places (mainly interacting with gmp). Usually if you > | want some temporary storage you can use local variables or some > | heap-allocated memory. > | > | Cheers, > | Simon > | _______________________________________________ > | ghc-devs mailing list > | ghc-devs at haskell.org > | http://www.haskell.org/mailman/listinfo/ghc-devs > From simonpj at microsoft.com Wed Jan 8 10:07:26 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 8 Jan 2014 10:07:26 +0000 Subject: High-level Cmm code and stack allocation In-Reply-To: <52CD19AD.7030503@gmail.com> References: <87fvp3coqr.fsf@gnu.org> <52CC25A4.8060004@gmail.com> <59543203684B2244980D7E4057D5FBC148708D03@DB3EX14MBXC306.europe.corp.microsoft.com> <52CD19AD.7030503@gmail.com> Message-ID: <59543203684B2244980D7E4057D5FBC148709591@DB3EX14MBXC306.europe.corp.microsoft.com> | > Can't we just allocate a Cmm "area"? The address of an area is a | perfectly well-defined Cmm value. What about this idea? Simon | -----Original Message----- | From: Simon Marlow [mailto:marlowsd at gmail.com] | Sent: 08 January 2014 09:26 | To: Simon Peyton Jones; Herbert Valerio Riedel | Cc: ghc-devs at haskell.org | Subject: Re: High-level Cmm code and stack allocation | | On 07/01/14 22:53, Simon Peyton Jones wrote: | > | Yes, this is technically wrong but luckily works. I'd very much | > | like to have a better solution, preferably one that doesn't add any | > | extra overhead. | > | > | __decodeFloat_Int is a C function, so it will not touch the Haskell | > | stack. | > | > This all seems terribly fragile to me. At least it ought to be | surrounded with massive comments pointing out how terribly fragile it | is, breaking all the rules that we carefully document elsewhere. | > | > Can't we just allocate a Cmm "area"? The address of an area is a | perfectly well-defined Cmm value. | | It is fragile, yes. We can't use static memory because it needs to be | thread-local. This particular hack has gone through several iterations | over the years: first we had static memory, which broke when we did the | parallel runtime, then we had special storage in the Capability, which | we gave up when GMP was split out into a separate library, because it | didn't seem right to have magic fields in the Capability for one | library. | | I'm looking into whether we can do temporary allocation on the heap for | this instead. 
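For comparison, the same need (scratch memory whose address is handed to a foreign call) is met at the ordinary Haskell/FFI level with pinned heap allocation rather than stack space. A small self-contained sketch of that idea follows; libc's memset stands in for the foreign call, and this is not the integer-gmp code itself:

    {-# LANGUAGE ForeignFunctionInterface #-}
    module ScratchSketch (main) where

    import Data.Word (Word8)
    import Foreign.C.Types (CInt(..), CSize(..))
    import Foreign.Marshal.Alloc (allocaBytes)
    import Foreign.Ptr (Ptr)
    import Foreign.Storable (peekByteOff)

    -- A real libc function, used here only as a stand-in for a foreign
    -- call that writes its result into caller-supplied scratch memory.
    foreign import ccall unsafe "string.h memset"
      c_memset :: Ptr Word8 -> CInt -> CSize -> IO (Ptr Word8)

    main :: IO ()
    main =
      -- allocaBytes hands out pinned storage on the garbage-collected
      -- heap, so nothing here is carved out of the Haskell stack.
      allocaBytes 16 $ \buf -> do
        _ <- c_memset buf 0 16
        b <- peekByteOff buf 0 :: IO Word8
        print b   -- prints 0
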
| | Cheers, | Simon | | | > Simon | > | > | -----Original Message----- | > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of | > | Simon Marlow | > | Sent: 07 January 2014 16:05 | > | To: Herbert Valerio Riedel; ghc-devs at haskell.org | > | Subject: Re: High-level Cmm code and stack allocation | > | | > | On 04/01/2014 23:26, Herbert Valerio Riedel wrote: | > | > Hello, | > | > | > | > According to Note [Syntax of .cmm files], | > | > | > | > | There are two ways to write .cmm code: | > | > | | > | > | (1) High-level Cmm code delegates the stack handling to GHC, | and | > | > | never explicitly mentions Sp or registers. | > | > | | > | > | (2) Low-level Cmm manages the stack itself, and must know about | > | > | calling conventions. | > | > | | > | > | Whether you want high-level or low-level Cmm is indicated by the | > | > | presence of an argument list on a procedure. | > | > | > | > However, while working on integer-gmp I've been noticing in | > | > integer-gmp/cbits/gmp-wrappers.cmm that even though all Cmm | > | procedures | > | > have been converted to high-level Cmm, they still reference the | 'Sp' | > | > register, e.g. | > | > | > | > | > | > #define GMP_TAKE1_RET1(name,mp_fun) \ | > | > name (W_ ws1, P_ d1) \ | > | > { \ | > | > W_ mp_tmp1; \ | > | > W_ mp_result1; \ | > | > \ | > | > again: \ | > | > STK_CHK_GEN_N (2 * SIZEOF_MP_INT); \ | > | > MAYBE_GC(again); \ | > | > \ | > | > mp_tmp1 = Sp - 1 * SIZEOF_MP_INT; \ | > | > mp_result1 = Sp - 2 * SIZEOF_MP_INT; \ | > | > ... \ | > | > | > | > | > | > So is this valid high-level Cmm code? What's the proper way to | > | allocate | > | > Stack (and/or Heap) memory from high-level Cmm code? | > | | > | Yes, this is technically wrong but luckily works. I'd very much | > | like to have a better solution, preferably one that doesn't add any | > | extra overhead. | > | | > | The problem here is that we need to allocate a couple of temporary | > | words and take their address; that's an unusual thing to do in Cmm, | > | so it only occurs in a few places (mainly interacting with gmp). | > | Usually if you want some temporary storage you can use local | > | variables or some heap-allocated memory. | > | | > | Cheers, | > | Simon | > | _______________________________________________ | > | ghc-devs mailing list | > | ghc-devs at haskell.org | > | http://www.haskell.org/mailman/listinfo/ghc-devs | > From marlowsd at gmail.com Wed Jan 8 10:19:40 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Wed, 08 Jan 2014 10:19:40 +0000 Subject: LLVM and dynamic linking In-Reply-To: <87a9fm2gfr.fsf@gmail.com> References: <877gb7ulmi.fsf@gmail.com> <52B418EC.8090308@gmail.com> <87a9fm2gfr.fsf@gmail.com> Message-ID: <52CD263C.6020809@gmail.com> On 27/12/13 20:21, Ben Gamari wrote: > Simon Marlow writes: > >> This sounds right to me. Did you submit a patch? >> >> Note that dynamic linking with LLVM is likely to produce significantly >> worse code that with the NCG right now, because the LLVM back end uses >> dynamic references even for symbols in the same package, whereas the NCG >> back-end uses direct static references for these. >> > Today with the help of Edward Yang I examined the code produced by the > LLVM backend in light of this statement. I was surprised to find that > LLVM's code appears to be no worse than the NCG with respect to > intra-package references. > > My test case can be found here[2] and can be built with the included > `build.sh` script. The test consists of two modules build into a shared > library. 
One module, `LibTest`, exports a few simple members while the > other module (`LibTest2`) defines members that consume them. Care is > taken to ensure the members are not inlined. > > The tests were done on x86_64 running LLVM 3.4 and GHC HEAD with the > patches[1] I referred to in my last message. Please let me know if I've > missed something. This is good news, however what worries me is that I still don't understand *why* you got these results. Where in the LLVM backend is the magic that does something special for intra-package references? I know where it is in the NCG backend - CLabel.labelDynamic - but I can't see this function used at all in the LLVM backend. So what is the mechanism that lets LLVM optimise these calls? Is it happening magically in the linker, perhaps? But that would only be possible when using -Bsymbolic or -Bsymbolic-functions, which is a choice made at link time. As far as I can tell, all we do is pass a flag to llc to tell it to compile for dynamic/PIC, in DriverPipeline.runPhase. Cheers, Simon > > > # Evaluation > > ## First example ## > > The first member is a simple `String` (defined in `LibTest`), > > helloWorld :: String > helloWorld = "Hello World!" > > The use-site is quite straightforward, > > testHelloWorld :: IO String > testHelloWorld = return helloWorld > > With `-O1` the code looks reasonable in both cases. Most importantly, > both backends use IP relative addressing to find the symbol. > > ### LLVM ### > > 0000000000000ef8 : > ef8: 48 8b 45 00 mov 0x0(%rbp),%rax > efc: 48 8d 1d cd 11 20 00 lea 0x2011cd(%rip),%rbx # 2020d0 > f03: ff e0 jmpq *%rax > > 0000000000000f28 : > f28: eb ce jmp ef8 > f2a: 66 0f 1f 44 00 00 nopw 0x0(%rax,%rax,1) > > ### NCG ### > > 0000000000000d58 : > d58: 48 8d 1d 71 13 20 00 lea 0x201371(%rip),%rbx # 2020d0 > d5f: ff 65 00 jmpq *0x0(%rbp) > > 0000000000000d88 : > d88: eb ce jmp d58 > > > With `-O0` the code is substantially longer but the relocation behavior > is still correct, as one would expect. > > Looking at the definition of `helloWorld`[3] itself it becomes clear that > the LLVM backend is more likely to use PLT relocations over GOT. In > general, `stg_*` primitives are called through the PLT. As far as I can > tell, both of these call mechanisms will incur two memory > accesses. However, in the case of the PLT the call will consist of two > JMPs whereas the GOT will consist of only one. Is this a cause for > concern? Could these two jumps interfere with prediction? > > In general the LLVM backend produces a few more instructions than the > NCG although this doesn't appear to be related to handling of > relocations. For instance, the inexplicable (to me) `mov` at the > beginning of LLVM's `rKw_info`. > > > ## Second example ## > > The second example demonstrates an actual call, > > -- Definition (in LibTest) > infoRef :: Int -> Int > infoRef n = n + 1 > > -- Call site > testInfoRef :: IO Int > testInfoRef = return (infoRef 2) > > With `-O1` this produces the following code, > > ### LLVM ### > > 0000000000000fb0 : > fb0: 48 8b 45 00 mov 0x0(%rbp),%rax > fb4: 48 8d 1d a5 10 20 00 lea 0x2010a5(%rip),%rbx # 202060 > fbb: ff e0 jmpq *%rax > > 0000000000000fe0 : > fe0: eb ce jmp fb0 > > ### NCG ### > > 0000000000000e10 : > e10: 48 8d 1d 51 12 20 00 lea 0x201251(%rip),%rbx # 202068 > e17: ff 65 00 jmpq *0x0(%rbp) > > 0000000000000e40 : > e40: eb ce jmp e10 > > Again, it seems that LLVM is a bit more verbose but seems to handle > intra-package calls efficiently. 
> > > > [1] https://github.com/bgamari/ghc/commits/llvm-dynamic > [2] https://github.com/bgamari/ghc-linking-tests/tree/master/ghc-test > [3] `helloWorld` definitions: > > LLVM: > 00000000000010a8 : > 10a8: 50 push %rax > 10a9: 4c 8d 75 f0 lea -0x10(%rbp),%r14 > 10ad: 4d 39 fe cmp %r15,%r14 > 10b0: 73 07 jae 10b9 > 10b2: 49 8b 45 f0 mov -0x10(%r13),%rax > 10b6: 5a pop %rdx > 10b7: ff e0 jmpq *%rax > 10b9: 4c 89 ef mov %r13,%rdi > 10bc: 48 89 de mov %rbx,%rsi > 10bf: e8 0c fd ff ff callq dd0 > 10c4: 48 85 c0 test %rax,%rax > 10c7: 74 22 je 10eb > 10c9: 48 8b 0d 18 0f 20 00 mov 0x200f18(%rip),%rcx # 201fe8 <_DYNAMIC+0x228> > 10d0: 48 89 4d f0 mov %rcx,-0x10(%rbp) > 10d4: 48 89 45 f8 mov %rax,-0x8(%rbp) > 10d8: 48 8d 05 21 00 00 00 lea 0x21(%rip),%rax # 1100 > 10df: 4c 89 f5 mov %r14,%rbp > 10e2: 49 89 c6 mov %rax,%r14 > 10e5: 58 pop %rax > 10e6: e9 b5 fc ff ff jmpq da0 > 10eb: 48 8b 03 mov (%rbx),%rax > 10ee: 5a pop %rdx > 10ef: ff e0 jmpq *%rax > > > NCG: > > 0000000000000ef8 : > ef8: 48 8d 45 f0 lea -0x10(%rbp),%rax > efc: 4c 39 f8 cmp %r15,%rax > eff: 72 3f jb f40 > f01: 4c 89 ef mov %r13,%rdi > f04: 48 89 de mov %rbx,%rsi > f07: 48 83 ec 08 sub $0x8,%rsp > f0b: b8 00 00 00 00 mov $0x0,%eax > f10: e8 1b fd ff ff callq c30 > f15: 48 83 c4 08 add $0x8,%rsp > f19: 48 85 c0 test %rax,%rax > f1c: 74 20 je f3e > f1e: 48 8b 1d cb 10 20 00 mov 0x2010cb(%rip),%rbx # 201ff0 <_DYNAMIC+0x238> > f25: 48 89 5d f0 mov %rbx,-0x10(%rbp) > f29: 48 89 45 f8 mov %rax,-0x8(%rbp) > f2d: 4c 8d 35 1c 00 00 00 lea 0x1c(%rip),%r14 # f50 > f34: 48 83 c5 f0 add $0xfffffffffffffff0,%rbp > f38: ff 25 7a 10 20 00 jmpq *0x20107a(%rip) # 201fb8 <_DYNAMIC+0x200> > f3e: ff 23 jmpq *(%rbx) > f40: 41 ff 65 f0 jmpq *-0x10(%r13) > From marlowsd at gmail.com Wed Jan 8 10:37:47 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Wed, 08 Jan 2014 10:37:47 +0000 Subject: panic when compiling SHA In-Reply-To: References: <52C7DB7E.1030408@gmail.com> <20140104.212236.2151539280544564973.kazu@iij.ad.jp> <20140106.120834.989663188831409811.kazu@iij.ad.jp> <1E4F1419-8C89-4E2A-B0A4-542324AA15BC@galois.com> Message-ID: <52CD2A7B.2000206@gmail.com> On 08/01/14 07:35, Carter Schonwald wrote: > well said iavor. > It perhaps hints at the register allocators needing some love? I hope to > dig deep into those myself later this year, but maybe it needs some > wibbles to clean up for 7.8 right now? There's a bit of confusion here. Let me try to clarify: - the graph-colouring register allocator now trips the spill slot limit with SHA-1, where it didn't previously. This may be because earlier compiler stages are generating worse code, or it may be because this allocator has bitrotted (see #7679). - The code compiles fine without the flag -fregs-graph. - The limitation on spill slots that existed in all versions prior to 7.8 has been lifted in 7.8, but only for the linear register allocator (the default one that you get without -fregs-graph). So, let's just disable -fregs-graph in 7.8.1. Ben is right that avoiding -fregs-graph doesn't really fix the problem, because we'll probably get crappy code for SHA-1 now. But someone needs to work on -fregs-graph. This ticket is for the performance issue: https://ghc.haskell.org/trac/ghc/ticket/7679 And I just created this one for the spill slot issue: https://ghc.haskell.org/trac/ghc/ticket/8657 Cheers, Simon > > On Wed, Jan 8, 2014 at 2:14 AM, Iavor Diatchki > wrote: > > Hello, > > I find it a bit perplexing (and not at all constructive) that we are > arguing over semantics here. 
We have a program (1 module, ~1000 > lines of "no fancy extension Haskell"), which causes GHC to panic. > This is a bug. An invariant that we were assuming did not > actually hold. Hence the message that the "impossible" happened. > If GHC decides to refuse to compile a program, it should not panic > but, rather, explain what happened and maybe suggest a workaround. > > I am not familiar with GHC's back-end, but it seems that there might > be something interesting that's going on here. The SHA library > works fine with 7.6.3, and it compiles (admittedly very slowly) > using GHC head on my 64-bit machine. So something has changed, and > it'd be nice if we understood what's causing the problem. > > Ben suggested that the issue might be the INLINE pragmas, but > clearly that's not the problem, as Adam reproduced the same behavior > without those pragmas. If the issue is indeed with the built-in > inline heuristics, it sounds like we either should fix the > heuristics, or come up with some suggestions about what to avoid in > user programs. Or, perhaps, the issue something completely > unrelated (e.g., a bug in the register allocator). Either way, I > think this deserves a ticket. > > -Iavor > > > > > > > > > On Tue, Jan 7, 2014 at 10:11 PM, Carter Schonwald > > wrote: > > Adam, > I agree that it should be considered a misfeature (or at the > very least a good stress test that currently breaks the register > allocator). That said, > INLINE / INLINEABLE are only needed for intermodule > optimization, have you tried using the special "inline" primop > selectively, or using INLINEABLE plus selective inline? I think > inline should work in the defining module even if you don't > provide an INLINE or INLINEABLE. > > question 1: does the code compile well when you use -fllvm? > (seems like the discussion so far has been NCG focused). > how does the generated assembly fair that way vs the workaroudn > path on NCG? > > > > > On Tue, Jan 7, 2014 at 6:57 PM, Adam Wick > wrote: > > On Jan 7, 2014, at 2:27 AM, Ben Lippmeier > > wrote: > > On 07/01/2014, at 9:26 , Adam Wick > wrote: > > > >>> Not if we just have this one test. I'd be keen to blame > excessive use of inline pragmas in the SHA library itself, > or excessive optimisation flags. It's not really a bug in > GHC until there are two tests that exhibit the same problem. > >> > >> The SHA library uses SPECIALIZE, INLINE, and bang > patterns in fairly standard ways. There?s nothing too exotic > in there, I just basically sprinkled hints in places I > thought would be useful, and then backed those up with > benchmarking. > > > > Ahh. It's the "sprinkled hints in places I thought would > be useful" which is what I'm concerned about. If you just > add pragmas without understanding their effect on the core > program then it'll bite further down the line. Did you > compare the object code size as well as wall clock speedup? > > I understand the pragmas and what they do with my code. I > use SPECIALIZE twice for two functions. In both functions, > it was clearer to write the function as (a -> a -> a -> a), > but I wanted specialized versions for the two versions that > were going to be used, in which (a == Word32) or (a == > Word64). This benchmarked as faster while maintaining code > clarity and concision. I use INLINE in five places, each of > them a SHA step function, with the understanding that it > would generate ideal code for a compiler for the > performance-critical parts of the algorithm: straight line, > single-block code with no conditionals. 
> > When I did my original performance work, several versions of > GHC ago, I did indeed consider compile time, runtime > performance, and space usage. I picked what I thought was a > reasonable balance at the time. > > I also just performed an experiment in which I took the SHA > library, deleted all instances of INLINE and SPECIALIZE, and > compiled it with HEAD on 32-bit Linux. You get the same > crash. So my usage of SPECIALIZE and INLINE is beside the point. > > > Sadly, "valid input" isn't a well defined concept in > practice. You could write a "valid" 10GB Haskell source file > that obeyed the Haskell standard grammar, but I wouldn't > expect that to compile either. > > I would. I?m a little disappointed that ghc-devs does not. I > wouldn?t expect it to compile quickly, but I would expect it > to run. > > > You could also write small (< 1k) source programs that > trigger complexity problems in Hindley-Milner style type > inference. You could also use compile-time meta programming > (like Template Haskell) to generate intermediate code that > is well formed but much too big to compile. The fact that a > program obeys a published grammar is not sufficient to > expect it to compile with a particular implementation (sorry > to say). > > If I write a broken Template Haskell macro, then yes, I > agree. This is not the case in this example. > > > Adding an INLINE pragma is akin to using compile-time > meta programming. > > Is it? I find that a strange point of view. Isn?t INLINE > just a strong hint to the compiler that this function should > be inlined? How is using INLINE any different from simply > manually inserting the code at every call site? > > > - Adam > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > From marlowsd at gmail.com Wed Jan 8 10:42:03 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Wed, 08 Jan 2014 10:42:03 +0000 Subject: panic when compiling SHA In-Reply-To: <2E9BAE47-AE0B-4189-89BC-A01FF8DE499B@ouroborus.net> References: <20131227.100716.1812997308262292710.kazu@iij.ad.jp> <501EC3C7-E7EF-4485-879A-404FFFF22F55@ouroborus.net> <52C7DB7E.1030408@gmail.com> <20140104.212236.2151539280544564973.kazu@iij.ad.jp> <59543203684B2244980D7E4057D5FBC148707206@DB3EX14MBXC306.europe.corp.microsoft.com> <2E9BAE47-AE0B-4189-89BC-A01FF8DE499B@ouroborus.net> Message-ID: <52CD2B7B.2030501@gmail.com> On 07/01/14 09:59, Ben Lippmeier wrote: > > On 06/01/2014, at 19:43 , Simon Peyton-Jones wrote: > >> | Note that removing the flag isn't a "solution" to the underlying problem >> | of the intermediate code being awful. Switching to the linear allocator >> | just permits compilation of core code that was worse than before. Now it >> | needs to spill more registers when compiling the same source code. >> >> In what way is the intermediate code awful? > > Because the error message from the register allocator tells us that > there are over 1000 live variables at a particular point the assembly > code, but the "biggest" SHA hashing algorithm (SHA-3) should only > need to maintain 25 words of state (says Wikipedia). 
Neither of the register allocators reuse spill slots for variables that have disjoint live ranges, so the fact that we ran out of spill slots is not necessarily indicative of terrible code (but I agree that it's a strong hint). Cheers, Simon From p.k.f.holzenspies at utwente.nl Wed Jan 8 11:24:47 2014 From: p.k.f.holzenspies at utwente.nl (p.k.f.holzenspies at utwente.nl) Date: Wed, 8 Jan 2014 11:24:47 +0000 Subject: GHC API: Using runGhc twice or from multiple threads? Message-ID: Dear Benno, Simon, > I think that both ought to be ok, but I'd welcome other input. > > GHC.runGHC calls initGhcMonad, which allocates an entirely new session (in > newHscEnv). So the two will work entirely independently. > > Unfortunately that's not 100% true. If you search for GLOBAL_VAR you'll see > a handful of disgusting global state variables, and they *will* be shared > between GHC sessions. > > There really aren't many. I'd love someone to eliminate them! (NB ghc- > devs) There is one open question about what you intend to do with the results of running those GHC-sessions. Correct me if I'm wrong (Simon, or anyone), but methinks you should be very careful with assigned Uniques, i.e. (Rdr/Occ)Name-things from one session shouldn't be used in the other and vice versa. I was wondering about this earlier; it would be nice to have some more explicit combinators for manipulating GHC API states (e.g. combining two HscEnvs, such that any NamedThing defined in either will also exist in the result). I thinks some updated documentation on some of the states (HscEnv, GblEnv, LclEnv, etc.) would help people not well-versed with the GHC-innerts to bend their mind around the API. Regards, Philip From gergo at erdi.hu Wed Jan 8 14:20:20 2014 From: gergo at erdi.hu (Dr. ERDI Gergo) Date: Wed, 8 Jan 2014 22:20:20 +0800 (SGT) Subject: Pattern synonyms for 7.8? In-Reply-To: References: <59543203684B2244980D7E4057D5FBC148707649@DB3EX14MBXC306.europe.corp.microsoft.com> <1389014277.2952.9.camel@kirk> <41B0CF1C-C66D-4DDC-8C36-A691B83CF7E0@cis.upenn.edu> Message-ID: On Tue, 7 Jan 2014, Austin Seipp wrote: > Hi Gergo, > > Thanks for rebasing your changes. Unfortunately, they do not compile > cleanly with ./validate, which we really need to have working for all > incoming patches. I've fixed validation (but the commit history is a bit of a mess -- expect another history rewrite shortly. But the code is at the same location on github. Thanks, Gergo -- .--= ULLA! =-----------------. \ http://gergo.erdi.hu \ `---= gergo at erdi.hu =-------' Glob thinkally, loc actally From simonpj at microsoft.com Wed Jan 8 14:24:04 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 8 Jan 2014 14:24:04 +0000 Subject: Pattern synonyms for 7.8? In-Reply-To: References: <59543203684B2244980D7E4057D5FBC148707649@DB3EX14MBXC306.europe.corp.microsoft.com> <1389014277.2952.9.camel@kirk> <41B0CF1C-C66D-4DDC-8C36-A691B83CF7E0@cis.upenn.edu> Message-ID: <59543203684B2244980D7E4057D5FBC148709AD4@DB3EX14MBXC306.europe.corp.microsoft.com> It'd be good if what we finally commit to HEAD has a sensible history S | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Dr. | ERDI Gergo | Sent: 08 January 2014 14:20 | To: Austin Seipp | Cc: Joachim Breitner; ghc-devs at haskell.org | Subject: Re: Pattern synonyms for 7.8? | | On Tue, 7 Jan 2014, Austin Seipp wrote: | | > Hi Gergo, | > | > Thanks for rebasing your changes. 
Unfortunately, they do not compile | > cleanly with ./validate, which we really need to have working for all | > incoming patches. | | I've fixed validation (but the commit history is a bit of a mess -- | expect another history rewrite shortly. But the code is at the same | location on github. | | Thanks, | Gergo | | -- | | .--= ULLA! =-----------------. | \ http://gergo.erdi.hu \ | `---= gergo at erdi.hu =-------' | Glob thinkally, loc actally | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | http://www.haskell.org/mailman/listinfo/ghc-devs From gergo at erdi.hu Wed Jan 8 14:33:57 2014 From: gergo at erdi.hu (Dr. ERDI Gergo) Date: Wed, 8 Jan 2014 22:33:57 +0800 (SGT) Subject: Pattern synonyms for 7.8? In-Reply-To: <59543203684B2244980D7E4057D5FBC148709AD4@DB3EX14MBXC306.europe.corp.microsoft.com> References: <59543203684B2244980D7E4057D5FBC148707649@DB3EX14MBXC306.europe.corp.microsoft.com> <1389014277.2952.9.camel@kirk> <41B0CF1C-C66D-4DDC-8C36-A691B83CF7E0@cis.upenn.edu> <59543203684B2244980D7E4057D5FBC148709AD4@DB3EX14MBXC306.europe.corp.microsoft.com> Message-ID: On Wed, 8 Jan 2014, Simon Peyton Jones wrote: > It'd be good if what we finally commit to HEAD has a sensible history Yes, of course. I'll clean that up next week. I just have to run now. -- .--= ULLA! =-----------------. \ http://gergo.erdi.hu \ `---= gergo at erdi.hu =-------' Ha paranoi?sokkal akarsz megismerkedni, kezdd el k?vetni ?ket. From gergo at erdi.hu Wed Jan 8 15:13:54 2014 From: gergo at erdi.hu (Dr. ERDI Gergo) Date: Wed, 8 Jan 2014 23:13:54 +0800 (SGT) Subject: Pattern synonyms for 7.8? In-Reply-To: References: <59543203684B2244980D7E4057D5FBC148707649@DB3EX14MBXC306.europe.corp.microsoft.com> <1389014277.2952.9.camel@kirk> <41B0CF1C-C66D-4DDC-8C36-A691B83CF7E0@cis.upenn.edu> <59543203684B2244980D7E4057D5FBC148709AD4@DB3EX14MBXC306.europe.corp.microsoft.com> Message-ID: On Wed, 8 Jan 2014, Austin Seipp wrote: > Oh, to be honest, I was just going to squash it into a single Big > Commit, with you set as the author, Gergo. This seems to be the > general way we do it for new features. > > If you'd like I can just go ahead and do this for you. Sure, thanks. I'm getting 8 unexpected test failures but I haven't tried them on master yet to see if they are caused by my code: driver T4437 [bad stdout] (normal) generics GenDerivOutput [stderr mismatch] (normal) generics GenDerivOutput1_0 [stderr mismatch] (normal) generics GenDerivOutput1_1 [stderr mismatch] (normal) perf/haddock haddock.Cabal [stat not good enough] (normal) perf/should_run T5237 [stat not good enough] (normal) rename/should_compile T7336 [stderr mismatch] (normal) th T8633 [bad exit code] (normal) I'll look at these next week when I'm back. Bye, Gergo From gergo at erdi.hu Wed Jan 8 15:36:25 2014 From: gergo at erdi.hu (=?UTF-8?B?RHIuIMOJUkRJIEdlcmfFkQ==?=) Date: Wed, 8 Jan 2014 23:36:25 +0800 Subject: Pattern synonyms for 7.8? In-Reply-To: References: <59543203684B2244980D7E4057D5FBC148707649@DB3EX14MBXC306.europe.corp.microsoft.com> <1389014277.2952.9.camel@kirk> <41B0CF1C-C66D-4DDC-8C36-A691B83CF7E0@cis.upenn.edu> <59543203684B2244980D7E4057D5FBC148709AD4@DB3EX14MBXC306.europe.corp.microsoft.com> Message-ID: Hi, please don't commit it just yet, I'd like the eventual single commit to also include the user documentation. Thanks, Gergo On Jan 8, 2014 10:46 PM, "Austin Seipp" wrote: > Oh, to be honest, I was just going to squash it into a single Big > Commit, with you set as the author, Gergo. 
This seems to be the > general way we do it for new features. > > If you'd like I can just go ahead and do this for you. > > On Wed, Jan 8, 2014 at 8:33 AM, Dr. ERDI Gergo wrote: > > On Wed, 8 Jan 2014, Simon Peyton Jones wrote: > > > >> It'd be good if what we finally commit to HEAD has a sensible history > > > > > > Yes, of course. I'll clean that up next week. I just have to run now. > > > > > > -- > > > > .--= ULLA! =-----------------. > > \ http://gergo.erdi.hu \ > > `---= gergo at erdi.hu =-------' > > Ha paranoi?sokkal akarsz megismerkedni, kezdd el k?vetni ?ket. > > > > -- > Regards, > Austin - PGP: 4096R/0x91384671 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Wed Jan 8 17:11:23 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 8 Jan 2014 17:11:23 +0000 Subject: Changing GHC Error Message Wrapping In-Reply-To: References: <20140104185507.5a1b9b490d052db8ca579fc3@mega-nerd.com> <59543203684B2244980D7E4057D5FBC14870765F@DB3EX14MBXC306.europe.corp.microsoft.com> <59543203684B2244980D7E4057D5FBC148707DFC@DB3EX14MBXC306.europe.corp.microsoft.com> Message-ID: <59543203684B2244980D7E4057D5FBC148709F1E@DB3EX14MBXC306.europe.corp.microsoft.com> Well, the Show instance for a type (any type) cannot possibly respect pprCols. It can't: show :: a -> String! No command-line inputs. I suggest something more like doc sdoc = do { dflags <- getDynFlags; unqual <- getPrintUnqual; return (showSDocForUser dflags unqual doc } Simon From: Andrew Gibiansky [mailto:andrew.gibiansky at gmail.com] Sent: 08 January 2014 00:09 To: Simon Peyton Jones Cc: Erik de Castro Lopo; ghc-devs at haskell.org Subject: Re: Changing GHC Error Message Wrapping Hello all, I figured out that this isn't quite a bug and figured out how to do what I wanted. It turns out that the `Show` instance for SourceError does not respect `pprCols` - I don't know if that's a reasonable expectation (although it's what I expected). I ended up using the following code to print these messages: flip gcatch handler $ do runStmt "let f (x, y, z, w, e, r, d , ax, b ,c,ex ,g ,h) = (x :: Int) + y + z" RunToCompletion runStmt "f (1, 2, 3)" RunToCompletion return () where handler :: SourceError -> Ghc () handler srcerr = do let msgs = bagToList $ srcErrorMessages srcerr forM_ msgs $ \msg -> do s <- doc $ errMsgShortDoc msg liftIO $ putStrLn s doc :: GhcMonad m => SDoc -> m String doc sdoc = do flags <- getSessionDynFlags let cols = pprCols flags d = runSDoc sdoc (initSDocContext flags defaultUserStyle) return $ Pretty.fullRender Pretty.PageMode cols 1.5 string_txt "" d where string_txt :: Pretty.TextDetails -> String -> String string_txt (Pretty.Chr c) s = c:s string_txt (Pretty.Str s1) s2 = s1 ++ s2 string_txt (Pretty.PStr s1) s2 = unpackFS s1 ++ s2 string_txt (Pretty.LStr s1 _) s2 = unpackLitString s1 ++ s2 As far as I can tell, there is no simpler way, every function in `Pretty` except for `fullRender` just assumes a default of 100-char lines. -- Andrew On Tue, Jan 7, 2014 at 11:29 AM, Andrew Gibiansky > wrote: Simon, That's exactly what I'm looking for! But it seems that doing it dynamically in the GHC API doesn't work (as in my first email where I tried to adjust pprCols via setSessionDynFlags). I'm going to look into the source as what ppr-cols=N actually sets and probably file a bug - because this seems like buggy behaviour... 
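For reference, a minimal sketch of the helper Simon suggests above, written out against the GHC 7.x API (the one-liner above is missing a closing parenthesis). Whether this actually picks up pprCols in practice is what the rest of the thread is trying to establish, so treat it as a sketch rather than a verified fix.

    import GHC (GhcMonad, getPrintUnqual, getSessionDynFlags)
    import Outputable (SDoc, showSDocForUser)

    -- Render an SDoc with the session's DynFlags and print-unqualified
    -- settings, instead of going through the Show instance of Doc.
    renderForUser :: GhcMonad m => SDoc -> m String
    renderForUser sdoc = do
      dflags <- getSessionDynFlags
      unqual <- getPrintUnqual
      return (showSDocForUser dflags unqual sdoc)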
Andrew On Tue, Jan 7, 2014 at 4:14 AM, Simon Peyton Jones > wrote: -dppr-cols=N changes the width of the output page; you could try a large number there. There isn't a setting meaning "infinity", sadly. Simon From: Andrew Gibiansky [mailto:andrew.gibiansky at gmail.com] Sent: 07 January 2014 03:04 To: Simon Peyton Jones Cc: Erik de Castro Lopo; ghc-devs at haskell.org Subject: Re: Changing GHC Error Message Wrapping Thanks Simon. In general I think multiline tuples should have many elements per line, but honestly the tuple case was a very specific example. If possible, I'd like to change the *overall* wrapping for *all* error messages - how does `sep` know when to break lines? there's clearly a numeric value for the number of columns somewhere, but where is it, and is it user-adjustable? For now I am just hacking around this by special-casing some error messages and "un-doing" the line wrapping by parsing the messages and joining lines back together. Thanks, Andrew On Mon, Jan 6, 2014 at 7:44 AM, Simon Peyton-Jones > wrote: I think it's line 705 in types/TypeRep.lhs pprTcApp p pp tc tys | isTupleTyCon tc && tyConArity tc == length tys = pprPromotionQuote tc <> tupleParens (tupleTyConSort tc) (sep (punctuate comma (map (pp TopPrec) tys))) If you change 'sep' to 'fsep', you'll get behaviour more akin to paragraph-filling (hence the "f"). Give it a try. You'll get validation failure from the testsuite, but you can see whether you think the result is better or worse. In general, should multi-line tuples be printed with many elements per line, or just one? Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Andrew Gibiansky Sent: 04 January 2014 17:30 To: Erik de Castro Lopo Cc: ghc-devs at haskell.org Subject: Re: Changing GHC Error Message Wrapping Apologize for the broken image formatting. With the code I posted above, I get the following output: Couldn't match expected type `(GHC.Types.Int, GHC.Types.Int, GHC.Types.Int, t0, t10, t20, t30, t40, t50, t60, t70, t80, t90)' with actual type `(t1, t2, t3)' I would like the types to be on the same line, or at least wrapped to a larger number of columns. Does anyone know how to do this, or where in the GHC source this wrapping is done? Thanks! Andrew On Sat, Jan 4, 2014 at 2:55 AM, Erik de Castro Lopo > wrote: Carter Schonwald wrote: > hey andrew, your image link isn't working (i'm using gmail) I think the list software filters out image attachments. Erik -- ---------------------------------------------------------------------- Erik de Castro Lopo http://www.mega-nerd.com/ _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://www.haskell.org/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From andrew.gibiansky at gmail.com Wed Jan 8 17:22:46 2014 From: andrew.gibiansky at gmail.com (Andrew Gibiansky) Date: Wed, 8 Jan 2014 12:22:46 -0500 Subject: Changing GHC Error Message Wrapping In-Reply-To: <59543203684B2244980D7E4057D5FBC148709F1E@DB3EX14MBXC306.europe.corp.microsoft.com> References: <20140104185507.5a1b9b490d052db8ca579fc3@mega-nerd.com> <59543203684B2244980D7E4057D5FBC14870765F@DB3EX14MBXC306.europe.corp.microsoft.com> <59543203684B2244980D7E4057D5FBC148707DFC@DB3EX14MBXC306.europe.corp.microsoft.com> <59543203684B2244980D7E4057D5FBC148709F1E@DB3EX14MBXC306.europe.corp.microsoft.com> Message-ID: Of course :) It made sense once I realized that the `show` was generating the string, and that it was not generated when the datatype was being constructed. However, I don't think the `showSDocForUser` call works (I tested). It uses `runSDoc` to generate a `Doc`. It then uses `show` on that Doc: instance Show Doc where showsPrec _ doc cont = showDoc doc cont Looking at `showDoc` we see: showDoc :: Doc -> String -> String showDoc doc rest = showDocWithAppend PageMode doc rest showDocWithAppend :: Mode -> Doc -> String -> String showDocWithAppend mode doc rest = fullRender mode 100 1.5 string_txt rest doc It ultimately calls `showDocWithAppend`, which calls `fullRender` with a hard-coded 100-column limit. -- Andrew On Wed, Jan 8, 2014 at 12:11 PM, Simon Peyton Jones wrote: > Well, the Show instance for a type (any type) cannot possibly respect > pprCols. It can?t: show :: a -> String! No command-line inputs. > > > > I suggest something more like > > > > doc sdoc = do { dflags <- getDynFlags; unqual <- getPrintUnqual; return > (showSDocForUser dflags unqual doc } > > > > Simon > > > > *From:* Andrew Gibiansky [mailto:andrew.gibiansky at gmail.com] > *Sent:* 08 January 2014 00:09 > > *To:* Simon Peyton Jones > *Cc:* Erik de Castro Lopo; ghc-devs at haskell.org > *Subject:* Re: Changing GHC Error Message Wrapping > > > > Hello all, > > > > I figured out that this isn't quite a bug and figured out how to do what I > wanted. It turns out that the `Show` instance for SourceError does not > respect `pprCols` - I don't know if that's a reasonable expectation > (although it's what I expected). I ended up using the following code to > print these messages: > > > > flip gcatch handler $ do > > runStmt "let f (x, y, z, w, e, r, d , ax, b ,c,ex ,g ,h) = (x :: Int) > + y + z" RunToCompletion > > runStmt "f (1, 2, 3)" RunToCompletion > > return () > > where > > handler :: SourceError -> Ghc () > > handler srcerr = do > > let msgs = bagToList $ srcErrorMessages srcerr > > forM_ msgs $ \msg -> do > > s <- doc $ errMsgShortDoc msg > > liftIO $ putStrLn s > > > > doc :: GhcMonad m => SDoc -> m String > > doc sdoc = do > > flags <- getSessionDynFlags > > let cols = pprCols flags > > d = runSDoc sdoc (initSDocContext flags defaultUserStyle) > > return $ Pretty.fullRender Pretty.PageMode cols 1.5 string_txt "" d > > where > > string_txt :: Pretty.TextDetails -> String -> String > > string_txt (Pretty.Chr c) s = c:s > > string_txt (Pretty.Str s1) s2 = s1 ++ s2 > > string_txt (Pretty.PStr s1) s2 = unpackFS s1 ++ s2 > > string_txt (Pretty.LStr s1 _) s2 = unpackLitString s1 ++ s2 > > > > As far as I can tell, there is no simpler way, every function in `Pretty` > except for `fullRender` just assumes a default of 100-char lines. > > > > -- Andrew > > > > On Tue, Jan 7, 2014 at 11:29 AM, Andrew Gibiansky < > andrew.gibiansky at gmail.com> wrote: > > Simon, > > > > That's exactly what I'm looking for! 
But it seems that doing it > dynamically in the GHC API doesn't work (as in my first email where I tried > to adjust pprCols via setSessionDynFlags). > > > > I'm going to look into the source as what ppr-cols=N actually sets and > probably file a bug - because this seems like buggy behaviour... > > > > Andrew > > > > On Tue, Jan 7, 2014 at 4:14 AM, Simon Peyton Jones > wrote: > > -dppr-cols=N changes the width of the output page; you could try a large > number there. There isn?t a setting meaning ?infinity?, sadly. > > > > Simon > > > > *From:* Andrew Gibiansky [mailto:andrew.gibiansky at gmail.com] > *Sent:* 07 January 2014 03:04 > *To:* Simon Peyton Jones > *Cc:* Erik de Castro Lopo; ghc-devs at haskell.org > > > *Subject:* Re: Changing GHC Error Message Wrapping > > > > Thanks Simon. > > > > In general I think multiline tuples should have many elements per line, > but honestly the tuple case was a very specific example. If possible, I'd > like to change the *overall* wrapping for *all* error messages - how does > `sep` know when to break lines? there's clearly a numeric value for the > number of columns somewhere, but where is it, and is it user-adjustable? > > > > For now I am just hacking around this by special-casing some error > messages and "un-doing" the line wrapping by parsing the messages and > joining lines back together. > > > > Thanks, > > Andrew > > > > On Mon, Jan 6, 2014 at 7:44 AM, Simon Peyton-Jones > wrote: > > I think it?s line 705 in types/TypeRep.lhs > > > > pprTcApp p pp tc tys > > | isTupleTyCon tc && tyConArity tc == length tys > > = pprPromotionQuote tc <> > > tupleParens (tupleTyConSort tc) (sep (punctuate comma (map (pp > TopPrec) tys))) > > > > If you change ?sep? to ?fsep?, you?ll get behaviour more akin to > paragraph-filling (hence the ?f?). Give it a try. You?ll get validation > failure from the testsuite, but you can see whether you think the result is > better or worse. In general, should multi-line tuples be printed with many > elements per line, or just one? > > > > Simon > > > > *From:* ghc-devs [mailto:ghc-devs-bounces at haskell.org] *On Behalf Of *Andrew > Gibiansky > *Sent:* 04 January 2014 17:30 > *To:* Erik de Castro Lopo > *Cc:* ghc-devs at haskell.org > *Subject:* Re: Changing GHC Error Message Wrapping > > > > Apologize for the broken image formatting. > > > > With the code I posted above, I get the following output: > > > > Couldn't match expected type `(GHC.Types.Int, > > GHC.Types.Int, > > GHC.Types.Int, > > t0, > > t10, > > t20, > > t30, > > t40, > > t50, > > t60, > > t70, > > t80, > > t90)' > > with actual type `(t1, t2, t3)' > > > > I would like the types to be on the same line, or at least wrapped to a > larger number of columns. > > > > Does anyone know how to do this, or where in the GHC source this wrapping > is done? > > > > Thanks! > > Andrew > > > > On Sat, Jan 4, 2014 at 2:55 AM, Erik de Castro Lopo > wrote: > > Carter Schonwald wrote: > > > hey andrew, your image link isn't working (i'm using gmail) > > I think the list software filters out image attachments. > > Erik > -- > ---------------------------------------------------------------------- > Erik de Castro Lopo > http://www.mega-nerd.com/ > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From eir at cis.upenn.edu Wed Jan 8 19:14:11 2014 From: eir at cis.upenn.edu (Richard Eisenberg) Date: Wed, 8 Jan 2014 14:14:11 -0500 Subject: Pattern synonyms for 7.8? In-Reply-To: References: <59543203684B2244980D7E4057D5FBC148707649@DB3EX14MBXC306.europe.corp.microsoft.com> <1389014277.2952.9.camel@kirk> <41B0CF1C-C66D-4DDC-8C36-A691B83CF7E0@cis.upenn.edu> Message-ID: <4BA531AA-0E3E-48AA-91C9-CDD819D349A9@cis.upenn.edu> Hi Gergo, As pattern synonyms are user-facing, you should update the user manual along with (perhaps) the wiki. The file to edit is docs/users_guide/glasgow_exts.xml. You should also add a note to docs/users_guide/7.8.1_notes.xml. Apologies if someone has already said this to you. Richard On Jan 7, 2014, at 6:05 PM, Dr. ?RDI Gerg? wrote: > Hi, > > Wow, so, I thought there would be some back-and-forth, then a decision, then I would go and walk the last mile and then formally submit the patch for review - and now I see that in <2 days all that has passed... > > Of course I'll make validate pass, I just didn't even know about it. Likewise, I needed the carrot of 7.8 inclusion dangling before me to start writing the user docs. > > One problem, though, is that I'll be on holiday from tomorrow, so I'll only have time to look into this tonight before next weekend. I'll try my best to fix up validate tonight, and I'll write the docs (which I hope will mostly be an editing job on the wiki) next week. How does that sound? > > Thanks, > Gergo > > On Jan 8, 2014 3:41 AM, "Austin Seipp" wrote: > Hi Gergo, > > Thanks for rebasing your changes. Unfortunately, they do not compile > cleanly with ./validate, which we really need to have working for all > incoming patches. > > In particular, ./validate enables -Werror and a slew of warnings that > you won't normally see during development, which greatly aids in > keeping the code clean. One, for example, is that some of your commits > introduce tabs - we ban tabs and validate errors on them! > > Another: the problem is that in > https://github.com/gergoerdi/ghc/commit/afefa7ac948b1d7801d622824fbdd75ade2ada3f, > you added a Monoid instance for UniqSet - but this doesn't work > correctly. The problem is that UniqSet is just an alias for UniqFM > (type UniqSet a = UniqFM a), so the instance is technically seen as an > orphan. Orphan instances cause -Werror failures with ./validate > (unless you disable them for that module, but here we really > shouldn't.) > > The fix is to write the Monoid instance for UniqFM directly in > UniqFM.hs instead. > > Likewise, here's a real bug that -Werror found in your patch in the > renamer (by building with ./validate): > > compiler/rename/RnBinds.lhs:744:1: Warning: > Pattern match(es) are non-exhaustive > In an equation for `renameSig': > Patterns not matched: _ (PatSynSig _ _ _ _ _) > > Indeed, renameSig in RnBinds doesn't check the PatSynSig case! The > missing instance looks straightforward to implement, but this could > have been a nasty bug waiting. > > If you could please take the time to clean up the ./validate failures, > I'd really appreciate it. I imagine it'll take very little time, and > it will make merging much easier for me. An easy way to do it is just > to check out your pattern-synonyms branches, then say: > > $ CPUS=X sh ./validate > > where 'X' is the number of cores, similar to 'make -jX' > > If it fails, you can make a change, and keep going with: > > $ CPUS=X sh ./validate --no-clean > > and rinse and repeat until it's done. 
> > Note the --no-clean is required, since `./validate` will immediately > run `make distclean` by default if you do not specify it. > > On Tue, Jan 7, 2014 at 5:50 AM, Dr. ERDI Gergo wrote: > > On Mon, 6 Jan 2014, Carter Schonwald wrote: > > > >> as long as we clearly communicate that there may be refinements / breaking > >> changes > >> subsequently, i'm all for it, unless merging it in slows down 7.8 hitting > >> RC . (its > >> taken long enough for RC to happen... don't want to drag it out further) > > > > > > If that helps, I've updated the version at https://github.com/gergoerdi/ghc > > (and the two sister repos https://github.com/gergoerdi/ghc-testsuite and > > https://github.com/gergoerdi/ghc-haddock) to be based on top of master as of > > today. > > > > Bye, > > Gergo > > > > -- > > > > .--= ULLA! =-----------------. > > \ http://gergo.erdi.hu \ > > `---= gergo at erdi.hu =-------' > > Elvis is dead and I don't feel so good either. > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://www.haskell.org/mailman/listinfo/ghc-devs > > > > > > -- > Regards, > Austin - PGP: 4096R/0x91384671 > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From bgamari.foss at gmail.com Wed Jan 8 20:09:43 2014 From: bgamari.foss at gmail.com (Ben Gamari) Date: Wed, 08 Jan 2014 15:09:43 -0500 Subject: LLVM and dynamic linking In-Reply-To: <52CD263C.6020809@gmail.com> References: <877gb7ulmi.fsf@gmail.com> <52B418EC.8090308@gmail.com> <87a9fm2gfr.fsf@gmail.com> <52CD263C.6020809@gmail.com> Message-ID: <87zjn6xmk8.fsf@gmail.com> Simon Marlow writes: > On 27/12/13 20:21, Ben Gamari wrote: >> Simon Marlow writes: >> >>> This sounds right to me. Did you submit a patch? >>> >>> Note that dynamic linking with LLVM is likely to produce significantly >>> worse code that with the NCG right now, because the LLVM back end uses >>> dynamic references even for symbols in the same package, whereas the NCG >>> back-end uses direct static references for these. >>> >> Today with the help of Edward Yang I examined the code produced by the >> LLVM backend in light of this statement. I was surprised to find that >> LLVM's code appears to be no worse than the NCG with respect to >> intra-package references. >> >> My test case can be found here[2] and can be built with the included >> `build.sh` script. The test consists of two modules build into a shared >> library. One module, `LibTest`, exports a few simple members while the >> other module (`LibTest2`) defines members that consume them. Care is >> taken to ensure the members are not inlined. >> >> The tests were done on x86_64 running LLVM 3.4 and GHC HEAD with the >> patches[1] I referred to in my last message. Please let me know if I've >> missed something. > > This is good news, however what worries me is that I still don't > understand *why* you got these results. Where in the LLVM backend is > the magic that does something special for intra-package references? > As far as I can tell, the backend itself does nothing in particular to handle this. > I know where it is in the NCG backend - CLabel.labelDynamic - but I > can't see this function used at all in the LLVM backend. > Right. 
For the record, I took a first stab at implementing[1] the logic that I thought would needed to get LLVM to do efficient dynamic linking before taking this measurement. I probably should have reused more of the machinery used by the NCG however. I don't believe I managed to get this code stable before dropping it when I realized that LLVM already somehow did the right thing. > So what is the mechanism that lets LLVM optimise these calls? Is it > happening magically in the linker, perhaps? But that would only be > possible when using -Bsymbolic or -Bsymbolic-functions, which is a > choice made at link time. > This seems like the most likely explanation but given we don't pass this flag I really don't see why the linker would do this. More research is necessary it seems. > As far as I can tell, all we do is pass a flag to llc to tell it to > compile for dynamic/PIC, in DriverPipeline.runPhase. > Right. Very mysterious. Cheers, - Ben [1] https://github.com/bgamari/ghc/tree/llvm-intra-package -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 489 bytes Desc: not available URL: From yo.eight at gmail.com Wed Jan 8 21:52:00 2014 From: yo.eight at gmail.com (Yorick Laupa) Date: Wed, 8 Jan 2014 22:52:00 +0100 Subject: Tuple predicates in Template Haskell In-Reply-To: References: Message-ID: Thanks Richard ! I followed your advice and posted a new comment to the ticket Yorick 2014/1/6 Richard Eisenberg > Hello Yorick, > > Thanks for taking this one on! > > First off, this kind of question/post is appropriate for putting right > into the ticket itself. Posting a comment to the ticket makes it more > likely that you'll get a response and saves your thoughts for posterity. > > Now, on to your question: > > That seems somewhat reasonable, but I think your work could go a little > further. It looks like you've introduced TupleP as a new constructor for > Pred. This, I believe, would work. But, I think it would be better to have > a way of using *any* type as a predicate in TH, as allowed by > ConstraintKinds. Perhaps one way to achieve this is to make Pred a synonym > of Type, or there could be a TypeP constructor for Pred. > > In any case, I would recommend writing a wiki page up with a proposed new > TH syntax for predicates and then posting a link to the proposal on the > #7021 ticket. Then, it will be easier to debate the merits of any > particular approach. > > Once again, thanks! > Richard > > On Jan 3, 2014, at 6:13 PM, Yorick Laupa wrote: > > Hi, > > I try to make my way through #7021 [1]. Unfortunately, there is nothing in > the ticket about what should be expected from the code given as example. > > I came with an implementation and I would like feedback from you guys. So, > considering this snippet: > > -- > > {-# LANGUAGE ConstraintKinds #-} > > type IOable a = (Show a, Read a) > > foo :: IOable a => a > foo = undefined > > -- > > This is what I got now when pretty-printing TH.Info after reify "foo" > call: > > VarI Tuple.foo (ForallT [PlainTV a_1627398594] [TupleP 2 [AppT (ConT > GHC.Show.Show) (VarT a_1627398594),AppT (ConT GHC.Read.Read) (VarT > a_1627398594)]] (VarT a_1627398594)) Nothing (Fixity 9 InfixL) > > Does that sound right to you ? 
> > Thanks for your time > > -- Yorick > > [1] https://ghc.haskell.org/trac/ghc/ticket/7021 > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From austin at well-typed.com Thu Jan 9 08:48:55 2014 From: austin at well-typed.com (Austin Seipp) Date: Thu, 9 Jan 2014 02:48:55 -0600 Subject: Pattern synonyms for 7.8? In-Reply-To: <4BA531AA-0E3E-48AA-91C9-CDD819D349A9@cis.upenn.edu> References: <59543203684B2244980D7E4057D5FBC148707649@DB3EX14MBXC306.europe.corp.microsoft.com> <1389014277.2952.9.camel@kirk> <41B0CF1C-C66D-4DDC-8C36-A691B83CF7E0@cis.upenn.edu> <4BA531AA-0E3E-48AA-91C9-CDD819D349A9@cis.upenn.edu> Message-ID: Hi Gergo, I went ahead and pushed the preliminary work to a new branch in the official repositories. GHC, haddock and testsuite now have a 'wip/pattern-synonyms' branch, where you can test the code: https://github.com/ghc/ghc/commits/wip/pattern-synonyms https://github.com/ghc/haddock/commits/wip/pattern-synonyms https://github.com/ghc/testsuite/commits/wip/pattern-synonyms Any intrepid parties are welcome to try it. A few things of note: 1) As Richard pointed out, the docs are under docs/users_guide, as well as the release notes. Please feel free to elaborate however you want on the feature and the bulletpoint for the release notes. 2) The failures are indeed a result of your code, in particular: driver T4437 [bad stdout] (normal) generics GenDerivOutput [stderr mismatch] (normal) generics GenDerivOutput1_0 [stderr mismatch] (normal) generics GenDerivOutput1_1 [stderr mismatch] (normal) rename/should_compile T7336 [stderr mismatch] (normal) The first four are just tests that need to be updated. T4437 needs to have PatternSynonyms listed (it tests the available extensions,) and the generics test have had their output slightly changed. This is because the generated terms are now annotated with the Origin type, specifying where they come from. Here's an example from GenDerivOutput1_1: ---------------------------- instance GHC.Generics.Selector CanDoRep1_1.S1_1_0Dd where - GHC.Generics.selName _ = "d11d" + (Generated, GHC.Generics.selName _ = "d11d") ---------------------------- I'm not actually sure if this is what we want. Should -ddump-deriv print this? I'm not sure we guarantee the output is syntactically valid anyway, but it's worth considering. Removing this from the output would mean these tests don't need any tweaks. Perhaps Simon or Pedro have something to say. 3) It seems GHCi does not support declaring pattern synonyms at the REPL. I'm not sure if it's intentional, but if it goes in like this, please be sure to document it in the release notes. We can file a ticket later for supporting pattern synonyms at the REPL. On Wed, Jan 8, 2014 at 1:14 PM, Richard Eisenberg wrote: > Hi Gergo, > > As pattern synonyms are user-facing, you should update the user manual along > with (perhaps) the wiki. The file to edit is > docs/users_guide/glasgow_exts.xml. You should also add a note to > docs/users_guide/7.8.1_notes.xml. > > Apologies if someone has already said this to you. > > Richard > > On Jan 7, 2014, at 6:05 PM, Dr. ?RDI Gerg? wrote: > > Hi, > > Wow, so, I thought there would be some back-and-forth, then a decision, then > I would go and walk the last mile and then formally submit the patch for > review - and now I see that in <2 days all that has passed... 
> > Of course I'll make validate pass, I just didn't even know about it. > Likewise, I needed the carrot of 7.8 inclusion dangling before me to start > writing the user docs. > > One problem, though, is that I'll be on holiday from tomorrow, so I'll only > have time to look into this tonight before next weekend. I'll try my best to > fix up validate tonight, and I'll write the docs (which I hope will mostly > be an editing job on the wiki) next week. How does that sound? > > Thanks, > Gergo > > On Jan 8, 2014 3:41 AM, "Austin Seipp" wrote: >> >> Hi Gergo, >> >> Thanks for rebasing your changes. Unfortunately, they do not compile >> cleanly with ./validate, which we really need to have working for all >> incoming patches. >> >> In particular, ./validate enables -Werror and a slew of warnings that >> you won't normally see during development, which greatly aids in >> keeping the code clean. One, for example, is that some of your commits >> introduce tabs - we ban tabs and validate errors on them! >> >> Another: the problem is that in >> >> https://github.com/gergoerdi/ghc/commit/afefa7ac948b1d7801d622824fbdd75ade2ada3f, >> you added a Monoid instance for UniqSet - but this doesn't work >> correctly. The problem is that UniqSet is just an alias for UniqFM >> (type UniqSet a = UniqFM a), so the instance is technically seen as an >> orphan. Orphan instances cause -Werror failures with ./validate >> (unless you disable them for that module, but here we really >> shouldn't.) >> >> The fix is to write the Monoid instance for UniqFM directly in >> UniqFM.hs instead. >> >> Likewise, here's a real bug that -Werror found in your patch in the >> renamer (by building with ./validate): >> >> compiler/rename/RnBinds.lhs:744:1: Warning: >> Pattern match(es) are non-exhaustive >> In an equation for `renameSig': >> Patterns not matched: _ (PatSynSig _ _ _ _ _) >> >> Indeed, renameSig in RnBinds doesn't check the PatSynSig case! The >> missing instance looks straightforward to implement, but this could >> have been a nasty bug waiting. >> >> If you could please take the time to clean up the ./validate failures, >> I'd really appreciate it. I imagine it'll take very little time, and >> it will make merging much easier for me. An easy way to do it is just >> to check out your pattern-synonyms branches, then say: >> >> $ CPUS=X sh ./validate >> >> where 'X' is the number of cores, similar to 'make -jX' >> >> If it fails, you can make a change, and keep going with: >> >> $ CPUS=X sh ./validate --no-clean >> >> and rinse and repeat until it's done. >> >> Note the --no-clean is required, since `./validate` will immediately >> run `make distclean` by default if you do not specify it. >> >> On Tue, Jan 7, 2014 at 5:50 AM, Dr. ERDI Gergo wrote: >> > On Mon, 6 Jan 2014, Carter Schonwald wrote: >> > >> >> as long as we clearly communicate that there may be refinements / >> >> breaking >> >> changes >> >> subsequently, i'm all for it, unless merging it in slows down 7.8 >> >> hitting >> >> RC . (its >> >> taken long enough for RC to happen... don't want to drag it out >> >> further) >> > >> > >> > If that helps, I've updated the version at >> > https://github.com/gergoerdi/ghc >> > (and the two sister repos https://github.com/gergoerdi/ghc-testsuite and >> > https://github.com/gergoerdi/ghc-haddock) to be based on top of master >> > as of >> > today. >> > >> > Bye, >> > Gergo >> > >> > -- >> > >> > .--= ULLA! =-----------------. 
>> > \ http://gergo.erdi.hu \ >> > `---= gergo at erdi.hu =-------' >> > Elvis is dead and I don't feel so good either. >> > _______________________________________________ >> > ghc-devs mailing list >> > ghc-devs at haskell.org >> > http://www.haskell.org/mailman/listinfo/ghc-devs >> > >> >> >> >> -- >> Regards, >> Austin - PGP: 4096R/0x91384671 > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From jpm at cs.uu.nl Thu Jan 9 09:10:09 2014 From: jpm at cs.uu.nl (=?ISO-8859-1?Q?Jos=E9_Pedro_Magalh=E3es?=) Date: Thu, 9 Jan 2014 10:10:09 +0100 Subject: Pattern synonyms for 7.8? In-Reply-To: References: <59543203684B2244980D7E4057D5FBC148707649@DB3EX14MBXC306.europe.corp.microsoft.com> <1389014277.2952.9.camel@kirk> <41B0CF1C-C66D-4DDC-8C36-A691B83CF7E0@cis.upenn.edu> <4BA531AA-0E3E-48AA-91C9-CDD819D349A9@cis.upenn.edu> Message-ID: On Thu, Jan 9, 2014 at 9:48 AM, Austin Seipp wrote: > > > The first four are just tests that need to be updated. T4437 needs to > have PatternSynonyms listed (it tests the available extensions,) and > the generics test have had their output slightly changed. This is > because the generated terms are now annotated with the Origin type, > specifying where they come from. Here's an example from > GenDerivOutput1_1: > > ---------------------------- > instance GHC.Generics.Selector CanDoRep1_1.S1_1_0Dd where > - GHC.Generics.selName _ = "d11d" > + (Generated, GHC.Generics.selName _ = "d11d") > ---------------------------- > > I'm not actually sure if this is what we want. Should -ddump-deriv > print this? I'm not sure we guarantee the output is syntactically > valid anyway, but it's worth considering. Removing this from the > output would mean these tests don't need any tweaks. > I think it is preferable not to show this Generated stuff. Even if -ddump-deriv is not entirely syntactically valid, it often is, and I found myself copy-pasting from it multiple times before. Cheers, Pedro -------------- next part -------------- An HTML attachment was scrubbed... URL: From marlowsd at gmail.com Thu Jan 9 08:39:28 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Thu, 09 Jan 2014 08:39:28 +0000 Subject: High-level Cmm code and stack allocation In-Reply-To: <59543203684B2244980D7E4057D5FBC148709591@DB3EX14MBXC306.europe.corp.microsoft.com> References: <87fvp3coqr.fsf@gnu.org> <52CC25A4.8060004@gmail.com> <59543203684B2244980D7E4057D5FBC148708D03@DB3EX14MBXC306.europe.corp.microsoft.com> <52CD19AD.7030503@gmail.com> <59543203684B2244980D7E4057D5FBC148709591@DB3EX14MBXC306.europe.corp.microsoft.com> Message-ID: <52CE6040.30705@gmail.com> On 08/01/2014 10:07, Simon Peyton Jones wrote: > | > Can't we just allocate a Cmm "area"? The address of an area is a > | perfectly well-defined Cmm value. > > What about this idea? We don't really have a general concept of areas (any more), and areas aren't exposed in the concrete Cmm syntax at all. The current semantics is that areas may overlap with each other, so there should only be one active area at any point. I found that this was important to ensure that we could generate good code from the stack layout algorithm, otherwise it had to make pessimistic assumptions and use too much stack. 
You're going to ask me where this is documented, and I think I have to admit to slacking off, sorry :-) We did discuss it at the time, and I made copious notes, but I didn't transfer those to the code. I'll add a Note. Cheers, Simon > Simon > > | -----Original Message----- > | From: Simon Marlow [mailto:marlowsd at gmail.com] > | Sent: 08 January 2014 09:26 > | To: Simon Peyton Jones; Herbert Valerio Riedel > | Cc: ghc-devs at haskell.org > | Subject: Re: High-level Cmm code and stack allocation > | > | On 07/01/14 22:53, Simon Peyton Jones wrote: > | > | Yes, this is technically wrong but luckily works. I'd very much > | > | like to have a better solution, preferably one that doesn't add any > | > | extra overhead. > | > > | > | __decodeFloat_Int is a C function, so it will not touch the Haskell > | > | stack. > | > > | > This all seems terribly fragile to me. At least it ought to be > | surrounded with massive comments pointing out how terribly fragile it > | is, breaking all the rules that we carefully document elsewhere. > | > > | > Can't we just allocate a Cmm "area"? The address of an area is a > | perfectly well-defined Cmm value. > | > | It is fragile, yes. We can't use static memory because it needs to be > | thread-local. This particular hack has gone through several iterations > | over the years: first we had static memory, which broke when we did the > | parallel runtime, then we had special storage in the Capability, which > | we gave up when GMP was split out into a separate library, because it > | didn't seem right to have magic fields in the Capability for one > | library. > | > | I'm looking into whether we can do temporary allocation on the heap for > | this instead. > | > | Cheers, > | Simon > | > | > | > Simon > | > > | > | -----Original Message----- > | > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of > | > | Simon Marlow > | > | Sent: 07 January 2014 16:05 > | > | To: Herbert Valerio Riedel; ghc-devs at haskell.org > | > | Subject: Re: High-level Cmm code and stack allocation > | > | > | > | On 04/01/2014 23:26, Herbert Valerio Riedel wrote: > | > | > Hello, > | > | > > | > | > According to Note [Syntax of .cmm files], > | > | > > | > | > | There are two ways to write .cmm code: > | > | > | > | > | > | (1) High-level Cmm code delegates the stack handling to GHC, > | and > | > | > | never explicitly mentions Sp or registers. > | > | > | > | > | > | (2) Low-level Cmm manages the stack itself, and must know about > | > | > | calling conventions. > | > | > | > | > | > | Whether you want high-level or low-level Cmm is indicated by the > | > | > | presence of an argument list on a procedure. > | > | > > | > | > However, while working on integer-gmp I've been noticing in > | > | > integer-gmp/cbits/gmp-wrappers.cmm that even though all Cmm > | > | procedures > | > | > have been converted to high-level Cmm, they still reference the > | 'Sp' > | > | > register, e.g. > | > | > > | > | > > | > | > #define GMP_TAKE1_RET1(name,mp_fun) \ > | > | > name (W_ ws1, P_ d1) \ > | > | > { \ > | > | > W_ mp_tmp1; \ > | > | > W_ mp_result1; \ > | > | > \ > | > | > again: \ > | > | > STK_CHK_GEN_N (2 * SIZEOF_MP_INT); \ > | > | > MAYBE_GC(again); \ > | > | > \ > | > | > mp_tmp1 = Sp - 1 * SIZEOF_MP_INT; \ > | > | > mp_result1 = Sp - 2 * SIZEOF_MP_INT; \ > | > | > ... \ > | > | > > | > | > > | > | > So is this valid high-level Cmm code? What's the proper way to > | > | allocate > | > | > Stack (and/or Heap) memory from high-level Cmm code? 
> | > | > | > | Yes, this is technically wrong but luckily works. I'd very much > | > | like to have a better solution, preferably one that doesn't add any > | > | extra overhead. > | > | > | > | The problem here is that we need to allocate a couple of temporary > | > | words and take their address; that's an unusual thing to do in Cmm, > | > | so it only occurs in a few places (mainly interacting with gmp). > | > | Usually if you want some temporary storage you can use local > | > | variables or some heap-allocated memory. > | > | > | > | Cheers, > | > | Simon > | > | _______________________________________________ > | > | ghc-devs mailing list > | > | ghc-devs at haskell.org > | > | http://www.haskell.org/mailman/listinfo/ghc-devs > | > > From marlowsd at gmail.com Thu Jan 9 09:20:49 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Thu, 09 Jan 2014 09:20:49 +0000 Subject: Cannot find normal object file when compiling TH code In-Reply-To: References: Message-ID: <52CE69F1.4000704@gmail.com> There's a ticket for this: https://ghc.haskell.org/trac/ghc/ticket/8180 On 02/01/2014 22:36, Yorick Laupa wrote: > Hi Carter, > > Someone figured it out on #ghc. It seems we need to compile with > -dynamic when having TH code now > (https://ghc.haskell.org/trac/ghc/ticket/8180) > > About a snippet, I working on that ticket > (https://ghc.haskell.org/trac/ghc/ticket/7021) so it's based on the > given sample: > > -- Tuple.hs > {-# LANGUAGE ConstraintKinds, TemplateHaskell #-} > > module Tuple where > > import Language.Haskell.TH > > type IOable a = (Show a, Read a) > > foo :: IOable a => a > foo = undefined > > test :: Q Exp > test = do > Just fooName <- lookupValueName "foo" > info <- reify fooName > runIO $ print info > [| \_ -> 0 |] > -- > > -- Main.hs > {-# LANGUAGE TemplateHaskell #-} > module Main where > > import Tuple > > func :: a -> Int > func = $(test) > > main :: IO () > main = print "hello" > > -- > > > 2014/1/2 Carter Schonwald > > > Did you build ghc with both static and dynamic libs? Starting in > 7.7/HEAD, ghci uses Dylib versions of libraries, and thus TH does > too. What OS and architecture is this, and what commit is your ghc > build from? > > Last but most importantly, if you don't share the code, we can't > really help isolate the problem. > > > On Thursday, January 2, 2014, Yorick Laupa wrote: > > Hi, > > Oddly I can't compile code using TH with GHC HEAD. Here's what I > get: > > cannot find normal object file ?./Tuple.dyn_o? > while linking an interpreted expression > > I'm currently working on a issue so I compile the code with > ghc-stage2 for convenience. > > I found an old ticket related to my problem > (https://ghc.haskell.org/trac/ghc/ticket/8443) but adding > -XTemplateHaskell didn't work out. > > The code compiles with ghc 7.6.3. > > Here's my setup: Archlinux (3.12.6-1) > > Any suggestions ? > > --Yorick > > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > From hvr at gnu.org Thu Jan 9 10:31:13 2014 From: hvr at gnu.org (Herbert Valerio Riedel) Date: Thu, 09 Jan 2014 11:31:13 +0100 Subject: Folding ghc/testsuite repos *now*, 2nd attempt (was: Repository Reorganization Question) In-Reply-To: (Austin Seipp's message of "Wed, 4 Dec 2013 15:24:40 -0600") References: Message-ID: <87y52pzbta.fsf@gnu.org> Hello All, It seems to me, there were no major obstacles left unaddressed in the previous discussion[1] (see summary below) to merging testsuite.git into ghc.git. 
So here's one last attempt to get testsuite.git folded into ghc.git before Austin branches off 7.8 Please speak up *now*, if you have any objections to folding testsuite.git into ghc.git *soon* (with *soon* meaning upcoming Sunday, 12th Jan 2014) ---- A summary of the previous thread so far: - Let's fold testsuite into ghc before branching off 7.8RC - ghc/testsuite have the most coupled commits - make's it a bit easier to cherry pick ghc/testsuite between branches - while being low-risk, will provide empiric value for deciding how to proceed with folding in other Git repos - Proof of concept in http://git.haskell.org/ghc.git/shortlog/refs/heads/wip/T8545 - general support for it; consensus that it will be beneficial and shouldn't be a huge disruption - sync-all is adapted to abort operation if `testsuite/.git` is detected, and advising the user to remove (or move-out-of-the-way) - Concern about broken commit-refs in Trac and other places: - old testsuite.git repo will remain available (more or less) read-only; so old commit-shas will still be resolvable - (old) Trac commit-links which work currently will continue to work, as they refer specifically to the testsuite.git repo, and Trac will know they point to the old testsuite.git - If one doesn't know which Git repo a commit-id is in, there's still the SHA1 look-up service at http://git.haskell.org/ which will search all repos hosted at git.haskell.org for a commit SHA1 prefix. Or alternatively, just ask google about the SHA1. - Binary blobs (a few compiled executables) that were committed by accident and removed right away again are removed from history to avoid carrying around useless garbage in the Git history (saves ~20MiB) - Path names are rewritten to be based in testsuite/, in order to make it easier for Git operations (git log et al.) to follow history for folders/filenames - Old Commit-ids will *not* be written into the rewritten commits' messages in order not to add noise (old commit ids can be resolved via the remaining old testsuite.git repo) [1] http://permalink.gmane.org/gmane.comp.lang.haskell.ghc.devel/3099 From simonpj at microsoft.com Thu Jan 9 12:48:30 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 9 Jan 2014 12:48:30 +0000 Subject: Folding ghc/testsuite repos *now*, 2nd attempt (was: Repository Reorganization Question) In-Reply-To: <87y52pzbta.fsf@gnu.org> References: <87y52pzbta.fsf@gnu.org> Message-ID: <59543203684B2244980D7E4057D5FBC14870B40C@DB3EX14MBXC306.europe.corp.microsoft.com> I'm all for it! Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of | Herbert Valerio Riedel | Sent: 09 January 2014 10:31 | To: ghc-devs | Subject: Folding ghc/testsuite repos *now*, 2nd attempt (was: Repository | Reorganization Question) | | Hello All, | | It seems to me, there were no major obstacles left unaddressed in the | previous discussion[1] (see summary below) to merging testsuite.git into | ghc.git. 
| | So here's one last attempt to get testsuite.git folded into ghc.git | before Austin branches off 7.8 | | Please speak up *now*, if you have any objections to folding | testsuite.git into ghc.git *soon* (with *soon* meaning upcoming Sunday, | 12th Jan 2014) | | ---- | | A summary of the previous thread so far: | | - Let's fold testsuite into ghc before branching off 7.8RC | - ghc/testsuite have the most coupled commits | - make's it a bit easier to cherry pick ghc/testsuite between | branches | - while being low-risk, will provide empiric value for deciding how | to proceed with folding in other Git repos | | - Proof of concept in | http://git.haskell.org/ghc.git/shortlog/refs/heads/wip/T8545 | | - general support for it; consensus that it will be beneficial and | shouldn't be a huge disruption | | - sync-all is adapted to abort operation if `testsuite/.git` is | detected, and advising the user to remove (or move-out-of-the-way) | | - Concern about broken commit-refs in Trac and other places: | | - old testsuite.git repo will remain available (more or less) | read-only; so old commit-shas will still be resolvable | | - (old) Trac commit-links which work currently will continue to | work, as they refer specifically to the testsuite.git repo, and | Trac will know they point to the old testsuite.git | | - If one doesn't know which Git repo a commit-id is in, there's | still the SHA1 look-up service at http://git.haskell.org/ which | will search all repos hosted at git.haskell.org for a commit | SHA1 prefix. Or alternatively, just ask google about the SHA1. | | - Binary blobs (a few compiled executables) that were committed by | accident and removed right away again are removed from history to | avoid carrying around useless garbage in the Git history (saves | ~20MiB) | | - Path names are rewritten to be based in testsuite/, in order to | make it easier for Git operations (git log et al.) to follow | history for folders/filenames | | - Old Commit-ids will *not* be written into the rewritten commits' | messages in order not to add noise (old commit ids can be resolved | via the remaining old testsuite.git repo) | | | | [1] http://permalink.gmane.org/gmane.comp.lang.haskell.ghc.devel/3099 | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | http://www.haskell.org/mailman/listinfo/ghc-devs From johan.tibell at gmail.com Thu Jan 9 12:57:58 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Thu, 9 Jan 2014 13:57:58 +0100 Subject: Folding ghc/testsuite repos *now*, 2nd attempt (was: Repository Reorganization Question) In-Reply-To: <59543203684B2244980D7E4057D5FBC14870B40C@DB3EX14MBXC306.europe.corp.microsoft.com> References: <87y52pzbta.fsf@gnu.org> <59543203684B2244980D7E4057D5FBC14870B40C@DB3EX14MBXC306.europe.corp.microsoft.com> Message-ID: +1 On Thu, Jan 9, 2014 at 1:48 PM, Simon Peyton Jones wrote: > I'm all for it! > > Simon > > | -----Original Message----- > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of > | Herbert Valerio Riedel > | Sent: 09 January 2014 10:31 > | To: ghc-devs > | Subject: Folding ghc/testsuite repos *now*, 2nd attempt (was: Repository > | Reorganization Question) > | > | Hello All, > | > | It seems to me, there were no major obstacles left unaddressed in the > | previous discussion[1] (see summary below) to merging testsuite.git into > | ghc.git. 
> | > | So here's one last attempt to get testsuite.git folded into ghc.git > | before Austin branches off 7.8 > | > | Please speak up *now*, if you have any objections to folding > | testsuite.git into ghc.git *soon* (with *soon* meaning upcoming Sunday, > | 12th Jan 2014) > | > | ---- > | > | A summary of the previous thread so far: > | > | - Let's fold testsuite into ghc before branching off 7.8RC > | - ghc/testsuite have the most coupled commits > | - make's it a bit easier to cherry pick ghc/testsuite between > | branches > | - while being low-risk, will provide empiric value for deciding how > | to proceed with folding in other Git repos > | > | - Proof of concept in > | http://git.haskell.org/ghc.git/shortlog/refs/heads/wip/T8545 > | > | - general support for it; consensus that it will be beneficial and > | shouldn't be a huge disruption > | > | - sync-all is adapted to abort operation if `testsuite/.git` is > | detected, and advising the user to remove (or move-out-of-the-way) > | > | - Concern about broken commit-refs in Trac and other places: > | > | - old testsuite.git repo will remain available (more or less) > | read-only; so old commit-shas will still be resolvable > | > | - (old) Trac commit-links which work currently will continue to > | work, as they refer specifically to the testsuite.git repo, and > | Trac will know they point to the old testsuite.git > | > | - If one doesn't know which Git repo a commit-id is in, there's > | still the SHA1 look-up service at http://git.haskell.org/ which > | will search all repos hosted at git.haskell.org for a commit > | SHA1 prefix. Or alternatively, just ask google about the SHA1. > | > | - Binary blobs (a few compiled executables) that were committed by > | accident and removed right away again are removed from history to > | avoid carrying around useless garbage in the Git history (saves > | ~20MiB) > | > | - Path names are rewritten to be based in testsuite/, in order to > | make it easier for Git operations (git log et al.) to follow > | history for folders/filenames > | > | - Old Commit-ids will *not* be written into the rewritten commits' > | messages in order not to add noise (old commit ids can be resolved > | via the remaining old testsuite.git repo) > | > | > | > | [1] http://permalink.gmane.org/gmane.comp.lang.haskell.ghc.devel/3099 > | _______________________________________________ > | ghc-devs mailing list > | ghc-devs at haskell.org > | http://www.haskell.org/mailman/listinfo/ghc-devs > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tkn.akio at gmail.com Thu Jan 9 13:25:04 2014 From: tkn.akio at gmail.com (Akio Takano) Date: Thu, 9 Jan 2014 22:25:04 +0900 Subject: Extending fold/build fusion In-Reply-To: References: Message-ID: Any input on this is appreciated. In particular, I'd like to know: if I implement the idea as a patch to the base package, is there a chance it is considered for merge? -- Takano Akio On Fri, Jan 3, 2014 at 11:20 PM, Akio Takano wrote: > Hi, > > I have been thinking about how foldl' can be turned into a good consumer, > and I came up with something that I thought would work. So I'd like to ask > for opinions from the ghc devs: if this idea looks good, if it is a known > bad idea, if there is a better way to do it, etc. 
> > The main idea is to have an extended version of foldr: > > -- | A mapping between @a@ and @b at . > data Wrap a b = Wrap (a -> b) (b -> a) > > foldrW > :: (forall e. Wrap (f e) (e -> b -> b)) > -> (a -> b -> b) -> b -> [a] -> b > foldrW (Wrap wrap unwrap) f z0 list0 = wrap go list0 z0 > where > go = unwrap $ \list z' -> case list of > [] -> z' > x:xs -> f x $ wrap go xs z' > > This allows the user to apply an arbitrary "worker-wrapper" transformation > to the loop. > > Using this, foldl' can be defined as > > newtype Simple b e = Simple { runSimple :: e -> b -> b } > > foldl' :: (b -> a -> b) -> b -> [a] -> b > foldl' f initial xs = foldrW (Wrap wrap unwrap) g id xs initial > where > wrap (Simple s) e k a = k $ s e a > unwrap u = Simple $ \e -> u e id > g x next acc = next $! f acc x > > The wrap and unwrap functions here ensure that foldl' gets compiled into a > loop that returns a value of 'b', rather than a function 'b -> b', > effectively un-CPS-transforming the loop. > > I put preliminary code and some more explanation on Github: > > https://github.com/takano-akio/ww-fusion > > Thank you, > Takano Akio > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Thu Jan 9 13:40:14 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 9 Jan 2014 13:40:14 +0000 Subject: Optimisation flags at -O0 In-Reply-To: References: Message-ID: <59543203684B2244980D7E4057D5FBC14870B554@DB3EX14MBXC306.europe.corp.microsoft.com> It appears I get the same output wether I use `-fspec-constr` or not. I'm afraid so. Look in simplCore/SimplCore.lhs, function getCoreToDo. This builds the main optimisation pipeline. You'll see that it has a -O0 path and a -O1/-O2 path. The flag Opt_SpecConstr is consulted only in the latter. One could perhaps do it differently but that's the way it is right now. I test an isolated optimisation by switching it on and off with -O1. Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Chris Heller Sent: 06 January 2014 02:44 To: ghc-devs at haskell.org Subject: Optimisation flags at -O0 I wanted to understand better what `-fspec-constr` does. So I compiled the User Guide example with `-O0 -fspec-constr` to isolate the effects of call-pattern specialization, and nothing else (I used ghc-core to pretty-print the resulting Core syntax). It appears I get the same output wether I use `-fspec-constr` or not. Does this mean that compiling with `-O0` even explicitly enabled optimizations are turned off? If that is the case, how does one test an isolated optimization? -Chris -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Thu Jan 9 16:01:08 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Thu, 9 Jan 2014 11:01:08 -0500 Subject: Extending fold/build fusion In-Reply-To: References: Message-ID: Hey akio, it's certainly an interesting idea. If you implement it, the first step would be to run a nofib before and after to benchmark the impact of the change. On Thursday, January 9, 2014, Akio Takano wrote: > Any input on this is appreciated. In particular, I'd like to know: if I > implement the idea as a patch to the base package, is there a chance it is > considered for merge? > > -- Takano Akio > > On Fri, Jan 3, 2014 at 11:20 PM, Akio Takano > > wrote: > >> Hi, >> >> I have been thinking about how foldl' can be turned into a good consumer, >> and I came up with something that I thought would work. 
So I'd like to ask >> for opinions from the ghc devs: if this idea looks good, if it is a known >> bad idea, if there is a better way to do it, etc. >> >> The main idea is to have an extended version of foldr: >> >> -- | A mapping between @a@ and @b at . >> data Wrap a b = Wrap (a -> b) (b -> a) >> >> foldrW >> :: (forall e. Wrap (f e) (e -> b -> b)) >> -> (a -> b -> b) -> b -> [a] -> b >> foldrW (Wrap wrap unwrap) f z0 list0 = wrap go list0 z0 >> where >> go = unwrap $ \list z' -> case list of >> [] -> z' >> x:xs -> f x $ wrap go xs z' >> >> This allows the user to apply an arbitrary "worker-wrapper" >> transformation to the loop. >> >> Using this, foldl' can be defined as >> >> newtype Simple b e = Simple { runSimple :: e -> b -> b } >> >> foldl' :: (b -> a -> b) -> b -> [a] -> b >> foldl' f initial xs = foldrW (Wrap wrap unwrap) g id xs initial >> where >> wrap (Simple s) e k a = k $ s e a >> unwrap u = Simple $ \e -> u e id >> g x next acc = next $! f acc x >> >> The wrap and unwrap functions here ensure that foldl' gets compiled into >> a loop that returns a value of 'b', rather than a function 'b -> b', >> effectively un-CPS-transforming the loop. >> >> I put preliminary code and some more explanation on Github: >> >> https://github.com/takano-akio/ww-fusion >> >> Thank you, >> Takano Akio >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From awick at galois.com Thu Jan 9 19:17:55 2014 From: awick at galois.com (Adam Wick) Date: Thu, 9 Jan 2014 11:17:55 -0800 Subject: panic when compiling SHA In-Reply-To: <52CD2B7B.2030501@gmail.com> References: <20131227.100716.1812997308262292710.kazu@iij.ad.jp> <501EC3C7-E7EF-4485-879A-404FFFF22F55@ouroborus.net> <52C7DB7E.1030408@gmail.com> <20140104.212236.2151539280544564973.kazu@iij.ad.jp> <59543203684B2244980D7E4057D5FBC148707206@DB3EX14MBXC306.europe.corp.microsoft.com> <2E9BAE47-AE0B-4189-89BC-A01FF8DE499B@ouroborus.net> <52CD2B7B.2030501@gmail.com> Message-ID: On Jan 8, 2014, at 2:42 AM, Simon Marlow wrote: > Neither of the register allocators reuse spill slots for variables that have disjoint live ranges, so the fact that we ran out of spill slots is not necessarily indicative of terrible code (but I agree that it's a strong hint). That?s the problem with SHA, then. The implementation (and the spec, really) is essentially a long combination of the form: let x_n5 = small_computation x_n1 x_n2 x_n3 x_n4 x_n6 = small_computation x_n2 x_n3 x_n4 x_n5 ? Which has ~70 entries. The actual number of live variables alive at any time should be relatively small, but if slots aren?t getting reused there?s going to be some significant blowup. (To be honest, I had figured ? and thought I had validated ? that doing it this way would give the compiler the best chance at generating optimal code, but it appears I merely set myself up to hit this limitation several years later.) - Adam -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: smime.p7s Type: application/pkcs7-signature Size: 2199 bytes Desc: not available URL: From george.colpitts at gmail.com Thu Jan 9 20:08:11 2014 From: george.colpitts at gmail.com (George Colpitts) Date: Thu, 9 Jan 2014 16:08:11 -0400 Subject: panic when compiling SHA In-Reply-To: References: <20131227.100716.1812997308262292710.kazu@iij.ad.jp> <501EC3C7-E7EF-4485-879A-404FFFF22F55@ouroborus.net> <52C7DB7E.1030408@gmail.com> <20140104.212236.2151539280544564973.kazu@iij.ad.jp> <59543203684B2244980D7E4057D5FBC148707206@DB3EX14MBXC306.europe.corp.microsoft.com> <2E9BAE47-AE0B-4189-89BC-A01FF8DE499B@ouroborus.net> <52CD2B7B.2030501@gmail.com> Message-ID: Does LLVM have the same limitation that its register allocator does not reuse spill slots for variables that have disjoint live ranges? If not, could the library be compiled with llvm? On Thu, Jan 9, 2014 at 3:17 PM, Adam Wick wrote: > On Jan 8, 2014, at 2:42 AM, Simon Marlow wrote: > > Neither of the register allocators reuse spill slots for variables that > have disjoint live ranges, so the fact that we ran out of spill slots is > not necessarily indicative of terrible code (but I agree that it's a strong > hint). > > > That?s the problem with SHA, then. The implementation (and the spec, > really) is essentially a long combination of the form: > > let x_n5 = small_computation x_n1 x_n2 x_n3 x_n4 > x_n6 = small_computation x_n2 x_n3 x_n4 x_n5 > ? > > Which has ~70 entries. The actual number of live variables alive at any > time should be relatively small, but if slots aren?t getting reused there?s > going to be some significant blowup. (To be honest, I had figured ? and > thought I had validated ? that doing it this way would give the compiler > the best chance at generating optimal code, but it appears I merely set > myself up to hit this limitation several years later.) > > > - Adam > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Thu Jan 9 20:22:22 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Thu, 9 Jan 2014 15:22:22 -0500 Subject: panic when compiling SHA In-Reply-To: References: <20131227.100716.1812997308262292710.kazu@iij.ad.jp> <501EC3C7-E7EF-4485-879A-404FFFF22F55@ouroborus.net> <52C7DB7E.1030408@gmail.com> <20140104.212236.2151539280544564973.kazu@iij.ad.jp> <59543203684B2244980D7E4057D5FBC148707206@DB3EX14MBXC306.europe.corp.microsoft.com> <2E9BAE47-AE0B-4189-89BC-A01FF8DE499B@ouroborus.net> <52CD2B7B.2030501@gmail.com> Message-ID: george, the problem with that is not all targets that ghc support have an llvm backend. (in fact, one problem I hope to help resolve (not in full mind you) for 7.10 is that theres a semi disjoint coverage across the backends) On Thu, Jan 9, 2014 at 3:08 PM, George Colpitts wrote: > Does LLVM have the same limitation that its register allocator does not > reuse spill slots for variables that have disjoint live ranges? If not, > could the library be compiled with llvm? > > > On Thu, Jan 9, 2014 at 3:17 PM, Adam Wick wrote: > >> On Jan 8, 2014, at 2:42 AM, Simon Marlow wrote: >> >> Neither of the register allocators reuse spill slots for variables that >> have disjoint live ranges, so the fact that we ran out of spill slots is >> not necessarily indicative of terrible code (but I agree that it's a strong >> hint). >> >> >> That?s the problem with SHA, then. 
The implementation (and the spec, >> really) is essentially a long combination of the form: >> >> let x_n5 = small_computation x_n1 x_n2 x_n3 x_n4 >> x_n6 = small_computation x_n2 x_n3 x_n4 x_n5 >> ? >> >> Which has ~70 entries. The actual number of live variables alive at any >> time should be relatively small, but if slots aren?t getting reused there?s >> going to be some significant blowup. (To be honest, I had figured ? and >> thought I had validated ? that doing it this way would give the compiler >> the best chance at generating optimal code, but it appears I merely set >> myself up to hit this limitation several years later.) >> >> >> - Adam >> >> >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> >> > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marlowsd at gmail.com Thu Jan 9 20:49:34 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Thu, 09 Jan 2014 20:49:34 +0000 Subject: panic when compiling SHA In-Reply-To: References: <20131227.100716.1812997308262292710.kazu@iij.ad.jp> <501EC3C7-E7EF-4485-879A-404FFFF22F55@ouroborus.net> <52C7DB7E.1030408@gmail.com> <20140104.212236.2151539280544564973.kazu@iij.ad.jp> <59543203684B2244980D7E4057D5FBC148707206@DB3EX14MBXC306.europe.corp.microsoft.com> <2E9BAE47-AE0B-4189-89BC-A01FF8DE499B@ouroborus.net> <52CD2B7B.2030501@gmail.com> Message-ID: <52CF0B5E.2010106@gmail.com> On 09/01/14 20:08, George Colpitts wrote: > Does LLVM have the same limitation that its register allocator does not > reuse spill slots for variables that have disjoint live ranges? If not, > could the library be compiled with llvm? Let me reiterate: the SHA-1 library compiles just fine, provided the -fregs-graph flag is not used with GHC 7.8.1. As far as I know it compiles when using the LLVM backend too, but you don't have to use LLVM: the NCG works fine. Cheers, Simon > > On Thu, Jan 9, 2014 at 3:17 PM, Adam Wick > wrote: > > On Jan 8, 2014, at 2:42 AM, Simon Marlow > wrote: >> Neither of the register allocators reuse spill slots for variables >> that have disjoint live ranges, so the fact that we ran out of >> spill slots is not necessarily indicative of terrible code (but I >> agree that it's a strong hint). > > That?s the problem with SHA, then. The implementation (and the spec, > really) is essentially a long combination of the form: > > let x_n5 = small_computation x_n1 x_n2 x_n3 x_n4 > x_n6 = small_computation x_n2 x_n3 x_n4 x_n5 > ? > > Which has ~70 entries. The actual number of live variables alive at > any time should be relatively small, but if slots aren?t getting > reused there?s going to be some significant blowup. (To be honest, I > had figured ? and thought I had validated ? that doing it this way > would give the compiler the best chance at generating optimal code, > but it appears I merely set myself up to hit this limitation several > years later.) 
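For what it's worth, a minimal Haskell sketch of the binding shape Adam describes makes the live-range point visible. The `step` function and the names below are purely illustrative (not the actual SHA package code); the point is that each binding consumes only the previous four results, so at any program point only a handful of values are live even though the full chain introduces ~70 names:

    import Data.Bits (rotateL, xor)
    import Data.Word (Word32)

    -- Hypothetical stand-in for one SHA-style round step.
    step :: Word32 -> Word32 -> Word32 -> Word32 -> Word32
    step a b c d = (a `rotateL` 5) + (b `xor` c) + d

    -- A four-step slice of the ~70-step chain described above.
    chain :: Word32 -> Word32 -> Word32 -> Word32 -> Word32
    chain x1 x2 x3 x4 =
      let x5 = step x1 x2 x3 x4   -- x1 is dead after this line
          x6 = step x2 x3 x4 x5   -- x2 is dead after this line
          x7 = step x3 x4 x5 x6   -- x3 is dead after this line
          x8 = step x4 x5 x6 x7   -- only x5..x8 remain live here
      in x5 `xor` x6 `xor` x7 `xor` x8

With spill-slot reuse, the four dead slots could be recycled for the next four bindings; without it, every binding that gets spilled claims a fresh slot, which is where the blowup comes from.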
> > > - Adam > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > From fuuzetsu at fuuzetsu.co.uk Fri Jan 10 10:01:28 2014 From: fuuzetsu at fuuzetsu.co.uk (Mateusz Kowalczyk) Date: Fri, 10 Jan 2014 10:01:28 +0000 Subject: Validating with Haddock In-Reply-To: <52BF0209.6020000@fuuzetsu.co.uk> References: <52BF0209.6020000@fuuzetsu.co.uk> Message-ID: <52CFC4F8.60000@fuuzetsu.co.uk> Hi all, I have now merged in the new parser and new features onto a single branch. I'm having some issues validating with HEAD at the moment (#8661, unrelated problem) but while I get that sorted out, someone might want to try validating with Haddock changes on their own platform. The full branch is at [1]. I have squashed the changes to what I feel is the minimum number of commits until they completely stop making sense. It should apply cleanly on top of current Haddock master branch. The documentation is updated so you can read about what changed. Feel free to ask any questions. I will post again once I can confirm that the branch validates for me without any new test failures. Thanks for your patience. [1]: https://github.com/Fuuzetsu/haddock/tree/new-features -- Mateusz K. From simonpj at microsoft.com Fri Jan 10 14:21:16 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 10 Jan 2014 14:21:16 +0000 Subject: Changing GHC Error Message Wrapping In-Reply-To: References: <20140104185507.5a1b9b490d052db8ca579fc3@mega-nerd.com> <59543203684B2244980D7E4057D5FBC14870765F@DB3EX14MBXC306.europe.corp.microsoft.com> <59543203684B2244980D7E4057D5FBC148707DFC@DB3EX14MBXC306.europe.corp.microsoft.com> <59543203684B2244980D7E4057D5FBC148709F1E@DB3EX14MBXC306.europe.corp.microsoft.com> Message-ID: <59543203684B2244980D7E4057D5FBC14870E838@DB3EX14MBXC306.europe.corp.microsoft.com> Crumbs. You are absolutely right. I'll fix that. (It's a relic from when the flags weren't available to the show functions.) Simon From: Andrew Gibiansky [mailto:andrew.gibiansky at gmail.com] Sent: 08 January 2014 17:23 To: Simon Peyton Jones Cc: Erik de Castro Lopo; ghc-devs at haskell.org Subject: Re: Changing GHC Error Message Wrapping Of course :) It made sense once I realized that the `show` was generating the string, and that it was not generated when the datatype was being constructed. However, I don't think the `showSDocForUser` call works (I tested). It uses `runSDoc` to generate a `Doc`. It then uses `show` on that Doc: instance Show Doc where showsPrec _ doc cont = showDoc doc cont Looking at `showDoc` we see: showDoc :: Doc -> String -> String showDoc doc rest = showDocWithAppend PageMode doc rest showDocWithAppend :: Mode -> Doc -> String -> String showDocWithAppend mode doc rest = fullRender mode 100 1.5 string_txt rest doc It ultimately calls `showDocWithAppend`, which calls `fullRender` with a hard-coded 100-column limit. -- Andrew On Wed, Jan 8, 2014 at 12:11 PM, Simon Peyton Jones > wrote: Well, the Show instance for a type (any type) cannot possibly respect pprCols. It can't: show :: a -> String! No command-line inputs. 
I suggest something more like doc sdoc = do { dflags <- getDynFlags; unqual <- getPrintUnqual; return (showSDocForUser dflags unqual doc } Simon From: Andrew Gibiansky [mailto:andrew.gibiansky at gmail.com] Sent: 08 January 2014 00:09 To: Simon Peyton Jones Cc: Erik de Castro Lopo; ghc-devs at haskell.org Subject: Re: Changing GHC Error Message Wrapping Hello all, I figured out that this isn't quite a bug and figured out how to do what I wanted. It turns out that the `Show` instance for SourceError does not respect `pprCols` - I don't know if that's a reasonable expectation (although it's what I expected). I ended up using the following code to print these messages: flip gcatch handler $ do runStmt "let f (x, y, z, w, e, r, d , ax, b ,c,ex ,g ,h) = (x :: Int) + y + z" RunToCompletion runStmt "f (1, 2, 3)" RunToCompletion return () where handler :: SourceError -> Ghc () handler srcerr = do let msgs = bagToList $ srcErrorMessages srcerr forM_ msgs $ \msg -> do s <- doc $ errMsgShortDoc msg liftIO $ putStrLn s doc :: GhcMonad m => SDoc -> m String doc sdoc = do flags <- getSessionDynFlags let cols = pprCols flags d = runSDoc sdoc (initSDocContext flags defaultUserStyle) return $ Pretty.fullRender Pretty.PageMode cols 1.5 string_txt "" d where string_txt :: Pretty.TextDetails -> String -> String string_txt (Pretty.Chr c) s = c:s string_txt (Pretty.Str s1) s2 = s1 ++ s2 string_txt (Pretty.PStr s1) s2 = unpackFS s1 ++ s2 string_txt (Pretty.LStr s1 _) s2 = unpackLitString s1 ++ s2 As far as I can tell, there is no simpler way, every function in `Pretty` except for `fullRender` just assumes a default of 100-char lines. -- Andrew On Tue, Jan 7, 2014 at 11:29 AM, Andrew Gibiansky > wrote: Simon, That's exactly what I'm looking for! But it seems that doing it dynamically in the GHC API doesn't work (as in my first email where I tried to adjust pprCols via setSessionDynFlags). I'm going to look into the source as what ppr-cols=N actually sets and probably file a bug - because this seems like buggy behaviour... Andrew On Tue, Jan 7, 2014 at 4:14 AM, Simon Peyton Jones > wrote: -dppr-cols=N changes the width of the output page; you could try a large number there. There isn't a setting meaning "infinity", sadly. Simon From: Andrew Gibiansky [mailto:andrew.gibiansky at gmail.com] Sent: 07 January 2014 03:04 To: Simon Peyton Jones Cc: Erik de Castro Lopo; ghc-devs at haskell.org Subject: Re: Changing GHC Error Message Wrapping Thanks Simon. In general I think multiline tuples should have many elements per line, but honestly the tuple case was a very specific example. If possible, I'd like to change the *overall* wrapping for *all* error messages - how does `sep` know when to break lines? there's clearly a numeric value for the number of columns somewhere, but where is it, and is it user-adjustable? For now I am just hacking around this by special-casing some error messages and "un-doing" the line wrapping by parsing the messages and joining lines back together. Thanks, Andrew On Mon, Jan 6, 2014 at 7:44 AM, Simon Peyton-Jones > wrote: I think it's line 705 in types/TypeRep.lhs pprTcApp p pp tc tys | isTupleTyCon tc && tyConArity tc == length tys = pprPromotionQuote tc <> tupleParens (tupleTyConSort tc) (sep (punctuate comma (map (pp TopPrec) tys))) If you change 'sep' to 'fsep', you'll get behaviour more akin to paragraph-filling (hence the "f"). Give it a try. You'll get validation failure from the testsuite, but you can see whether you think the result is better or worse. 
In general, should multi-line tuples be printed with many elements per line, or just one? Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Andrew Gibiansky Sent: 04 January 2014 17:30 To: Erik de Castro Lopo Cc: ghc-devs at haskell.org Subject: Re: Changing GHC Error Message Wrapping Apologize for the broken image formatting. With the code I posted above, I get the following output: Couldn't match expected type `(GHC.Types.Int, GHC.Types.Int, GHC.Types.Int, t0, t10, t20, t30, t40, t50, t60, t70, t80, t90)' with actual type `(t1, t2, t3)' I would like the types to be on the same line, or at least wrapped to a larger number of columns. Does anyone know how to do this, or where in the GHC source this wrapping is done? Thanks! Andrew On Sat, Jan 4, 2014 at 2:55 AM, Erik de Castro Lopo > wrote: Carter Schonwald wrote: > hey andrew, your image link isn't working (i'm using gmail) I think the list software filters out image attachments. Erik -- ---------------------------------------------------------------------- Erik de Castro Lopo http://www.mega-nerd.com/ _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://www.haskell.org/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrew.gibiansky at gmail.com Fri Jan 10 14:26:38 2014 From: andrew.gibiansky at gmail.com (Andrew Gibiansky) Date: Fri, 10 Jan 2014 09:26:38 -0500 Subject: Changing GHC Error Message Wrapping In-Reply-To: <59543203684B2244980D7E4057D5FBC14870E838@DB3EX14MBXC306.europe.corp.microsoft.com> References: <20140104185507.5a1b9b490d052db8ca579fc3@mega-nerd.com> <59543203684B2244980D7E4057D5FBC14870765F@DB3EX14MBXC306.europe.corp.microsoft.com> <59543203684B2244980D7E4057D5FBC148707DFC@DB3EX14MBXC306.europe.corp.microsoft.com> <59543203684B2244980D7E4057D5FBC148709F1E@DB3EX14MBXC306.europe.corp.microsoft.com> <59543203684B2244980D7E4057D5FBC14870E838@DB3EX14MBXC306.europe.corp.microsoft.com> Message-ID: Thanks! On Fri, Jan 10, 2014 at 9:21 AM, Simon Peyton Jones wrote: > Crumbs. You are absolutely right. I?ll fix that. (It?s a relic from > when the flags weren?t available to the show functions.) > > > > Simon > > > > *From:* Andrew Gibiansky [mailto:andrew.gibiansky at gmail.com] > *Sent:* 08 January 2014 17:23 > > *To:* Simon Peyton Jones > *Cc:* Erik de Castro Lopo; ghc-devs at haskell.org > *Subject:* Re: Changing GHC Error Message Wrapping > > > > Of course :) It made sense once I realized that the `show` was generating > the string, and that it was not generated when the datatype was being > constructed. > > > > However, I don't think the `showSDocForUser` call works (I tested). It > uses `runSDoc` to generate a `Doc`. It then uses `show` on that Doc: > > > > instance Show Doc where > > showsPrec _ doc cont = showDoc doc cont > > > > Looking at `showDoc` we see: > > > > showDoc :: Doc -> String -> String > > showDoc doc rest = showDocWithAppend PageMode doc rest > > > > showDocWithAppend :: Mode -> Doc -> String -> String > > showDocWithAppend mode doc rest = fullRender mode 100 1.5 string_txt rest > doc > > > > It ultimately calls `showDocWithAppend`, which calls `fullRender` with a > hard-coded 100-column limit. > > > > -- Andrew > > > > > > On Wed, Jan 8, 2014 at 12:11 PM, Simon Peyton Jones > wrote: > > Well, the Show instance for a type (any type) cannot possibly respect > pprCols. It can?t: show :: a -> String! No command-line inputs. 
> > > > I suggest something more like > > > > doc sdoc = do { dflags <- getDynFlags; unqual <- getPrintUnqual; return > (showSDocForUser dflags unqual doc } > > > > Simon > > > > *From:* Andrew Gibiansky [mailto:andrew.gibiansky at gmail.com] > *Sent:* 08 January 2014 00:09 > > > *To:* Simon Peyton Jones > *Cc:* Erik de Castro Lopo; ghc-devs at haskell.org > *Subject:* Re: Changing GHC Error Message Wrapping > > > > Hello all, > > > > I figured out that this isn't quite a bug and figured out how to do what I > wanted. It turns out that the `Show` instance for SourceError does not > respect `pprCols` - I don't know if that's a reasonable expectation > (although it's what I expected). I ended up using the following code to > print these messages: > > > > flip gcatch handler $ do > > runStmt "let f (x, y, z, w, e, r, d , ax, b ,c,ex ,g ,h) = (x :: Int) > + y + z" RunToCompletion > > runStmt "f (1, 2, 3)" RunToCompletion > > return () > > where > > handler :: SourceError -> Ghc () > > handler srcerr = do > > let msgs = bagToList $ srcErrorMessages srcerr > > forM_ msgs $ \msg -> do > > s <- doc $ errMsgShortDoc msg > > liftIO $ putStrLn s > > > > doc :: GhcMonad m => SDoc -> m String > > doc sdoc = do > > flags <- getSessionDynFlags > > let cols = pprCols flags > > d = runSDoc sdoc (initSDocContext flags defaultUserStyle) > > return $ Pretty.fullRender Pretty.PageMode cols 1.5 string_txt "" d > > where > > string_txt :: Pretty.TextDetails -> String -> String > > string_txt (Pretty.Chr c) s = c:s > > string_txt (Pretty.Str s1) s2 = s1 ++ s2 > > string_txt (Pretty.PStr s1) s2 = unpackFS s1 ++ s2 > > string_txt (Pretty.LStr s1 _) s2 = unpackLitString s1 ++ s2 > > > > As far as I can tell, there is no simpler way, every function in `Pretty` > except for `fullRender` just assumes a default of 100-char lines. > > > > -- Andrew > > > > On Tue, Jan 7, 2014 at 11:29 AM, Andrew Gibiansky < > andrew.gibiansky at gmail.com> wrote: > > Simon, > > > > That's exactly what I'm looking for! But it seems that doing it > dynamically in the GHC API doesn't work (as in my first email where I tried > to adjust pprCols via setSessionDynFlags). > > > > I'm going to look into the source as what ppr-cols=N actually sets and > probably file a bug - because this seems like buggy behaviour... > > > > Andrew > > > > On Tue, Jan 7, 2014 at 4:14 AM, Simon Peyton Jones > wrote: > > -dppr-cols=N changes the width of the output page; you could try a large > number there. There isn?t a setting meaning ?infinity?, sadly. > > > > Simon > > > > *From:* Andrew Gibiansky [mailto:andrew.gibiansky at gmail.com] > *Sent:* 07 January 2014 03:04 > *To:* Simon Peyton Jones > *Cc:* Erik de Castro Lopo; ghc-devs at haskell.org > > > *Subject:* Re: Changing GHC Error Message Wrapping > > > > Thanks Simon. > > > > In general I think multiline tuples should have many elements per line, > but honestly the tuple case was a very specific example. If possible, I'd > like to change the *overall* wrapping for *all* error messages - how does > `sep` know when to break lines? there's clearly a numeric value for the > number of columns somewhere, but where is it, and is it user-adjustable? > > > > For now I am just hacking around this by special-casing some error > messages and "un-doing" the line wrapping by parsing the messages and > joining lines back together. 
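As a rough illustration of the sep/fsep distinction Simon mentions earlier in this thread: GHC's internal Pretty module mirrors the standalone pretty package's Text.PrettyPrint, so the behaviour (though not GHC's actual code path) can be seen with a few lines against that library. The type names below are made up for the example:

    import Text.PrettyPrint

    tys :: [Doc]
    tys = punctuate comma
            (map text ["GHC.Types.Int", "GHC.Types.Int", "t0", "t10", "t20", "t30"])

    main :: IO ()
    main = do
      let pp d = putStrLn (renderStyle style { lineLength = 40 } d)
      pp (parens (sep  tys))  -- doesn't fit on one line, so one element per line
      pp (parens (fsep tys))  -- paragraph-fill: as many elements per line as fit

At a 40-column width, sep falls back to one element per line (the tall error messages shown in this thread), while fsep packs each line before breaking.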
> > > > Thanks, > > Andrew > > > > On Mon, Jan 6, 2014 at 7:44 AM, Simon Peyton-Jones > wrote: > > I think it?s line 705 in types/TypeRep.lhs > > > > pprTcApp p pp tc tys > > | isTupleTyCon tc && tyConArity tc == length tys > > = pprPromotionQuote tc <> > > tupleParens (tupleTyConSort tc) (sep (punctuate comma (map (pp > TopPrec) tys))) > > > > If you change ?sep? to ?fsep?, you?ll get behaviour more akin to > paragraph-filling (hence the ?f?). Give it a try. You?ll get validation > failure from the testsuite, but you can see whether you think the result is > better or worse. In general, should multi-line tuples be printed with many > elements per line, or just one? > > > > Simon > > > > *From:* ghc-devs [mailto:ghc-devs-bounces at haskell.org] *On Behalf Of *Andrew > Gibiansky > *Sent:* 04 January 2014 17:30 > *To:* Erik de Castro Lopo > *Cc:* ghc-devs at haskell.org > *Subject:* Re: Changing GHC Error Message Wrapping > > > > Apologize for the broken image formatting. > > > > With the code I posted above, I get the following output: > > > > Couldn't match expected type `(GHC.Types.Int, > > GHC.Types.Int, > > GHC.Types.Int, > > t0, > > t10, > > t20, > > t30, > > t40, > > t50, > > t60, > > t70, > > t80, > > t90)' > > with actual type `(t1, t2, t3)' > > > > I would like the types to be on the same line, or at least wrapped to a > larger number of columns. > > > > Does anyone know how to do this, or where in the GHC source this wrapping > is done? > > > > Thanks! > > Andrew > > > > On Sat, Jan 4, 2014 at 2:55 AM, Erik de Castro Lopo > wrote: > > Carter Schonwald wrote: > > > hey andrew, your image link isn't working (i'm using gmail) > > I think the list software filters out image attachments. > > Erik > -- > ---------------------------------------------------------------------- > Erik de Castro Lopo > http://www.mega-nerd.com/ > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Fri Jan 10 15:22:49 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 10 Jan 2014 15:22:49 +0000 Subject: High-level Cmm code and stack allocation In-Reply-To: <52CE6040.30705@gmail.com> References: <87fvp3coqr.fsf@gnu.org> <52CC25A4.8060004@gmail.com> <59543203684B2244980D7E4057D5FBC148708D03@DB3EX14MBXC306.europe.corp.microsoft.com> <52CD19AD.7030503@gmail.com> <59543203684B2244980D7E4057D5FBC148709591@DB3EX14MBXC306.europe.corp.microsoft.com> <52CE6040.30705@gmail.com> Message-ID: <59543203684B2244980D7E4057D5FBC14870E940@DB3EX14MBXC306.europe.corp.microsoft.com> That documentation would be good, yes! I don't know what it means to say "we don't really have a general concept of areas any more". We did before, and I didn't know that it had gone away. Urk! We can have lots of live areas, notably the old area (for the current call/return parameters, the call area for a call we are preparing, and the one-slot areas for variables we are saving on the stack. Here's he current story https://ghc.haskell.org/trac/ghc/wiki/Commentary/Compiler/StackAreas I agree that we have no concrete syntax for talking about areas, but that is something we could fix. But I'm worried that they may not mean what they used to mean. 
Simon | -----Original Message----- | From: Simon Marlow [mailto:marlowsd at gmail.com] | Sent: 09 January 2014 08:39 | To: Simon Peyton Jones; Herbert Valerio Riedel | Cc: ghc-devs at haskell.org | Subject: Re: High-level Cmm code and stack allocation | | On 08/01/2014 10:07, Simon Peyton Jones wrote: | > | > Can't we just allocate a Cmm "area"? The address of an area is a | > | perfectly well-defined Cmm value. | > | > What about this idea? | | We don't really have a general concept of areas (any more), and areas | aren't exposed in the concrete Cmm syntax at all. The current semantics | is that areas may overlap with each other, so there should only be one | active area at any point. I found that this was important to ensure | that we could generate good code from the stack layout algorithm, | otherwise it had to make pessimistic assumptions and use too much stack. | | You're going to ask me where this is documented, and I think I have to | admit to slacking off, sorry :-) We did discuss it at the time, and I | made copious notes, but I didn't transfer those to the code. I'll add a | Note. | | Cheers, | Simon | | | > Simon | > | > | -----Original Message----- | > | From: Simon Marlow [mailto:marlowsd at gmail.com] | > | Sent: 08 January 2014 09:26 | > | To: Simon Peyton Jones; Herbert Valerio Riedel | > | Cc: ghc-devs at haskell.org | > | Subject: Re: High-level Cmm code and stack allocation | > | | > | On 07/01/14 22:53, Simon Peyton Jones wrote: | > | > | Yes, this is technically wrong but luckily works. I'd very much | > | > | like to have a better solution, preferably one that doesn't add | > | > | any extra overhead. | > | > | > | > | __decodeFloat_Int is a C function, so it will not touch the | > | > | Haskell stack. | > | > | > | > This all seems terribly fragile to me. At least it ought to be | > | surrounded with massive comments pointing out how terribly fragile | > | it is, breaking all the rules that we carefully document elsewhere. | > | > | > | > Can't we just allocate a Cmm "area"? The address of an area is a | > | perfectly well-defined Cmm value. | > | | > | It is fragile, yes. We can't use static memory because it needs to | > | be thread-local. This particular hack has gone through several | > | iterations over the years: first we had static memory, which broke | > | when we did the parallel runtime, then we had special storage in the | > | Capability, which we gave up when GMP was split out into a separate | > | library, because it didn't seem right to have magic fields in the | > | Capability for one library. | > | | > | I'm looking into whether we can do temporary allocation on the heap | > | for this instead. | > | | > | Cheers, | > | Simon | > | | > | | > | > Simon | > | > | > | > | -----Original Message----- | > | > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf | > | > | Of Simon Marlow | > | > | Sent: 07 January 2014 16:05 | > | > | To: Herbert Valerio Riedel; ghc-devs at haskell.org | > | > | Subject: Re: High-level Cmm code and stack allocation | > | > | | > | > | On 04/01/2014 23:26, Herbert Valerio Riedel wrote: | > | > | > Hello, | > | > | > | > | > | > According to Note [Syntax of .cmm files], | > | > | > | > | > | > | There are two ways to write .cmm code: | > | > | > | | > | > | > | (1) High-level Cmm code delegates the stack handling to | > | > | > | GHC, | > | and | > | > | > | never explicitly mentions Sp or registers. 
| > | > | > | | > | > | > | (2) Low-level Cmm manages the stack itself, and must know | about | > | > | > | calling conventions. | > | > | > | | > | > | > | Whether you want high-level or low-level Cmm is indicated by | > | > | > | the presence of an argument list on a procedure. | > | > | > | > | > | > However, while working on integer-gmp I've been noticing in | > | > | > integer-gmp/cbits/gmp-wrappers.cmm that even though all Cmm | > | > | procedures | > | > | > have been converted to high-level Cmm, they still reference | > | > | > the | > | 'Sp' | > | > | > register, e.g. | > | > | > | > | > | > | > | > | > #define GMP_TAKE1_RET1(name,mp_fun) \ | > | > | > name (W_ ws1, P_ d1) \ | > | > | > { \ | > | > | > W_ mp_tmp1; \ | > | > | > W_ mp_result1; \ | > | > | > \ | > | > | > again: \ | > | > | > STK_CHK_GEN_N (2 * SIZEOF_MP_INT); \ | > | > | > MAYBE_GC(again); \ | > | > | > \ | > | > | > mp_tmp1 = Sp - 1 * SIZEOF_MP_INT; \ | > | > | > mp_result1 = Sp - 2 * SIZEOF_MP_INT; \ | > | > | > ... \ | > | > | > | > | > | > | > | > | > So is this valid high-level Cmm code? What's the proper way to | > | > | allocate | > | > | > Stack (and/or Heap) memory from high-level Cmm code? | > | > | | > | > | Yes, this is technically wrong but luckily works. I'd very much | > | > | like to have a better solution, preferably one that doesn't add | > | > | any extra overhead. | > | > | | > | > | The problem here is that we need to allocate a couple of | > | > | temporary words and take their address; that's an unusual thing | > | > | to do in Cmm, so it only occurs in a few places (mainly | interacting with gmp). | > | > | Usually if you want some temporary storage you can use local | > | > | variables or some heap-allocated memory. | > | > | | > | > | Cheers, | > | > | Simon | > | > | _______________________________________________ | > | > | ghc-devs mailing list | > | > | ghc-devs at haskell.org | > | > | http://www.haskell.org/mailman/listinfo/ghc-devs | > | > | > From marlowsd at gmail.com Fri Jan 10 16:23:33 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Fri, 10 Jan 2014 16:23:33 +0000 Subject: High-level Cmm code and stack allocation In-Reply-To: <59543203684B2244980D7E4057D5FBC14870E940@DB3EX14MBXC306.europe.corp.microsoft.com> References: <87fvp3coqr.fsf@gnu.org> <52CC25A4.8060004@gmail.com> <59543203684B2244980D7E4057D5FBC148708D03@DB3EX14MBXC306.europe.corp.microsoft.com> <52CD19AD.7030503@gmail.com> <59543203684B2244980D7E4057D5FBC148709591@DB3EX14MBXC306.europe.corp.microsoft.com> <52CE6040.30705@gmail.com> <59543203684B2244980D7E4057D5FBC14870E940@DB3EX14MBXC306.europe.corp.microsoft.com> Message-ID: <52D01E85.2010900@gmail.com> There are no one-slot areas any more, I ditched those when I rewrote the stack allocator. There is only ever one live area: either the old area or the young area for a call we are about to make or have just made. (see the data type: I removed the one-slot areas) I struggled for a long time with this. The problem is that with the semantics of non-overlapping areas, code motion optimisations would tend to increase the stack requirements of the function by overlapping the live ranges of the areas. I concluded that actually what we wanted was areas that really do overlap, and optimisations that respect that, so that we get more efficient stack usage. Cheers, Simon On 10/01/2014 15:22, Simon Peyton Jones wrote: > That documentation would be good, yes! I don't know what it means to say "we don't really have a general concept of areas any more". 
We did before, and I didn't know that it had gone away. Urk! We can have lots of live areas, notably the old area (for the current call/return parameters, the call area for a call we are preparing, and the one-slot areas for variables we are saving on the stack. > > Here's he current story https://ghc.haskell.org/trac/ghc/wiki/Commentary/Compiler/StackAreas > > I agree that we have no concrete syntax for talking about areas, but that is something we could fix. But I'm worried that they may not mean what they used to mean. > > Simon > > | -----Original Message----- > | From: Simon Marlow [mailto:marlowsd at gmail.com] > | Sent: 09 January 2014 08:39 > | To: Simon Peyton Jones; Herbert Valerio Riedel > | Cc: ghc-devs at haskell.org > | Subject: Re: High-level Cmm code and stack allocation > | > | On 08/01/2014 10:07, Simon Peyton Jones wrote: > | > | > Can't we just allocate a Cmm "area"? The address of an area is a > | > | perfectly well-defined Cmm value. > | > > | > What about this idea? > | > | We don't really have a general concept of areas (any more), and areas > | aren't exposed in the concrete Cmm syntax at all. The current semantics > | is that areas may overlap with each other, so there should only be one > | active area at any point. I found that this was important to ensure > | that we could generate good code from the stack layout algorithm, > | otherwise it had to make pessimistic assumptions and use too much stack. > | > | You're going to ask me where this is documented, and I think I have to > | admit to slacking off, sorry :-) We did discuss it at the time, and I > | made copious notes, but I didn't transfer those to the code. I'll add a > | Note. > | > | Cheers, > | Simon > | > | > | > Simon > | > > | > | -----Original Message----- > | > | From: Simon Marlow [mailto:marlowsd at gmail.com] > | > | Sent: 08 January 2014 09:26 > | > | To: Simon Peyton Jones; Herbert Valerio Riedel > | > | Cc: ghc-devs at haskell.org > | > | Subject: Re: High-level Cmm code and stack allocation > | > | > | > | On 07/01/14 22:53, Simon Peyton Jones wrote: > | > | > | Yes, this is technically wrong but luckily works. I'd very much > | > | > | like to have a better solution, preferably one that doesn't add > | > | > | any extra overhead. > | > | > > | > | > | __decodeFloat_Int is a C function, so it will not touch the > | > | > | Haskell stack. > | > | > > | > | > This all seems terribly fragile to me. At least it ought to be > | > | surrounded with massive comments pointing out how terribly fragile > | > | it is, breaking all the rules that we carefully document elsewhere. > | > | > > | > | > Can't we just allocate a Cmm "area"? The address of an area is a > | > | perfectly well-defined Cmm value. > | > | > | > | It is fragile, yes. We can't use static memory because it needs to > | > | be thread-local. This particular hack has gone through several > | > | iterations over the years: first we had static memory, which broke > | > | when we did the parallel runtime, then we had special storage in the > | > | Capability, which we gave up when GMP was split out into a separate > | > | library, because it didn't seem right to have magic fields in the > | > | Capability for one library. > | > | > | > | I'm looking into whether we can do temporary allocation on the heap > | > | for this instead. 
> | > | > | > | Cheers, > | > | Simon > | > | > | > | > | > | > Simon > | > | > > | > | > | -----Original Message----- > | > | > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf > | > | > | Of Simon Marlow > | > | > | Sent: 07 January 2014 16:05 > | > | > | To: Herbert Valerio Riedel; ghc-devs at haskell.org > | > | > | Subject: Re: High-level Cmm code and stack allocation > | > | > | > | > | > | On 04/01/2014 23:26, Herbert Valerio Riedel wrote: > | > | > | > Hello, > | > | > | > > | > | > | > According to Note [Syntax of .cmm files], > | > | > | > > | > | > | > | There are two ways to write .cmm code: > | > | > | > | > | > | > | > | (1) High-level Cmm code delegates the stack handling to > | > | > | > | GHC, > | > | and > | > | > | > | never explicitly mentions Sp or registers. > | > | > | > | > | > | > | > | (2) Low-level Cmm manages the stack itself, and must know > | about > | > | > | > | calling conventions. > | > | > | > | > | > | > | > | Whether you want high-level or low-level Cmm is indicated by > | > | > | > | the presence of an argument list on a procedure. > | > | > | > > | > | > | > However, while working on integer-gmp I've been noticing in > | > | > | > integer-gmp/cbits/gmp-wrappers.cmm that even though all Cmm > | > | > | procedures > | > | > | > have been converted to high-level Cmm, they still reference > | > | > | > the > | > | 'Sp' > | > | > | > register, e.g. > | > | > | > > | > | > | > > | > | > | > #define GMP_TAKE1_RET1(name,mp_fun) \ > | > | > | > name (W_ ws1, P_ d1) \ > | > | > | > { \ > | > | > | > W_ mp_tmp1; \ > | > | > | > W_ mp_result1; \ > | > | > | > \ > | > | > | > again: \ > | > | > | > STK_CHK_GEN_N (2 * SIZEOF_MP_INT); \ > | > | > | > MAYBE_GC(again); \ > | > | > | > \ > | > | > | > mp_tmp1 = Sp - 1 * SIZEOF_MP_INT; \ > | > | > | > mp_result1 = Sp - 2 * SIZEOF_MP_INT; \ > | > | > | > ... \ > | > | > | > > | > | > | > > | > | > | > So is this valid high-level Cmm code? What's the proper way to > | > | > | allocate > | > | > | > Stack (and/or Heap) memory from high-level Cmm code? > | > | > | > | > | > | Yes, this is technically wrong but luckily works. I'd very much > | > | > | like to have a better solution, preferably one that doesn't add > | > | > | any extra overhead. > | > | > | > | > | > | The problem here is that we need to allocate a couple of > | > | > | temporary words and take their address; that's an unusual thing > | > | > | to do in Cmm, so it only occurs in a few places (mainly > | interacting with gmp). > | > | > | Usually if you want some temporary storage you can use local > | > | > | variables or some heap-allocated memory. 
> | > | > | > | > | > | Cheers, > | > | > | Simon > | > | > | _______________________________________________ > | > | > | ghc-devs mailing list > | > | > | ghc-devs at haskell.org > | > | > | http://www.haskell.org/mailman/listinfo/ghc-devs > | > | > > | > > From simonpj at microsoft.com Fri Jan 10 16:35:47 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 10 Jan 2014 16:35:47 +0000 Subject: High-level Cmm code and stack allocation In-Reply-To: <52D01E85.2010900@gmail.com> References: <87fvp3coqr.fsf@gnu.org> <52CC25A4.8060004@gmail.com> <59543203684B2244980D7E4057D5FBC148708D03@DB3EX14MBXC306.europe.corp.microsoft.com> <52CD19AD.7030503@gmail.com> <59543203684B2244980D7E4057D5FBC148709591@DB3EX14MBXC306.europe.corp.microsoft.com> <52CE6040.30705@gmail.com> <59543203684B2244980D7E4057D5FBC14870E940@DB3EX14MBXC306.europe.corp.microsoft.com> <52D01E85.2010900@gmail.com> Message-ID: <59543203684B2244980D7E4057D5FBC14870EA68@DB3EX14MBXC306.europe.corp.microsoft.com> Oh, ok. Alas, a good chunk of my model of Cmm has just gone out of the window. I thought that areas were such a lovely, well-behaved abstraction. I was thrilled when we came up with them, and I'm very sorry to see them go. There are no many things that I no longer understand. I now have no idea how we save live variables over a call, or how multiple returned values from one call (returned on the stack) stay right where they are if they are live across the next call. What was the actual problem? That functions used too much stack, so the stack was getting too big? But a one slot area corresponds exactly to a live variable, so I don't see how the area abstraction could possibly increase stack size. And is stack size a crucial issue anyway? Apart from anything else, areas would have given a lovely solution to the problem this thread started with! I guess we can talk about this when you next visit? But some documentation would be welcome. Simon | -----Original Message----- | From: Simon Marlow [mailto:marlowsd at gmail.com] | Sent: 10 January 2014 16:24 | To: Simon Peyton Jones; Herbert Valerio Riedel | Cc: ghc-devs at haskell.org | Subject: Re: High-level Cmm code and stack allocation | | There are no one-slot areas any more, I ditched those when I rewrote the | stack allocator. There is only ever one live area: either the old area | or the young area for a call we are about to make or have just made. | (see the data type: I removed the one-slot areas) | | I struggled for a long time with this. The problem is that with the | semantics of non-overlapping areas, code motion optimisations would tend | to increase the stack requirements of the function by overlapping the | live ranges of the areas. I concluded that actually what we wanted was | areas that really do overlap, and optimisations that respect that, so | that we get more efficient stack usage. | | Cheers, | Simon | | On 10/01/2014 15:22, Simon Peyton Jones wrote: | > That documentation would be good, yes! I don't know what it means to | say "we don't really have a general concept of areas any more". We did | before, and I didn't know that it had gone away. Urk! We can have lots | of live areas, notably the old area (for the current call/return | parameters, the call area for a call we are preparing, and the one-slot | areas for variables we are saving on the stack. 
| > | > Here's he current story | > https://ghc.haskell.org/trac/ghc/wiki/Commentary/Compiler/StackAreas | > | > I agree that we have no concrete syntax for talking about areas, but | that is something we could fix. But I'm worried that they may not mean | what they used to mean. | > | > Simon | > | > | -----Original Message----- | > | From: Simon Marlow [mailto:marlowsd at gmail.com] | > | Sent: 09 January 2014 08:39 | > | To: Simon Peyton Jones; Herbert Valerio Riedel | > | Cc: ghc-devs at haskell.org | > | Subject: Re: High-level Cmm code and stack allocation | > | | > | On 08/01/2014 10:07, Simon Peyton Jones wrote: | > | > | > Can't we just allocate a Cmm "area"? The address of an area is | > | > | > a | > | > | perfectly well-defined Cmm value. | > | > | > | > What about this idea? | > | | > | We don't really have a general concept of areas (any more), and | > | areas aren't exposed in the concrete Cmm syntax at all. The current | > | semantics is that areas may overlap with each other, so there should | > | only be one active area at any point. I found that this was | > | important to ensure that we could generate good code from the stack | > | layout algorithm, otherwise it had to make pessimistic assumptions | and use too much stack. | > | | > | You're going to ask me where this is documented, and I think I have | > | to admit to slacking off, sorry :-) We did discuss it at the time, | > | and I made copious notes, but I didn't transfer those to the code. | > | I'll add a Note. | > | | > | Cheers, | > | Simon | > | | > | | > | > Simon | > | > | > | > | -----Original Message----- | > | > | From: Simon Marlow [mailto:marlowsd at gmail.com] | > | > | Sent: 08 January 2014 09:26 | > | > | To: Simon Peyton Jones; Herbert Valerio Riedel | > | > | Cc: ghc-devs at haskell.org | > | > | Subject: Re: High-level Cmm code and stack allocation | > | > | | > | > | On 07/01/14 22:53, Simon Peyton Jones wrote: | > | > | > | Yes, this is technically wrong but luckily works. I'd very | > | > | > | much like to have a better solution, preferably one that | > | > | > | doesn't add any extra overhead. | > | > | > | > | > | > | __decodeFloat_Int is a C function, so it will not touch the | > | > | > | Haskell stack. | > | > | > | > | > | > This all seems terribly fragile to me. At least it ought to | > | > | > be | > | > | surrounded with massive comments pointing out how terribly | > | > | fragile it is, breaking all the rules that we carefully document | elsewhere. | > | > | > | > | > | > Can't we just allocate a Cmm "area"? The address of an area is | > | > | > a | > | > | perfectly well-defined Cmm value. | > | > | | > | > | It is fragile, yes. We can't use static memory because it needs | > | > | to be thread-local. This particular hack has gone through | > | > | several iterations over the years: first we had static memory, | > | > | which broke when we did the parallel runtime, then we had | > | > | special storage in the Capability, which we gave up when GMP was | > | > | split out into a separate library, because it didn't seem right | > | > | to have magic fields in the Capability for one library. | > | > | | > | > | I'm looking into whether we can do temporary allocation on the | > | > | heap for this instead. 
| > | > | | > | > | Cheers, | > | > | Simon | > | > | | > | > | | > | > | > Simon | > | > | > | > | > | > | -----Original Message----- | > | > | > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On | > | > | > | Behalf Of Simon Marlow | > | > | > | Sent: 07 January 2014 16:05 | > | > | > | To: Herbert Valerio Riedel; ghc-devs at haskell.org | > | > | > | Subject: Re: High-level Cmm code and stack allocation | > | > | > | | > | > | > | On 04/01/2014 23:26, Herbert Valerio Riedel wrote: | > | > | > | > Hello, | > | > | > | > | > | > | > | > According to Note [Syntax of .cmm files], | > | > | > | > | > | > | > | > | There are two ways to write .cmm code: | > | > | > | > | | > | > | > | > | (1) High-level Cmm code delegates the stack handling to | > | > | > | > | GHC, | > | > | and | > | > | > | > | never explicitly mentions Sp or registers. | > | > | > | > | | > | > | > | > | (2) Low-level Cmm manages the stack itself, and must | > | > | > | > | know | > | about | > | > | > | > | calling conventions. | > | > | > | > | | > | > | > | > | Whether you want high-level or low-level Cmm is | > | > | > | > | indicated by the presence of an argument list on a | procedure. | > | > | > | > | > | > | > | > However, while working on integer-gmp I've been noticing | > | > | > | > in integer-gmp/cbits/gmp-wrappers.cmm that even though all | > | > | > | > Cmm | > | > | > | procedures | > | > | > | > have been converted to high-level Cmm, they still | > | > | > | > reference the | > | > | 'Sp' | > | > | > | > register, e.g. | > | > | > | > | > | > | > | > | > | > | > | > #define GMP_TAKE1_RET1(name,mp_fun) \ | > | > | > | > name (W_ ws1, P_ d1) \ | > | > | > | > { \ | > | > | > | > W_ mp_tmp1; \ | > | > | > | > W_ mp_result1; \ | > | > | > | > \ | > | > | > | > again: \ | > | > | > | > STK_CHK_GEN_N (2 * SIZEOF_MP_INT); \ | > | > | > | > MAYBE_GC(again); \ | > | > | > | > \ | > | > | > | > mp_tmp1 = Sp - 1 * SIZEOF_MP_INT; \ | > | > | > | > mp_result1 = Sp - 2 * SIZEOF_MP_INT; \ | > | > | > | > ... \ | > | > | > | > | > | > | > | > | > | > | > | > So is this valid high-level Cmm code? What's the proper | > | > | > | > way to | > | > | > | allocate | > | > | > | > Stack (and/or Heap) memory from high-level Cmm code? | > | > | > | | > | > | > | Yes, this is technically wrong but luckily works. I'd very | > | > | > | much like to have a better solution, preferably one that | > | > | > | doesn't add any extra overhead. | > | > | > | | > | > | > | The problem here is that we need to allocate a couple of | > | > | > | temporary words and take their address; that's an unusual | > | > | > | thing to do in Cmm, so it only occurs in a few places | > | > | > | (mainly | > | interacting with gmp). | > | > | > | Usually if you want some temporary storage you can use local | > | > | > | variables or some heap-allocated memory. 
| > | > | > | | > | > | > | Cheers, | > | > | > | Simon | > | > | > | _______________________________________________ | > | > | > | ghc-devs mailing list | > | > | > | ghc-devs at haskell.org | > | > | > | http://www.haskell.org/mailman/listinfo/ghc-devs | > | > | > | > | > | > From marlowsd at gmail.com Fri Jan 10 17:00:29 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Fri, 10 Jan 2014 17:00:29 +0000 Subject: High-level Cmm code and stack allocation In-Reply-To: <59543203684B2244980D7E4057D5FBC14870EA68@DB3EX14MBXC306.europe.corp.microsoft.com> References: <87fvp3coqr.fsf@gnu.org> <52CC25A4.8060004@gmail.com> <59543203684B2244980D7E4057D5FBC148708D03@DB3EX14MBXC306.europe.corp.microsoft.com> <52CD19AD.7030503@gmail.com> <59543203684B2244980D7E4057D5FBC148709591@DB3EX14MBXC306.europe.corp.microsoft.com> <52CE6040.30705@gmail.com> <59543203684B2244980D7E4057D5FBC14870E940@DB3EX14MBXC306.europe.corp.microsoft.com> <52D01E85.2010900@gmail.com> <59543203684B2244980D7E4057D5FBC14870EA68@DB3EX14MBXC306.europe.corp.microsoft.com> Message-ID: <52D0272D.30909@gmail.com> So stack areas are still a great abstraction, the only change is that they now overlap. It's not just about stack getting too big, I've copied the notes I made about it below (which I will paste into the code in due course). The nice property that we can generate well-defined Cmm without knowing explicit stack offsets is intact. What is different is that there used to be an intermediate state where live variables were saved to abstract stack areas across calls, but Sp was still not manifest. This intermediate state doesn't exist any more, the stack layout algorithm does it all in one pass. To me this was far simpler, and I think it ended up being fewer lines of code than the old multi-phase stack layout algorithm (it's also much faster). Of course you can always change this. My goal was to get code that was at least as good as the old code generator and in a reasonable amount of time, and this was the shortest path I could find to that goal. Cheers, Simon e.g. if we had x = Sp[old + 8] y = Sp[old + 16] Sp[young(L) + 8] = L Sp[young(L) + 16] = y Sp[young(L) + 24] = x call f() returns to L if areas semantically do not overlap, then we might optimise this to Sp[young(L) + 8] = L Sp[young(L) + 16] = Sp[old + 8] Sp[young(L) + 24] = Sp[old + 16] call f() returns to L and now young(L) cannot be allocated at the same place as old, and we are doomed to use more stack. - old+8 conflicts with young(L)+8 - old+16 conflicts with young(L)+16 and young(L)+8 so young(L)+8 == old+24 and we get Sp[-8] = L Sp[-16] = Sp[8] Sp[-24] = Sp[0] Sp -= 24 call f() returns to L However, if areas are defined to be "possibly overlapping" in the semantics, then we cannot commute any loads/stores of old with young(L), and we will be able to re-use both old+8 and old+16 for young(L). x = Sp[8] y = Sp[0] Sp[8] = L Sp[0] = y Sp[-8] = x Sp = Sp - 8 call f() returns to L Now, the assignments of y go away, x = Sp[8] Sp[8] = L Sp[-8] = x Sp = Sp - 8 call f() returns to L Conclusion: - T[old+N] aliases with U[young(L)+M] for all T, U, L, N and M - T[old+N] aliases with U[old+M] only if the areas actually overlap this ensures that we will not commute any accesses to old with young(L) or young(L) with young(L'), and the stack allocator will get the maximum opportunity to overlap these areas, keeping the stack use to a minimum and possibly avoiding some assignments. On 10/01/2014 16:35, Simon Peyton Jones wrote: > Oh, ok. 
Alas, a good chunk of my model of Cmm has just gone out of the window. I thought that areas were such a lovely, well-behaved abstraction. I was thrilled when we came up with them, and I'm very sorry to see them go. > > There are no many things that I no longer understand. I now have no idea how we save live variables over a call, or how multiple returned values from one call (returned on the stack) stay right where they are if they are live across the next call. > > What was the actual problem? That functions used too much stack, so the stack was getting too big? But a one slot area corresponds exactly to a live variable, so I don't see how the area abstraction could possibly increase stack size. And is stack size a crucial issue anyway? > > Apart from anything else, areas would have given a lovely solution to the problem this thread started with! > > I guess we can talk about this when you next visit? But some documentation would be welcome. > > Simon > > | -----Original Message----- > | From: Simon Marlow [mailto:marlowsd at gmail.com] > | Sent: 10 January 2014 16:24 > | To: Simon Peyton Jones; Herbert Valerio Riedel > | Cc: ghc-devs at haskell.org > | Subject: Re: High-level Cmm code and stack allocation > | > | There are no one-slot areas any more, I ditched those when I rewrote the > | stack allocator. There is only ever one live area: either the old area > | or the young area for a call we are about to make or have just made. > | (see the data type: I removed the one-slot areas) > | > | I struggled for a long time with this. The problem is that with the > | semantics of non-overlapping areas, code motion optimisations would tend > | to increase the stack requirements of the function by overlapping the > | live ranges of the areas. I concluded that actually what we wanted was > | areas that really do overlap, and optimisations that respect that, so > | that we get more efficient stack usage. > | > | Cheers, > | Simon > | > | On 10/01/2014 15:22, Simon Peyton Jones wrote: > | > That documentation would be good, yes! I don't know what it means to > | say "we don't really have a general concept of areas any more". We did > | before, and I didn't know that it had gone away. Urk! We can have lots > | of live areas, notably the old area (for the current call/return > | parameters, the call area for a call we are preparing, and the one-slot > | areas for variables we are saving on the stack. > | > > | > Here's he current story > | > https://ghc.haskell.org/trac/ghc/wiki/Commentary/Compiler/StackAreas > | > > | > I agree that we have no concrete syntax for talking about areas, but > | that is something we could fix. But I'm worried that they may not mean > | what they used to mean. > | > > | > Simon > | > > | > | -----Original Message----- > | > | From: Simon Marlow [mailto:marlowsd at gmail.com] > | > | Sent: 09 January 2014 08:39 > | > | To: Simon Peyton Jones; Herbert Valerio Riedel > | > | Cc: ghc-devs at haskell.org > | > | Subject: Re: High-level Cmm code and stack allocation > | > | > | > | On 08/01/2014 10:07, Simon Peyton Jones wrote: > | > | > | > Can't we just allocate a Cmm "area"? The address of an area is > | > | > | > a > | > | > | perfectly well-defined Cmm value. > | > | > > | > | > What about this idea? > | > | > | > | We don't really have a general concept of areas (any more), and > | > | areas aren't exposed in the concrete Cmm syntax at all. 
The current > | > | semantics is that areas may overlap with each other, so there should > | > | only be one active area at any point. I found that this was > | > | important to ensure that we could generate good code from the stack > | > | layout algorithm, otherwise it had to make pessimistic assumptions > | and use too much stack. > | > | > | > | You're going to ask me where this is documented, and I think I have > | > | to admit to slacking off, sorry :-) We did discuss it at the time, > | > | and I made copious notes, but I didn't transfer those to the code. > | > | I'll add a Note. > | > | > | > | Cheers, > | > | Simon > | > | > | > | > | > | > Simon > | > | > > | > | > | -----Original Message----- > | > | > | From: Simon Marlow [mailto:marlowsd at gmail.com] > | > | > | Sent: 08 January 2014 09:26 > | > | > | To: Simon Peyton Jones; Herbert Valerio Riedel > | > | > | Cc: ghc-devs at haskell.org > | > | > | Subject: Re: High-level Cmm code and stack allocation > | > | > | > | > | > | On 07/01/14 22:53, Simon Peyton Jones wrote: > | > | > | > | Yes, this is technically wrong but luckily works. I'd very > | > | > | > | much like to have a better solution, preferably one that > | > | > | > | doesn't add any extra overhead. > | > | > | > > | > | > | > | __decodeFloat_Int is a C function, so it will not touch the > | > | > | > | Haskell stack. > | > | > | > > | > | > | > This all seems terribly fragile to me. At least it ought to > | > | > | > be > | > | > | surrounded with massive comments pointing out how terribly > | > | > | fragile it is, breaking all the rules that we carefully document > | elsewhere. > | > | > | > > | > | > | > Can't we just allocate a Cmm "area"? The address of an area is > | > | > | > a > | > | > | perfectly well-defined Cmm value. > | > | > | > | > | > | It is fragile, yes. We can't use static memory because it needs > | > | > | to be thread-local. This particular hack has gone through > | > | > | several iterations over the years: first we had static memory, > | > | > | which broke when we did the parallel runtime, then we had > | > | > | special storage in the Capability, which we gave up when GMP was > | > | > | split out into a separate library, because it didn't seem right > | > | > | to have magic fields in the Capability for one library. > | > | > | > | > | > | I'm looking into whether we can do temporary allocation on the > | > | > | heap for this instead. > | > | > | > | > | > | Cheers, > | > | > | Simon > | > | > | > | > | > | > | > | > | > Simon > | > | > | > > | > | > | > | -----Original Message----- > | > | > | > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On > | > | > | > | Behalf Of Simon Marlow > | > | > | > | Sent: 07 January 2014 16:05 > | > | > | > | To: Herbert Valerio Riedel; ghc-devs at haskell.org > | > | > | > | Subject: Re: High-level Cmm code and stack allocation > | > | > | > | > | > | > | > | On 04/01/2014 23:26, Herbert Valerio Riedel wrote: > | > | > | > | > Hello, > | > | > | > | > > | > | > | > | > According to Note [Syntax of .cmm files], > | > | > | > | > > | > | > | > | > | There are two ways to write .cmm code: > | > | > | > | > | > | > | > | > | > | (1) High-level Cmm code delegates the stack handling to > | > | > | > | > | GHC, > | > | > | and > | > | > | > | > | never explicitly mentions Sp or registers. > | > | > | > | > | > | > | > | > | > | (2) Low-level Cmm manages the stack itself, and must > | > | > | > | > | know > | > | about > | > | > | > | > | calling conventions. 
> | > | > | > | > | > | > | > | > | > | Whether you want high-level or low-level Cmm is > | > | > | > | > | indicated by the presence of an argument list on a > | procedure. > | > | > | > | > > | > | > | > | > However, while working on integer-gmp I've been noticing > | > | > | > | > in integer-gmp/cbits/gmp-wrappers.cmm that even though all > | > | > | > | > Cmm > | > | > | > | procedures > | > | > | > | > have been converted to high-level Cmm, they still > | > | > | > | > reference the > | > | > | 'Sp' > | > | > | > | > register, e.g. > | > | > | > | > > | > | > | > | > > | > | > | > | > #define GMP_TAKE1_RET1(name,mp_fun) \ > | > | > | > | > name (W_ ws1, P_ d1) \ > | > | > | > | > { \ > | > | > | > | > W_ mp_tmp1; \ > | > | > | > | > W_ mp_result1; \ > | > | > | > | > \ > | > | > | > | > again: \ > | > | > | > | > STK_CHK_GEN_N (2 * SIZEOF_MP_INT); \ > | > | > | > | > MAYBE_GC(again); \ > | > | > | > | > \ > | > | > | > | > mp_tmp1 = Sp - 1 * SIZEOF_MP_INT; \ > | > | > | > | > mp_result1 = Sp - 2 * SIZEOF_MP_INT; \ > | > | > | > | > ... \ > | > | > | > | > > | > | > | > | > > | > | > | > | > So is this valid high-level Cmm code? What's the proper > | > | > | > | > way to > | > | > | > | allocate > | > | > | > | > Stack (and/or Heap) memory from high-level Cmm code? > | > | > | > | > | > | > | > | Yes, this is technically wrong but luckily works. I'd very > | > | > | > | much like to have a better solution, preferably one that > | > | > | > | doesn't add any extra overhead. > | > | > | > | > | > | > | > | The problem here is that we need to allocate a couple of > | > | > | > | temporary words and take their address; that's an unusual > | > | > | > | thing to do in Cmm, so it only occurs in a few places > | > | > | > | (mainly > | > | interacting with gmp). > | > | > | > | Usually if you want some temporary storage you can use local > | > | > | > | variables or some heap-allocated memory. > | > | > | > | > | > | > | > | Cheers, > | > | > | > | Simon > | > | > | > | _______________________________________________ > | > | > | > | ghc-devs mailing list > | > | > | > | ghc-devs at haskell.org > | > | > | > | http://www.haskell.org/mailman/listinfo/ghc-devs > | > | > | > > | > | > > | > > From fuuzetsu at fuuzetsu.co.uk Fri Jan 10 22:06:52 2014 From: fuuzetsu at fuuzetsu.co.uk (Mateusz Kowalczyk) Date: Fri, 10 Jan 2014 22:06:52 +0000 Subject: Validating with Haddock In-Reply-To: <52CFC4F8.60000@fuuzetsu.co.uk> References: <52BF0209.6020000@fuuzetsu.co.uk> <52CFC4F8.60000@fuuzetsu.co.uk> Message-ID: <52D06EFC.9030604@fuuzetsu.co.uk> On 10/01/14 10:01, Mateusz Kowalczyk wrote: > Hi all, > > I have now merged in the new parser and new features onto a single > branch. I'm having some issues validating with HEAD at the moment > (#8661, unrelated problem) but while I get that sorted out, someone > might want to try validating with Haddock changes on their own platform. > > The full branch is at [1]. I have squashed the changes to what I feel is > the minimum number of commits until they completely stop making sense. > It should apply cleanly on top of current Haddock master branch. The > documentation is updated so you can read about what changed. Feel free > to ask any questions. > > I will post again once I can confirm that the branch validates for me > without any new test failures. > > Thanks for your patience. > > [1]: https://github.com/Fuuzetsu/haddock/tree/new-features > This is just a simple follow up to say that the changes don't seem to break anything new on 32-bit Linux. 
I provide my validate logs before[1] and after[2] Haddock changes. Here's a word of warning: previously, when the mark-up wasn't 100% clear, we'd get a parse error and no documentation for the whole package. The new parser no longer does this and instead does its best to parse and present everything. This means that any Haddock parse failures should be reported as bugs. As you can see in [1], there were some parse failures in the past (look for ?doc comment parse failed?) and they will now be rendered. This means the documentation might look bad in those places so it's probably worth while visiting those places and having a look. On an upside, at least we now have documentation for those packages. Validation was ran on commit 15a3de1288fe9d055f3dc92d554cb59b3528fa30 including #8661 fixes. Here's the relevant tail of the logs: > Unexpected results from: > TEST="lazy-bs-alloc T1969 T3064 T4801 T3294 T5498 haddock.Cabal haddock.compiler haddock.base" > > OVERALL SUMMARY for test run started at Fri Jan 10 12:45:39 2014 GMT > 0:17:23 spent to go through > 3861 total tests, which gave rise to > 15072 test cases, of which > 11547 were skipped > > 28 had missing libraries > 3432 expected passes > 56 expected failures > > 0 caused framework failures > 0 unexpected passes > 9 unexpected failures > > Unexpected failures: > deriving/should_fail T5498 [stderr mismatch] (normal) > perf/compiler T1969 [stat too good] (normal) > perf/compiler T3064 [stat not good enough] (normal) > perf/compiler T3294 [stat not good enough] (normal) > perf/compiler T4801 [stat not good enough] (normal) > perf/haddock haddock.Cabal [stat not good enough] (normal) > perf/haddock haddock.base [stat not good enough] (normal) > perf/haddock haddock.compiler [stat not good enough] (normal) > perf/should_run lazy-bs-alloc [stat too good] (normal) > > gmake[2]: Leaving directory `/home/shana/programming/ghc/testsuite/tests' > gmake[1]: Leaving directory `/home/shana/programming/ghc/testsuite/tests' > == Start post-testsuite package check > Timestamp 2014-01-10 12:45:36.897842164 UTC for /home/shana/programming/ghc/bindisttest/install dir/lib/ghc-7.7.20140109/package.conf.d/package.cache > Timestamp 2014-01-10 12:45:36 UTC for /home/shana/programming/ghc/bindisttest/install dir/lib/ghc-7.7.20140109/package.conf.d (older than cache) > using cache: /home/shana/programming/ghc/bindisttest/install dir/lib/ghc-7.7.20140109/package.conf.d/package.cache > == End post-testsuite package check > ------------------------------------------------------------------- > Oops! Looks like you have some unexpected test results or framework failures. > Please fix them before pushing/sending patches. > ------------------------------------------------------------------- The failures are the same in both logs. Thanks! [1]: http://fuuzetsu.co.uk/misc/segfix [2]: http://fuuzetsu.co.uk/misc/segfixhaddock -- Mateusz K. From austin at well-typed.com Sat Jan 11 00:22:22 2014 From: austin at well-typed.com (Austin Seipp) Date: Fri, 10 Jan 2014 18:22:22 -0600 Subject: Folding ghc/testsuite repos *now*, 2nd attempt (was: Repository Reorganization Question) In-Reply-To: <87y52pzbta.fsf@gnu.org> References: <87y52pzbta.fsf@gnu.org> Message-ID: +1 from me as well. On Thu, Jan 9, 2014 at 4:31 AM, Herbert Valerio Riedel wrote: > Hello All, > > It seems to me, there were no major obstacles left unaddressed in the > previous discussion[1] (see summary below) to merging testsuite.git into > ghc.git. 
> > So here's one last attempt to get testsuite.git folded into ghc.git before > Austin branches off 7.8 > > Please speak up *now*, if you have any objections to folding > testsuite.git into ghc.git *soon* (with *soon* meaning upcoming Sunday, > 12th Jan 2014) > > ---- > > A summary of the previous thread so far: > > - Let's fold testsuite into ghc before branching off 7.8RC > - ghc/testsuite have the most coupled commits > - make's it a bit easier to cherry pick ghc/testsuite between branches > - while being low-risk, will provide empiric value for deciding how > to proceed with folding in other Git repos > > - Proof of concept in > http://git.haskell.org/ghc.git/shortlog/refs/heads/wip/T8545 > > - general support for it; consensus that it will be beneficial and > shouldn't be a huge disruption > > - sync-all is adapted to abort operation if `testsuite/.git` is > detected, and advising the user to remove (or move-out-of-the-way) > > - Concern about broken commit-refs in Trac and other places: > > - old testsuite.git repo will remain available (more or less) > read-only; so old commit-shas will still be resolvable > > - (old) Trac commit-links which work currently will continue to > work, as they refer specifically to the testsuite.git repo, and > Trac will know they point to the old testsuite.git > > - If one doesn't know which Git repo a commit-id is in, there's > still the SHA1 look-up service at http://git.haskell.org/ which > will search all repos hosted at git.haskell.org for a commit > SHA1 prefix. Or alternatively, just ask google about the SHA1. > > - Binary blobs (a few compiled executables) that were committed by > accident and removed right away again are removed from history to > avoid carrying around useless garbage in the Git history (saves > ~20MiB) > > - Path names are rewritten to be based in testsuite/, in order to > make it easier for Git operations (git log et al.) to follow > history for folders/filenames > > - Old Commit-ids will *not* be written into the rewritten commits' > messages in order not to add noise (old commit ids can be resolved > via the remaining old testsuite.git repo) > > > > [1] http://permalink.gmane.org/gmane.comp.lang.haskell.ghc.devel/3099 > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From benl at ouroborus.net Sun Jan 12 04:29:14 2014 From: benl at ouroborus.net (Ben Lippmeier) Date: Sun, 12 Jan 2014 15:29:14 +1100 Subject: panic when compiling SHA In-Reply-To: <52CD2A7B.2000206@gmail.com> References: <52C7DB7E.1030408@gmail.com> <20140104.212236.2151539280544564973.kazu@iij.ad.jp> <20140106.120834.989663188831409811.kazu@iij.ad.jp> <1E4F1419-8C89-4E2A-B0A4-542324AA15BC@galois.com> <52CD2A7B.2000206@gmail.com> Message-ID: <1CF134F1-8AA0-4803-AF57-B4D7AD7DDF13@ouroborus.net> On 08/01/2014, at 21:37 , Simon Marlow wrote: > > Ben is right that avoiding -fregs-graph doesn't really fix the problem, because we'll probably get crappy code for SHA-1 now. But someone needs to work on -fregs-graph. I wouldn't be offended if you just deleted the code from the repo. Now that the LLVM project is so well supported I don't see much point maintaining a second GHC-specific allocator. Ben. 
From benl at ouroborus.net Sun Jan 12 04:46:11 2014 From: benl at ouroborus.net (Ben Lippmeier) Date: Sun, 12 Jan 2014 15:46:11 +1100 Subject: panic when compiling SHA In-Reply-To: References: <20131227.100716.1812997308262292710.kazu@iij.ad.jp> <501EC3C7-E7EF-4485-879A-404FFFF22F55@ouroborus.net> <52C7DB7E.1030408@gmail.com> <20140104.212236.2151539280544564973.kazu@iij.ad.jp> <59543203684B2244980D7E4057D5FBC148707206@DB3EX14MBXC306.europe.corp.microsoft.com> <2E9BAE47-AE0B-4189-89BC-A01FF8DE499B@ouroborus.net> <52CD2B7B.2030501@gmail.com> Message-ID: <76396EC0-6484-4769-A313-3D13A6C2404F@ouroborus.net> On 10/01/2014, at 6:17 , Adam Wick wrote: > On Jan 8, 2014, at 2:42 AM, Simon Marlow wrote: >> Neither of the register allocators reuse spill slots for variables that have disjoint live ranges, so the fact that we ran out of spill slots is not necessarily indicative of terrible code (but I agree that it's a strong hint). Right, I'm starting to remember more now -- it was a while ago. There are some notes here under the section "SHA from darcs", which I assume is the same code. https://ghc.haskell.org/trac/ghc/wiki/Commentary/Compiler/Backends/NCG/RegisterAllocator The notes say the Cmm code had 30 or so live variables at a particular point, but live ranges of 1700 instructions. I remember I had to change the heuristic that chooses which register to spill, to select the one with the longest live range -- instead of using the standard one from Chaitin's algorithm. With the standard heuristic the allocator was taking too long to converge. > That's the problem with SHA, then. The implementation (and the spec, really) is essentially a long combination of the form: > > let x_n5 = small_computation x_n1 x_n2 x_n3 x_n4 > x_n6 = small_computation x_n2 x_n3 x_n4 x_n5 > ... > > Which has ~70 entries. The actual number of live variables alive at any time should be relatively small, but if slots aren't getting reused there's going to be some significant blowup. (To be honest, I had figured -- and thought I had validated -- that doing it this way would give the compiler the best chance at generating optimal code, but it appears I merely set myself up to hit this limitation several years later.) If you really end up with 70 copies of small_computation in the object code then that's not very friendly to the L1 instruction cache -- though perhaps it doesn't matter if the processor will be stalled reading the input data most of the time anyway.
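The "longest live range" spill heuristic described above amounts to roughly the sketch below. The names are invented for illustration and are not the data types used in GHC's graph or linear register allocators.

module SpillChoice where

import Data.List (maximumBy)
import Data.Ord  (comparing)

-- A virtual register together with the span of instructions over which
-- it is live (an invented representation, for illustration only).
data LiveRange = LiveRange
  { lrReg   :: Int   -- virtual register number
  , lrStart :: Int   -- first instruction at which it is live
  , lrEnd   :: Int   -- last instruction at which it is live
  }

rangeLength :: LiveRange -> Int
rangeLength lr = lrEnd lr - lrStart lr

-- Choose the spill candidate with the longest live range, rather than the
-- classic Chaitin-style spill-cost metric.
chooseSpill :: [LiveRange] -> Maybe LiveRange
chooseSpill [] = Nothing
chooseSpill rs = Just (maximumBy (comparing rangeLength) rs)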
There are many topics where a person > has a problem with GHC tree (can't validate/build, some tests are > failing), posts to GHC devs seeking help and never gets a reply. This is > very discouraging and often makes it outright impossible to contribute. > > An easy example is the failing tests one: unfortunately some tests are > known to fail, but they are only known to fail to existing GHC devs. A > new person tries to validate clean tree, gets test failures, asks for > help on GHC devs, doesn't get any, gives up. > > Is there any better way to get through than ghc-devs? Even myself I'd > love to get started but if I can't get help even getting the ?clean? > tree to a state where I'm confident it's not a problem with my machine, > how am I to write patches for anything? A more serious example is that > the work I did over summer on Haddock still hasn't been pushed in. Why? > Because neither Simon Hengel nor myself can ensure that we haven't > broken anything as neither of use gets a clean validate. I have in fact > asked for help recently with this but to no avail and I do know Simon > also sought help in the past to no avail. I have also tried to join the > development quite a few months in the past now but due to failing tests > on validate and lack of help, I had to give up on that. > > Please guys, try to increase responsiveness to posts on this list. It's > very easy to scroll down in your mail client and see just how many > threads never got a single reply. > > -- > Mateusz K. > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs -- Sincerely yours, -- Daniil From fuuzetsu at fuuzetsu.co.uk Sun Jan 12 10:29:51 2014 From: fuuzetsu at fuuzetsu.co.uk (Mateusz Kowalczyk) Date: Sun, 12 Jan 2014 10:29:51 +0000 Subject: Starting GHC development. In-Reply-To: References: <59543203684B2244980D7E4057D5FBC148704D05@DB3EX14MBXC306.europe.corp.microsoft.com> <52C704D5.4050606@fuuzetsu.co.uk> Message-ID: <52D26E9F.9010301@fuuzetsu.co.uk> On 12/01/14 10:25, Daniil Frumin wrote: > Does anyone actually know which tests are supposed to fail on 'validate'? > AFAIK the official stance is that you should see 0 failures. Unofficially it seems that there's leniency and the tree seems to be in a state with few tests failing consistently. It might be just my machine though, but whenever I post my build logs, there seems to be no sense of urgency to investigate so it does not seem like anyone cares or the issue is known/being worked on. Unfortunately, the side effect of this (and what put me off when I tried to write some stuff for GHC months ago) was that a new developer comes, tries to build clean tree and it fails. It's pretty discouraging. -- Mateusz K. From jan.stolarek at p.lodz.pl Sun Jan 12 11:27:59 2014 From: jan.stolarek at p.lodz.pl (Jan Stolarek) Date: Sun, 12 Jan 2014 12:27:59 +0100 Subject: Starting GHC development. In-Reply-To: References: <52C704D5.4050606@fuuzetsu.co.uk> Message-ID: <201401121227.59418.jan.stolarek@p.lodz.pl> When you see tests failing with validate I's suggest going to #ghc IRC channel and asking there. You're most likely to get an up-to-date answer. Basically, when you're working on some changes in GHC I'd suggest to run ./validate on the master branch to see what (if any) tests are failing. When validating your changes you'll have to see whether your modifications introduced any new failures or not (or maybe fixed the existing ones). 
In any case I strongly suggest running the failing tests (both on master branch and on yours) to make sure that they fail in the same way. Janek Dnia niedziela, 12 stycznia 2014, Daniil Frumin napisa?: > Does anyone actually know which tests are supposed to fail on 'validate'? > > On Fri, Jan 3, 2014 at 10:43 PM, Mateusz Kowalczyk > > wrote: > > On 03/01/14 13:27, Simon Peyton-Jones wrote: > >> [snip] > >> Thank you. We need lots of help! > >> [snip] > > > > While I hate to interrupt this thread, I think this is a good chance to > > mention something. > > > > I think the big issue for joining GHC development is the lack of > > communication on the mailing list. There are many topics where a person > > has a problem with GHC tree (can't validate/build, some tests are > > failing), posts to GHC devs seeking help and never gets a reply. This is > > very discouraging and often makes it outright impossible to contribute. > > > > An easy example is the failing tests one: unfortunately some tests are > > known to fail, but they are only known to fail to existing GHC devs. A > > new person tries to validate clean tree, gets test failures, asks for > > help on GHC devs, doesn't get any, gives up. > > > > Is there any better way to get through than ghc-devs? Even myself I'd > > love to get started but if I can't get help even getting the ?clean? > > tree to a state where I'm confident it's not a problem with my machine, > > how am I to write patches for anything? A more serious example is that > > the work I did over summer on Haddock still hasn't been pushed in. Why? > > Because neither Simon Hengel nor myself can ensure that we haven't > > broken anything as neither of use gets a clean validate. I have in fact > > asked for help recently with this but to no avail and I do know Simon > > also sought help in the past to no avail. I have also tried to join the > > development quite a few months in the past now but due to failing tests > > on validate and lack of help, I had to give up on that. > > > > Please guys, try to increase responsiveness to posts on this list. It's > > very easy to scroll down in your mail client and see just how many > > threads never got a single reply. > > > > -- > > Mateusz K. > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://www.haskell.org/mailman/listinfo/ghc-devs From hvr at gnu.org Sun Jan 12 12:32:30 2014 From: hvr at gnu.org (Herbert Valerio Riedel) Date: Sun, 12 Jan 2014 13:32:30 +0100 Subject: HEADS-UP: testsuite has been folded into ghc; back up your old testsuite/ folder! Message-ID: <87vbxp5qjl.fsf@gnu.org> Hello *, It's finally done; effective immediately, the testsuite/ folder is now tracked as part of ghc.git (for the master branch) *IMPORTANT* If there's any chance you have something importing lying around in your testsuite/ please move it out of the way preemptively (e.g. with 'mv testsuite/ testsuite-old/') *before* you perform a 'git pull' or 'sync-all pull' on ghc.git. Even though there are some safe-guards in place, those might not be 100% effective, so better safe than sorry. As only the `master` branch of testsuite.git has been merged, those of you who had wip/ branches or similar in testsuite.git need to handle those yourselves. Should you need assistance with bringing over those commits from testsuite.git into ghc.git (or if there's anything else that's now broken because of this change), please let me and/or Austin a line, so we can assist you. 
See also previous post "Folding ghc/testsuite repos *now*, 2nd attempt"[1] announcing this step. [1] http://www.haskell.org/pipermail/ghc-devs/2014-January/003730.html Greetings, hvr From austin at well-typed.com Sun Jan 12 20:52:14 2014 From: austin at well-typed.com (Austin Seipp) Date: Sun, 12 Jan 2014 14:52:14 -0600 Subject: Validating with Haddock In-Reply-To: <52D06EFC.9030604@fuuzetsu.co.uk> References: <52BF0209.6020000@fuuzetsu.co.uk> <52CFC4F8.60000@fuuzetsu.co.uk> <52D06EFC.9030604@fuuzetsu.co.uk> Message-ID: Hi Mateusz, I've pushed your work and tweaked the testsuite performance numbers on 64bit. The 32bit ones are out of date, but I'll fix them shortly. I also fixed some of the documentation errors. Thanks for all your hard work. On Fri, Jan 10, 2014 at 4:06 PM, Mateusz Kowalczyk wrote: > On 10/01/14 10:01, Mateusz Kowalczyk wrote: >> Hi all, >> >> I have now merged in the new parser and new features onto a single >> branch. I'm having some issues validating with HEAD at the moment >> (#8661, unrelated problem) but while I get that sorted out, someone >> might want to try validating with Haddock changes on their own platform. >> >> The full branch is at [1]. I have squashed the changes to what I feel is >> the minimum number of commits until they completely stop making sense. >> It should apply cleanly on top of current Haddock master branch. The >> documentation is updated so you can read about what changed. Feel free >> to ask any questions. >> >> I will post again once I can confirm that the branch validates for me >> without any new test failures. >> >> Thanks for your patience. >> >> [1]: https://github.com/Fuuzetsu/haddock/tree/new-features >> > > This is just a simple follow up to say that the changes don't seem to > break anything new on 32-bit Linux. I provide my validate logs before[1] > and after[2] Haddock changes. > > Here's a word of warning: previously, when the mark-up wasn't 100% > clear, we'd get a parse error and no documentation for the whole > package. The new parser no longer does this and instead does its best to > parse and present everything. This means that any Haddock parse failures > should be reported as bugs. As you can see in [1], there were some parse > failures in the past (look for ?doc comment parse failed?) and they will > now be rendered. This means the documentation might look bad in those > places so it's probably worth while visiting those places and having a > look. On an upside, at least we now have documentation for those packages. > > Validation was ran on commit 15a3de1288fe9d055f3dc92d554cb59b3528fa30 > including #8661 fixes. 
Here's the relevant tail of the logs: > >> Unexpected results from: >> TEST="lazy-bs-alloc T1969 T3064 T4801 T3294 T5498 haddock.Cabal haddock.compiler haddock.base" >> >> OVERALL SUMMARY for test run started at Fri Jan 10 12:45:39 2014 GMT >> 0:17:23 spent to go through >> 3861 total tests, which gave rise to >> 15072 test cases, of which >> 11547 were skipped >> >> 28 had missing libraries >> 3432 expected passes >> 56 expected failures >> >> 0 caused framework failures >> 0 unexpected passes >> 9 unexpected failures >> >> Unexpected failures: >> deriving/should_fail T5498 [stderr mismatch] (normal) >> perf/compiler T1969 [stat too good] (normal) >> perf/compiler T3064 [stat not good enough] (normal) >> perf/compiler T3294 [stat not good enough] (normal) >> perf/compiler T4801 [stat not good enough] (normal) >> perf/haddock haddock.Cabal [stat not good enough] (normal) >> perf/haddock haddock.base [stat not good enough] (normal) >> perf/haddock haddock.compiler [stat not good enough] (normal) >> perf/should_run lazy-bs-alloc [stat too good] (normal) >> >> gmake[2]: Leaving directory `/home/shana/programming/ghc/testsuite/tests' >> gmake[1]: Leaving directory `/home/shana/programming/ghc/testsuite/tests' >> == Start post-testsuite package check >> Timestamp 2014-01-10 12:45:36.897842164 UTC for /home/shana/programming/ghc/bindisttest/install dir/lib/ghc-7.7.20140109/package.conf.d/package.cache >> Timestamp 2014-01-10 12:45:36 UTC for /home/shana/programming/ghc/bindisttest/install dir/lib/ghc-7.7.20140109/package.conf.d (older than cache) >> using cache: /home/shana/programming/ghc/bindisttest/install dir/lib/ghc-7.7.20140109/package.conf.d/package.cache >> == End post-testsuite package check >> ------------------------------------------------------------------- >> Oops! Looks like you have some unexpected test results or framework failures. >> Please fix them before pushing/sending patches. >> ------------------------------------------------------------------- > > The failures are the same in both logs. > > Thanks! > > [1]: http://fuuzetsu.co.uk/misc/segfix > [2]: http://fuuzetsu.co.uk/misc/segfixhaddock > > -- > Mateusz K. > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From karel.gardas at centrum.cz Sun Jan 12 22:41:37 2014 From: karel.gardas at centrum.cz (Karel Gardas) Date: Sun, 12 Jan 2014 23:41:37 +0100 Subject: [PATCH 1/2] add handling of Solaris linker into SysTools Message-ID: <1389566497-23212-1-git-send-email-karel.gardas@centrum.cz> --- compiler/main/DynFlags.hs | 1 + compiler/main/SysTools.lhs | 9 +++++++++ 2 files changed, 10 insertions(+), 0 deletions(-) diff --git a/compiler/main/DynFlags.hs b/compiler/main/DynFlags.hs index 70d2a81..e253bae 100644 --- a/compiler/main/DynFlags.hs +++ b/compiler/main/DynFlags.hs @@ -3721,6 +3721,7 @@ data LinkerInfo = GnuLD [Option] | GnuGold [Option] | DarwinLD [Option] + | SolarisLD [Option] | UnknownLD deriving Eq diff --git a/compiler/main/SysTools.lhs b/compiler/main/SysTools.lhs index 46f8a86..0c86c18 100644 --- a/compiler/main/SysTools.lhs +++ b/compiler/main/SysTools.lhs @@ -638,6 +638,7 @@ neededLinkArgs :: LinkerInfo -> [Option] neededLinkArgs (GnuLD o) = o neededLinkArgs (GnuGold o) = o neededLinkArgs (DarwinLD o) = o +neededLinkArgs (SolarisLD o) = o neededLinkArgs UnknownLD = [] -- Grab linker info and cache it in DynFlags. 
@@ -676,6 +677,14 @@ getLinkerInfo' dflags = do -- Process the executable call info <- catchIO (do case os of + OSSolaris2 -> + -- Solaris uses its own Solaris linker. Even all + -- GNU C are receommended to configure with Solaris + -- linker instead of using GNU binutils linker. Also + -- all GCC distributed with Solaris follows this rule + -- precisely so we assume here, the Solaris linker is + -- used. + return $ SolarisLD [] OSDarwin -> -- Darwin has neither GNU Gold or GNU LD, but a strange linker -- that doesn't support --version. We can just assume that's -- 1.7.3.2 From karel.gardas at centrum.cz Sun Jan 12 22:41:54 2014 From: karel.gardas at centrum.cz (Karel Gardas) Date: Sun, 12 Jan 2014 23:41:54 +0100 Subject: [PATCH 2/2] fix binary linking errors on Solaris due to misplacing of -Wl, -u, option Message-ID: <1389566514-23247-1-git-send-email-karel.gardas@centrum.cz> --- compiler/main/DriverPipeline.hs | 11 ++++++++++- 1 files changed, 10 insertions(+), 1 deletions(-) diff --git a/compiler/main/DriverPipeline.hs b/compiler/main/DriverPipeline.hs index 337778e..1c593b6 100644 --- a/compiler/main/DriverPipeline.hs +++ b/compiler/main/DriverPipeline.hs @@ -1790,7 +1790,16 @@ linkBinary' staticLink dflags o_files dep_packages = do -- HS packages, because libtool doesn't accept other options. -- In the case of iOS these need to be added by hand to the -- final link in Xcode. - else package_hs_libs ++ extra_libs ++ other_flags + else other_flags ++ package_hs_libs ++ extra_libs -- -Wl,-u, contained in other_flags + -- needs to be put before -l, + -- otherwise Solaris linker fails linking + -- a binary with unresolved symbols in RTS + -- which are defined in base package + -- the reason for this is a note in ld(1) about + -- '-u' option: "The placement of this option + -- on the command line is significant. + -- This option must be placed before the library + -- that defines the symbol." pkg_framework_path_opts <- if platformUsesFrameworks platform -- 1.7.3.2 From krz.gogolewski at gmail.com Sun Jan 12 22:56:15 2014 From: krz.gogolewski at gmail.com (Krzysztof Gogolewski) Date: Sun, 12 Jan 2014 23:56:15 +0100 Subject: Enable TypeHoles by default? Message-ID: Hello, I propose to enable -XTypeHoles in GHC by default. Unlike other -X* flags, holes do not really change meaning of the program, they only change error messages. Instead of "_x not in scope", we effectively get "_x not in scope, its expected type is a -> a". You get it only if you precede the identifier not in scope with underscore, so in some sense you declare the intention of using holes. Two possible issues: (a) If you use -fdefer-type-errors, then a program might compile, while previously it did not. However, we should facilitate compiling with defer-type-errors, so I don't think this is a disadvantage. (b) The identifier _ becomes both a pattern and a hole by default, which might confuse new users. Reply: I have never seen anyone ask why code such as "Just _ -> _" does not work. IMO the productivity boost by having holes by default outweighs those two objections. I am open to hearing any other possible issues others might find. The change is trivial implementation-wise; add Opt_TypeHoles to the list in languageExtensions Nothing in DynFlags. -KG -------------- next part -------------- An HTML attachment was scrubbed... URL: From difrumin at gmail.com Sun Jan 12 23:40:16 2014 From: difrumin at gmail.com (Dan Frumin) Date: Mon, 13 Jan 2014 03:40:16 +0400 Subject: Enable TypeHoles by default? 
In-Reply-To: References: Message-ID: <662BFEA0-2D5F-431D-B0AF-D8970C9F0614@gmail.com> Hi! > On 13 Jan 2014, at 02:56, Krzysztof Gogolewski wrote: > > Hello, > > I propose to enable -XTypeHoles in GHC by default. > > Unlike other -X* flags, holes do not really change meaning of the program, they only change error messages. Instead of "_x not in scope", we effectively get "_x not in scope, its expected type is a -> a". You get it only if you precede the identifier not in scope with underscore, so in some sense you declare the intention of using holes. > > Two possible issues: > > (a) If you use -fdefer-type-errors, then a program might compile, while previously it did not. However, we should facilitate compiling with defer-type-errors, so I don't think this is a disadvantage. > > (b) The identifier _ becomes both a pattern and a hole by default, which might confuse new users. > Reply: I have never seen anyone ask why code such as "Just _ -> _" does not work. > I do think that having _ both as a pattern and a hole might be confusing, I can see that. However that's more of a syntax issue, than an issue about default extensions IMO > IMO the productivity boost by having holes by default outweighs those two objections. I am open to hearing any other possible issues others might find. > > The change is trivial implementation-wise; add Opt_TypeHoles to the list in languageExtensions Nothing in DynFlags. > > -KG > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs From benl at ouroborus.net Mon Jan 13 03:06:57 2014 From: benl at ouroborus.net (Ben Lippmeier) Date: Mon, 13 Jan 2014 14:06:57 +1100 Subject: panic when compiling SHA In-Reply-To: References: <20131227.100716.1812997308262292710.kazu@iij.ad.jp> <501EC3C7-E7EF-4485-879A-404FFFF22F55@ouroborus.net> <52C7DB7E.1030408@gmail.com> <20140104.212236.2151539280544564973.kazu@iij.ad.jp> <59543203684B2244980D7E4057D5FBC148707206@DB3EX14MBXC306.europe.corp.microsoft.com> <2E9BAE47-AE0B-4189-89BC-A01FF8DE499B@ouroborus.net> <52CD2B7B.2030501@gmail.com> Message-ID: On 10/01/2014, at 6:17 , Adam Wick wrote: > That?s the problem with SHA, then. The implementation (and the spec, really) is essentially a long combination of the form: > > let x_n5 = small_computation x_n1 x_n2 x_n3 x_n4 > x_n6 = small_computation x_n2 x_n3 x_n4 x_n5 > ? > > Which has ~70 entries. The actual number of live variables alive at any time should be relatively small, but if slots aren?t getting reused there?s going to be some significant blowup. (To be honest, I had figured ? and thought I had validated ? that doing it this way would give the compiler the best chance at generating optimal code, but it appears I merely set myself up to hit this limitation several years later.) If this [1] is the current version then I don't think there is any performance reason to manually unroll the loops like that. If you write a standard tail-recursive loop then the branch predictor in the processor should make the correct prediction for all iterations except the last one. You'll get one pipeline stall at the end due to a mis-predicted backward branch, but it won't matter in terms of absolute percentage of execution time. You generally only need to worry about branches if the branch flag flips between True and False frequently. 
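Concretely, the "standard tail-recursive loop" shape being suggested is something like the sketch below; the state type and round function are placeholders that only show the loop structure, not the real code from the SHA package. A real implementation would still need the per-round constants and the message-schedule expansion, but the loop itself stays this small.

{-# LANGUAGE BangPatterns #-}
module RoundsLoop where

import Data.Bits (rotateL)
import Data.Word (Word32)

-- Five-word working state, as in SHA-1.
data S = S !Word32 !Word32 !Word32 !Word32 !Word32

-- Placeholder round function: it mixes one schedule word into the state
-- and is NOT the real SHA-1 round.
step :: S -> Word32 -> S
step (S a b c d e) w = S (rotateL a 5 + e + w) a (rotateL b 30) c d

-- One tail-recursive loop over the message schedule instead of ~80
-- separate let-bindings; the backward branch is predicted correctly on
-- every iteration except the last.
processBlock :: S -> [Word32] -> S
processBlock = go
  where
    go !st []       = st
    go !st (w : ws) = go (step st w) ws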
If you care deeply about performance then on some processors it can be helpful to unfold this sort of code so that the SHA constants are represented as literals in the instruction stream instead of in static data memory -- but that ability is very processor specific and you'd need to really stare at the emitted assembly code to see if it's worthwhile. Ben. [1] https://github.com/GaloisInc/SHA/blob/master/Data/Digest/Pure/SHA.hs From carter.schonwald at gmail.com Mon Jan 13 03:25:41 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Sun, 12 Jan 2014 22:25:41 -0500 Subject: Enable TypeHoles by default? In-Reply-To: <662BFEA0-2D5F-431D-B0AF-D8970C9F0614@gmail.com> References: <662BFEA0-2D5F-431D-B0AF-D8970C9F0614@gmail.com> Message-ID: So would this *improve* error message quality for new users? Defaults that make it easier for haskellers old and new both are a tough balance to make! On Sun, Jan 12, 2014 at 6:40 PM, Dan Frumin wrote: > Hi! > > > On 13 Jan 2014, at 02:56, Krzysztof Gogolewski > wrote: > > > > Hello, > > > > I propose to enable -XTypeHoles in GHC by default. > > > > Unlike other -X* flags, holes do not really change meaning of the > program, they only change error messages. Instead of "_x not in scope", we > effectively get "_x not in scope, its expected type is a -> a". You get it > only if you precede the identifier not in scope with underscore, so in some > sense you declare the intention of using holes. > > > > Two possible issues: > > > > (a) If you use -fdefer-type-errors, then a program might compile, while > previously it did not. However, we should facilitate compiling with > defer-type-errors, so I don't think this is a disadvantage. > > > > (b) The identifier _ becomes both a pattern and a hole by default, which > might confuse new users. > > Reply: I have never seen anyone ask why code such as "Just _ -> _" does > not work. > > > > I do think that having _ both as a pattern and a hole might be confusing, > I can see that. However that's more of a syntax issue, than an issue about > default extensions IMO > > > IMO the productivity boost by having holes by default outweighs those > two objections. I am open to hearing any other possible issues others might > find. > > > > The change is trivial implementation-wise; add Opt_TypeHoles to the list > in languageExtensions Nothing in DynFlags. > > > > -KG > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://www.haskell.org/mailman/listinfo/ghc-devs > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Mon Jan 13 03:26:59 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Sun, 12 Jan 2014 22:26:59 -0500 Subject: panic when compiling SHA In-Reply-To: References: <20131227.100716.1812997308262292710.kazu@iij.ad.jp> <501EC3C7-E7EF-4485-879A-404FFFF22F55@ouroborus.net> <52C7DB7E.1030408@gmail.com> <20140104.212236.2151539280544564973.kazu@iij.ad.jp> <59543203684B2244980D7E4057D5FBC148707206@DB3EX14MBXC306.europe.corp.microsoft.com> <2E9BAE47-AE0B-4189-89BC-A01FF8DE499B@ouroborus.net> <52CD2B7B.2030501@gmail.com> Message-ID: agreed, this level of unrolling would cause problems even in most C compilers! 
When I write unrolled simd C code, I use a fixed number of variables that corresponds to the # of registers that are live on my target arch (yes, internally the turn into SSA which then does an ANF/CPS style and such), but by shadowing/resuing a fixed number of names, i can make it "syntactically" clear what the lifetimes of my variables is. But yeah, branch predictors are pretty good on modern hardware, a loop is worth considering. On Sun, Jan 12, 2014 at 10:06 PM, Ben Lippmeier wrote: > > On 10/01/2014, at 6:17 , Adam Wick wrote: > > > That?s the problem with SHA, then. The implementation (and the spec, > really) is essentially a long combination of the form: > > > > let x_n5 = small_computation x_n1 x_n2 x_n3 x_n4 > > x_n6 = small_computation x_n2 x_n3 x_n4 x_n5 > > ? > > > > Which has ~70 entries. The actual number of live variables alive at any > time should be relatively small, but if slots aren?t getting reused there?s > going to be some significant blowup. (To be honest, I had figured ? and > thought I had validated ? that doing it this way would give the compiler > the best chance at generating optimal code, but it appears I merely set > myself up to hit this limitation several years later.) > > If this [1] is the current version then I don't think there is any > performance reason to manually unroll the loops like that. If you write a > standard tail-recursive loop then the branch predictor in the processor > should make the correct prediction for all iterations except the last one. > You'll get one pipeline stall at the end due to a mis-predicted backward > branch, but it won't matter in terms of absolute percentage of execution > time. You generally only need to worry about branches if the branch flag > flips between True and False frequently. > > If you care deeply about performance then on some processors it can be > helpful to unfold this sort of code so that the SHA constants are > represented as literals in the instruction stream instead of in static data > memory -- but that ability is very processor specific and you'd need to > really stare at the emitted assembly code to see if it's worthwhile. > > Ben. > > > [1] https://github.com/GaloisInc/SHA/blob/master/Data/Digest/Pure/SHA.hs > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Mon Jan 13 08:42:47 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 13 Jan 2014 08:42:47 +0000 Subject: Starting GHC development. In-Reply-To: <52D26E9F.9010301@fuuzetsu.co.uk> References: <59543203684B2244980D7E4057D5FBC148704D05@DB3EX14MBXC306.europe.corp.microsoft.com> <52C704D5.4050606@fuuzetsu.co.uk> <52D26E9F.9010301@fuuzetsu.co.uk> Message-ID: <59543203684B2244980D7E4057D5FBC148712723@DB3EX14MBXC306.europe.corp.microsoft.com> None seem to fail on my (Linux) box. It'd be good if someone felt able to dig into the ones that are failing. If there is a good reason we should open a ticket and mark them as expect_broken( ticket-number ). Thanks! Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of | Mateusz Kowalczyk | Sent: 12 January 2014 10:30 | To: Daniil Frumin | Cc: ghc-devs | Subject: Re: Starting GHC development. | | On 12/01/14 10:25, Daniil Frumin wrote: | > Does anyone actually know which tests are supposed to fail on | 'validate'? 
| > | | AFAIK the official stance is that you should see 0 failures. | Unofficially it seems that there's leniency and the tree seems to be in | a state with few tests failing consistently. It might be just my machine | though, but whenever I post my build logs, there seems to be no sense of | urgency to investigate so it does not seem like anyone cares or the | issue is known/being worked on. | | Unfortunately, the side effect of this (and what put me off when I tried | to write some stuff for GHC months ago) was that a new developer comes, | tries to build clean tree and it fails. It's pretty discouraging. | | -- | Mateusz K. | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | http://www.haskell.org/mailman/listinfo/ghc-devs From fuuzetsu at fuuzetsu.co.uk Mon Jan 13 08:51:54 2014 From: fuuzetsu at fuuzetsu.co.uk (Mateusz Kowalczyk) Date: Mon, 13 Jan 2014 08:51:54 +0000 Subject: Starting GHC development. In-Reply-To: <59543203684B2244980D7E4057D5FBC148712723@DB3EX14MBXC306.europe.corp.microsoft.com> References: <59543203684B2244980D7E4057D5FBC148704D05@DB3EX14MBXC306.europe.corp.microsoft.com> <52C704D5.4050606@fuuzetsu.co.uk> <52D26E9F.9010301@fuuzetsu.co.uk> <59543203684B2244980D7E4057D5FBC148712723@DB3EX14MBXC306.europe.corp.microsoft.com> Message-ID: <52D3A92A.9040503@fuuzetsu.co.uk> On 13/01/14 08:42, Simon Peyton Jones wrote: > None seem to fail on my (Linux) box. It'd be good if someone felt able to dig into the ones that are failing. If there is a good reason we should open a ticket and mark them as expect_broken( ticket-number ). Thanks! > > Simon > Hm. I checked a log from 6 days ago and here's the end of it: > Unexpected failures: > perf/compiler T1969 [stat too good] (normal) > perf/compiler T3064 [stat not good enough] (normal) > perf/compiler T3294 [stat not good enough] (normal) > perf/compiler T4801 [stat not good enough] (normal) > perf/haddock haddock.Cabal [stat not good enough] (normal) > perf/haddock haddock.base [stat not good enough] (normal) > perf/haddock haddock.compiler [stat not good enough] (normal) > perf/should_run lazy-bs-alloc [stat too good] (normal) We already know that the 32-bit Linux values for Haddock need updating but I have no idea about other ones. I will validate with a clean tree in the following few days and will pester the list with any failures but perhaps for the tests above, the numbers simply need updating. I do not know, I don't think there's any information anywhere about this. Perhaps there indeed aren't any problems but simply outdated tests. Can someone pitch in? Note that to a newcomer, a perf failure is still a failure especially considering that the bottom of the log tells you to fix these before sending any patches. I think fixing these is a little bit out of scope for a newcomer. -- Mateusz K. From simonpj at microsoft.com Mon Jan 13 08:57:51 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 13 Jan 2014 08:57:51 +0000 Subject: Enable TypeHoles by default? In-Reply-To: References: Message-ID: <59543203684B2244980D7E4057D5FBC1487128DA@DB3EX14MBXC306.europe.corp.microsoft.com> This would be fine by me - it's a "user-experience" question. It would slightly threaten the notion that GHC is, by default, a Haskell-2010 compiler; that is, it accepts H-2010 programs and rejects non-H2010 programs. But I think it would be an acceptable bending of this principle, if people wanted it. Maybe ask ghc-users? 
Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Krzysztof Gogolewski Sent: 12 January 2014 22:56 To: ghc-devs at haskell.org Subject: Enable TypeHoles by default? Hello, I propose to enable -XTypeHoles in GHC by default. Unlike other -X* flags, holes do not really change meaning of the program, they only change error messages. Instead of "_x not in scope", we effectively get "_x not in scope, its expected type is a -> a". You get it only if you precede the identifier not in scope with underscore, so in some sense you declare the intention of using holes. Two possible issues: (a) If you use -fdefer-type-errors, then a program might compile, while previously it did not. However, we should facilitate compiling with defer-type-errors, so I don't think this is a disadvantage. (b) The identifier _ becomes both a pattern and a hole by default, which might confuse new users. Reply: I have never seen anyone ask why code such as "Just _ -> _" does not work. IMO the productivity boost by having holes by default outweighs those two objections. I am open to hearing any other possible issues others might find. The change is trivial implementation-wise; add Opt_TypeHoles to the list in languageExtensions Nothing in DynFlags. -KG -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Mon Jan 13 09:00:34 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 13 Jan 2014 09:00:34 +0000 Subject: [commit: packages/integer-gmp] wip/T8647: Allocate initial 1-limb mpz_t on the Stack and introduce MPZ# type (20d7bfd) In-Reply-To: <20140112233720.5C02F2406B@ghc.haskell.org> References: <20140112233720.5C02F2406B@ghc.haskell.org> Message-ID: <59543203684B2244980D7E4057D5FBC1487128FF@DB3EX14MBXC306.europe.corp.microsoft.com> Would it be worth adding some info from the commit message to the comment with type MPZ#? In particular, the little table you give in the commit message is helpful, but someone looking at the code won't see it. This is subtle stuff. Simon | -----Original Message----- | From: ghc-commits [mailto:ghc-commits-bounces at haskell.org] On Behalf Of | git at git.haskell.org | Sent: 12 January 2014 23:37 | To: ghc-commits at haskell.org | Subject: [commit: packages/integer-gmp] wip/T8647: Allocate initial 1- | limb mpz_t on the Stack and introduce MPZ# type (20d7bfd) | | Repository : ssh://git at git.haskell.org/integer-gmp | | On branch : wip/T8647 | Link : | http://ghc.haskell.org/trac/ghc/changeset/20d7bfdd29917f5a8b8937fba9b724 | f7e71cd8dd/integer-gmp | | >--------------------------------------------------------------- | | commit 20d7bfdd29917f5a8b8937fba9b724f7e71cd8dd | Author: Herbert Valerio Riedel | Date: Thu Jan 9 00:19:31 2014 +0100 | | Allocate initial 1-limb mpz_t on the Stack and introduce MPZ# type | | We now allocate a 1-limb mpz_t on the stack instead of doing a more | expensive heap-allocation (especially if the heap-allocated copy | becomes | garbage right away); this addresses #8647. | | In order to delay heap allocations of 1-limb `ByteArray#`s instead | of | the previous `(# Int#, ByteArray# #)` pair, a 3-tuple | `(# Int#, ByteArray#, Word# #)` is returned now. This tuple is given | the | type-synonym `MPZ#`. 
| | This 3-tuple representation uses either the 1st and the 2nd element, | or | the 1st and the 3rd element to represent the limb(s) (NB: undefined | `ByteArray#` elements must not be accessed as they don't point to a | proper `ByteArray#`, see also `DUMMY_BYTE_ARR`); more specifically, | the | following encoding is used (where `?` means undefined/unused): | | - (# 0#, ?, 0## #) -> value = 0 | - (# 1#, ?, w #) -> value = w | - (# -1#, ?, w #) -> value = -w | - (# s#, d, 0## #) -> value = J# s d | | The `mpzToInteger` helper takes care of converting `MPZ#` into an | `Integer`, and allocating a 1-limb `ByteArray#` in case the | value (`w`/`-w`) doesn't fit the `S# Int#` representation). | | The following nofib benchmarks benefit from this optimization: | | Program Size Allocs Runtime Elapsed TotalMem | ------------------------------------------------------------------ | bernouilli +0.2% -5.2% 0.12 0.12 +0.0% | gamteb +0.2% -1.7% 0.03 0.03 +0.0% | kahan +0.3% -13.2% 0.17 0.17 +0.0% | mandel +0.2% -24.6% 0.04 0.04 +0.0% | power +0.2% -2.6% -2.0% -2.0% -8.3% | primetest +0.1% -17.3% 0.06 0.06 +0.0% | rsa +0.2% -18.5% 0.02 0.02 +0.0% | scs +0.1% -2.9% -0.1% -0.1% +0.0% | sphere +0.3% -0.8% 0.03 0.03 +0.0% | symalg +0.2% -3.1% 0.01 0.01 +0.0% | ------------------------------------------------------------------ | Min +0.1% -24.6% -4.6% -4.6% -8.3% | Max +0.3% +0.0% +5.9% +5.9% +4.5% | Geometric Mean +0.2% -1.0% +0.2% +0.2% -0.0% | | Signed-off-by: Herbert Valerio Riedel | | | >--------------------------------------------------------------- | | 20d7bfdd29917f5a8b8937fba9b724f7e71cd8dd | GHC/Integer/GMP/Prim.hs | 88 ++++++++++++------- | GHC/Integer/Type.lhs | 160 +++++++++++++++------------------ | cbits/gmp-wrappers.cmm | 224 +++++++++++++++++++++++++++++++++------- | ------- | 3 files changed, 285 insertions(+), 187 deletions(-) | | Diff suppressed because of size. To see it, use: | | git diff-tree --root --patch-with-stat --no-color --find-copies- | harder --ignore-space-at-eol --cc | 20d7bfdd29917f5a8b8937fba9b724f7e71cd8dd | _______________________________________________ | ghc-commits mailing list | ghc-commits at haskell.org | http://www.haskell.org/mailman/listinfo/ghc-commits From johan.tibell at gmail.com Mon Jan 13 09:02:03 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Mon, 13 Jan 2014 10:02:03 +0100 Subject: Enable TypeHoles by default? In-Reply-To: <59543203684B2244980D7E4057D5FBC1487128DA@DB3EX14MBXC306.europe.corp.microsoft.com> References: <59543203684B2244980D7E4057D5FBC1487128DA@DB3EX14MBXC306.europe.corp.microsoft.com> Message-ID: Perhaps we should let type holes be used for one release (so we can get some feedback) before turning it on by default? On Mon, Jan 13, 2014 at 9:57 AM, Simon Peyton Jones wrote: > This would be fine by me ? it?s a ?user-experience? question. > > > > It would slightly threaten the notion that GHC is, by default, a > Haskell-2010 compiler; that is, it accepts H-2010 programs and rejects > non-H2010 programs. But I think it would be an acceptable bending of this > principle, if people wanted it. Maybe ask ghc-users? > > Simon > > > > *From:* ghc-devs [mailto:ghc-devs-bounces at haskell.org] *On Behalf Of *Krzysztof > Gogolewski > *Sent:* 12 January 2014 22:56 > *To:* ghc-devs at haskell.org > *Subject:* Enable TypeHoles by default? > > > > Hello, > > > > I propose to enable -XTypeHoles in GHC by default. > > > > Unlike other -X* flags, holes do not really change meaning of the program, > they only change error messages. 
Instead of "_x not in scope", we > effectively get "_x not in scope, its expected type is a -> a". You get it > only if you precede the identifier not in scope with underscore, so in some > sense you declare the intention of using holes. > > > > Two possible issues: > > > > (a) If you use -fdefer-type-errors, then a program might compile, while > previously it did not. However, we should facilitate compiling with > defer-type-errors, so I don't think this is a disadvantage. > > > > (b) The identifier _ becomes both a pattern and a hole by default, which > might confuse new users. > > Reply: I have never seen anyone ask why code such as "Just _ -> _" does > not work. > > > > IMO the productivity boost by having holes by default outweighs those two > objections. I am open to hearing any other possible issues others might > find. > > > > The change is trivial implementation-wise; add Opt_TypeHoles to the list > in languageExtensions Nothing in DynFlags. > > > > -KG > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kyrab at mail.ru Mon Jan 13 10:01:12 2014 From: kyrab at mail.ru (kyra) Date: Mon, 13 Jan 2014 14:01:12 +0400 Subject: [commit: ghc] master: Add Windows to NoSharedLibsPlatformList (4af1e76) In-Reply-To: <20140113062821.1C7D92406B@ghc.haskell.org> References: <20140113062821.1C7D92406B@ghc.haskell.org> Message-ID: <52D3B968.6020005@mail.ru> Does this mean we have no 64-bit windows support for 7.8 (only dynamic-linked compiler works on 64-bit windows)? On 1/13/2014 10:28, git at git.haskell.org wrote: > Repository : ssh://git at git.haskell.org/ghc > > On branch : master > Link : http://ghc.haskell.org/trac/ghc/changeset/4af1e76c701a7698ebd9b5ca3fb1394dd8b56c8d/ghc > >> --------------------------------------------------------------- > commit 4af1e76c701a7698ebd9b5ca3fb1394dd8b56c8d > Author: Austin Seipp > Date: Mon Jan 13 00:21:18 2014 -0600 > > Add Windows to NoSharedLibsPlatformList > > We're punting on full -dynamic and -dynamic-too support for Windows > right now, since it's still unstable. Also, ensure "Support dynamic-too" > in `ghc --info` is set to "NO" for Cabal. 
> > See issues #7134, #8228, and #5987 > > Signed-off-by: Austin Seipp > > >> --------------------------------------------------------------- > 4af1e76c701a7698ebd9b5ca3fb1394dd8b56c8d > compiler/main/DynFlags.hs | 4 +++- > mk/config.mk.in | 19 ++++--------------- > 2 files changed, 7 insertions(+), 16 deletions(-) > > diff --git a/compiler/main/DynFlags.hs b/compiler/main/DynFlags.hs > index 06d1ed9..734e7e9 100644 > --- a/compiler/main/DynFlags.hs > +++ b/compiler/main/DynFlags.hs > @@ -3563,7 +3563,7 @@ compilerInfo dflags > ("Support SMP", cGhcWithSMP), > ("Tables next to code", cGhcEnableTablesNextToCode), > ("RTS ways", cGhcRTSWays), > - ("Support dynamic-too", "YES"), > + ("Support dynamic-too", if isWindows then "NO" else "YES"), > ("Support parallel --make", "YES"), > ("Dynamic by default", if dYNAMIC_BY_DEFAULT dflags > then "YES" else "NO"), > @@ -3574,6 +3574,8 @@ compilerInfo dflags > ("LibDir", topDir dflags), > ("Global Package DB", systemPackageConfig dflags) > ] > + where > + isWindows = platformOS (targetPlatform dflags) == OSMinGW32 > > #include "../includes/dist-derivedconstants/header/GHCConstantsHaskellWrappers.hs" > > diff --git a/mk/config.mk.in b/mk/config.mk.in > index f61ecc0..59d48c4 100644 > --- a/mk/config.mk.in > +++ b/mk/config.mk.in > @@ -94,22 +94,11 @@ else > TargetElf = YES > endif > > -# Currently, on Windows, we artificially limit the unfolding creation > -# threshold to minimize the number of exported symbols on Windows > -# platforms in the stage2 DLL. This avoids a hard limit of 2^16 > -# exported symbols in the windows dynamic linker. > -# > -# This is a pitifully low threshold (the default is 750,) but it > -# reduced the symbol count by about ~7,000, bringing us back under the > -# limit (for now.) > -# > -# See #5987 > -ifeq "$(TargetOS_CPP)" "mingw32" > -GhcStage2HcOpts += -funfolding-creation-threshold=100 > -endif > - > # Some platforms don't support shared libraries > -NoSharedLibsPlatformList = arm-unknown-linux powerpc-unknown-linux > +NoSharedLibsPlatformList = arm-unknown-linux \ > + powerpc-unknown-linux \ > + x86_64-unknown-mingw32 \ > + i386-unknown-mingw32 > > ifeq "$(SOLARIS_BROKEN_SHLD)" "YES" > NoSharedLibsPlatformList += i386-unknown-solaris2 > > _______________________________________________ > ghc-commits mailing list > ghc-commits at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-commits > From roma at ro-che.info Mon Jan 13 10:24:42 2014 From: roma at ro-che.info (Roman Cheplyaka) Date: Mon, 13 Jan 2014 12:24:42 +0200 Subject: Enable TypeHoles by default? In-Reply-To: <59543203684B2244980D7E4057D5FBC1487128DA@DB3EX14MBXC306.europe.corp.microsoft.com> References: <59543203684B2244980D7E4057D5FBC1487128DA@DB3EX14MBXC306.europe.corp.microsoft.com> Message-ID: <20140113102442.GA10504@sniper> * Simon Peyton Jones [2014-01-13 08:57:51+0000] > This would be fine by me - it's a "user-experience" question. > > It would slightly threaten the notion that GHC is, by default, a > Haskell-2010 compiler; that is, it accepts H-2010 programs and rejects > non-H2010 programs. But that's not the case even now, is it? Particularly due to NondecreasingIndentation. Roman -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: Digital signature URL: From austin at well-typed.com Mon Jan 13 10:31:17 2014 From: austin at well-typed.com (Austin Seipp) Date: Mon, 13 Jan 2014 04:31:17 -0600 Subject: [commit: ghc] master: Add Windows to NoSharedLibsPlatformList (4af1e76) In-Reply-To: <52D3B968.6020005@mail.ru> References: <20140113062821.1C7D92406B@ghc.haskell.org> <52D3B968.6020005@mail.ru> Message-ID: The 64bit GHC 7.6.3 windows compiler was not dynamically linked, although it did have -dynamic libraries (although using them is a pain in Windows.) It loaded static object files (you can verify this yourself: 'ghc -O foo.hs && ghci foo' will load the object file, but 'ghc -dynamic -O foo.hs && ghci foo' will not and instead interpret.) Relatedly, -dynamic-too is also broken on windows, but it's more of an optimization than anything. 7.8 won't have a dynamically linked GHCi for Windows and it won't have -dynamic-too (i.e. essentially the same as 7.6.) Linux, OS X will have both. At this exact moment, -dynamic also seems busted on Windows and I'm looking into fixing it. This will just help me in the mean time to clean up the tree and keep it building for others. On Mon, Jan 13, 2014 at 4:01 AM, kyra wrote: > Does this mean we have no 64-bit windows support for 7.8 (only > dynamic-linked compiler works on 64-bit windows)? > > > On 1/13/2014 10:28, git at git.haskell.org wrote: >> >> Repository : ssh://git at git.haskell.org/ghc >> >> On branch : master >> Link : >> http://ghc.haskell.org/trac/ghc/changeset/4af1e76c701a7698ebd9b5ca3fb1394dd8b56c8d/ghc >> >>> --------------------------------------------------------------- >> >> commit 4af1e76c701a7698ebd9b5ca3fb1394dd8b56c8d >> Author: Austin Seipp >> Date: Mon Jan 13 00:21:18 2014 -0600 >> >> Add Windows to NoSharedLibsPlatformList >> We're punting on full -dynamic and -dynamic-too support for >> Windows >> right now, since it's still unstable. Also, ensure "Support >> dynamic-too" >> in `ghc --info` is set to "NO" for Cabal. 
>> See issues #7134, #8228, and #5987 >> Signed-off-by: Austin Seipp >> >> >>> --------------------------------------------------------------- >> >> 4af1e76c701a7698ebd9b5ca3fb1394dd8b56c8d >> compiler/main/DynFlags.hs | 4 +++- >> mk/config.mk.in | 19 ++++--------------- >> 2 files changed, 7 insertions(+), 16 deletions(-) >> >> diff --git a/compiler/main/DynFlags.hs b/compiler/main/DynFlags.hs >> index 06d1ed9..734e7e9 100644 >> --- a/compiler/main/DynFlags.hs >> +++ b/compiler/main/DynFlags.hs >> @@ -3563,7 +3563,7 @@ compilerInfo dflags >> ("Support SMP", cGhcWithSMP), >> ("Tables next to code", cGhcEnableTablesNextToCode), >> ("RTS ways", cGhcRTSWays), >> - ("Support dynamic-too", "YES"), >> + ("Support dynamic-too", if isWindows then "NO" else >> "YES"), >> ("Support parallel --make", "YES"), >> ("Dynamic by default", if dYNAMIC_BY_DEFAULT dflags >> then "YES" else "NO"), >> @@ -3574,6 +3574,8 @@ compilerInfo dflags >> ("LibDir", topDir dflags), >> ("Global Package DB", systemPackageConfig dflags) >> ] >> + where >> + isWindows = platformOS (targetPlatform dflags) == OSMinGW32 >> #include >> "../includes/dist-derivedconstants/header/GHCConstantsHaskellWrappers.hs" >> diff --git a/mk/config.mk.in b/mk/config.mk.in >> index f61ecc0..59d48c4 100644 >> --- a/mk/config.mk.in >> +++ b/mk/config.mk.in >> @@ -94,22 +94,11 @@ else >> TargetElf = YES >> endif >> -# Currently, on Windows, we artificially limit the unfolding creation >> -# threshold to minimize the number of exported symbols on Windows >> -# platforms in the stage2 DLL. This avoids a hard limit of 2^16 >> -# exported symbols in the windows dynamic linker. >> -# >> -# This is a pitifully low threshold (the default is 750,) but it >> -# reduced the symbol count by about ~7,000, bringing us back under the >> -# limit (for now.) >> -# >> -# See #5987 >> -ifeq "$(TargetOS_CPP)" "mingw32" >> -GhcStage2HcOpts += -funfolding-creation-threshold=100 >> -endif >> - >> # Some platforms don't support shared libraries >> -NoSharedLibsPlatformList = arm-unknown-linux powerpc-unknown-linux >> +NoSharedLibsPlatformList = arm-unknown-linux \ >> + powerpc-unknown-linux \ >> + x86_64-unknown-mingw32 \ >> + i386-unknown-mingw32 >> ifeq "$(SOLARIS_BROKEN_SHLD)" "YES" >> NoSharedLibsPlatformList += i386-unknown-solaris2 >> >> _______________________________________________ >> ghc-commits mailing list >> ghc-commits at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-commits >> > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From marlowsd at gmail.com Mon Jan 13 09:51:02 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Mon, 13 Jan 2014 09:51:02 +0000 Subject: Enable TypeHoles by default? In-Reply-To: References: Message-ID: <52D3B706.7020302@gmail.com> On 12/01/2014 22:56, Krzysztof Gogolewski wrote: > I propose to enable -XTypeHoles in GHC by default. GHC supports strict Haskell 2010 by default, and enabling any extensions breaks that property. That's why we don't have any extensions on by default. Cheers, Simon From marlowsd at gmail.com Mon Jan 13 10:01:49 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Mon, 13 Jan 2014 10:01:49 +0000 Subject: Starting GHC development. 
In-Reply-To: <52C704D5.4050606@fuuzetsu.co.uk> References: <59543203684B2244980D7E4057D5FBC148704D05@DB3EX14MBXC306.europe.corp.microsoft.com> <52C704D5.4050606@fuuzetsu.co.uk> Message-ID: <52D3B98D.90900@gmail.com> On 03/01/2014 18:43, Mateusz Kowalczyk wrote: > On 03/01/14 13:27, Simon Peyton-Jones wrote: >> [snip] >> Thank you. We need lots of help! >> [snip] > > While I hate to interrupt this thread, I think this is a good chance to > mention something. > > I think the big issue for joining GHC development is the lack of > communication on the mailing list. There are many topics where a person > has a problem with GHC tree (can't validate/build, some tests are > failing), posts to GHC devs seeking help and never gets a reply. This is > very discouraging and often makes it outright impossible to contribute. > > An easy example is the failing tests one: unfortunately some tests are > known to fail, but they are only known to fail to existing GHC devs. A > new person tries to validate clean tree, gets test failures, asks for > help on GHC devs, doesn't get any, gives up. Personally I only just read your question about validating Haddock. I didn't know the answer, but I would have suggested a few things to try - it looks like your local package database has some broken packages, perhaps. I'm not sure what the "post-testsuite package check" is, so I would have to go and look. Generally, the following things might help if you get no answer on the mailing list: - ask in #ghc - search the ticket database (Google with 'site:ghc.haskell.org') - poke around in the code yourself There are not supposed to be any failing tests in validate. If there are, it is a bug: see if someone else has reported it, and if not, report it yourself. Thanks for trying to make GHC better. There are gaps in our support infrastructure, which is unfortunate but typical for an open source project, so sometimes you might have to fix something that wasn't on your critical path. Rest assured that we're very grateful when people do this! Cheers, Simon From marlowsd at gmail.com Mon Jan 13 10:08:19 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Mon, 13 Jan 2014 10:08:19 +0000 Subject: Alex unicode trick In-Reply-To: <52CC44F9.6010201@fuuzetsu.co.uk> References: <52CBABE8.4040001@fuuzetsu.co.uk> <52CC117F.8010006@gmail.com> <52CC44F9.6010201@fuuzetsu.co.uk> Message-ID: <52D3BB13.4020203@gmail.com> On 07/01/2014 18:18, Mateusz Kowalczyk wrote: > Ah, I think I understand now. If this is the case, at least the > ?alexGetChar? could be removed, right? Is Alex 2.x compatibility > necessary for any reason whatsoever? Yes, the backwards compatibility could be removed now that we require a very recent version of Alex. Cheers, Simon From marlowsd at gmail.com Mon Jan 13 10:12:31 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Mon, 13 Jan 2014 10:12:31 +0000 Subject: GHC API: Using runGhc twice or from multiple threads? In-Reply-To: References: Message-ID: <52D3BC0F.7010000@gmail.com> On 07/01/2014 13:55, Benno F?nfst?ck wrote: > Hello, > > is the following safe to do? > > main = do > runGhc libdir $ do ... > runGhc libdir $ do ... > > Or will this cause trouble? Is there state that is shared between the > two calls? The main restriction here is that you can only set the static flags once, because they depend on some global mutable state (the GLOBAL_VARs that Simon mentioned). > And what about this one: > > main = do > forkIO $ runGhc libdir $ do ... > forkIO $ runGhc libdir $ do ... 
The problem with this is the RTS linker, which is a single piece of shared global state. We could actually fix that if it became important. If you're not running interpreted code, this should be fine (apart from the static flags issue mentioned above). Cheers, Simon From kyrab at mail.ru Mon Jan 13 10:51:50 2014 From: kyrab at mail.ru (kyra) Date: Mon, 13 Jan 2014 14:51:50 +0400 Subject: [commit: ghc] master: Add Windows to NoSharedLibsPlatformList (4af1e76) In-Reply-To: References: <20140113062821.1C7D92406B@ghc.haskell.org> <52D3B968.6020005@mail.ru> Message-ID: <52D3C546.8010307@mail.ru> Statically linked 64-bit Windows GHC does not work because of #7134. Even LARGEADDRESSAWARE flag disabling (extremely bad hack itself) does not work anymore both on Windows 7 and Windows 8. Or is there another (besides dynamic linking) plan to attack #7134? I could step in to try to help with any of these, but I'd want to get more guidance then - either on enabling dll-relating things (for some time age I've tried to find better ghc-to-dlls decomposition using dll-split tool, but quickly found we can't do better than it is now, perhaps GHC itself needs some refactoring to solve this problem), or fixing #7134 in some other way. The last would be better, because dynamic-linked Windows GHC has longer load time (which can jump to intolerable 2-3 secs, which happens, I guess, when we approach 64k exported symbols limit). On 1/13/2014 14:31, Austin Seipp wrote: > The 64bit GHC 7.6.3 windows compiler was not dynamically linked, > although it did have -dynamic libraries (although using them is a pain > in Windows.) It loaded static object files (you can verify this > yourself: 'ghc -O foo.hs && ghci foo' will load the object file, but > 'ghc -dynamic -O foo.hs && ghci foo' will not and instead interpret.) > Relatedly, -dynamic-too is also broken on windows, but it's more of an > optimization than anything. > > 7.8 won't have a dynamically linked GHCi for Windows and it won't have > -dynamic-too (i.e. essentially the same as 7.6.) Linux, OS X will have > both. > > At this exact moment, -dynamic also seems busted on Windows and I'm > looking into fixing it. This will just help me in the mean time to > clean up the tree and keep it building for others. > > On Mon, Jan 13, 2014 at 4:01 AM, kyra wrote: >> Does this mean we have no 64-bit windows support for 7.8 (only >> dynamic-linked compiler works on 64-bit windows)? >> >> >> On 1/13/2014 10:28, git at git.haskell.org wrote: >>> Repository : ssh://git at git.haskell.org/ghc >>> >>> On branch : master >>> Link : >>> http://ghc.haskell.org/trac/ghc/changeset/4af1e76c701a7698ebd9b5ca3fb1394dd8b56c8d/ghc >>> >>>> --------------------------------------------------------------- >>> commit 4af1e76c701a7698ebd9b5ca3fb1394dd8b56c8d >>> Author: Austin Seipp >>> Date: Mon Jan 13 00:21:18 2014 -0600 >>> >>> Add Windows to NoSharedLibsPlatformList >>> We're punting on full -dynamic and -dynamic-too support for >>> Windows >>> right now, since it's still unstable. Also, ensure "Support >>> dynamic-too" >>> in `ghc --info` is set to "NO" for Cabal. 
>>> See issues #7134, #8228, and #5987 >>> Signed-off-by: Austin Seipp >>> >>> >>>> --------------------------------------------------------------- >>> 4af1e76c701a7698ebd9b5ca3fb1394dd8b56c8d >>> compiler/main/DynFlags.hs | 4 +++- >>> mk/config.mk.in | 19 ++++--------------- >>> 2 files changed, 7 insertions(+), 16 deletions(-) >>> >>> diff --git a/compiler/main/DynFlags.hs b/compiler/main/DynFlags.hs >>> index 06d1ed9..734e7e9 100644 >>> --- a/compiler/main/DynFlags.hs >>> +++ b/compiler/main/DynFlags.hs >>> @@ -3563,7 +3563,7 @@ compilerInfo dflags >>> ("Support SMP", cGhcWithSMP), >>> ("Tables next to code", cGhcEnableTablesNextToCode), >>> ("RTS ways", cGhcRTSWays), >>> - ("Support dynamic-too", "YES"), >>> + ("Support dynamic-too", if isWindows then "NO" else >>> "YES"), >>> ("Support parallel --make", "YES"), >>> ("Dynamic by default", if dYNAMIC_BY_DEFAULT dflags >>> then "YES" else "NO"), >>> @@ -3574,6 +3574,8 @@ compilerInfo dflags >>> ("LibDir", topDir dflags), >>> ("Global Package DB", systemPackageConfig dflags) >>> ] >>> + where >>> + isWindows = platformOS (targetPlatform dflags) == OSMinGW32 >>> #include >>> "../includes/dist-derivedconstants/header/GHCConstantsHaskellWrappers.hs" >>> diff --git a/mk/config.mk.in b/mk/config.mk.in >>> index f61ecc0..59d48c4 100644 >>> --- a/mk/config.mk.in >>> +++ b/mk/config.mk.in >>> @@ -94,22 +94,11 @@ else >>> TargetElf = YES >>> endif >>> -# Currently, on Windows, we artificially limit the unfolding creation >>> -# threshold to minimize the number of exported symbols on Windows >>> -# platforms in the stage2 DLL. This avoids a hard limit of 2^16 >>> -# exported symbols in the windows dynamic linker. >>> -# >>> -# This is a pitifully low threshold (the default is 750,) but it >>> -# reduced the symbol count by about ~7,000, bringing us back under the >>> -# limit (for now.) >>> -# >>> -# See #5987 >>> -ifeq "$(TargetOS_CPP)" "mingw32" >>> -GhcStage2HcOpts += -funfolding-creation-threshold=100 >>> -endif >>> - >>> # Some platforms don't support shared libraries >>> -NoSharedLibsPlatformList = arm-unknown-linux powerpc-unknown-linux >>> +NoSharedLibsPlatformList = arm-unknown-linux \ >>> + powerpc-unknown-linux \ >>> + x86_64-unknown-mingw32 \ >>> + i386-unknown-mingw32 >>> ifeq "$(SOLARIS_BROKEN_SHLD)" "YES" >>> NoSharedLibsPlatformList += i386-unknown-solaris2 >>> >>> _______________________________________________ >>> ghc-commits mailing list >>> ghc-commits at haskell.org >>> http://www.haskell.org/mailman/listinfo/ghc-commits >>> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> > > From gergo at erdi.hu Mon Jan 13 10:53:55 2014 From: gergo at erdi.hu (Dr. ERDI Gergo) Date: Mon, 13 Jan 2014 18:53:55 +0800 (SGT) Subject: Pattern synonyms for 7.8? In-Reply-To: References: <59543203684B2244980D7E4057D5FBC148707649@DB3EX14MBXC306.europe.corp.microsoft.com> <1389014277.2952.9.camel@kirk> <41B0CF1C-C66D-4DDC-8C36-A691B83CF7E0@cis.upenn.edu> <4BA531AA-0E3E-48AA-91C9-CDD819D349A9@cis.upenn.edu> Message-ID: Hi, On Thu, 9 Jan 2014, Austin Seipp wrote: > Hi Gergo, > > I went ahead and pushed the preliminary work to a new branch in the > official repositories. 
GHC, haddock and testsuite now have a > 'wip/pattern-synonyms' branch, where you can test the code: > > https://github.com/ghc/ghc/commits/wip/pattern-synonyms > https://github.com/ghc/haddock/commits/wip/pattern-synonyms > https://github.com/ghc/testsuite/commits/wip/pattern-synonyms So what's the intended workflow for me from now on? Will master be regularly merged into this branch? Should I base my future work (like fixing the outstanding issues your mail detailed) on top of this branch and continue pushing to my github repo? Thanks, Gergo -- .--= ULLA! =-----------------. \ http://gergo.erdi.hu \ `---= gergo at erdi.hu =-------' Why experiment on animals when there are so many lawyers? From kyrab at mail.ru Mon Jan 13 10:55:23 2014 From: kyrab at mail.ru (kyra) Date: Mon, 13 Jan 2014 14:55:23 +0400 Subject: [commit: ghc] master: Add Windows to NoSharedLibsPlatformList (4af1e76) In-Reply-To: <52D3C546.8010307@mail.ru> References: <20140113062821.1C7D92406B@ghc.haskell.org> <52D3B968.6020005@mail.ru> <52D3C546.8010307@mail.ru> Message-ID: <52D3C61B.5070200@mail.ru> Sorry for typing in a hurry. "some time age" should be read as "some time ago". On 1/13/2014 14:51, kyra wrote: > Statically linked 64-bit Windows GHC does not work because of #7134. > Even LARGEADDRESSAWARE flag disabling (extremely bad hack itself) does > not work anymore both on Windows 7 and Windows 8. > > Or is there another (besides dynamic linking) plan to attack #7134? > > I could step in to try to help with any of these, but I'd want to get > more guidance then - either on enabling dll-relating things (for some > time age I've tried to find better ghc-to-dlls decomposition using > dll-split tool, but quickly found we can't do better than it is now, > perhaps GHC itself needs some refactoring to solve this problem), or > fixing #7134 in some other way. The last would be better, because > dynamic-linked Windows GHC has longer load time (which can jump to > intolerable 2-3 secs, which happens, I guess, when we approach 64k > exported symbols limit). > > On 1/13/2014 14:31, Austin Seipp wrote: >> The 64bit GHC 7.6.3 windows compiler was not dynamically linked, >> although it did have -dynamic libraries (although using them is a pain >> in Windows.) It loaded static object files (you can verify this >> yourself: 'ghc -O foo.hs && ghci foo' will load the object file, but >> 'ghc -dynamic -O foo.hs && ghci foo' will not and instead interpret.) >> Relatedly, -dynamic-too is also broken on windows, but it's more of an >> optimization than anything. >> >> 7.8 won't have a dynamically linked GHCi for Windows and it won't have >> -dynamic-too (i.e. essentially the same as 7.6.) Linux, OS X will have >> both. >> >> At this exact moment, -dynamic also seems busted on Windows and I'm >> looking into fixing it. This will just help me in the mean time to >> clean up the tree and keep it building for others. >> >> On Mon, Jan 13, 2014 at 4:01 AM, kyra wrote: >>> Does this mean we have no 64-bit windows support for 7.8 (only >>> dynamic-linked compiler works on 64-bit windows)? 
>>> >>> >>> On 1/13/2014 10:28, git at git.haskell.org wrote: >>>> Repository : ssh://git at git.haskell.org/ghc >>>> >>>> On branch : master >>>> Link : >>>> http://ghc.haskell.org/trac/ghc/changeset/4af1e76c701a7698ebd9b5ca3fb1394dd8b56c8d/ghc >>>> >>>> >>>>> --------------------------------------------------------------- >>>> commit 4af1e76c701a7698ebd9b5ca3fb1394dd8b56c8d >>>> Author: Austin Seipp >>>> Date: Mon Jan 13 00:21:18 2014 -0600 >>>> >>>> Add Windows to NoSharedLibsPlatformList >>>> We're punting on full -dynamic and -dynamic-too support for >>>> Windows >>>> right now, since it's still unstable. Also, ensure "Support >>>> dynamic-too" >>>> in `ghc --info` is set to "NO" for Cabal. >>>> See issues #7134, #8228, and #5987 >>>> Signed-off-by: Austin Seipp >>>> >>>> >>>>> --------------------------------------------------------------- >>>> 4af1e76c701a7698ebd9b5ca3fb1394dd8b56c8d >>>> compiler/main/DynFlags.hs | 4 +++- >>>> mk/config.mk.in | 19 ++++--------------- >>>> 2 files changed, 7 insertions(+), 16 deletions(-) >>>> >>>> diff --git a/compiler/main/DynFlags.hs b/compiler/main/DynFlags.hs >>>> index 06d1ed9..734e7e9 100644 >>>> --- a/compiler/main/DynFlags.hs >>>> +++ b/compiler/main/DynFlags.hs >>>> @@ -3563,7 +3563,7 @@ compilerInfo dflags >>>> ("Support SMP", cGhcWithSMP), >>>> ("Tables next to code", cGhcEnableTablesNextToCode), >>>> ("RTS ways", cGhcRTSWays), >>>> - ("Support dynamic-too", "YES"), >>>> + ("Support dynamic-too", if isWindows then "NO" else >>>> "YES"), >>>> ("Support parallel --make", "YES"), >>>> ("Dynamic by default", if dYNAMIC_BY_DEFAULT dflags >>>> then "YES" else "NO"), >>>> @@ -3574,6 +3574,8 @@ compilerInfo dflags >>>> ("LibDir", topDir dflags), >>>> ("Global Package DB", systemPackageConfig dflags) >>>> ] >>>> + where >>>> + isWindows = platformOS (targetPlatform dflags) == OSMinGW32 >>>> #include >>>> "../includes/dist-derivedconstants/header/GHCConstantsHaskellWrappers.hs" >>>> >>>> diff --git a/mk/config.mk.in b/mk/config.mk.in >>>> index f61ecc0..59d48c4 100644 >>>> --- a/mk/config.mk.in >>>> +++ b/mk/config.mk.in >>>> @@ -94,22 +94,11 @@ else >>>> TargetElf = YES >>>> endif >>>> -# Currently, on Windows, we artificially limit the unfolding >>>> creation >>>> -# threshold to minimize the number of exported symbols on Windows >>>> -# platforms in the stage2 DLL. This avoids a hard limit of 2^16 >>>> -# exported symbols in the windows dynamic linker. >>>> -# >>>> -# This is a pitifully low threshold (the default is 750,) but it >>>> -# reduced the symbol count by about ~7,000, bringing us back under >>>> the >>>> -# limit (for now.) 
>>>> -# >>>> -# See #5987 >>>> -ifeq "$(TargetOS_CPP)" "mingw32" >>>> -GhcStage2HcOpts += -funfolding-creation-threshold=100 >>>> -endif >>>> - >>>> # Some platforms don't support shared libraries >>>> -NoSharedLibsPlatformList = arm-unknown-linux powerpc-unknown-linux >>>> +NoSharedLibsPlatformList = arm-unknown-linux \ >>>> + powerpc-unknown-linux \ >>>> + x86_64-unknown-mingw32 \ >>>> + i386-unknown-mingw32 >>>> ifeq "$(SOLARIS_BROKEN_SHLD)" "YES" >>>> NoSharedLibsPlatformList += i386-unknown-solaris2 >>>> >>>> _______________________________________________ >>>> ghc-commits mailing list >>>> ghc-commits at haskell.org >>>> http://www.haskell.org/mailman/listinfo/ghc-commits >>>> >>> _______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://www.haskell.org/mailman/listinfo/ghc-devs >>> >> >> > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > From gergo at erdi.hu Mon Jan 13 11:06:22 2014 From: gergo at erdi.hu (Dr. ERDI Gergo) Date: Mon, 13 Jan 2014 19:06:22 +0800 (SGT) Subject: Pattern synonyms for 7.8? In-Reply-To: References: <59543203684B2244980D7E4057D5FBC148707649@DB3EX14MBXC306.europe.corp.microsoft.com> <1389014277.2952.9.camel@kirk> <41B0CF1C-C66D-4DDC-8C36-A691B83CF7E0@cis.upenn.edu> <4BA531AA-0E3E-48AA-91C9-CDD819D349A9@cis.upenn.edu> Message-ID: On Mon, 13 Jan 2014, Dr. ERDI Gergo wrote: >> I went ahead and pushed the preliminary work to a new branch in the >> official repositories. GHC, haddock and testsuite now have a >> 'wip/pattern-synonyms' branch, where you can test the code: >> >> https://github.com/ghc/ghc/commits/wip/pattern-synonyms >> https://github.com/ghc/haddock/commits/wip/pattern-synonyms >> https://github.com/ghc/testsuite/commits/wip/pattern-synonyms > > So what's the intended workflow for me from now on? Will master be regularly > merged into this branch? Should I base my future work (like fixing the > outstanding issues your mail detailed) on top of this branch and continue > pushing to my github repo? Oh and also, how do I reword the commit message of the single squashed commit? I'm asking because there are some small fixes I'd like to do on the message. Thanks, Gergo -- .--= ULLA! =-----------------. \ http://gergo.erdi.hu \ `---= gergo at erdi.hu =-------' Speak the truth, but leave immediately after. From kyrab at mail.ru Mon Jan 13 11:08:12 2014 From: kyrab at mail.ru (kyra) Date: Mon, 13 Jan 2014 15:08:12 +0400 Subject: [commit: ghc] master: Add Windows to NoSharedLibsPlatformList (4af1e76) In-Reply-To: <52D3C546.8010307@mail.ru> References: <20140113062821.1C7D92406B@ghc.haskell.org> <52D3B968.6020005@mail.ru> <52D3C546.8010307@mail.ru> Message-ID: <52D3C91C.1070608@mail.ru> More on this: On 1/13/2014 14:51, kyra wrote: > The last would be better, because dynamic-linked Windows GHC has > longer load time (which can jump to intolerable 2-3 secs, which > happens, I guess, when we approach 64k exported symbols limit). "which can jump to intolerable 2-3 secs" refers to different *builds* of GHC. Some builds had load times in the order of tenths of a second, some - up to 2-3 secs. For example ghc-7.7.20131210 load time was more than 2 secs. When I've rebuilt it lowering funfolding-creation-threshold significantly, load time lowered to tenths of a second. 
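The knob kyra is tuning here is the same one the change quoted above removed from mk/config.mk.in. A minimal sketch of what re-enabling it as a local override might look like, assuming the 7.8-era make-based build system (mk/build.mk is the usual home for per-machine settings, and the value 100 is simply the old hard-coded figure from config.mk.in, not a tuned recommendation):

    # mk/build.mk -- local, uncommitted build settings
    # Cap unfolding creation for the stage-2 compiler so the Windows DLL
    # exports fewer symbols (see #5987).  A lower threshold also gave the
    # shorter load times measured above, at the cost of exposing fewer
    # unfoldings for cross-module inlining.
    GhcStage2HcOpts += -funfolding-creation-threshold=100

Whether 100 is the right trade-off is exactly what is being debated; the sketch only shows where such an override lives.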
From fuuzetsu at fuuzetsu.co.uk Mon Jan 13 11:13:29 2014 From: fuuzetsu at fuuzetsu.co.uk (Mateusz Kowalczyk) Date: Mon, 13 Jan 2014 11:13:29 +0000 Subject: Pattern synonyms for 7.8? In-Reply-To: References: <59543203684B2244980D7E4057D5FBC148707649@DB3EX14MBXC306.europe.corp.microsoft.com> <1389014277.2952.9.camel@kirk> <41B0CF1C-C66D-4DDC-8C36-A691B83CF7E0@cis.upenn.edu> <4BA531AA-0E3E-48AA-91C9-CDD819D349A9@cis.upenn.edu> Message-ID: <52D3CA59.40508@fuuzetsu.co.uk> On 13/01/14 11:06, Dr. ERDI Gergo wrote: > On Mon, 13 Jan 2014, Dr. ERDI Gergo wrote: > >>> I went ahead and pushed the preliminary work to a new branch in the >>> official repositories. GHC, haddock and testsuite now have a >>> 'wip/pattern-synonyms' branch, where you can test the code: >>> >>> https://github.com/ghc/ghc/commits/wip/pattern-synonyms >>> https://github.com/ghc/haddock/commits/wip/pattern-synonyms >>> https://github.com/ghc/testsuite/commits/wip/pattern-synonyms >> >> So what's the intended workflow for me from now on? Will master be regularly >> merged into this branch? Should I base my future work (like fixing the >> outstanding issues your mail detailed) on top of this branch and continue >> pushing to my github repo? > > Oh and also, how do I reword the commit message of the single squashed > commit? I'm asking because there are some small fixes I'd like to do on > the message. > > Thanks, > Gergo > You can do an interactive rebase and stop at the commit you want to change. Then use git commit --ammend to change the message. You probably don't want to be changing history too much though, it's a pain for anyone working on the same branch. On a somewhat related note, you should probably update your Haddock changes on top of the current master. Let me know if you have problems merging it on top. -- Mateusz K. From gergo at erdi.hu Mon Jan 13 11:44:07 2014 From: gergo at erdi.hu (Dr. ERDI Gergo) Date: Mon, 13 Jan 2014 19:44:07 +0800 (SGT) Subject: Pattern synonyms for 7.8? In-Reply-To: <52D3CA59.40508@fuuzetsu.co.uk> References: <59543203684B2244980D7E4057D5FBC148707649@DB3EX14MBXC306.europe.corp.microsoft.com> <1389014277.2952.9.camel@kirk> <41B0CF1C-C66D-4DDC-8C36-A691B83CF7E0@cis.upenn.edu> <4BA531AA-0E3E-48AA-91C9-CDD819D349A9@cis.upenn.edu> <52D3CA59.40508@fuuzetsu.co.uk> Message-ID: On Mon, 13 Jan 2014, Mateusz Kowalczyk wrote: >> Oh and also, how do I reword the commit message of the single squashed >> commit? I'm asking because there are some small fixes I'd like to do on >> the message. >> >> Thanks, >> Gergo >> > > > You can do an interactive rebase and stop at the commit you want to > change. Then use git commit --ammend to change the message. You probably > don't want to be changing history too much though, it's a pain for > anyone working on the same branch. I am well aware of the technical tools Git provides for history rewriting. My workflow before the pattern synonyms got on a wip branch was that I was rewriting history all the time, and people basically had a read-only view via a public GitHub repo that I force-pushed to. But now that it is happening on GHC repos that I have no push permissions to, I don't know if someone will for example be willing to force-push any rebased stuff I might end up with. Who do I even contact to pull onto these wip branches anyway? Bye, Gergo -- .--= ULLA! =-----------------. \ http://gergo.erdi.hu \ `---= gergo at erdi.hu =-------' Friends help you move; Real friends help you move bodies. 
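The steps being discussed are easy to mistype (the flag is spelled --amend, not --ammend), so here is a minimal sketch of the reword-and-republish workflow Mateusz describes; the branch name is taken from this thread, while the remote name 'origin' is only illustrative:

    git checkout wip/pattern-synonyms
    git rebase -i master          # mark the squashed commit as "edit"
    git commit --amend            # the rebase stops at that commit; reword it
    git rebase --continue
    git push -f origin wip/pattern-synonyms
                                  # rewriting published history needs a force
                                  # push, and hence push rights to the branch

If the squashed commit is simply the branch tip, a plain 'git commit --amend' followed by the force push is enough; the interactive rebase is only needed to reach an older commit.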
From ggreif at gmail.com Mon Jan 13 11:57:32 2014 From: ggreif at gmail.com (Gabor Greif) Date: Mon, 13 Jan 2014 12:57:32 +0100 Subject: Pattern synonyms for 7.8? In-Reply-To: References: <59543203684B2244980D7E4057D5FBC148707649@DB3EX14MBXC306.europe.corp.microsoft.com> <1389014277.2952.9.camel@kirk> <41B0CF1C-C66D-4DDC-8C36-A691B83CF7E0@cis.upenn.edu> <4BA531AA-0E3E-48AA-91C9-CDD819D349A9@cis.upenn.edu> <52D3CA59.40508@fuuzetsu.co.uk> Message-ID: >From what I understood, you *should* have all permissions to push to wip/ branches. If not, please contact the admins. (IIRC Austin did this previously). Cheers, Gabor On 1/13/14, Dr. ERDI Gergo wrote: > On Mon, 13 Jan 2014, Mateusz Kowalczyk wrote: > >>> Oh and also, how do I reword the commit message of the single squashed >>> commit? I'm asking because there are some small fixes I'd like to do on >>> the message. >>> >>> Thanks, >>> Gergo >>> >> >> >> You can do an interactive rebase and stop at the commit you want to >> change. Then use git commit --ammend to change the message. You probably >> don't want to be changing history too much though, it's a pain for >> anyone working on the same branch. > > I am well aware of the technical tools Git provides for history rewriting. > My workflow before the pattern synonyms got on a wip branch was that I was > rewriting history all the time, and people basically had a read-only view > via a public GitHub repo that I force-pushed to. But now that it is > happening on GHC repos that I have no push permissions to, I don't know if > someone will for example be willing to force-push any rebased stuff I > might end up with. Who do I even contact to pull onto these wip branches > anyway? > > Bye, > Gergo > > -- > > .--= ULLA! =-----------------. > \ http://gergo.erdi.hu \ > `---= gergo at erdi.hu =-------' > Friends help you move; Real friends help you move bodies. > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > From simonpj at microsoft.com Mon Jan 13 12:08:02 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 13 Jan 2014 12:08:02 +0000 Subject: testsuite change Message-ID: <59543203684B2244980D7E4057D5FBC148712D1C@DB3EX14MBXC306.europe.corp.microsoft.com> Herbert I did 'git pull' in my source tree and got the error below. What do I do now? Simon == running git pull remote: Counting objects: 56029, done. remote: Compressing objects: 100% (16843/16843), done. remote: Total 55837 (delta 32572), reused 55743 (delta 32500) Receiving objects: 100% (55837/55837), 6.47 MiB | 2.96 MiB/s, done. Resolving deltas: 100% (32572/32572), completed with 102 local objects. 
>From git://git.haskell.org/ghc b7ddf63..9c91a24 master -> origin/master Updating b7ddf63..9c91a24 error: The following untracked working tree files would be overwritten by merge: testsuite/.gitignore testsuite/LICENSE testsuite/LICENSE.GPL testsuite/Makefile testsuite/README.md testsuite/config/bad.ps testsuite/config/ghc testsuite/config/good.ps testsuite/driver/runtests.py testsuite/driver/testglobals.py testsuite/driver/testlib.py testsuite/driver/testutil.py testsuite/mk/boilerplate.mk testsuite/mk/ghc-config.hs testsuite/mk/test.mk testsuite/tests/Makefile testsuite/tests/annotations/Makefile testsuite/tests/annotations/should_compile/Makefile testsuite/tests/annotations/should_compile/all.T testsuite/tests/annotations/should_compile/ann01.hs testsuite/tests/annotations/should_compile/ann01.stderr testsuite/tests/annotations/should_fail/Annfail04_Help.hs testsuite/tests/annotations/should_fail/Annfail05_Help.hs testsuite/tests/annotations/should_fail/Annfail06_Help.hs testsuite/tests/annotations/should_fail/Makefile testsuite/tests/annotations/should_fail/all.T testsuite/tests/annotations/should_fail/annfail01.hs testsuite/tests/annotations/should_fail/annfail01.stderr testsuite/tests/annotations/should_fail/annfail02.hs testsuite/tests/annotations/should_fail/annfail02.stderr testsuite/tests/annotations/should_fail/annfail03.hs testsuite/tests/annotations/should_fail/annfail03.stderr testsuite/tests/annotations/should_fail/annfail04.hs testsuite/tests/annotations/should_fail/annfail04.stderr testsuite/tests/annotations/should_fail/annfail05.hs testsuite/tests/annotations/should_fail/annfail05.stderr testsuite/tests/annotations/should_fail/annfail06.hs testsuite/tests/annotations/should_fail/annfail06.stderr testsuite/tests/annotations/should_fail/annfail07.hs testsuite/tests/annotations/should_fail/annfail07.stderr testsuite/tests/annotations/should_fail/annfail08.hs testsuite/tests/annotations/should_fail/annfail08.stderr testsuite/tests/annotations/should_fail/annfail09.hs testsuite/tests/annotations/should_fail/annfail09.stderr testsuite/tests/annotations/should_fail/annfail10.hs testsuite/tests/annotations/should_fail/annfail10.stderr testsuite/tests/annotations/should_fail/annfail11.hs testsuite/tests/annotations/should_fail/annfail11.stderr testsuite/tests/annotations/should_fail/annfail12.hs testsuite/tests/annotations/should_fail/annfail12.stderr testsuite/tests/annotations/should_fail/annfail13.hs testsuite/tests/annotations/should_fail/annfail13.stderr testsuite/tests/annotations/should_run/Annrun01_Help.hs testsuite/tests/annotations/should_run/Makefile testsuite/tests/annotations/should_run/all.T testsuite/tests/annotations/should_run/annrun01.hs testsuite/tests/annotations/should_run/annrun01.stdout testsuite/tests/arityanal/Main.hs testsuite/tests/arityanal/Main.stderr testsuite/tests/arityanal/Makefile testsuite/tests/arityanal/f0.hs testsuite/tests/arityanal/f0.stderr testsuite/tests/arityanal/f1.hs testsuite/tests/arityanal/f1.stderr testsuite/tests/arityanal/f10.hs testsuite/tests/arityanal/f10.stderr testsuite/tests/arityanal/f11.hs testsuite/tests/arityanal/f11.stderr testsuite/tests/arityanal/f12.hs testsuite/tests/arityanal/f12.stderr testsuite/tests/arityanal/f13.hs testsuite/tests/arityanal/f13.stderr testsuite/tests/arityanal/f14.hs testsuite/tests/arityanal/f14.stderr testsuite/tests/arityanal/f15.hs testsuite/tests/arityanal/f15.stderr testsuite/tests/arityanal/f2.hs testsuite/tests/arityanal/f2.stderr testsuite/tests/arityanal/f3.hs 
testsuite/tests/arityanal/f3.stderr testsuite/tests/arityanal/f4.hs testsuite/tests/arityanal/f4.stderr testsuite/tests/arityanal/f5.hs testsuite/tests/arityanal/f5.stderr testsuite/tests/arityanal/f6.hs testsuite/tests/arityanal/f6.stderr testsuite/tests/arityanal/f7.hs testsuite/tests/arityanal/f7.stderr testsuite/tests/arityanal/f8.hs testsuite/tests/arityanal/f8.stderr testsuite/tests/arityanal/f9.hs testsuite/tests/arityanal/f9.stderr testsuite/tests/arityanal/prim.hs testsuite/tests/arityanal/prim.stderr testsuite/tests/array/Makefile testsuite/tests/array/shou Aborting git failed: 256 at ./sync-all line 120. == Checking for old haddock repo == Checking for old binary repo == Checking for old mtl repo == Checking for old Cabal repo == Checking for old time from tarball == Checking for obsolete Git repo URL simonpj at cam-05-unx:~/code/HEAD-2$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From hvriedel at gmail.com Mon Jan 13 12:16:16 2014 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Mon, 13 Jan 2014 13:16:16 +0100 Subject: testsuite change In-Reply-To: <59543203684B2244980D7E4057D5FBC148712D1C@DB3EX14MBXC306.europe.corp.microsoft.com> (Simon Peyton Jones's message of "Mon, 13 Jan 2014 12:08:02 +0000") References: <59543203684B2244980D7E4057D5FBC148712D1C@DB3EX14MBXC306.europe.corp.microsoft.com> Message-ID: <877ga4qdpr.fsf@gmail.com> Hello Simon, On 2014-01-13 at 13:08:02 +0100, Simon Peyton Jones wrote: > I did 'git pull' in my source tree and got the error below. What do > I do now? the easist is to just move the testsuite folder out the way; e.g. mv testsuite/ testsuite-old/ From simonpj at microsoft.com Mon Jan 13 12:18:53 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 13 Jan 2014 12:18:53 +0000 Subject: testsuite change In-Reply-To: <877ga4qdpr.fsf@gmail.com> References: <59543203684B2244980D7E4057D5FBC148712D1C@DB3EX14MBXC306.europe.corp.microsoft.com> <877ga4qdpr.fsf@gmail.com> Message-ID: <59543203684B2244980D7E4057D5FBC148712D85@DB3EX14MBXC306.europe.corp.microsoft.com> and then? are all those untracked files really untracked? I definitely didn't add them! Have they been lost from the tree somehow? | -----Original Message----- | From: Herbert Valerio Riedel [mailto:hvriedel at gmail.com] | Sent: 13 January 2014 12:16 | To: Simon Peyton Jones | Cc: ghc-devs at haskell.org | Subject: Re: testsuite change | | Hello Simon, | | On 2014-01-13 at 13:08:02 +0100, Simon Peyton Jones wrote: | > I did 'git pull' in my source tree and got the error below. What do | > I do now? | | the easist is to just move the testsuite folder out the way; | | e.g. mv testsuite/ testsuite-old/ From simonpj at microsoft.com Mon Jan 13 12:25:28 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 13 Jan 2014 12:25:28 +0000 Subject: testsuite change In-Reply-To: <59543203684B2244980D7E4057D5FBC148712D85@DB3EX14MBXC306.europe.corp.microsoft.com> References: <59543203684B2244980D7E4057D5FBC148712D1C@DB3EX14MBXC306.europe.corp.microsoft.com> <877ga4qdpr.fsf@gmail.com> <59543203684B2244980D7E4057D5FBC148712D85@DB3EX14MBXC306.europe.corp.microsoft.com> Message-ID: <59543203684B2244980D7E4057D5FBC148712DC1@DB3EX14MBXC306.europe.corp.microsoft.com> ..and indeed, having followed your advice, the new tree contains the claimed not-present files. So all seems well. But it's a mystery to me. 
Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Simon | Peyton Jones | Sent: 13 January 2014 12:19 | To: Herbert Valerio Riedel | Cc: ghc-devs at haskell.org | Subject: RE: testsuite change | | and then? are all those untracked files really untracked? I definitely | didn't add them! Have they been lost from the tree somehow? | | | | -----Original Message----- | | From: Herbert Valerio Riedel [mailto:hvriedel at gmail.com] | | Sent: 13 January 2014 12:16 | | To: Simon Peyton Jones | | Cc: ghc-devs at haskell.org | | Subject: Re: testsuite change | | | | Hello Simon, | | | | On 2014-01-13 at 13:08:02 +0100, Simon Peyton Jones wrote: | | > I did 'git pull' in my source tree and got the error below. What | | > do I do now? | | | | the easist is to just move the testsuite folder out the way; | | | | e.g. mv testsuite/ testsuite-old/ | | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | http://www.haskell.org/mailman/listinfo/ghc-devs From hvriedel at gmail.com Mon Jan 13 12:30:33 2014 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Mon, 13 Jan 2014 13:30:33 +0100 Subject: testsuite change In-Reply-To: <59543203684B2244980D7E4057D5FBC148712D85@DB3EX14MBXC306.europe.corp.microsoft.com> (Simon Peyton Jones's message of "Mon, 13 Jan 2014 12:18:53 +0000") References: <59543203684B2244980D7E4057D5FBC148712D1C@DB3EX14MBXC306.europe.corp.microsoft.com> <877ga4qdpr.fsf@gmail.com> <59543203684B2244980D7E4057D5FBC148712D85@DB3EX14MBXC306.europe.corp.microsoft.com> Message-ID: <8738ksqd1y.fsf@gmail.com> On 2014-01-13 at 13:18:53 +0100, Simon Peyton Jones wrote: > and then? are all those untracked files really untracked? I > definitely didn't add them! Have they been lost from the tree > somehow? Well, Git just detects there's already something which it would overwrite (as it doesn't know what a nested testsuite/.git means). So they're were only untracked from ghc.git's perspective. If you want to make sure you really have no uncommitted things there, just cd into testsuite/ (or testsuite-old if you moved it already), and run 'git status'/'git log' and other common Git commands to find out if there's something worth saving over from the old standalone testsuite.git (Git commands will use the first found .git/ folder while traversing upwards towards the filesystem root) HTH, hvr > > > | -----Original Message----- > | From: Herbert Valerio Riedel [mailto:hvriedel at gmail.com] > | Sent: 13 January 2014 12:16 > | To: Simon Peyton Jones > | Cc: ghc-devs at haskell.org > | Subject: Re: testsuite change > | > | Hello Simon, > | > | On 2014-01-13 at 13:08:02 +0100, Simon Peyton Jones wrote: > | > I did 'git pull' in my source tree and got the error below. What do > | > I do now? > | > | the easist is to just move the testsuite folder out the way; > | > | e.g. mv testsuite/ testsuite-old/ > -- "Elegance is not optional" -- Richard O'Keefe From eir at cis.upenn.edu Mon Jan 13 12:30:32 2014 From: eir at cis.upenn.edu (Richard Eisenberg) Date: Mon, 13 Jan 2014 07:30:32 -0500 Subject: Enable TypeHoles by default? In-Reply-To: <52D3B706.7020302@gmail.com> References: <52D3B706.7020302@gmail.com> Message-ID: <2E187DB3-FFFC-4E3C-90E2-34607F57B599@cis.upenn.edu> Maybe I'm missing something here, but how does specifying TypeHoles make GHC not compliant with Haskell 2010? Turning on TypeHoles should change only error messages. 
The set of programs that compile (and their meanings) should remain unchanged, by my understanding. I'm mildly in favor of this change, but I agree that perhaps a conversation on the users list and/or waiting a cycle isn't a bad idea. Richard On Jan 13, 2014, at 4:51 AM, Simon Marlow wrote: > On 12/01/2014 22:56, Krzysztof Gogolewski wrote: >> I propose to enable -XTypeHoles in GHC by default. > > GHC supports strict Haskell 2010 by default, and enabling any extensions breaks that property. That's why we don't have any extensions on by default. > > Cheers, > Simon > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs From simonpj at microsoft.com Mon Jan 13 12:34:17 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 13 Jan 2014 12:34:17 +0000 Subject: [commit: ghc] master: Add Windows to NoSharedLibsPlatformList (4af1e76) In-Reply-To: <52D3C546.8010307@mail.ru> References: <20140113062821.1C7D92406B@ghc.haskell.org> <52D3B968.6020005@mail.ru> <52D3C546.8010307@mail.ru> Message-ID: <59543203684B2244980D7E4057D5FBC148712E22@DB3EX14MBXC306.europe.corp.microsoft.com> I think Austin (and the rest of us) would be thrilled if you felt able to help with dynamic linking on Windows. Thank you. I'm utterly ignorant of the details, but it would be SO GREAT to have some help on this. Austin can fill you in. Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of kyra | Sent: 13 January 2014 10:52 | To: ghc-devs at haskell.org | Subject: Re: [commit: ghc] master: Add Windows to | NoSharedLibsPlatformList (4af1e76) | | Statically linked 64-bit Windows GHC does not work because of #7134. | Even LARGEADDRESSAWARE flag disabling (extremely bad hack itself) does | not work anymore both on Windows 7 and Windows 8. | | Or is there another (besides dynamic linking) plan to attack #7134? | | I could step in to try to help with any of these, but I'd want to get | more guidance then - either on enabling dll-relating things (for some | time age I've tried to find better ghc-to-dlls decomposition using dll- | split tool, but quickly found we can't do better than it is now, perhaps | GHC itself needs some refactoring to solve this problem), or fixing | #7134 in some other way. The last would be better, because dynamic- | linked Windows GHC has longer load time (which can jump to intolerable | 2-3 secs, which happens, I guess, when we approach 64k exported symbols | limit). | | On 1/13/2014 14:31, Austin Seipp wrote: | > The 64bit GHC 7.6.3 windows compiler was not dynamically linked, | > although it did have -dynamic libraries (although using them is a pain | > in Windows.) It loaded static object files (you can verify this | > yourself: 'ghc -O foo.hs && ghci foo' will load the object file, but | > 'ghc -dynamic -O foo.hs && ghci foo' will not and instead interpret.) | > Relatedly, -dynamic-too is also broken on windows, but it's more of an | > optimization than anything. | > | > 7.8 won't have a dynamically linked GHCi for Windows and it won't have | > -dynamic-too (i.e. essentially the same as 7.6.) Linux, OS X will have | > both. | > | > At this exact moment, -dynamic also seems busted on Windows and I'm | > looking into fixing it. This will just help me in the mean time to | > clean up the tree and keep it building for others. 
| > | > On Mon, Jan 13, 2014 at 4:01 AM, kyra wrote: | >> Does this mean we have no 64-bit windows support for 7.8 (only | >> dynamic-linked compiler works on 64-bit windows)? | >> | >> | >> On 1/13/2014 10:28, git at git.haskell.org wrote: | >>> Repository : ssh://git at git.haskell.org/ghc | >>> | >>> On branch : master | >>> Link : | >>> http://ghc.haskell.org/trac/ghc/changeset/4af1e76c701a7698ebd9b5ca3f | >>> b1394dd8b56c8d/ghc | >>> | >>>> --------------------------------------------------------------- | >>> commit 4af1e76c701a7698ebd9b5ca3fb1394dd8b56c8d | >>> Author: Austin Seipp | >>> Date: Mon Jan 13 00:21:18 2014 -0600 | >>> | >>> Add Windows to NoSharedLibsPlatformList | >>> We're punting on full -dynamic and -dynamic-too support | >>> for Windows | >>> right now, since it's still unstable. Also, ensure "Support | >>> dynamic-too" | >>> in `ghc --info` is set to "NO" for Cabal. | >>> See issues #7134, #8228, and #5987 | >>> Signed-off-by: Austin Seipp | >>> | >>> | >>>> --------------------------------------------------------------- | >>> 4af1e76c701a7698ebd9b5ca3fb1394dd8b56c8d | >>> compiler/main/DynFlags.hs | 4 +++- | >>> mk/config.mk.in | 19 ++++--------------- | >>> 2 files changed, 7 insertions(+), 16 deletions(-) | >>> | >>> diff --git a/compiler/main/DynFlags.hs b/compiler/main/DynFlags.hs | >>> index 06d1ed9..734e7e9 100644 | >>> --- a/compiler/main/DynFlags.hs | >>> +++ b/compiler/main/DynFlags.hs | >>> @@ -3563,7 +3563,7 @@ compilerInfo dflags | >>> ("Support SMP", cGhcWithSMP), | >>> ("Tables next to code", | cGhcEnableTablesNextToCode), | >>> ("RTS ways", cGhcRTSWays), | >>> - ("Support dynamic-too", "YES"), | >>> + ("Support dynamic-too", if isWindows then "NO" else | >>> "YES"), | >>> ("Support parallel --make", "YES"), | >>> ("Dynamic by default", if dYNAMIC_BY_DEFAULT | dflags | >>> then "YES" else "NO"), @@ | >>> -3574,6 +3574,8 @@ compilerInfo dflags | >>> ("LibDir", topDir dflags), | >>> ("Global Package DB", systemPackageConfig | dflags) | >>> ] | >>> + where | >>> + isWindows = platformOS (targetPlatform dflags) == OSMinGW32 | >>> #include | >>> "../includes/dist- | derivedconstants/header/GHCConstantsHaskellWrappers.hs" | >>> diff --git a/mk/config.mk.in b/mk/config.mk.in index | >>> f61ecc0..59d48c4 100644 | >>> --- a/mk/config.mk.in | >>> +++ b/mk/config.mk.in | >>> @@ -94,22 +94,11 @@ else | >>> TargetElf = YES | >>> endif | >>> -# Currently, on Windows, we artificially limit the unfolding | >>> creation -# threshold to minimize the number of exported symbols on | >>> Windows -# platforms in the stage2 DLL. This avoids a hard limit of | >>> 2^16 -# exported symbols in the windows dynamic linker. | >>> -# | >>> -# This is a pitifully low threshold (the default is 750,) but it -# | >>> reduced the symbol count by about ~7,000, bringing us back under the | >>> -# limit (for now.) 
-# -# See #5987 -ifeq "$(TargetOS_CPP)" | >>> "mingw32" | >>> -GhcStage2HcOpts += -funfolding-creation-threshold=100 | >>> -endif | >>> - | >>> # Some platforms don't support shared libraries | >>> -NoSharedLibsPlatformList = arm-unknown-linux powerpc-unknown-linux | >>> +NoSharedLibsPlatformList = arm-unknown-linux \ | >>> + powerpc-unknown-linux \ | >>> + x86_64-unknown-mingw32 \ | >>> + i386-unknown-mingw32 | >>> ifeq "$(SOLARIS_BROKEN_SHLD)" "YES" | >>> NoSharedLibsPlatformList += i386-unknown-solaris2 | >>> | >>> _______________________________________________ | >>> ghc-commits mailing list | >>> ghc-commits at haskell.org | >>> http://www.haskell.org/mailman/listinfo/ghc-commits | >>> | >> _______________________________________________ | >> ghc-devs mailing list | >> ghc-devs at haskell.org | >> http://www.haskell.org/mailman/listinfo/ghc-devs | >> | > | > | | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | http://www.haskell.org/mailman/listinfo/ghc-devs From gergo at erdi.hu Mon Jan 13 12:48:47 2014 From: gergo at erdi.hu (Dr. ERDI Gergo) Date: Mon, 13 Jan 2014 20:48:47 +0800 (SGT) Subject: Pattern synonyms for 7.8? In-Reply-To: References: <59543203684B2244980D7E4057D5FBC148707649@DB3EX14MBXC306.europe.corp.microsoft.com> <1389014277.2952.9.camel@kirk> <41B0CF1C-C66D-4DDC-8C36-A691B83CF7E0@cis.upenn.edu> <4BA531AA-0E3E-48AA-91C9-CDD819D349A9@cis.upenn.edu> Message-ID: On Thu, 9 Jan 2014, Austin Seipp wrote: > 1) As Richard pointed out, the docs are under docs/users_guide, as > well as the release notes. Please feel free to elaborate however you > want on the feature and the bulletpoint for the release notes. Hope to get around to these in the weekend. > 2) The failures are indeed a result of your code, in particular: > > driver T4437 [bad stdout] (normal) > generics GenDerivOutput [stderr mismatch] (normal) > generics GenDerivOutput1_0 [stderr mismatch] (normal) > generics GenDerivOutput1_1 [stderr mismatch] (normal) > rename/should_compile T7336 [stderr mismatch] (normal) Fixed these. > 3) It seems GHCi does not support declaring pattern synonyms at the > REPL. I'm not sure if it's intentional, but if it goes in like this, > please be sure to document it in the release notes. We can file a > ticket later for supporting pattern synonyms at the REPL. It's definitely not intentional and I have no idea why it would be so. Isn't GHCi a fairly thin wrapper around the GHC internals? Is there any wiki page detailing the differences in GHCi vs GHC code paths? Thanks, Gergo From gergo at erdi.hu Mon Jan 13 12:51:55 2014 From: gergo at erdi.hu (Dr. ERDI Gergo) Date: Mon, 13 Jan 2014 20:51:55 +0800 (SGT) Subject: Pattern synonyms for 7.8? In-Reply-To: <52D3CA59.40508@fuuzetsu.co.uk> References: <59543203684B2244980D7E4057D5FBC148707649@DB3EX14MBXC306.europe.corp.microsoft.com> <1389014277.2952.9.camel@kirk> <41B0CF1C-C66D-4DDC-8C36-A691B83CF7E0@cis.upenn.edu> <4BA531AA-0E3E-48AA-91C9-CDD819D349A9@cis.upenn.edu> <52D3CA59.40508@fuuzetsu.co.uk> Message-ID: On Mon, 13 Jan 2014, Mateusz Kowalczyk wrote: > On a somewhat related note, you should probably update your Haddock > changes on top of the current master. Let me know if you have problems > merging it on top. Hi Mateusz, Thanks for the offer, but it seems my patches re-apply on top of Haddock's latest master with no issues at all. Bye, Gergo -- .--= ULLA! =-----------------. 
\ http://gergo.erdi.hu \ `---= gergo at erdi.hu =-------' ?brenl?t: unalmas id?szak k?t szunya k?z?tt From dominique.devriese at cs.kuleuven.be Mon Jan 13 12:56:08 2014 From: dominique.devriese at cs.kuleuven.be (Dominique Devriese) Date: Mon, 13 Jan 2014 13:56:08 +0100 Subject: Enable TypeHoles by default? In-Reply-To: <2E187DB3-FFFC-4E3C-90E2-34607F57B599@cis.upenn.edu> References: <52D3B706.7020302@gmail.com> <2E187DB3-FFFC-4E3C-90E2-34607F57B599@cis.upenn.edu> Message-ID: Perhaps already as part of such a feedback round/bikeshedding opportunity, I'm wondering if I'm the only one who finds the name "TypeHoles" confusing, since as far as I understand, the extension enables holes in *expressions*, not types... I would personally find something like TypedHoles (note the added d) or ExpressionHoles or something similar more intuitive. Not that I have strong feelings about this, though... Note that I haven't actually tried the extension yet, but from the description, it seems like a very nice addition to GHC, so kudos to whoever did the work... Regards, Dominique 2014/1/13 Richard Eisenberg : > Maybe I'm missing something here, but how does specifying TypeHoles make GHC not compliant with Haskell 2010? Turning on TypeHoles should change only error messages. The set of programs that compile (and their meanings) should remain unchanged, by my understanding. > > I'm mildly in favor of this change, but I agree that perhaps a conversation on the users list and/or waiting a cycle isn't a bad idea. > > Richard > > On Jan 13, 2014, at 4:51 AM, Simon Marlow wrote: > >> On 12/01/2014 22:56, Krzysztof Gogolewski wrote: >>> I propose to enable -XTypeHoles in GHC by default. >> >> GHC supports strict Haskell 2010 by default, and enabling any extensions breaks that property. That's why we don't have any extensions on by default. >> >> Cheers, >> Simon >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs From kvanberendonck at gmail.com Mon Jan 13 13:05:56 2014 From: kvanberendonck at gmail.com (Kyle Van Berendonck) Date: Tue, 14 Jan 2014 00:05:56 +1100 Subject: Folding constants for floats Message-ID: Hi, I'm cutting my teeth on some constant folding for floats in the cmm. I have a question regarding the ticket I'm tackling: Should floats be folded with infinite precision (and later truncated to the platform float size) -- most useful/accurate, or folded with the platform precision, i.e. double, losing accuracy but keeping consistent behaviour with -O0 -- most "correct"? I would prefer the first case because it's *much* easier to implement than the second, and it'll probably rot less. Regards. -------------- next part -------------- An HTML attachment was scrubbed... URL: From marlowsd at gmail.com Mon Jan 13 14:23:30 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Mon, 13 Jan 2014 14:23:30 +0000 Subject: Enable TypeHoles by default? In-Reply-To: <2E187DB3-FFFC-4E3C-90E2-34607F57B599@cis.upenn.edu> References: <52D3B706.7020302@gmail.com> <2E187DB3-FFFC-4E3C-90E2-34607F57B599@cis.upenn.edu> Message-ID: <52D3F6E2.6070304@gmail.com> Ah, my apologies, for some reason I thought that -XTypeHoles implied -fdefer-type-errors, but I see it doesn't. Ignore me! Turning on TypeHoles by default looks like a useful thing, yes. 
Cheers, Simon On 13/01/2014 12:30, Richard Eisenberg wrote: > Maybe I'm missing something here, but how does specifying TypeHoles make GHC not compliant with Haskell 2010? Turning on TypeHoles should change only error messages. The set of programs that compile (and their meanings) should remain unchanged, by my understanding. > > I'm mildly in favor of this change, but I agree that perhaps a conversation on the users list and/or waiting a cycle isn't a bad idea. > > Richard > > On Jan 13, 2014, at 4:51 AM, Simon Marlow wrote: > >> On 12/01/2014 22:56, Krzysztof Gogolewski wrote: >>> I propose to enable -XTypeHoles in GHC by default. >> >> GHC supports strict Haskell 2010 by default, and enabling any extensions breaks that property. That's why we don't have any extensions on by default. >> >> Cheers, >> Simon >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs > From simonpj at microsoft.com Mon Jan 13 14:44:59 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 13 Jan 2014 14:44:59 +0000 Subject: Enable TypeHoles by default? In-Reply-To: References: <52D3B706.7020302@gmail.com> <2E187DB3-FFFC-4E3C-90E2-34607F57B599@cis.upenn.edu> Message-ID: <59543203684B2244980D7E4057D5FBC148713021@DB3EX14MBXC306.europe.corp.microsoft.com> | Perhaps already as part of such a feedback round/bikeshedding | opportunity, I'm wondering if I'm the only one who finds the name | "TypeHoles" confusing, since as far as I understand, the extension | enables holes in *expressions*, not types... I would personally find | something like TypedHoles (note the added d) or ExpressionHoles or | something similar more intuitive. I certainly don't mind adding "TypedHoles" as a synonym, use it in the user manual, and deprecate TypeHoles (and remove it later). If (a) no one objects and (b) someone wants to send a patch. Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of | Dominique Devriese | Sent: 13 January 2014 12:56 | To: ghc-devs at haskell.org | Subject: Re: Enable TypeHoles by default? | | Perhaps already as part of such a feedback round/bikeshedding | opportunity, I'm wondering if I'm the only one who finds the name | "TypeHoles" confusing, since as far as I understand, the extension | enables holes in *expressions*, not types... I would personally find | something like TypedHoles (note the added d) or ExpressionHoles or | something similar more intuitive. Not that I have strong feelings about | this, though... Note that I haven't actually tried the extension yet, | but from the description, it seems like a very nice addition to GHC, so | kudos to whoever did the work... | | Regards, | Dominique | | 2014/1/13 Richard Eisenberg : | > Maybe I'm missing something here, but how does specifying TypeHoles | make GHC not compliant with Haskell 2010? Turning on TypeHoles should | change only error messages. The set of programs that compile (and their | meanings) should remain unchanged, by my understanding. | > | > I'm mildly in favor of this change, but I agree that perhaps a | conversation on the users list and/or waiting a cycle isn't a bad idea. | > | > Richard | > | > On Jan 13, 2014, at 4:51 AM, Simon Marlow wrote: | > | >> On 12/01/2014 22:56, Krzysztof Gogolewski wrote: | >>> I propose to enable -XTypeHoles in GHC by default. | >> | >> GHC supports strict Haskell 2010 by default, and enabling any | extensions breaks that property. 
That's why we don't have any | extensions on by default. | >> | >> Cheers, | >> Simon | >> _______________________________________________ | >> ghc-devs mailing list | >> ghc-devs at haskell.org | >> http://www.haskell.org/mailman/listinfo/ghc-devs | > | > _______________________________________________ | > ghc-devs mailing list | > ghc-devs at haskell.org | > http://www.haskell.org/mailman/listinfo/ghc-devs | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | http://www.haskell.org/mailman/listinfo/ghc-devs From simonpj at microsoft.com Mon Jan 13 14:58:03 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 13 Jan 2014 14:58:03 +0000 Subject: Pattern synonyms for 7.8? In-Reply-To: References: <59543203684B2244980D7E4057D5FBC148707649@DB3EX14MBXC306.europe.corp.microsoft.com> <1389014277.2952.9.camel@kirk> <41B0CF1C-C66D-4DDC-8C36-A691B83CF7E0@cis.upenn.edu> <4BA531AA-0E3E-48AA-91C9-CDD819D349A9@cis.upenn.edu> Message-ID: <59543203684B2244980D7E4057D5FBC14871347B@DB3EX14MBXC306.europe.corp.microsoft.com> Check out TcRnDriver.tcRnDeclsi. Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Dr. | ERDI Gergo | Sent: 13 January 2014 12:49 | To: Austin Seipp | Cc: Joachim Breitner; GHC Devs | Subject: Re: Pattern synonyms for 7.8? | | On Thu, 9 Jan 2014, Austin Seipp wrote: | | > 1) As Richard pointed out, the docs are under docs/users_guide, as | > well as the release notes. Please feel free to elaborate however you | > want on the feature and the bulletpoint for the release notes. | | Hope to get around to these in the weekend. | | > 2) The failures are indeed a result of your code, in particular: | > | > driver T4437 [bad stdout] (normal) | > generics GenDerivOutput [stderr mismatch] (normal) | > generics GenDerivOutput1_0 [stderr mismatch] (normal) | > generics GenDerivOutput1_1 [stderr mismatch] (normal) | > rename/should_compile T7336 [stderr mismatch] (normal) | | Fixed these. | | > 3) It seems GHCi does not support declaring pattern synonyms at the | > REPL. I'm not sure if it's intentional, but if it goes in like this, | > please be sure to document it in the release notes. We can file a | > ticket later for supporting pattern synonyms at the REPL. | | It's definitely not intentional and I have no idea why it would be so. | Isn't GHCi a fairly thin wrapper around the GHC internals? Is there any | wiki page detailing the differences in GHCi vs GHC code paths? | | Thanks, | Gergo | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | http://www.haskell.org/mailman/listinfo/ghc-devs From simonpj at microsoft.com Mon Jan 13 15:20:47 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 13 Jan 2014 15:20:47 +0000 Subject: High-level Cmm code and stack allocation In-Reply-To: <52D0272D.30909@gmail.com> References: <87fvp3coqr.fsf@gnu.org> <52CC25A4.8060004@gmail.com> <59543203684B2244980D7E4057D5FBC148708D03@DB3EX14MBXC306.europe.corp.microsoft.com> <52CD19AD.7030503@gmail.com> <59543203684B2244980D7E4057D5FBC148709591@DB3EX14MBXC306.europe.corp.microsoft.com> <52CE6040.30705@gmail.com> <59543203684B2244980D7E4057D5FBC14870E940@DB3EX14MBXC306.europe.corp.microsoft.com> <52D01E85.2010900@gmail.com> <59543203684B2244980D7E4057D5FBC14870EA68@DB3EX14MBXC306.europe.corp.microsoft.com> <52D0272D.30909@gmail.com> Message-ID: <59543203684B2244980D7E4057D5FBC148713516@DB3EX14MBXC306.europe.corp.microsoft.com> Thanks. 
Reading what you write below, I can see two possible motivations. 1. Reduce stack sizes. 2. Eliminate memory moves For (1) do we have any data to show that the non-overlap of areas was giving rise to unacceptably big stacks? For (2) that is indeed clever, but it's pretty serendipitous: it relies on the overlap being just so, so that coincidentally y gets stored in the same place as it was loaded from. I imagine that you don't plan the stack layout to cause that to happen; it's just a coincidence. Do we have any data to show that the coincidence happens with any frequency? Also, as you note, we lose the opportunity for certain sorts of code motion, perhaps increasing register pressure a lot. So there is a downside too. You seldom do things without a very good reason, so I feel I must be missing something. Simon | -----Original Message----- | From: Simon Marlow [mailto:marlowsd at gmail.com] | Sent: 10 January 2014 17:00 | To: Simon Peyton Jones; Herbert Valerio Riedel | Cc: ghc-devs at haskell.org | Subject: Re: High-level Cmm code and stack allocation | | So stack areas are still a great abstraction, the only change is that | they now overlap. It's not just about stack getting too big, I've | copied the notes I made about it below (which I will paste into the code | in due course). The nice property that we can generate well-defined Cmm | without knowing explicit stack offsets is intact. | | What is different is that there used to be an intermediate state where | live variables were saved to abstract stack areas across calls, but Sp | was still not manifest. This intermediate state doesn't exist any more, | the stack layout algorithm does it all in one pass. To me this was far | simpler, and I think it ended up being fewer lines of code than the old | multi-phase stack layout algorithm (it's also much faster). | | Of course you can always change this. My goal was to get code that was | at least as good as the old code generator and in a reasonable amount of | time, and this was the shortest path I could find to that goal. | | Cheers, | Simon | | e.g. if we had | | x = Sp[old + 8] | y = Sp[old + 16] | | Sp[young(L) + 8] = L | Sp[young(L) + 16] = y | Sp[young(L) + 24] = x | call f() returns to L | | if areas semantically do not overlap, then we might optimise this to | | Sp[young(L) + 8] = L | Sp[young(L) + 16] = Sp[old + 8] | Sp[young(L) + 24] = Sp[old + 16] | call f() returns to L | | and now young(L) cannot be allocated at the same place as old, and we | are doomed to use more stack. | | - old+8 conflicts with young(L)+8 | - old+16 conflicts with young(L)+16 and young(L)+8 | | so young(L)+8 == old+24 and we get | | Sp[-8] = L | Sp[-16] = Sp[8] | Sp[-24] = Sp[0] | Sp -= 24 | call f() returns to L | | However, if areas are defined to be "possibly overlapping" in the | semantics, then we cannot commute any loads/stores of old with young(L), | and we will be able to re-use both old+8 and old+16 for young(L). 
| | x = Sp[8] | y = Sp[0] | | Sp[8] = L | Sp[0] = y | Sp[-8] = x | Sp = Sp - 8 | call f() returns to L | | Now, the assignments of y go away, | | x = Sp[8] | Sp[8] = L | Sp[-8] = x | Sp = Sp - 8 | call f() returns to L | | | Conclusion: | | - T[old+N] aliases with U[young(L)+M] for all T, U, L, N and M | - T[old+N] aliases with U[old+M] only if the areas actually overlap | | this ensures that we will not commute any accesses to old with | young(L) or young(L) with young(L'), and the stack allocator will get | the maximum opportunity to overlap these areas, keeping the stack use to | a minimum and possibly avoiding some assignments. | | | | On 10/01/2014 16:35, Simon Peyton Jones wrote: | > Oh, ok. Alas, a good chunk of my model of Cmm has just gone out of | the window. I thought that areas were such a lovely, well-behaved | abstraction. I was thrilled when we came up with them, and I'm very | sorry to see them go. | > | > There are no many things that I no longer understand. I now have no | idea how we save live variables over a call, or how multiple returned | values from one call (returned on the stack) stay right where they are | if they are live across the next call. | > | > What was the actual problem? That functions used too much stack, so | the stack was getting too big? But a one slot area corresponds exactly | to a live variable, so I don't see how the area abstraction could | possibly increase stack size. And is stack size a crucial issue anyway? | > | > Apart from anything else, areas would have given a lovely solution to | the problem this thread started with! | > | > I guess we can talk about this when you next visit? But some | documentation would be welcome. | > | > Simon | > | > | -----Original Message----- | > | From: Simon Marlow [mailto:marlowsd at gmail.com] | > | Sent: 10 January 2014 16:24 | > | To: Simon Peyton Jones; Herbert Valerio Riedel | > | Cc: ghc-devs at haskell.org | > | Subject: Re: High-level Cmm code and stack allocation | > | | > | There are no one-slot areas any more, I ditched those when I rewrote | > | the stack allocator. There is only ever one live area: either the | > | old area or the young area for a call we are about to make or have | just made. | > | (see the data type: I removed the one-slot areas) | > | | > | I struggled for a long time with this. The problem is that with the | > | semantics of non-overlapping areas, code motion optimisations would | > | tend to increase the stack requirements of the function by | > | overlapping the live ranges of the areas. I concluded that actually | > | what we wanted was areas that really do overlap, and optimisations | > | that respect that, so that we get more efficient stack usage. | > | | > | Cheers, | > | Simon | > | | > | On 10/01/2014 15:22, Simon Peyton Jones wrote: | > | > That documentation would be good, yes! I don't know what it means | > | > to | > | say "we don't really have a general concept of areas any more". We | > | did before, and I didn't know that it had gone away. Urk! We can | > | have lots of live areas, notably the old area (for the current | > | call/return parameters, the call area for a call we are preparing, | > | and the one-slot areas for variables we are saving on the stack. | > | > | > | > Here's he current story | > | > https://ghc.haskell.org/trac/ghc/wiki/Commentary/Compiler/StackAre | > | > as | > | > | > | > I agree that we have no concrete syntax for talking about areas, | > | > but | > | that is something we could fix. 
But I'm worried that they may not | > | mean what they used to mean. | > | > | > | > Simon | > | > | > | > | -----Original Message----- | > | > | From: Simon Marlow [mailto:marlowsd at gmail.com] | > | > | Sent: 09 January 2014 08:39 | > | > | To: Simon Peyton Jones; Herbert Valerio Riedel | > | > | Cc: ghc-devs at haskell.org | > | > | Subject: Re: High-level Cmm code and stack allocation | > | > | | > | > | On 08/01/2014 10:07, Simon Peyton Jones wrote: | > | > | > | > Can't we just allocate a Cmm "area"? The address of an | > | > | > | > area is a | > | > | > | perfectly well-defined Cmm value. | > | > | > | > | > | > What about this idea? | > | > | | > | > | We don't really have a general concept of areas (any more), and | > | > | areas aren't exposed in the concrete Cmm syntax at all. The | > | > | current semantics is that areas may overlap with each other, so | > | > | there should only be one active area at any point. I found that | > | > | this was important to ensure that we could generate good code | > | > | from the stack layout algorithm, otherwise it had to make | > | > | pessimistic assumptions | > | and use too much stack. | > | > | | > | > | You're going to ask me where this is documented, and I think I | > | > | have to admit to slacking off, sorry :-) We did discuss it at | > | > | the time, and I made copious notes, but I didn't transfer those | to the code. | > | > | I'll add a Note. | > | > | | > | > | Cheers, | > | > | Simon | > | > | | > | > | | > | > | > Simon | > | > | > | > | > | > | -----Original Message----- | > | > | > | From: Simon Marlow [mailto:marlowsd at gmail.com] | > | > | > | Sent: 08 January 2014 09:26 | > | > | > | To: Simon Peyton Jones; Herbert Valerio Riedel | > | > | > | Cc: ghc-devs at haskell.org | > | > | > | Subject: Re: High-level Cmm code and stack allocation | > | > | > | | > | > | > | On 07/01/14 22:53, Simon Peyton Jones wrote: | > | > | > | > | Yes, this is technically wrong but luckily works. I'd | > | > | > | > | very much like to have a better solution, preferably one | > | > | > | > | that doesn't add any extra overhead. | > | > | > | > | > | > | > | > | __decodeFloat_Int is a C function, so it will not touch | > | > | > | > | the Haskell stack. | > | > | > | > | > | > | > | > This all seems terribly fragile to me. At least it ought | > | > | > | > to be | > | > | > | surrounded with massive comments pointing out how terribly | > | > | > | fragile it is, breaking all the rules that we carefully | > | > | > | document | > | elsewhere. | > | > | > | > | > | > | > | > Can't we just allocate a Cmm "area"? The address of an | > | > | > | > area is a | > | > | > | perfectly well-defined Cmm value. | > | > | > | | > | > | > | It is fragile, yes. We can't use static memory because it | > | > | > | needs to be thread-local. This particular hack has gone | > | > | > | through several iterations over the years: first we had | > | > | > | static memory, which broke when we did the parallel runtime, | > | > | > | then we had special storage in the Capability, which we gave | > | > | > | up when GMP was split out into a separate library, because | > | > | > | it didn't seem right to have magic fields in the Capability | for one library. | > | > | > | | > | > | > | I'm looking into whether we can do temporary allocation on | > | > | > | the heap for this instead. 
| > | > | > | | > | > | > | Cheers, | > | > | > | Simon | > | > | > | | > | > | > | | > | > | > | > Simon | > | > | > | > | > | > | > | > | -----Original Message----- | > | > | > | > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On | > | > | > | > | Behalf Of Simon Marlow | > | > | > | > | Sent: 07 January 2014 16:05 | > | > | > | > | To: Herbert Valerio Riedel; ghc-devs at haskell.org | > | > | > | > | Subject: Re: High-level Cmm code and stack allocation | > | > | > | > | | > | > | > | > | On 04/01/2014 23:26, Herbert Valerio Riedel wrote: | > | > | > | > | > Hello, | > | > | > | > | > | > | > | > | > | > According to Note [Syntax of .cmm files], | > | > | > | > | > | > | > | > | > | > | There are two ways to write .cmm code: | > | > | > | > | > | | > | > | > | > | > | (1) High-level Cmm code delegates the stack | > | > | > | > | > | handling to GHC, | > | > | > | and | > | > | > | > | > | never explicitly mentions Sp or registers. | > | > | > | > | > | | > | > | > | > | > | (2) Low-level Cmm manages the stack itself, and | > | > | > | > | > | must know | > | > | about | > | > | > | > | > | calling conventions. | > | > | > | > | > | | > | > | > | > | > | Whether you want high-level or low-level Cmm is | > | > | > | > | > | indicated by the presence of an argument list on a | > | procedure. | > | > | > | > | > | > | > | > | > | > However, while working on integer-gmp I've been | > | > | > | > | > noticing in integer-gmp/cbits/gmp-wrappers.cmm that | > | > | > | > | > even though all Cmm | > | > | > | > | procedures | > | > | > | > | > have been converted to high-level Cmm, they still | > | > | > | > | > reference the | > | > | > | 'Sp' | > | > | > | > | > register, e.g. | > | > | > | > | > | > | > | > | > | > | > | > | > | > | > #define GMP_TAKE1_RET1(name,mp_fun) \ | > | > | > | > | > name (W_ ws1, P_ d1) \ | > | > | > | > | > { \ | > | > | > | > | > W_ mp_tmp1; \ | > | > | > | > | > W_ mp_result1; \ | > | > | > | > | > \ | > | > | > | > | > again: \ | > | > | > | > | > STK_CHK_GEN_N (2 * SIZEOF_MP_INT); \ | > | > | > | > | > MAYBE_GC(again); \ | > | > | > | > | > \ | > | > | > | > | > mp_tmp1 = Sp - 1 * SIZEOF_MP_INT; \ | > | > | > | > | > mp_result1 = Sp - 2 * SIZEOF_MP_INT; \ | > | > | > | > | > ... \ | > | > | > | > | > | > | > | > | > | > | > | > | > | > | > So is this valid high-level Cmm code? What's the | > | > | > | > | > proper way to | > | > | > | > | allocate | > | > | > | > | > Stack (and/or Heap) memory from high-level Cmm code? | > | > | > | > | | > | > | > | > | Yes, this is technically wrong but luckily works. I'd | > | > | > | > | very much like to have a better solution, preferably one | > | > | > | > | that doesn't add any extra overhead. | > | > | > | > | | > | > | > | > | The problem here is that we need to allocate a couple of | > | > | > | > | temporary words and take their address; that's an | > | > | > | > | unusual thing to do in Cmm, so it only occurs in a few | > | > | > | > | places (mainly | > | > | interacting with gmp). | > | > | > | > | Usually if you want some temporary storage you can use | > | > | > | > | local variables or some heap-allocated memory. 
| > | > | > | > | | > | > | > | > | Cheers, | > | > | > | > | Simon | > | > | > | > | _______________________________________________ | > | > | > | > | ghc-devs mailing list | > | > | > | > | ghc-devs at haskell.org | > | > | > | > | http://www.haskell.org/mailman/listinfo/ghc-devs | > | > | > | > | > | > | > | > | > | > From carter.schonwald at gmail.com Mon Jan 13 15:58:41 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Mon, 13 Jan 2014 10:58:41 -0500 Subject: Folding constants for floats In-Reply-To: References: Message-ID: This is actually a bit more subtle than you'd think. Are those constants precise and exact? (There's certainly floating point code that exploits the cancellations in the floating point model) There's many floating point computations that can't be done with exact rational operations. There's also certain aspects that are target dependent like operations having 80bit vs 64bit precision. (Ie using the old intel fp registers vs sse2 and newer) What's the ticket you're working on? Please be very cautious with floating point, any changes to the meaning that aren't communicated by the programs author could leave a haskeller numerical analyst scratching their head. For example, when doing these floating point computations, what rounding modes will you use? On Monday, January 13, 2014, Kyle Van Berendonck wrote: > Hi, > > I'm cutting my teeth on some constant folding for floats in the cmm. > > I have a question regarding the ticket I'm tackling: > > Should floats be folded with infinite precision (and later truncated to > the platform float size) -- most useful/accurate, or folded with the > platform precision, i.e. double, losing accuracy but keeping consistent > behaviour with -O0 -- most "correct"? > > I would prefer the first case because it's *much* easier to implement than > the second, and it'll probably rot less. > > Regards. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Mon Jan 13 16:27:13 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Mon, 13 Jan 2014 11:27:13 -0500 Subject: Folding constants for floats In-Reply-To: References: Message-ID: Oh I see the ticket. Are you focusing on adding hex support to Double# and Float# ? That would be splendid. We currently don have a decent way of writing nan, and the infinities. That would be splendid. On Monday, January 13, 2014, Carter Schonwald wrote: > This is actually a bit more subtle than you'd think. Are those constants > precise and exact? (There's certainly floating point code that exploits > the cancellations in the floating point model) There's many floating point > computations that can't be done with exact rational operations. There's > also certain aspects that are target dependent like operations having 80bit > vs 64bit precision. (Ie using the old intel fp registers vs sse2 and newer) > > What's the ticket you're working on? > > > Please be very cautious with floating point, any changes to the meaning > that aren't communicated by the programs author could leave a haskeller > numerical analyst scratching their head. For example, when doing these > floating point computations, what rounding modes will you use? > > On Monday, January 13, 2014, Kyle Van Berendonck wrote: > >> Hi, >> >> I'm cutting my teeth on some constant folding for floats in the cmm. 
>> >> I have a question regarding the ticket I'm tackling: >> >> Should floats be folded with infinite precision (and later truncated to >> the platform float size) -- most useful/accurate, or folded with the >> platform precision, i.e. double, losing accuracy but keeping consistent >> behaviour with -O0 -- most "correct"? >> >> I would prefer the first case because it's *much* easier to implement >> than the second, and it'll probably rot less. >> >> Regards. >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Mon Jan 13 16:27:54 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 13 Jan 2014 16:27:54 +0000 Subject: Extending fold/build fusion In-Reply-To: References: Message-ID: <59543203684B2244980D7E4057D5FBC148713626@DB3EX14MBXC306.europe.corp.microsoft.com> I've hesitated to reply, because I have lots of questions but no time to investigate in. I'm looking at your wiki page https://github.com/takano-akio/ww-fusion * Does your proposed new fold' run faster than the old one? You give no data. * The new foldl' is not a "good consumer" in the foldr/build sense, which a big loss. What if you say fold' k z [1..n]; you want the intermediate list to vanish. * My brain is too small to truly understand your idea. But since foldrW is non-recursive, what happens if you inline foldrW into fold', and then simplify? I'm betting you get something pretty similar to the old foldl'. Try in by hand, and with GHC and let's see the final optimised code. * Under "motivation" you say "GHC generates something essentially like..." and then give some code. Now, if GHC would only eta-expand 'go' with a second argument, you'd get brilliant code. And maybe that would help lots of programs, not just this one. It's a slight delicate transformation but I've often thought we should try it; c.f #7994, #5809 Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Akio Takano Sent: 09 January 2014 13:25 To: ghc-devs Subject: Re: Extending fold/build fusion Any input on this is appreciated. In particular, I'd like to know: if I implement the idea as a patch to the base package, is there a chance it is considered for merge? -- Takano Akio On Fri, Jan 3, 2014 at 11:20 PM, Akio Takano > wrote: Hi, I have been thinking about how foldl' can be turned into a good consumer, and I came up with something that I thought would work. So I'd like to ask for opinions from the ghc devs: if this idea looks good, if it is a known bad idea, if there is a better way to do it, etc. The main idea is to have an extended version of foldr: -- | A mapping between @a@ and @b at . data Wrap a b = Wrap (a -> b) (b -> a) foldrW :: (forall e. Wrap (f e) (e -> b -> b)) -> (a -> b -> b) -> b -> [a] -> b foldrW (Wrap wrap unwrap) f z0 list0 = wrap go list0 z0 where go = unwrap $ \list z' -> case list of [] -> z' x:xs -> f x $ wrap go xs z' This allows the user to apply an arbitrary "worker-wrapper" transformation to the loop. Using this, foldl' can be defined as newtype Simple b e = Simple { runSimple :: e -> b -> b } foldl' :: (b -> a -> b) -> b -> [a] -> b foldl' f initial xs = foldrW (Wrap wrap unwrap) g id xs initial where wrap (Simple s) e k a = k $ s e a unwrap u = Simple $ \e -> u e id g x next acc = next $! f acc x The wrap and unwrap functions here ensure that foldl' gets compiled into a loop that returns a value of 'b', rather than a function 'b -> b', effectively un-CPS-transforming the loop. 
I put preliminary code and some more explanation on Github: https://github.com/takano-akio/ww-fusion Thank you, Takano Akio -------------- next part -------------- An HTML attachment was scrubbed... URL: From karel.gardas at centrum.cz Mon Jan 13 17:13:55 2014 From: karel.gardas at centrum.cz (Karel Gardas) Date: Mon, 13 Jan 2014 18:13:55 +0100 Subject: [PATCH] platformFromTriple: fix to recognize Solaris triple (i386-pc-solaris2.11) In-Reply-To: <8738l29w29.fsf@gmail.com> References: <1388942048-16010-1-git-send-email-karel.gardas@centrum.cz> <8738l29w29.fsf@gmail.com> Message-ID: <52D41ED3.6070802@centrum.cz> Hello Herbert, the fix in a little bit extended version is already up-stream: https://github.com/haskell/cabal/commit/98a3feb23364897779dd665758949555a84dc5b8 What is the process to ask ghc developers to update from upstream? Thanks! Karel On 01/ 5/14 06:29 PM, Herbert Valerio Riedel wrote: > Hello Karel, > > Please submit this fix at the upstream Cabal project > at https://github.com/haskell/cabal/issues > > When it's merged upstream we can sync up GHC's in-tree copy of the Cabal > library to Cabal upstream. > > Thanks, > hvr > > On 2014-01-05 at 18:14:08 +0100, Karel Gardas wrote: >> --- >> Cabal/Distribution/System.hs | 2 +- >> 1 files changed, 1 insertions(+), 1 deletions(-) >> >> diff --git a/Cabal/Distribution/System.hs b/Cabal/Distribution/System.hs >> index a18e491..4fc76f6 100644 >> --- a/Cabal/Distribution/System.hs >> +++ b/Cabal/Distribution/System.hs >> @@ -89,7 +89,7 @@ osAliases Compat Windows = ["mingw32", "win32"] >> osAliases _ OSX = ["darwin"] >> osAliases _ IOS = ["ios"] >> osAliases Permissive FreeBSD = ["kfreebsdgnu"] >> -osAliases Permissive Solaris = ["solaris2"] >> +osAliases _ Solaris = ["solaris2"] >> osAliases _ _ = [] >> >> instance Text OS where > From marlowsd at gmail.com Mon Jan 13 17:13:58 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Mon, 13 Jan 2014 17:13:58 +0000 Subject: High-level Cmm code and stack allocation In-Reply-To: <59543203684B2244980D7E4057D5FBC148713516@DB3EX14MBXC306.europe.corp.microsoft.com> References: <87fvp3coqr.fsf@gnu.org> <52CC25A4.8060004@gmail.com> <59543203684B2244980D7E4057D5FBC148708D03@DB3EX14MBXC306.europe.corp.microsoft.com> <52CD19AD.7030503@gmail.com> <59543203684B2244980D7E4057D5FBC148709591@DB3EX14MBXC306.europe.corp.microsoft.com> <52CE6040.30705@gmail.com> <59543203684B2244980D7E4057D5FBC14870E940@DB3EX14MBXC306.europe.corp.microsoft.com> <52D01E85.2010900@gmail.com> <59543203684B2244980D7E4057D5FBC14870EA68@DB3EX14MBXC306.europe.corp.microsoft.com> <52D0272D.30909@gmail.com> <59543203684B2244980D7E4057D5FBC148713516@DB3EX14MBXC306.europe.corp.microsoft.com> Message-ID: <52D41ED6.2010507@gmail.com> Using more stack generally (but not always) implies extra memory traffic. I noticed it happening a lot, but I didn't make measurements - we never had a way to generate code with just this one thing changed, because the new code generator had lots of issues with bad code, and this was just one. We can think of stack allocation as a black box: it takes Cmm in which (a) variables live across calls and (b) stack references are to [Old+n] or [Young+n], and returns Cmm in which (a) variables do not live across calls, and (b) all stack references are explicit offsets from Sp. The internals of this box are what has changed. 
Most users of Cmm don't need to care, because you can write optimisations on both the pre-stack-allocated Cmm and the post-stack-allocated Cmm without knowing anything about how stack allocation works. Indeed CmmSink (now) works on both forms. The stack area idea exposed some of the internals of this box; I don't think that's necessarily a good thing. There was *another* form of Cmm, in which (a) variables do not live across calls, and (b) stack references are to [Old+n], [Young+n] or [Sp(var)]. There was a (beautifully simple) spill pass using Hoopl that inserted spills at the definition site; unfortunately to generate good code you often have to move the spills somewhere else. And that's really hard, because code motion interacts in complex ways with stack layout: making a bad code motion decision can increase your stack requirements. This is a pretty good summary of what I was finding difficult here. It was not possible to generate good code without doing some optimisation on this intermediate stage, yet by doing stack allocation in a different way it was much easier to get good code. So the new stack allocator just walks through the code spilling, reloading, and allocating stack frames as it goes and making intelligent decisions about not spilling things if they're already on the stack. This does a really good job, and it was easy to add a couple of important special cases for common things. There's plenty of room to do something better. However, what we have now generates good code from the kind of things that the code generator generates (since that's what I tuned it for, by peering at lots of Cmm and tweaking things), so any improvements won't see much benefit for typical Haskell code. I have some more docs for the stack layout code that I'll push shortly. Cheers, Simon On 13/01/2014 15:20, Simon Peyton Jones wrote: > Thanks. Reading what you write below, I can see two possible motivations. > > 1. Reduce stack sizes. > 2. Eliminate memory moves > > For (1) do we have any data to show that the non-overlap of areas was giving rise to unacceptably big stacks? > > For (2) that is indeed clever, but it's pretty serendipitous: it relies on the overlap being just so, so that coincidentally y gets stored in the same place as it was loaded from. I imagine that you don't plan the stack layout to cause that to happen; it's just a coincidence. Do we have any data to show that the coincidence happens with any frequency? > > Also, as you note, we lose the opportunity for certain sorts of code motion, perhaps increasing register pressure a lot. So there is a downside too. > > You seldom do things without a very good reason, so I feel I must be missing something. > > Simon > > | -----Original Message----- > | From: Simon Marlow [mailto:marlowsd at gmail.com] > | Sent: 10 January 2014 17:00 > | To: Simon Peyton Jones; Herbert Valerio Riedel > | Cc: ghc-devs at haskell.org > | Subject: Re: High-level Cmm code and stack allocation > | > | So stack areas are still a great abstraction, the only change is that > | they now overlap. It's not just about stack getting too big, I've > | copied the notes I made about it below (which I will paste into the code > | in due course). The nice property that we can generate well-defined Cmm > | without knowing explicit stack offsets is intact. > | > | What is different is that there used to be an intermediate state where > | live variables were saved to abstract stack areas across calls, but Sp > | was still not manifest. 
This intermediate state doesn't exist any more, > | the stack layout algorithm does it all in one pass. To me this was far > | simpler, and I think it ended up being fewer lines of code than the old > | multi-phase stack layout algorithm (it's also much faster). > | > | Of course you can always change this. My goal was to get code that was > | at least as good as the old code generator and in a reasonable amount of > | time, and this was the shortest path I could find to that goal. > | > | Cheers, > | Simon > | > | e.g. if we had > | > | x = Sp[old + 8] > | y = Sp[old + 16] > | > | Sp[young(L) + 8] = L > | Sp[young(L) + 16] = y > | Sp[young(L) + 24] = x > | call f() returns to L > | > | if areas semantically do not overlap, then we might optimise this to > | > | Sp[young(L) + 8] = L > | Sp[young(L) + 16] = Sp[old + 8] > | Sp[young(L) + 24] = Sp[old + 16] > | call f() returns to L > | > | and now young(L) cannot be allocated at the same place as old, and we > | are doomed to use more stack. > | > | - old+8 conflicts with young(L)+8 > | - old+16 conflicts with young(L)+16 and young(L)+8 > | > | so young(L)+8 == old+24 and we get > | > | Sp[-8] = L > | Sp[-16] = Sp[8] > | Sp[-24] = Sp[0] > | Sp -= 24 > | call f() returns to L > | > | However, if areas are defined to be "possibly overlapping" in the > | semantics, then we cannot commute any loads/stores of old with young(L), > | and we will be able to re-use both old+8 and old+16 for young(L). > | > | x = Sp[8] > | y = Sp[0] > | > | Sp[8] = L > | Sp[0] = y > | Sp[-8] = x > | Sp = Sp - 8 > | call f() returns to L > | > | Now, the assignments of y go away, > | > | x = Sp[8] > | Sp[8] = L > | Sp[-8] = x > | Sp = Sp - 8 > | call f() returns to L > | > | > | Conclusion: > | > | - T[old+N] aliases with U[young(L)+M] for all T, U, L, N and M > | - T[old+N] aliases with U[old+M] only if the areas actually overlap > | > | this ensures that we will not commute any accesses to old with > | young(L) or young(L) with young(L'), and the stack allocator will get > | the maximum opportunity to overlap these areas, keeping the stack use to > | a minimum and possibly avoiding some assignments. > | > | > | > | On 10/01/2014 16:35, Simon Peyton Jones wrote: > | > Oh, ok. Alas, a good chunk of my model of Cmm has just gone out of > | the window. I thought that areas were such a lovely, well-behaved > | abstraction. I was thrilled when we came up with them, and I'm very > | sorry to see them go. > | > > | > There are no many things that I no longer understand. I now have no > | idea how we save live variables over a call, or how multiple returned > | values from one call (returned on the stack) stay right where they are > | if they are live across the next call. > | > > | > What was the actual problem? That functions used too much stack, so > | the stack was getting too big? But a one slot area corresponds exactly > | to a live variable, so I don't see how the area abstraction could > | possibly increase stack size. And is stack size a crucial issue anyway? > | > > | > Apart from anything else, areas would have given a lovely solution to > | the problem this thread started with! > | > > | > I guess we can talk about this when you next visit? But some > | documentation would be welcome. 
> | > > | > Simon > | > > | > | -----Original Message----- > | > | From: Simon Marlow [mailto:marlowsd at gmail.com] > | > | Sent: 10 January 2014 16:24 > | > | To: Simon Peyton Jones; Herbert Valerio Riedel > | > | Cc: ghc-devs at haskell.org > | > | Subject: Re: High-level Cmm code and stack allocation > | > | > | > | There are no one-slot areas any more, I ditched those when I rewrote > | > | the stack allocator. There is only ever one live area: either the > | > | old area or the young area for a call we are about to make or have > | just made. > | > | (see the data type: I removed the one-slot areas) > | > | > | > | I struggled for a long time with this. The problem is that with the > | > | semantics of non-overlapping areas, code motion optimisations would > | > | tend to increase the stack requirements of the function by > | > | overlapping the live ranges of the areas. I concluded that actually > | > | what we wanted was areas that really do overlap, and optimisations > | > | that respect that, so that we get more efficient stack usage. > | > | > | > | Cheers, > | > | Simon > | > | > | > | On 10/01/2014 15:22, Simon Peyton Jones wrote: > | > | > That documentation would be good, yes! I don't know what it means > | > | > to > | > | say "we don't really have a general concept of areas any more". We > | > | did before, and I didn't know that it had gone away. Urk! We can > | > | have lots of live areas, notably the old area (for the current > | > | call/return parameters, the call area for a call we are preparing, > | > | and the one-slot areas for variables we are saving on the stack. > | > | > > | > | > Here's he current story > | > | > https://ghc.haskell.org/trac/ghc/wiki/Commentary/Compiler/StackAre > | > | > as > | > | > > | > | > I agree that we have no concrete syntax for talking about areas, > | > | > but > | > | that is something we could fix. But I'm worried that they may not > | > | mean what they used to mean. > | > | > > | > | > Simon > | > | > > | > | > | -----Original Message----- > | > | > | From: Simon Marlow [mailto:marlowsd at gmail.com] > | > | > | Sent: 09 January 2014 08:39 > | > | > | To: Simon Peyton Jones; Herbert Valerio Riedel > | > | > | Cc: ghc-devs at haskell.org > | > | > | Subject: Re: High-level Cmm code and stack allocation > | > | > | > | > | > | On 08/01/2014 10:07, Simon Peyton Jones wrote: > | > | > | > | > Can't we just allocate a Cmm "area"? The address of an > | > | > | > | > area is a > | > | > | > | perfectly well-defined Cmm value. > | > | > | > > | > | > | > What about this idea? > | > | > | > | > | > | We don't really have a general concept of areas (any more), and > | > | > | areas aren't exposed in the concrete Cmm syntax at all. The > | > | > | current semantics is that areas may overlap with each other, so > | > | > | there should only be one active area at any point. I found that > | > | > | this was important to ensure that we could generate good code > | > | > | from the stack layout algorithm, otherwise it had to make > | > | > | pessimistic assumptions > | > | and use too much stack. > | > | > | > | > | > | You're going to ask me where this is documented, and I think I > | > | > | have to admit to slacking off, sorry :-) We did discuss it at > | > | > | the time, and I made copious notes, but I didn't transfer those > | to the code. > | > | > | I'll add a Note. 
> | > | > | > | > | > | Cheers, > | > | > | Simon > | > | > | > | > | > | > | > | > | > Simon > | > | > | > > | > | > | > | -----Original Message----- > | > | > | > | From: Simon Marlow [mailto:marlowsd at gmail.com] > | > | > | > | Sent: 08 January 2014 09:26 > | > | > | > | To: Simon Peyton Jones; Herbert Valerio Riedel > | > | > | > | Cc: ghc-devs at haskell.org > | > | > | > | Subject: Re: High-level Cmm code and stack allocation > | > | > | > | > | > | > | > | On 07/01/14 22:53, Simon Peyton Jones wrote: > | > | > | > | > | Yes, this is technically wrong but luckily works. I'd > | > | > | > | > | very much like to have a better solution, preferably one > | > | > | > | > | that doesn't add any extra overhead. > | > | > | > | > > | > | > | > | > | __decodeFloat_Int is a C function, so it will not touch > | > | > | > | > | the Haskell stack. > | > | > | > | > > | > | > | > | > This all seems terribly fragile to me. At least it ought > | > | > | > | > to be > | > | > | > | surrounded with massive comments pointing out how terribly > | > | > | > | fragile it is, breaking all the rules that we carefully > | > | > | > | document > | > | elsewhere. > | > | > | > | > > | > | > | > | > Can't we just allocate a Cmm "area"? The address of an > | > | > | > | > area is a > | > | > | > | perfectly well-defined Cmm value. > | > | > | > | > | > | > | > | It is fragile, yes. We can't use static memory because it > | > | > | > | needs to be thread-local. This particular hack has gone > | > | > | > | through several iterations over the years: first we had > | > | > | > | static memory, which broke when we did the parallel runtime, > | > | > | > | then we had special storage in the Capability, which we gave > | > | > | > | up when GMP was split out into a separate library, because > | > | > | > | it didn't seem right to have magic fields in the Capability > | for one library. > | > | > | > | > | > | > | > | I'm looking into whether we can do temporary allocation on > | > | > | > | the heap for this instead. > | > | > | > | > | > | > | > | Cheers, > | > | > | > | Simon > | > | > | > | > | > | > | > | > | > | > | > | > Simon > | > | > | > | > > | > | > | > | > | -----Original Message----- > | > | > | > | > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On > | > | > | > | > | Behalf Of Simon Marlow > | > | > | > | > | Sent: 07 January 2014 16:05 > | > | > | > | > | To: Herbert Valerio Riedel; ghc-devs at haskell.org > | > | > | > | > | Subject: Re: High-level Cmm code and stack allocation > | > | > | > | > | > | > | > | > | > | On 04/01/2014 23:26, Herbert Valerio Riedel wrote: > | > | > | > | > | > Hello, > | > | > | > | > | > > | > | > | > | > | > According to Note [Syntax of .cmm files], > | > | > | > | > | > > | > | > | > | > | > | There are two ways to write .cmm code: > | > | > | > | > | > | > | > | > | > | > | > | (1) High-level Cmm code delegates the stack > | > | > | > | > | > | handling to GHC, > | > | > | > | and > | > | > | > | > | > | never explicitly mentions Sp or registers. > | > | > | > | > | > | > | > | > | > | > | > | (2) Low-level Cmm manages the stack itself, and > | > | > | > | > | > | must know > | > | > | about > | > | > | > | > | > | calling conventions. > | > | > | > | > | > | > | > | > | > | > | > | Whether you want high-level or low-level Cmm is > | > | > | > | > | > | indicated by the presence of an argument list on a > | > | procedure. 
> | > | > | > | > | > > | > | > | > | > | > However, while working on integer-gmp I've been > | > | > | > | > | > noticing in integer-gmp/cbits/gmp-wrappers.cmm that > | > | > | > | > | > even though all Cmm > | > | > | > | > | procedures > | > | > | > | > | > have been converted to high-level Cmm, they still > | > | > | > | > | > reference the > | > | > | > | 'Sp' > | > | > | > | > | > register, e.g. > | > | > | > | > | > > | > | > | > | > | > > | > | > | > | > | > #define GMP_TAKE1_RET1(name,mp_fun) \ > | > | > | > | > | > name (W_ ws1, P_ d1) \ > | > | > | > | > | > { \ > | > | > | > | > | > W_ mp_tmp1; \ > | > | > | > | > | > W_ mp_result1; \ > | > | > | > | > | > \ > | > | > | > | > | > again: \ > | > | > | > | > | > STK_CHK_GEN_N (2 * SIZEOF_MP_INT); \ > | > | > | > | > | > MAYBE_GC(again); \ > | > | > | > | > | > \ > | > | > | > | > | > mp_tmp1 = Sp - 1 * SIZEOF_MP_INT; \ > | > | > | > | > | > mp_result1 = Sp - 2 * SIZEOF_MP_INT; \ > | > | > | > | > | > ... \ > | > | > | > | > | > > | > | > | > | > | > > | > | > | > | > | > So is this valid high-level Cmm code? What's the > | > | > | > | > | > proper way to > | > | > | > | > | allocate > | > | > | > | > | > Stack (and/or Heap) memory from high-level Cmm code? > | > | > | > | > | > | > | > | > | > | Yes, this is technically wrong but luckily works. I'd > | > | > | > | > | very much like to have a better solution, preferably one > | > | > | > | > | that doesn't add any extra overhead. > | > | > | > | > | > | > | > | > | > | The problem here is that we need to allocate a couple of > | > | > | > | > | temporary words and take their address; that's an > | > | > | > | > | unusual thing to do in Cmm, so it only occurs in a few > | > | > | > | > | places (mainly > | > | > | interacting with gmp). > | > | > | > | > | Usually if you want some temporary storage you can use > | > | > | > | > | local variables or some heap-allocated memory. > | > | > | > | > | > | > | > | > | > | Cheers, > | > | > | > | > | Simon > | > | > | > | > | _______________________________________________ > | > | > | > | > | ghc-devs mailing list > | > | > | > | > | ghc-devs at haskell.org > | > | > | > | > | http://www.haskell.org/mailman/listinfo/ghc-devs > | > | > | > | > > | > | > | > > | > | > > | > > From krz.gogolewski at gmail.com Mon Jan 13 18:45:53 2014 From: krz.gogolewski at gmail.com (Krzysztof Gogolewski) Date: Mon, 13 Jan 2014 19:45:53 +0100 Subject: Enable TypeHoles by default? In-Reply-To: <59543203684B2244980D7E4057D5FBC148713021@DB3EX14MBXC306.europe.corp.microsoft.com> References: <52D3B706.7020302@gmail.com> <2E187DB3-FFFC-4E3C-90E2-34607F57B599@cis.upenn.edu> <59543203684B2244980D7E4057D5FBC148713021@DB3EX14MBXC306.europe.corp.microsoft.com> Message-ID: I have re-sent the question to glasgow-haskell-users; to avoid duplication, let's continue the thread there. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From marlowsd at gmail.com Mon Jan 13 20:57:03 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Mon, 13 Jan 2014 20:57:03 +0000 Subject: [commit: packages/integer-gmp] master: Allocate initial 1-limb mpz_t on the Stack and introduce MPZ# type (7bdcadd) In-Reply-To: <20140113132526.74D922406B@ghc.haskell.org> References: <20140113132526.74D922406B@ghc.haskell.org> Message-ID: <52D4531F.1030907@gmail.com> On 13/01/14 13:25, git at git.haskell.org wrote: > Repository : ssh://git at git.haskell.org/integer-gmp > > On branch : master > Link : http://ghc.haskell.org/trac/ghc/changeset/7bdcadda7e884edffb1427f0685493f3a2e5c5fa/integer-gmp > >> --------------------------------------------------------------- > > commit 7bdcadda7e884edffb1427f0685493f3a2e5c5fa > Author: Herbert Valerio Riedel > Date: Thu Jan 9 00:19:31 2014 +0100 > > Allocate initial 1-limb mpz_t on the Stack and introduce MPZ# type > > We now allocate a 1-limb mpz_t on the stack instead of doing a more > expensive heap-allocation (especially if the heap-allocated copy becomes > garbage right away); this addresses #8647. While this is quite cool (turning some J# back into S#), I don't understand why you've done it this way. Couldn't it be done in the Haskell layer rather than modifying the primops? The ByteArray# has already been allocated by GMP, so you don't lose anything by returning it to Haskell and checking the size there. Then all the DUMMY_BYTEARRAY stuff would go away. Cheers, Simon From hvriedel at gmail.com Mon Jan 13 21:49:39 2014 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Mon, 13 Jan 2014 22:49:39 +0100 Subject: [commit: packages/integer-gmp] master: Allocate initial 1-limb mpz_t on the Stack and introduce MPZ# type (7bdcadd) In-Reply-To: <52D4531F.1030907@gmail.com> (Simon Marlow's message of "Mon, 13 Jan 2014 20:57:03 +0000") References: <20140113132526.74D922406B@ghc.haskell.org> <52D4531F.1030907@gmail.com> Message-ID: <87y52jpn64.fsf@gmail.com> On 2014-01-13 at 21:57:03 +0100, Simon Marlow wrote: > On 13/01/14 13:25, git at git.haskell.org wrote: >> Repository : ssh://git at git.haskell.org/integer-gmp >> >> On branch : master >> Link : http://ghc.haskell.org/trac/ghc/changeset/7bdcadda7e884edffb1427f0685493f3a2e5c5fa/integer-gmp >> >>> --------------------------------------------------------------- >> >> commit 7bdcadda7e884edffb1427f0685493f3a2e5c5fa >> Author: Herbert Valerio Riedel >> Date: Thu Jan 9 00:19:31 2014 +0100 >> >> Allocate initial 1-limb mpz_t on the Stack and introduce MPZ# type >> >> We now allocate a 1-limb mpz_t on the stack instead of doing a more >> expensive heap-allocation (especially if the heap-allocated copy becomes >> garbage right away); this addresses #8647. > > While this is quite cool (turning some J# back into S#), I don't > understand why you've done it this way. Couldn't it be done in the > Haskell layer rather than modifying the primops? The ByteArray# has > already been allocated by GMP, so you don't lose anything by returning > it to Haskell and checking the size there. Then all the > DUMMY_BYTEARRAY stuff would go away. Actually there isn't always a ByteArray# allocated; the patch got rid of all mpz_init() calls for the result-mpz_t (which would have allocated 1-limb ByteArray#s); Now instead, the single word-sized limb that would have been heap-allocated via mpz_init() before calling the actual GMP operation, is allocated on the stack instead, and only if the GMP routines need to grow the passed in mpz_t's an actual ByteArray# is allocated. 
That's why I needed a way to return either a single stack-allocated limb (hence the word#), *or* an heap-allocated 'ByteArray#', which lead to the MPZ# 3-tuple. Greetings, hvr From kvanberendonck at gmail.com Mon Jan 13 22:21:45 2014 From: kvanberendonck at gmail.com (Kyle Van Berendonck) Date: Tue, 14 Jan 2014 09:21:45 +1100 Subject: Folding constants for floats In-Reply-To: References: Message-ID: Hi, I'd like to work on the primitives first. They are relatively easy to implement. Here's how I figure it; The internal representation of the floats in the cmm is as a Rational (ratio of Integers), so they have "infinite precision". I can implement all the constant folding by just writing my own operations on these rationals; ie, ** takes the power of the top/bottom and reconstructs a new Rational, log takes the difference between the log of the top/bottom etc. This is all very easy to fold. I can encode errors in the Rational where infinity is >0 %: 0 and NaN is 0 %: 0. Since the size of floating point constants is more of an architecture specific thing, and floats don't wrap around like integers do, it would make more sense (in my opinion) to only reduce the value to the architecture specific precision (or clip it to a NaN or such) in the **final** stage as apposed to trying to emulate the behavior of a double native to the architecture (which is a very hard thing to do, and results in precision errors -- the real question is, do people want precision errors when they write literals in code, or are they really looking for the compiler to do a better job than them at making sure they stay precise?) On Tue, Jan 14, 2014 at 3:27 AM, Carter Schonwald < carter.schonwald at gmail.com> wrote: > Oh I see the ticket. Are you focusing on adding hex support to Double# > and Float# ? That would be splendid. We currently don have a decent way of > writing nan, and the infinities. That would be splendid. > > On Monday, January 13, 2014, Carter Schonwald wrote: > >> This is actually a bit more subtle than you'd think. Are those constants >> precise and exact? (There's certainly floating point code that exploits >> the cancellations in the floating point model) There's many floating point >> computations that can't be done with exact rational operations. There's >> also certain aspects that are target dependent like operations having 80bit >> vs 64bit precision. (Ie using the old intel fp registers vs sse2 and newer) >> >> What's the ticket you're working on? >> >> >> Please be very cautious with floating point, any changes to the meaning >> that aren't communicated by the programs author could leave a haskeller >> numerical analyst scratching their head. For example, when doing these >> floating point computations, what rounding modes will you use? >> >> On Monday, January 13, 2014, Kyle Van Berendonck wrote: >> >>> Hi, >>> >>> I'm cutting my teeth on some constant folding for floats in the cmm. >>> >>> I have a question regarding the ticket I'm tackling: >>> >>> Should floats be folded with infinite precision (and later truncated to >>> the platform float size) -- most useful/accurate, or folded with the >>> platform precision, i.e. double, losing accuracy but keeping consistent >>> behaviour with -O0 -- most "correct"? >>> >>> I would prefer the first case because it's *much* easier to implement >>> than the second, and it'll probably rot less. >>> >>> Regards. >>> >> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From hvriedel at gmail.com Mon Jan 13 22:38:59 2014 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Mon, 13 Jan 2014 23:38:59 +0100 Subject: [PATCH] platformFromTriple: fix to recognize Solaris triple (i386-pc-solaris2.11) In-Reply-To: <52D41ED3.6070802@centrum.cz> (Karel Gardas's message of "Mon, 13 Jan 2014 18:13:55 +0100") References: <1388942048-16010-1-git-send-email-karel.gardas@centrum.cz> <8738l29w29.fsf@gmail.com> <52D41ED3.6070802@centrum.cz> Message-ID: <87txd7pkvw.fsf@gmail.com> Hello Karel, On 2014-01-13 at 18:13:55 +0100, Karel Gardas wrote: > Hello Herbert, > > the fix in a little bit extended version is already up-stream: > https://github.com/haskell/cabal/commit/98a3feb23364897779dd665758949555a84dc5b8 well, it's a first step that's in the master branch, but GHC HEAD currently tracks the stable Cabal branch (currently this is "1.18"[1]) as we want GHC to ship with a proper release of the Cabal lib. > What is the process to ask ghc developers to update from upstream? You'll need to persuade the Cabal devs to make the fix above available in a stable branch; if the fix makes it into a Cabal release in time for the final GHC 7.8 release, it will most likely be part of 7.8. However, I don't know if there's a concrete plan for a Cabal-1.18.1.3 release currently. [1]: https://github.com/haskell/cabal/commits/1.18 From ml at isaac.cedarswampstudios.org Mon Jan 13 23:02:27 2014 From: ml at isaac.cedarswampstudios.org (Isaac Dupree) Date: Mon, 13 Jan 2014 18:02:27 -0500 Subject: Folding constants for floats In-Reply-To: References: Message-ID: <52D47083.5040809@isaac.cedarswampstudios.org> On 01/13/2014 05:21 PM, Kyle Van Berendonck wrote: > Hi, > > I'd like to work on the primitives first. They are relatively easy to > implement. Here's how I figure it; > > The internal representation of the floats in the cmm is as a Rational > (ratio of Integers), so they have "infinite precision". I can implement > all the constant folding by just writing my own operations on these > rationals; ie, ** takes the power of the top/bottom and reconstructs a > new Rational, log takes the difference between the log of the top/bottom > etc. This is all very easy to fold. What about sin(), etc? I don't think identities will get you out of computing at least some irrational numbers. (Maybe I'm missing your point?) > Since the size of floating point constants is more of an > architecture specific thing IEEE 754 is becoming more and more ubiquitous. As far as I know, Haskell Float is always IEEE 754 32-bit binary floating point and Double is IEEE 754 64-bit binary floating point, on machines that support this (including x86_64, ARM, and sometimes x86). Let's not undermine this progress. > and floats don't wrap around like integers > do, it would make more sense (in my opinion) to only reduce the value to > the architecture specific precision (or clip it to a NaN or such) in the > **final** stage as apposed to trying to emulate the behavior of a double > native to the architecture (which is a very hard thing to do, and > results in precision errors GCC uses MPFR to exactly emulate the target machine's rounding behaviour. > the real question is, do people want > precision errors when they write literals in code, Yes. Look at GCC. If you don't pass -ffast-math (which says you don't care if floating-point rounding behaves as specified), you get the same floating-point behaviour with and without optimizations. 
This is IMHO even more important for Haskell where we tend to believe in deterministic pure code. -Isaac From svenpanne at gmail.com Tue Jan 14 08:03:34 2014 From: svenpanne at gmail.com (Sven Panne) Date: Tue, 14 Jan 2014 09:03:34 +0100 Subject: Folding constants for floats In-Reply-To: <52D47083.5040809@isaac.cedarswampstudios.org> References: <52D47083.5040809@isaac.cedarswampstudios.org> Message-ID: ... and let's not forget about such fun stuff as IEEE's -0, e.g.: 1/(-1 * 0) => -Infinity 1/(0 + (-1 * 0)) => Infinity If we take the standpoint that Haskell's Float and Double types correspond to IEEE 754 floating point numbers, there is almost no mathematical equivalence which holds, and consequently almost all folding or other optimizations will be wrong. One can do all these things behind a flag (trading IEEE compatibility for better code), but this shouldn't be done by default IMHO. From carter.schonwald at gmail.com Tue Jan 14 08:51:01 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Tue, 14 Jan 2014 03:51:01 -0500 Subject: Folding constants for floats In-Reply-To: References: <52D47083.5040809@isaac.cedarswampstudios.org> Message-ID: maybe so, but having a semantics by default is huge, and honestly i'm not super interested in optimizations that merely change one infinity for another. What would the alternative semantics be? Whatever it is, how will we communicate it to our users? GHC's generally been (by accidenta) IEEE compliant, changing that will possibly break someones code! (perhaps). Also who's going to specify this alternative semantics and educate everyone about it? the thing is floating point doesn't act like most other models of numbers, they have a very very non linear grid of precision across as HUGE dynamic range. Pretending theyre something they're not is the root of most problems with them. either way, its a complex problem that nees to be carefully sorted out On Tue, Jan 14, 2014 at 3:03 AM, Sven Panne wrote: > ... and let's not forget about such fun stuff as IEEE's -0, e.g.: > > 1/(-1 * 0) => -Infinity > 1/(0 + (-1 * 0)) => Infinity > > If we take the standpoint that Haskell's Float and Double types > correspond to IEEE 754 floating point numbers, there is almost no > mathematical equivalence which holds, and consequently almost all > folding or other optimizations will be wrong. One can do all these > things behind a flag (trading IEEE compatibility for better code), but > this shouldn't be done by default IMHO. > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail at joachim-breitner.de Tue Jan 14 08:57:30 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Tue, 14 Jan 2014 08:57:30 +0000 Subject: Quick code style question: Wild card binders Message-ID: <1389689850.2465.4.camel@kirk> Hi, both work, so it is a matter of style, and I?m not sure which one is better style: If I generate a Case where the case binder is not used, should I * use the wildcard binder (mkWildCase), to be explicit about the fact fact that the wildcard binder is unused, or should I * generate a new Unique and a new Id nevertheless, because wildCard is bad? Thanks, Joachim -- Joachim ?nomeata? Breitner mail at joachim-breitner.de ? http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de ? 
GPG-Key: 0x4743206C Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 181 bytes Desc: This is a digitally signed message part URL: From tkn.akio at gmail.com Tue Jan 14 09:22:00 2014 From: tkn.akio at gmail.com (Akio Takano) Date: Tue, 14 Jan 2014 18:22:00 +0900 Subject: Extending fold/build fusion In-Reply-To: <59543203684B2244980D7E4057D5FBC148713626@DB3EX14MBXC306.europe.corp.microsoft.com> References: <59543203684B2244980D7E4057D5FBC148713626@DB3EX14MBXC306.europe.corp.microsoft.com> Message-ID: Thank you for looking at this! On Tue, Jan 14, 2014 at 1:27 AM, Simon Peyton Jones wrote: > I?ve hesitated to reply, because I have lots of questions but no time to > investigate in. I?m looking at your wiki page > https://github.com/takano-akio/ww-fusion > > > > ? Does your proposed new fold? run faster than the old one? You > give no data. > No, it runs just equally fast as the old one. At the Core level they are the same. I ran some criterion benchmarks: source: https://github.com/takano-akio/ww-fusion/blob/master/benchmarks.hs results: http://htmlpreview.github.io/?https://github.com/takano-akio/ww-fusion/blob/master/foldl.html The point was not to make foldl' faster, but to make it fuse well with good producers. > ? The new foldl? is not a ?good consumer? in the foldr/build > sense, which a big loss. What if you say fold? k z [1..n]; you want the > intermediate list to vanish. > For my idea to work, enumFromTo and all other good producers need to be redefined in terms of buildW, which fuses with foldrW. The definition of buildW and the relevant rules are here: https://github.com/takano-akio/ww-fusion/blob/master/WWFusion.hs > ? My brain is too small to truly understand your idea. But > since foldrW is non-recursive, what happens if you inline foldrW into > fold?, and then simplify? I?m betting you get something pretty similar to > the old foldl?. Try in by hand, and with GHC and let?s see the final > optimised code. > I checked this and I see the same code as the old foldl', modulo order of arguments. This is what I expected. > ? Under ?motivation? you say ?GHC generates something > essentially like?? and then give some code. Now, if GHC would only > eta-expand ?go? with a second argument, you?d get brilliant code. And maybe > that would help lots of programs, not just this one. It?s a slight > delicate transformation but I?ve often thought we should try it; c.f #7994, > #5809 > I agree that it would be generally useful if GHC did this transformation. However I don't think it's good enough for this particular goal of making foldl' fuse well. Consider a function that flattens a binary tree into a list: data Tree = Tip {-# UNPACK #-} !Int | Bin Tree Tree toList :: Tree -> [Int] toList tree = build (toListFB tree) {-# INLINE toList #-} toListFB :: Tree -> (Int -> r -> r) -> r -> r toListFB root cons nil = go root nil where go (Tip x) rest = cons x rest go (Bin x y) rest = go x (go y rest) Let's say we want to eliminate the intermediate list in the expression (sum (toList t)). Currently sum is not a good consumer, but if it were, after fusion we'd get something like: sumList :: Tree -> Int sumList root = go0 root id 0 go0 :: Tree -> (Int -> Int) -> Int -> Int go0 (Tip x) k = (k $!) . 
(x+) go0 (Bin x y) k = go0 x (go0 y k) Now, merely eta-expanding go0 is not enough to get efficient code, because the function will still build a partial application every time it sees a Bin constructor. For this recursion to work in an allocation-free way, it must be rather like: go1 :: Tree -> Int -> Int go1 (Tip x) n = x + n go1 (Bin x y) n = go1 y (go1 x n) And this is what we get if we define foldl' and toList in terms of foldrW and buildW. I think a similar problem arises whenever you define a good consumer that traverses a tree-like structure, and you want to use a strict fold to consume a list produced by that producer. Thank you, Takano Akio > > > Simon > > > > *From:* ghc-devs [mailto:ghc-devs-bounces at haskell.org] *On Behalf Of *Akio > Takano > *Sent:* 09 January 2014 13:25 > *To:* ghc-devs > *Subject:* Re: Extending fold/build fusion > > > > Any input on this is appreciated. In particular, I'd like to know: if I > implement the idea as a patch to the base package, is there a chance it is > considered for merge? > > > -- Takano Akio > > On Fri, Jan 3, 2014 at 11:20 PM, Akio Takano wrote: > > Hi, > > I have been thinking about how foldl' can be turned into a good consumer, > and I came up with something that I thought would work. So I'd like to ask > for opinions from the ghc devs: if this idea looks good, if it is a known > bad idea, if there is a better way to do it, etc. > > The main idea is to have an extended version of foldr: > > -- | A mapping between @a@ and @b at . > data Wrap a b = Wrap (a -> b) (b -> a) > > foldrW > :: (forall e. Wrap (f e) (e -> b -> b)) > -> (a -> b -> b) -> b -> [a] -> b > foldrW (Wrap wrap unwrap) f z0 list0 = wrap go list0 z0 > where > go = unwrap $ \list z' -> case list of > [] -> z' > x:xs -> f x $ wrap go xs z' > > This allows the user to apply an arbitrary "worker-wrapper" transformation > to the loop. > > Using this, foldl' can be defined as > > newtype Simple b e = Simple { runSimple :: e -> b -> b } > > foldl' :: (b -> a -> b) -> b -> [a] -> b > foldl' f initial xs = foldrW (Wrap wrap unwrap) g id xs initial > where > wrap (Simple s) e k a = k $ s e a > unwrap u = Simple $ \e -> u e id > g x next acc = next $! f acc x > > The wrap and unwrap functions here ensure that foldl' gets compiled into a > loop that returns a value of 'b', rather than a function 'b -> b', > effectively un-CPS-transforming the loop. > > I put preliminary code and some more explanation on Github: > > https://github.com/takano-akio/ww-fusion > > Thank you, > Takano Akio > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail at joachim-breitner.de Tue Jan 14 09:59:36 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Tue, 14 Jan 2014 09:59:36 +0000 Subject: Quick code style question: Wild card binders In-Reply-To: <1389689850.2465.4.camel@kirk> References: <1389689850.2465.4.camel@kirk> Message-ID: <1389693576.2465.9.camel@kirk> Hi, Am Dienstag, den 14.01.2014, 08:57 +0000 schrieb Joachim Breitner: > both work, so it is a matter of style, and I?m not sure which one is > better style: If I generate a Case where the case binder is not used, > should I > * use the wildcard binder (mkWildCase), to be explicit about the fact > fact that the wildcard binder is unused, or should I > * generate a new Unique and a new Id nevertheless, because wildCard is > bad? here I is what I learned from SPJ: If I have access to a monad, better use a unique. 
Even if the case binder is not used _now_ when I generate the code, later simplifier phases might attempt to make use of it. If I had not access to a monad, then using the wild binder is ok if I have control over the body and its free variables. Greetings, Joachim -- Joachim ?nomeata? Breitner mail at joachim-breitner.de ? http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0x4743206C Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 181 bytes Desc: This is a digitally signed message part URL: From simonpj at microsoft.com Tue Jan 14 10:01:38 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 14 Jan 2014 10:01:38 +0000 Subject: [commit: packages/integer-gmp] master: Allocate initial 1-limb mpz_t on the Stack and introduce MPZ# type (7bdcadd) In-Reply-To: <87y52jpn64.fsf@gmail.com> References: <20140113132526.74D922406B@ghc.haskell.org> <52D4531F.1030907@gmail.com> <87y52jpn64.fsf@gmail.com> Message-ID: <59543203684B2244980D7E4057D5FBC14871457D@DB3EX14MBXC306.europe.corp.microsoft.com> | Actually there isn't always a ByteArray# allocated; the patch got rid of | all mpz_init() calls for the result-mpz_t (which would have allocated 1- | limb ByteArray#s); | | Now instead, the single word-sized limb that would have been heap- | allocated via mpz_init() before calling the actual GMP operation, is | allocated on the stack instead, and only if the GMP routines need to | grow the passed in mpz_t's an actual ByteArray# is allocated. | | That's why I needed a way to return either a single stack-allocated limb | (hence the word#), *or* an heap-allocated 'ByteArray#', which lead to | the MPZ# 3-tuple. Interesting! To risk becoming like a broken record, have you described this overall strategy somewhere? And linked to that explanation from suitable places? This "big picture" information is invaluable. Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of | Herbert Valerio Riedel | Sent: 13 January 2014 21:50 | To: Simon Marlow | Cc: ghc-devs at haskell.org | Subject: Re: [commit: packages/integer-gmp] master: Allocate initial 1- | limb mpz_t on the Stack and introduce MPZ# type (7bdcadd) | | On 2014-01-13 at 21:57:03 +0100, Simon Marlow wrote: | > On 13/01/14 13:25, git at git.haskell.org wrote: | >> Repository : ssh://git at git.haskell.org/integer-gmp | >> | >> On branch : master | >> Link : | http://ghc.haskell.org/trac/ghc/changeset/7bdcadda7e884edffb1427f0685493 | f3a2e5c5fa/integer-gmp | >> | >>> --------------------------------------------------------------- | >> | >> commit 7bdcadda7e884edffb1427f0685493f3a2e5c5fa | >> Author: Herbert Valerio Riedel | >> Date: Thu Jan 9 00:19:31 2014 +0100 | >> | >> Allocate initial 1-limb mpz_t on the Stack and introduce MPZ# | >> type | >> | >> We now allocate a 1-limb mpz_t on the stack instead of doing a | more | >> expensive heap-allocation (especially if the heap-allocated copy | becomes | >> garbage right away); this addresses #8647. | > | > While this is quite cool (turning some J# back into S#), I don't | > understand why you've done it this way. Couldn't it be done in the | > Haskell layer rather than modifying the primops? The ByteArray# has | > already been allocated by GMP, so you don't lose anything by returning | > it to Haskell and checking the size there. Then all the | > DUMMY_BYTEARRAY stuff would go away. 
| | Actually there isn't always a ByteArray# allocated; the patch got rid of | all mpz_init() calls for the result-mpz_t (which would have allocated 1- | limb ByteArray#s); | | Now instead, the single word-sized limb that would have been heap- | allocated via mpz_init() before calling the actual GMP operation, is | allocated on the stack instead, and only if the GMP routines need to | grow the passed in mpz_t's an actual ByteArray# is allocated. | | That's why I needed a way to return either a single stack-allocated limb | (hence the word#), *or* an heap-allocated 'ByteArray#', which lead to | the MPZ# 3-tuple. | | Greetings, | hvr | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | http://www.haskell.org/mailman/listinfo/ghc-devs From austin at well-typed.com Tue Jan 14 10:14:18 2014 From: austin at well-typed.com (Austin Seipp) Date: Tue, 14 Jan 2014 04:14:18 -0600 Subject: RC Status Message-ID: Hello all, I apologize for the lack of updates recently. I'm well aware people are a bit ready to move on, so the good news is we should be able to do so shortly. Simon and I talked last week, and for the RC to go ahead, we've decided to punt a few tickets for the moment. The biggest hurdle is dynamic support on Windows, which we've decided to punt. I have put together a page detailing the current status with some more detail about what's going on: https://ghc.haskell.org/trac/ghc/wiki/Status/GHC-7.8 I spent a lot of time testing my (just merged) branch on several machines. The highlights for interested parties: * OS X looks pretty good across the board, minus a `./validate` bug I need to track down on Mavericks. 10.8 still needs to be tested (this will hopefully happen soon.) Otherwise it seems to be fine. * With the dynamic stuff punted, Windows looks pretty good right now too. There is some minor testsuite fallout I need to investigate. One of Herbert's patches broke the 64bit windows build, and I'm bisecting it. Then I'll test a win64 bootstrap. * Linux, as usual, seems just fine, and I just need to finish the i386 bootstrap. * I need to update some performance numbers. There is one final bug to fix, #7602, which is a performance regression for OS X, but I'm investigating the `./validate` bug on Mavericks first. Once these look green across the board, I speculate I'll begin to make the RC. I expect this to happen ASAP (within the week.) -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From austin at well-typed.com Tue Jan 14 10:16:25 2014 From: austin at well-typed.com (Austin Seipp) Date: Tue, 14 Jan 2014 04:16:25 -0600 Subject: [commit: ghc] master: Add Windows to NoSharedLibsPlatformList (4af1e76) In-Reply-To: <52D3C91C.1070608@mail.ru> References: <20140113062821.1C7D92406B@ghc.haskell.org> <52D3B968.6020005@mail.ru> <52D3C546.8010307@mail.ru> <52D3C91C.1070608@mail.ru> Message-ID: First off, I would really like any help with Windows, and I'm more than willing to give advice, help, or even access to hardware if people are interested to test their work. Second, the state we're currently in basically leaves us the way 7.6.3 was. This technically isn't a regression, but it leaves the 64bit Windows build in a fairly unsatisfactory position due to the bugs, as they were in the last release. 
However, dynamic for Windows is the biggest thing holding up the RC, and we're behind schedule (people are ready to move on) - so in light of this, the RC will likely move forward shortly with these in the same state (which is unfortunate, but we decided to punt it in a decision last week.) It's sad to say that we have so few Windows hackers, it's hard to hold up for so long on this issue. But you can help change all of this! During the RC period, I would very much welcome fixes for some of these issues, and be more than willing to assist you where possible to do that (including detailing what I've learned.) See the status email I sent to the list for some more details. PS. I don't know if you use IRC, but it's easily available and there are several GHC hackers in #ghc on irc.freenode.org, including myself, so if you can spare any time it might be faster than emails. Also be sure to read over https://ghc.haskell.org/trac/ghc/wiki/Building - especially the "Getting started for developers" section, which will help you with some of the mechanical GHC workflows. On Mon, Jan 13, 2014 at 5:08 AM, kyra wrote: > More on this: > > > On 1/13/2014 14:51, kyra wrote: >> >> The last would be better, because dynamic-linked Windows GHC has longer >> load time (which can jump to intolerable 2-3 secs, which happens, I guess, >> when we approach 64k exported symbols limit). > > > "which can jump to intolerable 2-3 secs" refers to different *builds* of > GHC. Some builds had load times in the order of tenths of a second, some - > up to 2-3 secs. For example ghc-7.7.20131210 load time was more than 2 secs. > When I've rebuilt it lowering funfolding-creation-threshold significantly, > load time lowered to tenths of a second. > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From kazu at iij.ad.jp Tue Jan 14 12:23:38 2014 From: kazu at iij.ad.jp (Kazu Yamamoto (=?iso-2022-jp?B?GyRCOzNLXE9CSScbKEI=?=)) Date: Tue, 14 Jan 2014 21:23:38 +0900 (JST) Subject: RC Status In-Reply-To: References: Message-ID: <20140114.212338.1936899713892957648.kazu@iij.ad.jp> Hi, For Mac, we should verify darchon's Cabal patch for dynamic linking in #8266. darchon's patch does not works well in my Mavericks. https://ghc.haskell.org/trac/ghc/ticket/8266 If you guys are Mac users, please test darchon's patch. Also, a problem exists in FreeBSD: https://ghc.haskell.org/trac/ghc/ticket/8451 I lost my FreeBSD environment, so I cannot help fixing #8451 at this moment. --Kazu > Hello all, > > I apologize for the lack of updates recently. I'm well aware people > are a bit ready to move on, so the good news is we should be able to > do so shortly. > > Simon and I talked last week, and for the RC to go ahead, we've > decided to punt a few tickets for the moment. The biggest hurdle is > dynamic support on Windows, which we've decided to punt. > > I have put together a page detailing the current status with some more > detail about what's going on: > > https://ghc.haskell.org/trac/ghc/wiki/Status/GHC-7.8 > > I spent a lot of time testing my (just merged) branch on several machines. > > The highlights for interested parties: > > * OS X looks pretty good across the board, minus a `./validate` bug I > need to track down on Mavericks. 10.8 still needs to be tested (this > will hopefully happen soon.) Otherwise it seems to be fine. 
> * With the dynamic stuff punted, Windows looks pretty good right now > too. There is some minor testsuite fallout I need to investigate. One > of Herbert's patches broke the 64bit windows build, and I'm bisecting > it. Then I'll test a win64 bootstrap. > * Linux, as usual, seems just fine, and I just need to finish the > i386 bootstrap. > * I need to update some performance numbers. > > There is one final bug to fix, #7602, which is a performance > regression for OS X, but I'm investigating the `./validate` bug on > Mavericks first. > > Once these look green across the board, I speculate I'll begin to make > the RC. I expect this to happen ASAP (within the week.) > > -- > Regards, > > Austin Seipp, Haskell Consultant > Well-Typed LLP, http://www.well-typed.com/ > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs From pali.gabor at gmail.com Tue Jan 14 12:30:53 2014 From: pali.gabor at gmail.com (=?ISO-8859-1?Q?P=E1li_G=E1bor_J=E1nos?=) Date: Tue, 14 Jan 2014 13:30:53 +0100 Subject: RC Status In-Reply-To: <20140114.212338.1936899713892957648.kazu@iij.ad.jp> References: <20140114.212338.1936899713892957648.kazu@iij.ad.jp> Message-ID: On Tue, Jan 14, 2014 at 1:23 PM, Kazu Yamamoto wrote: > Also, a problem exists in FreeBSD: > > https://ghc.haskell.org/trac/ghc/ticket/8451 Thanks for the reminder, I will fix it this week. From the.dead.shall.rise at gmail.com Tue Jan 14 12:35:08 2014 From: the.dead.shall.rise at gmail.com (Mikhail Glushenkov) Date: Tue, 14 Jan 2014 13:35:08 +0100 Subject: [PATCH] platformFromTriple: fix to recognize Solaris triple (i386-pc-solaris2.11) In-Reply-To: <87txd7pkvw.fsf@gmail.com> References: <1388942048-16010-1-git-send-email-karel.gardas@centrum.cz> <8738l29w29.fsf@gmail.com> <52D41ED3.6070802@centrum.cz> <87txd7pkvw.fsf@gmail.com> Message-ID: Hi, On Mon, Jan 13, 2014 at 11:38 PM, Herbert Valerio Riedel wrote: > > You'll need to persuade the Cabal devs to make the fix above available > in a stable branch; if the fix makes it into a Cabal release in time for > the final GHC 7.8 release, it will most likely be part of 7.8. However, > I don't know if there's a concrete plan for a Cabal-1.18.1.3 release > currently. Johan is the final authority on this, but IIRC we wanted 1.18.1.2 to be the final 1.18 release. I'll merge that fix into the 1.18 branch. -- () ascii ribbon campaign - against html e-mail /\ www.asciiribbon.org - against proprietary attachments From simonpj at microsoft.com Tue Jan 14 13:59:55 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 14 Jan 2014 13:59:55 +0000 Subject: ghci verbosity Message-ID: <59543203684B2244980D7E4057D5FBC148714BA2@DB3EX14MBXC306.europe.corp.microsoft.com> Friends ghc -interactive has just started being more verbose. The "linking...done" stuff didn't happen before. Does this ring any bells for anyone? I have not investigated at all so far; hoping someone will say "oh yes, I know and will fix". Simon bash$ ~/5builds/HEAD/inplace/bin/ghc-stage2 --interactive GHCi, version 7.7.20140114: http://www.haskell.org/ghc/ :? for help Loading package ghc-prim ... linking ... done. Loading package integer-gmp ... linking ... done. Loading package base ... linking ... done. Prelude> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rarash at student.chalmers.se Tue Jan 14 14:08:47 2014 From: rarash at student.chalmers.se (Arash Rouhani) Date: Tue, 14 Jan 2014 15:08:47 +0100 Subject: ghci verbosity In-Reply-To: <59543203684B2244980D7E4057D5FBC148714BA2@DB3EX14MBXC306.europe.corp.microsoft.com> References: <59543203684B2244980D7E4057D5FBC148714BA2@DB3EX14MBXC306.europe.corp.microsoft.com> Message-ID: <52D544EF.1050000@student.chalmers.se> Ghci always outputted that, no? However, now that it uses a lot of newlines instead of having it all on the same line. I have an about 2 months old code base, and I get this output $ ./inplace/bin/ghc-stage2 --interactive GHCi, version 7.7.20140106: http://www.haskell.org/ghc/ :? for help Loading package ghc-prim ... linking ... done. Loading package integer-gmp ... linking ... done. Loading package base ... linking ... done. (so in case anyone wants to know when this started, it was after 2 months ago) Cheers, Arash On 2014-01-14 14:59, Simon Peyton Jones wrote: > > Friends > > ghc --interactive has just started being more verbose. The > "linking...done" stuff didn't happen before. Does this ring any bells > for anyone? I have not investigated at all so far; hoping someone will > say "oh yes, I know and will fix". > > Simon > > bash$ ~/5builds/HEAD/inplace/bin/ghc-stage2 --interactive > > GHCi, version 7.7.20140114: http://www.haskell.org/ghc/ :? for help > > Loading package ghc-prim ... > > linking ... > > done. > > Loading package integer-gmp ... > > linking ... > > done. > > Loading package base ... > > linking ... > > done. > > Prelude> > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From gergo at erdi.hu Tue Jan 14 14:22:04 2014 From: gergo at erdi.hu (=?UTF-8?B?RHIuIMOJUkRJIEdlcmfFkQ==?=) Date: Tue, 14 Jan 2014 22:22:04 +0800 Subject: Fwd: Re: Pattern synonyms for 7.8? In-Reply-To: References: <59543203684B2244980D7E4057D5FBC148707649@DB3EX14MBXC306.europe.corp.microsoft.com> <1389014277.2952.9.camel@kirk> <41B0CF1C-C66D-4DDC-8C36-A691B83CF7E0@cis.upenn.edu> <4BA531AA-0E3E-48AA-91C9-CDD819D349A9@cis.upenn.edu> <52D3CA59.40508@fuuzetsu.co.uk> Message-ID: Hi, How do I get permissions to push to my wip branch? Thanks, Gergo ---------- Forwarded message ---------- From: "Dr. ERDI Gergo" Date: Jan 13, 2014 8:12 PM Subject: Re: Pattern synonyms for 7.8? To: Cc: "Gabor Greif" (removing Mateusz and ghc-devs from the recipient list) Hi, On Mon, 13 Jan 2014, Gabor Greif wrote: From what I understood, you *should* have all permissions to push to > wip/ branches. If not, please contact the admins. (IIRC Austin did > this previously). > OK, I think this was the missing information that got me confused. What repo are we talking about? I tried the mirror on GitHub (git at github.com:ghc/ghc.git) but that one doesn't seem to work: ERROR: Permission to ghc/ghc.git denied to gergoerdi. fatal: Could not read from remote repository. I also tried the Haskell.org repo of ssh://git at git.haskell.org/ghc, but that doesn't work either (which is unsurprising since I don't remember ever sending my SSH public key to haskell.org): 20:10:02 [cactus at galaxy ghc]$ git push -u origin wip/pattern-synonyms Permission denied (publickey). Please advise. Thanks, Gergo -- .--= ULLA! =-----------------. \ http://gergo.erdi.hu \ `---= gergo at erdi.hu =-------' Ki volt Casper, miel?tt meghalt? 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From hvriedel at gmail.com Tue Jan 14 14:57:31 2014 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Tue, 14 Jan 2014 15:57:31 +0100 Subject: ghci verbosity In-Reply-To: <59543203684B2244980D7E4057D5FBC148714BA2@DB3EX14MBXC306.europe.corp.microsoft.com> (Simon Peyton Jones's message of "Tue, 14 Jan 2014 13:59:55 +0000") References: <59543203684B2244980D7E4057D5FBC148714BA2@DB3EX14MBXC306.europe.corp.microsoft.com> Message-ID: <87sisq1uhw.fsf@gmail.com> Hello Simon, On 2014-01-14 at 14:59:55 +0100, Simon Peyton Jones wrote: > ghc -interactive has just started being more verbose. The > "linking...done" stuff didn't happen before. Does this ring any bells > for anyone? I have not investigated at all so far; hoping someone will > say "oh yes, I know and will fix". I won't use exactly those words... however, I can point you to http://git.haskell.org/ghc.git/commitdiff/08a3536e4246e323fbcd8040e0b80001950fe9bc as the offending commit if that helps... :-) Greetings, hvr From austin at well-typed.com Tue Jan 14 16:01:19 2014 From: austin at well-typed.com (Austin Seipp) Date: Tue, 14 Jan 2014 10:01:19 -0600 Subject: RC Status In-Reply-To: References: <20140114.212338.1936899713892957648.kazu@iij.ad.jp> Message-ID: Hi Pali, Thanks a lot. I will be branching soon. Please let me know if you need me to merge anything to the release branch. On Tue, Jan 14, 2014 at 6:30 AM, P?li G?bor J?nos wrote: > On Tue, Jan 14, 2014 at 1:23 PM, Kazu Yamamoto wrote: >> Also, a problem exists in FreeBSD: >> >> https://ghc.haskell.org/trac/ghc/ticket/8451 > > Thanks for the reminder, I will fix it this week. > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From austin at well-typed.com Tue Jan 14 16:02:19 2014 From: austin at well-typed.com (Austin Seipp) Date: Tue, 14 Jan 2014 10:02:19 -0600 Subject: RC Status In-Reply-To: <20140114.212338.1936899713892957648.kazu@iij.ad.jp> References: <20140114.212338.1936899713892957648.kazu@iij.ad.jp> Message-ID: Hi Kazu, Yes, I'll test Christiaan's patch and try to get it in today. On Tue, Jan 14, 2014 at 6:23 AM, Kazu Yamamoto wrote: > Hi, > > For Mac, we should verify darchon's Cabal patch for dynamic linking in > #8266. darchon's patch does not works well in my Mavericks. > > https://ghc.haskell.org/trac/ghc/ticket/8266 > > If you guys are Mac users, please test darchon's patch. > > > Also, a problem exists in FreeBSD: > > https://ghc.haskell.org/trac/ghc/ticket/8451 > > I lost my FreeBSD environment, so I cannot help fixing #8451 at this > moment. > > --Kazu > >> Hello all, >> >> I apologize for the lack of updates recently. I'm well aware people >> are a bit ready to move on, so the good news is we should be able to >> do so shortly. >> >> Simon and I talked last week, and for the RC to go ahead, we've >> decided to punt a few tickets for the moment. The biggest hurdle is >> dynamic support on Windows, which we've decided to punt. >> >> I have put together a page detailing the current status with some more >> detail about what's going on: >> >> https://ghc.haskell.org/trac/ghc/wiki/Status/GHC-7.8 >> >> I spent a lot of time testing my (just merged) branch on several machines. 
>> >> The highlights for interested parties: >> >> * OS X looks pretty good across the board, minus a `./validate` bug I >> need to track down on Mavericks. 10.8 still needs to be tested (this >> will hopefully happen soon.) Otherwise it seems to be fine. >> * With the dynamic stuff punted, Windows looks pretty good right now >> too. There is some minor testsuite fallout I need to investigate. One >> of Herbert's patches broke the 64bit windows build, and I'm bisecting >> it. Then I'll test a win64 bootstrap. >> * Linux, as usual, seems just fine, and I just need to finish the >> i386 bootstrap. >> * I need to update some performance numbers. >> >> There is one final bug to fix, #7602, which is a performance >> regression for OS X, but I'm investigating the `./validate` bug on >> Mavericks first. >> >> Once these look green across the board, I speculate I'll begin to make >> the RC. I expect this to happen ASAP (within the week.) >> >> -- >> Regards, >> >> Austin Seipp, Haskell Consultant >> Well-Typed LLP, http://www.well-typed.com/ >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From rrnewton at gmail.com Tue Jan 14 16:41:48 2014 From: rrnewton at gmail.com (Ryan Newton) Date: Tue, 14 Jan 2014 11:41:48 -0500 Subject: Releasing containers 0.5.3.2 -- before GHC 7.8? Message-ID: Hi guys, I'm wondering if we can do a hackage release of 0.5.3.2? That "splitRoot" function is in there, and my ability to deploy parallel code that uses containers depends on people getting it! Are there any other changes since 0.5.3.1? Replacing containers seems like a real pain for end users, so it would be great if 0.5.3.2 could come with GHC 7.8. Currently, it looks like the GHC repo is up to date in that it includes 0.5.3.1. I realize it is late days for this, but: - It's been a month since we put splitRoot in; I've been using it heavily and it I'm pretty confident that it's correct. (It's so simple!) - Nothing else is touched, so there is very little liability associated with this version bump. And, as you know, if we don't make this round it's a long latency before the next chance. That is, before we can expect people to do parallel folds over Data.Set or Data.Map without installation headache. Any objections? -Ryan -------------- next part -------------- An HTML attachment was scrubbed... URL: From svenpanne at gmail.com Tue Jan 14 16:48:23 2014 From: svenpanne at gmail.com (Sven Panne) Date: Tue, 14 Jan 2014 17:48:23 +0100 Subject: Folding constants for floats In-Reply-To: References: <52D47083.5040809@isaac.cedarswampstudios.org> Message-ID: 2014/1/14 Carter Schonwald : > maybe so, but having a semantics by default is huge, and honestly i'm not > super interested in optimizations that merely change one infinity for > another. What would the alternative semantics be? I'm not sure that I understood your reply: My example regarding -0 was only demonstrating the status quo of GHCi and is IEEE-754-conformant. The 1/foo is only used to distinguish between 0 and -0, it is not about infinities per se. My point was: As much as I propose to keep these current semantics, there might be users who care more about performance than IEEE-754-conformance. 
For those, relatively simple semantics could be: Regarding optimizations, numbers are considered "mathematical" numbers, ignoring any rounding and precision issues, and everything involving -0, NaN, and infinities is undefined. This would open up optimizations like easy constant folding, transforming 0 + x to x, x - x to 0, x `op` y to y `op` x for mathematically commutative operators, associativity, etc. I'm not 100% sure how useful this would really be, but I think we agree that this shouldn't be the default. Cheers, S. From carter.schonwald at gmail.com Tue Jan 14 17:01:34 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Tue, 14 Jan 2014 12:01:34 -0500 Subject: Folding constants for floats In-Reply-To: References: <52D47083.5040809@isaac.cedarswampstudios.org> Message-ID: Sven, I'm one of those people who cares about numerical performance :-). Kinda been my obsession :-). My near term stop gap is writing some very high quality ffi bindings, but I'm very keen on Haskell giving fortran a run for it's money. Glad we agree the version that's easier to debug (IEEE, ie current ghc semantics) should be the default There's much more meaningful ways we can improve floating point perf, like adding simd support more systematically to ghc (which I'm just now getting the ball rolling on that hacking, there's a lot of things I need to do before adding that mind you ). better constant propagation will help in a few cases, and should be explored. But deciding what the right relaxed rules should be isn't something we should do off the cuff. We should write down the space of possible relaxed rules, add engineering support to ghc better experiment with benchmarking various approaches, and then if something has a good perf impact see about providing it exposed through a flag. On Tuesday, January 14, 2014, Sven Panne wrote: > 2014/1/14 Carter Schonwald >: > > maybe so, but having a semantics by default is huge, and honestly i'm not > > super interested in optimizations that merely change one infinity for > > another. What would the alternative semantics be? > > I'm not sure that I understood your reply: My example regarding -0 was > only demonstrating the status quo of GHCi and is IEEE-754-conformant. > The 1/foo is only used to distinguish between 0 and -0, it is not > about infinities per se. > > My point was: As much as I propose to keep these current semantics, > there might be users who care more about performance than > IEEE-754-conformance. For those, relatively simple semantics could be: > Regarding optimizations, numbers are considered "mathematical" > numbers, ignoring any rounding and precision issues, and everything > involving -0, NaN, and infinities is undefined. This would open up > optimizations like easy constant folding, transforming 0 + x to x, x - > x to 0, x `op` y to y `op` x for mathematically commutative operators, > associativity, etc. > > I'm not 100% sure how useful this would really be, but I think we > agree that this shouldn't be the default. > > Cheers, > S. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From roma at ro-che.info Tue Jan 14 17:01:50 2014 From: roma at ro-che.info (Roman Cheplyaka) Date: Tue, 14 Jan 2014 19:01:50 +0200 Subject: Releasing containers 0.5.3.2 -- before GHC 7.8? In-Reply-To: References: Message-ID: <20140114170150.GA31232@sniper> * Ryan Newton [2014-01-14 11:41:48-0500] > Replacing containers seems like a real pain for end users Is it a real pain? Why? 
I just tried 'cabal install containers', and it went flawlessly. To make it clear, I'm not in any way opposed to containers upgrade, but that phrase struck me as odd. The only issue I'm aware of is related to the GHC API, but high-performance parallel algorithms and the GHC API are rarely used together in the same project. Roman -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: Digital signature URL: From carter.schonwald at gmail.com Tue Jan 14 17:52:54 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Tue, 14 Jan 2014 12:52:54 -0500 Subject: RC Status In-Reply-To: References: <20140114.212338.1936899713892957648.kazu@iij.ad.jp> Message-ID: I don't have access to a windows machine (at least not this week, may be able to rectify that in a week or two ), but if there's some turn the crank engineering that can be done anyways to help facilitate helping fix up the windows sitch, I'm happy to volunteer some time turning the crank. On Tuesday, January 14, 2014, Austin Seipp wrote: > Hi Kazu, > > Yes, I'll test Christiaan's patch and try to get it in today. > > On Tue, Jan 14, 2014 at 6:23 AM, Kazu Yamamoto > > wrote: > > Hi, > > > > For Mac, we should verify darchon's Cabal patch for dynamic linking in > > #8266. darchon's patch does not works well in my Mavericks. > > > > https://ghc.haskell.org/trac/ghc/ticket/8266 > > > > If you guys are Mac users, please test darchon's patch. > > > > > > Also, a problem exists in FreeBSD: > > > > https://ghc.haskell.org/trac/ghc/ticket/8451 > > > > I lost my FreeBSD environment, so I cannot help fixing #8451 at this > > moment. > > > > --Kazu > > > >> Hello all, > >> > >> I apologize for the lack of updates recently. I'm well aware people > >> are a bit ready to move on, so the good news is we should be able to > >> do so shortly. > >> > >> Simon and I talked last week, and for the RC to go ahead, we've > >> decided to punt a few tickets for the moment. The biggest hurdle is > >> dynamic support on Windows, which we've decided to punt. > >> > >> I have put together a page detailing the current status with some more > >> detail about what's going on: > >> > >> https://ghc.haskell.org/trac/ghc/wiki/Status/GHC-7.8 > >> > >> I spent a lot of time testing my (just merged) branch on several > machines. > >> > >> The highlights for interested parties: > >> > >> * OS X looks pretty good across the board, minus a `./validate` bug I > >> need to track down on Mavericks. 10.8 still needs to be tested (this > >> will hopefully happen soon.) Otherwise it seems to be fine. > >> * With the dynamic stuff punted, Windows looks pretty good right now > >> too. There is some minor testsuite fallout I need to investigate. One > >> of Herbert's patches broke the 64bit windows build, and I'm bisecting > >> it. Then I'll test a win64 bootstrap. > >> * Linux, as usual, seems just fine, and I just need to finish the > >> i386 bootstrap. > >> * I need to update some performance numbers. > >> > >> There is one final bug to fix, #7602, which is a performance > >> regression for OS X, but I'm investigating the `./validate` bug on > >> Mavericks first. > >> > >> Once these look green across the board, I speculate I'll begin to make > >> the RC. I expect this to happen ASAP (within the week.) 
> >> > >> -- > >> Regards, > >> > >> Austin Seipp, Haskell Consultant > >> Well-Typed LLP, http://www.well-typed.com/ > >> _______________________________________________ > >> ghc-devs mailing list > >> ghc-devs at haskell.org > >> http://www.haskell.org/mailman/listinfo/ghc-devs > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://www.haskell.org/mailman/listinfo/ghc-devs > > > > > > -- > Regards, > > Austin Seipp, Haskell Consultant > Well-Typed LLP, http://www.well-typed.com/ > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rrnewton at gmail.com Tue Jan 14 19:19:57 2014 From: rrnewton at gmail.com (Ryan Newton) Date: Tue, 14 Jan 2014 14:19:57 -0500 Subject: Releasing containers 0.5.3.2 -- before GHC 7.8? In-Reply-To: <20140114170150.GA31232@sniper> References: <20140114170150.GA31232@sniper> Message-ID: On Tue, Jan 14, 2014 at 12:01 PM, Roman Cheplyaka wrote: > * Ryan Newton [2014-01-14 11:41:48-0500] > > Replacing containers seems like a real pain for end users > > Is it a real pain? Why? > One thing I ran into is that cabal sandboxes want consistent dependencies. And when users get to this point where they need to grab our latest containers, they've got a bunch of core/haskell platform packages that depend on the old containers. I didn't mean that there was anything difficult about containers itself, just that almost everything else depends on it. -Ryan -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Tue Jan 14 19:23:12 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Tue, 14 Jan 2014 14:23:12 -0500 Subject: Releasing containers 0.5.3.2 -- before GHC 7.8? In-Reply-To: References: <20140114170150.GA31232@sniper> Message-ID: have you tried installing a newer version of containers yourself globally, and making the other one hidden? Or just making the global one ghc comes with hidden? On Tue, Jan 14, 2014 at 2:19 PM, Ryan Newton wrote: > > > > On Tue, Jan 14, 2014 at 12:01 PM, Roman Cheplyaka wrote: > >> * Ryan Newton [2014-01-14 11:41:48-0500] >> > Replacing containers seems like a real pain for end users >> >> Is it a real pain? Why? >> > > One thing I ran into is that cabal sandboxes want consistent dependencies. > And when users get to this point where they need to grab our latest > containers, they've got a bunch of core/haskell platform packages that > depend on the old containers. > > I didn't mean that there was anything difficult about containers itself, > just that almost everything else depends on it. > > -Ryan > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... 
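Going back to Ryan's request above to get splitRoot into the containers release that ships with 7.8: the function decomposes a Set (or Map) along its tree structure so the pieces can be consumed independently. Below is a rough sketch of the kind of parallel fold this enables; it is my own example, and it assumes the splitRoot discussed in this thread plus par from the separate parallel package.

    import qualified Data.Set as S
    import Control.Parallel (par)   -- from the 'parallel' package

    -- Sum a Set by splitting along the tree with splitRoot and sparking
    -- the sub-sums; small pieces are folded sequentially.
    parSum :: S.Set Int -> Int
    parSum s
      | S.size s < 1000 = sum (S.toList s)
      | otherwise =
          let sums = map parSum (S.splitRoot s)  -- one sub-sum per piece
          in foldr par (sum sums) sums           -- spark each, then combine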
URL: From ml at isaac.cedarswampstudios.org Tue Jan 14 19:54:09 2014 From: ml at isaac.cedarswampstudios.org (Isaac Dupree) Date: Tue, 14 Jan 2014 14:54:09 -0500 Subject: Folding constants for floats In-Reply-To: References: <52D47083.5040809@isaac.cedarswampstudios.org> Message-ID: <52D595E1.7030901@isaac.cedarswampstudios.org> On 01/14/2014 11:48 AM, Sven Panne wrote: > My point was: As much as I propose to keep these current semantics, > there might be users who care more about performance than > IEEE-754-conformance. Adding a -ffast-math flag could be fine IMHO. > For those, relatively simple semantics could be: > Regarding optimizations, numbers are considered "mathematical" > numbers, ignoring any rounding and precision issues, How do you plan to constant-fold things like "log(cos(pi**pi))" without rounding? I checked C, and apparently the optimizer is entitled to assume the default floating-point control modes (e.g. rounding mode, quiet/signaling NaN) are in effect except in scopes where "#pragma STDC FENV_ACCESS ON" is given. However the standard does not entitle the optimizer to change rounding in any other way. This is sufficient for constant-folding in regions where FENV_ACCESS is off. GCC also has flags to control floating-point optimization: http://gcc.gnu.org/wiki/FloatingPointMath Probably it's best not to touch floating point optimization without understanding all these issues. Hmm, I can't see how non-default floating point control mode is compatible with Haskell's purity... Even without optimizations, (1/3 :: Double) could evaluate to two different values in the same program if the floating-point rounding mode changes during execution (e.g. by C fesetenv()). -Isaac From carter.schonwald at gmail.com Tue Jan 14 19:59:19 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Tue, 14 Jan 2014 14:59:19 -0500 Subject: Folding constants for floats In-Reply-To: <52D595E1.7030901@isaac.cedarswampstudios.org> References: <52D47083.5040809@isaac.cedarswampstudios.org> <52D595E1.7030901@isaac.cedarswampstudios.org> Message-ID: I emphatically and forcefully agree with Isaac. Thanks for articulating these issues much better than I could. On Tue, Jan 14, 2014 at 2:54 PM, Isaac Dupree < ml at isaac.cedarswampstudios.org> wrote: > On 01/14/2014 11:48 AM, Sven Panne wrote: > >> My point was: As much as I propose to keep these current semantics, >> there might be users who care more about performance than >> IEEE-754-conformance. >> > > Adding a -ffast-math flag could be fine IMHO. > > > For those, relatively simple semantics could be: >> Regarding optimizations, numbers are considered "mathematical" >> numbers, ignoring any rounding and precision issues, >> > > How do you plan to constant-fold things like "log(cos(pi**pi))" without > rounding? > > I checked C, and apparently the optimizer is entitled to assume the > default floating-point control modes (e.g. rounding mode, quiet/signaling > NaN) are in effect except in scopes where "#pragma STDC FENV_ACCESS ON" is > given. However the standard does not entitle the optimizer to change > rounding in any other way. This is sufficient for constant-folding in > regions where FENV_ACCESS is off. GCC also has flags to control > floating-point optimization: http://gcc.gnu.org/wiki/FloatingPointMath > > Probably it's best not to touch floating point optimization without > understanding all these issues. > > Hmm, I can't see how non-default floating point control mode is compatible > with Haskell's purity... 
Even without optimizations, (1/3 :: Double) could > evaluate to two different values in the same program if the floating-point > rounding mode changes during execution (e.g. by C fesetenv()). > > -Isaac > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rwbarton at gmail.com Tue Jan 14 20:15:32 2014 From: rwbarton at gmail.com (Reid Barton) Date: Tue, 14 Jan 2014 15:15:32 -0500 Subject: Releasing containers 0.5.3.2 -- before GHC 7.8? In-Reply-To: References: <20140114170150.GA31232@sniper> Message-ID: On Tue, Jan 14, 2014 at 2:19 PM, Ryan Newton wrote: > On Tue, Jan 14, 2014 at 12:01 PM, Roman Cheplyaka wrote: > >> * Ryan Newton [2014-01-14 11:41:48-0500] >> > Replacing containers seems like a real pain for end users >> >> Is it a real pain? Why? >> > > One thing I ran into is that cabal sandboxes want consistent dependencies. > And when users get to this point where they need to grab our latest > containers, they've got a bunch of core/haskell platform packages that > depend on the old containers. > > I didn't mean that there was anything difficult about containers itself, > just that almost everything else depends on it. > In addition to the general pain of updating packages at the base of the dependency hierarchy, there is also the fact that the template-haskell package depends on containers. As far as I know upgrading template-haskell is impossible, or at least a Very Bad Idea, so any library that wants to use an updated version of containers can't use template-haskell, or even be linked into an application that uses template-haskell directly or through another library. As far as I am concerned as a GHC user, versions of containers that aren't the one that came with my GHC might as well not exist. For example if I see that a package has a constraint "containers >= 0.10", I just assume I cannot use the library with GHC 7.4. Thus I'm strongly in favor of synchronizing containers releases with releases of GHC. Regards, Reid Barton -------------- next part -------------- An HTML attachment was scrubbed... URL: From karel.gardas at centrum.cz Tue Jan 14 20:23:13 2014 From: karel.gardas at centrum.cz (Karel Gardas) Date: Tue, 14 Jan 2014 21:23:13 +0100 Subject: RC Status In-Reply-To: References: Message-ID: <52D59CB1.304@centrum.cz> Austin, if I may lobby for Solaris a little bit: current HEAD is broken on Solaris due to a bug in Cabal, which is already fixed in Cabal HEAD and even promised to be merged into Cabal 1.18 branch. IMHO the fix is no-brainer but anyway, I've created ticket for this: https://ghc.haskell.org/trac/ghc/ticket/8670 with whole history of investigation and all necessary links so every required information is on one place... Thanks! Karel From carter.schonwald at gmail.com Tue Jan 14 20:40:41 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Tue, 14 Jan 2014 15:40:41 -0500 Subject: Releasing containers 0.5.3.2 -- before GHC 7.8? In-Reply-To: References: <20140114170150.GA31232@sniper> Message-ID: ok, thats a good point On Tue, Jan 14, 2014 at 3:15 PM, Reid Barton wrote: > On Tue, Jan 14, 2014 at 2:19 PM, Ryan Newton wrote: > >> On Tue, Jan 14, 2014 at 12:01 PM, Roman Cheplyaka wrote: >> >>> * Ryan Newton [2014-01-14 11:41:48-0500] >>> > Replacing containers seems like a real pain for end users >>> >>> Is it a real pain? Why? >>> >> >> One thing I ran into is that cabal sandboxes want consistent >> dependencies. 
And when users get to this point where they need to grab our >> latest containers, they've got a bunch of core/haskell platform packages >> that depend on the old containers. >> >> I didn't mean that there was anything difficult about containers itself, >> just that almost everything else depends on it. >> > > In addition to the general pain of updating packages at the base of the > dependency hierarchy, there is also the fact that the template-haskell > package depends on containers. As far as I know upgrading template-haskell > is impossible, or at least a Very Bad Idea, so any library that wants to > use an updated version of containers can't use template-haskell, or even be > linked into an application that uses template-haskell directly or through > another library. > > As far as I am concerned as a GHC user, versions of containers that aren't > the one that came with my GHC might as well not exist. For example if I see > that a package has a constraint "containers >= 0.10", I just assume I > cannot use the library with GHC 7.4. Thus I'm strongly in favor of > synchronizing containers releases with releases of GHC. > > Regards, > Reid Barton > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From svenpanne at gmail.com Tue Jan 14 21:12:22 2014 From: svenpanne at gmail.com (Sven Panne) Date: Tue, 14 Jan 2014 22:12:22 +0100 Subject: Folding constants for floats In-Reply-To: References: <52D47083.5040809@isaac.cedarswampstudios.org> <52D595E1.7030901@isaac.cedarswampstudios.org> Message-ID: 2014/1/14 Carter Schonwald : > I emphatically and forcefully agree with Isaac. [...] Yup, I would prefer to no touch FP optimization in a rush, too. I am not sure if this is still the case today, but I remember breaking some FP stuff in GHC when doing cross-compilation + bootstrapping with it ages ago. Can this still happen today? I don't know how GHC is ported to a brand new platform nowadays... All the rounding magic etc. has to happen as if it was executed on the target platform, not like on the platform GHC is running. More fun stuff to consider, I guess. .-) From carter.schonwald at gmail.com Tue Jan 14 21:24:27 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Tue, 14 Jan 2014 16:24:27 -0500 Subject: Folding constants for floats In-Reply-To: References: <52D47083.5040809@isaac.cedarswampstudios.org> <52D595E1.7030901@isaac.cedarswampstudios.org> Message-ID: some of those issues come up even more forcefully when cross compiling from a 64bit to 32 bit architecture :), but you're absolutely right, and It sounds like theres a clear near term concensus Even more fun is in the case of ghc-ios, where ideally a single build would create the right object code for 64bit + 32bit arm both! I think theres some subtle fun there! :) On Tue, Jan 14, 2014 at 4:12 PM, Sven Panne wrote: > 2014/1/14 Carter Schonwald : > > I emphatically and forcefully agree with Isaac. [...] > > Yup, I would prefer to no touch FP optimization in a rush, too. I am > not sure if this is still the case today, but I remember breaking some > FP stuff in GHC when doing cross-compilation + bootstrapping with it > ages ago. Can this still happen today? I don't know how GHC is ported > to a brand new platform nowadays... All the rounding magic etc. 
has to > happen as if it was executed on the target platform, not like on the > platform GHC is running. More fun stuff to consider, I guess. .-) > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johan.tibell at gmail.com Tue Jan 14 21:47:31 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Tue, 14 Jan 2014 22:47:31 +0100 Subject: Releasing containers 0.5.3.2 -- before GHC 7.8? In-Reply-To: <20140114213316.GA32711@auryn.cz> References: <20140114213316.GA32711@auryn.cz> Message-ID: I'll make a release in the next few days. On Tue, Jan 14, 2014 at 10:33 PM, Milan Straka wrote: > Hi Johan, > > I think releasing 0.5.4 is a good idea. Could I ask you to do the > release as usual, please? > > We added the splitRoot function, so it should really be 0.5.4 and not > only 0.5.3.2. Actually, we added Functor instance to Graph.SCC and > Functor and Applicative instances to Graph.SetM, but Graph is rarely > used, so I would deliberately break PVP and not do a major version bump. > > Thanks, > cheers, > Milan > > > -----Original message----- > > From: Ryan Newton > > Sent: 14 Jan 2014, 11:41 > > > > Hi guys, > > > > I'm wondering if we can do a hackage release of 0.5.3.2? That > "splitRoot" > > function is in there, and my ability to deploy parallel code that uses > > containers depends on people getting it! Are there any other changes > since > > 0.5.3.1? > > > > Replacing containers seems like a real pain for end users, so it would be > > great if 0.5.3.2 could come with GHC 7.8. Currently, it looks like the > GHC > > repo is up to date in that it includes 0.5.3.1. > > > > I realize it is late days for this, but: > > > > - It's been a month since we put splitRoot in; I've been using it > > heavily and it I'm pretty confident that it's correct. (It's so > simple!) > > - Nothing else is touched, so there is very little liability > associated > > with this version bump. > > > > And, as you know, if we don't make this round it's a long latency before > > the next chance. That is, before we can expect people to do parallel > folds > > over Data.Set or Data.Map without installation headache. > > > > Any objections? > > -Ryan > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Wed Jan 15 09:19:09 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 15 Jan 2014 09:19:09 +0000 Subject: [Haskell-cafe] GHC API: runStmt not taking into account reloaded module In-Reply-To: References: Message-ID: <59543203684B2244980D7E4057D5FBC148716293@DB3EX14MBXC306.europe.corp.microsoft.com> That sounds odd. Can you make a small reproducible test case, open a ticket, and attach the test? Thanks SImon From: Haskell-Cafe [mailto:haskell-cafe-bounces at haskell.org] On Behalf Of JP Moresmau Sent: 14 January 2014 20:14 To: Haskell Cafe Subject: [Haskell-cafe] GHC API: runStmt not taking into account reloaded module It's late here and I'm probably overlooking something stupid, so I'd like if somebody could put my nose on it... I'm using the GHC API to evaluate statements. I use runStmt to get a RunResult, lookupName to get the ID for the bound names, obtainTermFromId to get the term and showTerm to display it. So I can call a function from the loaded module with some parameters and get the result. Good! However, if I reload a module and I change the implementation of the function, runStmt still returns the old value! I know the reload worked because if I added new names, getNamesInScope returns the new names. 
What do I need to do to make sure the new function definitions are used? I've perused the source code of InteractiveEval and ghci but nothing stood out. I am calling setContext after load. Thanks a million! -- JP Moresmau http://jpmoresmau.blogspot.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From jpmoresmau at gmail.com Wed Jan 15 09:40:07 2014 From: jpmoresmau at gmail.com (JP Moresmau) Date: Wed, 15 Jan 2014 10:40:07 +0100 Subject: [Haskell-cafe] GHC API: runStmt not taking into account reloaded module In-Reply-To: <59543203684B2244980D7E4057D5FBC148716293@DB3EX14MBXC306.europe.corp.microsoft.com> References: <59543203684B2244980D7E4057D5FBC148716293@DB3EX14MBXC306.europe.corp.microsoft.com> Message-ID: Thanks Simon, I will write a simple test case and see what happens. JP On Wed, Jan 15, 2014 at 10:19 AM, Simon Peyton Jones wrote: > That sounds odd. Can you make a small reproducible test case, open a > ticket, and attach the test? > > > > Thanks > > > > SImon > > > > *From:* Haskell-Cafe [mailto:haskell-cafe-bounces at haskell.org] *On Behalf > Of *JP Moresmau > *Sent:* 14 January 2014 20:14 > *To:* Haskell Cafe > *Subject:* [Haskell-cafe] GHC API: runStmt not taking into account > reloaded module > > > > It's late here and I'm probably overlooking something stupid, so I'd like > if somebody could put my nose on it... I'm using the GHC API to evaluate > statements. I use runStmt to get a RunResult, lookupName to get the ID for > the bound names, obtainTermFromId to get the term and showTerm to display > it. So I can call a function from the loaded module with some parameters > and get the result. Good! > > However, if I reload a module and I change the implementation of the > function, runStmt still returns the old value! I know the reload worked > because if I added new names, getNamesInScope returns the new names. What > do I need to do to make sure the new function definitions are used? I've > perused the source code of InteractiveEval and ghci but nothing stood out. > I am calling setContext after load. > > > > Thanks a million! > > > > -- > JP Moresmau > http://jpmoresmau.blogspot.com/ > -- JP Moresmau http://jpmoresmau.blogspot.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Wed Jan 15 10:23:58 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 15 Jan 2014 10:23:58 +0000 Subject: ghci verbosity In-Reply-To: <87sisq1uhw.fsf@gmail.com> References: <59543203684B2244980D7E4057D5FBC148714BA2@DB3EX14MBXC306.europe.corp.microsoft.com> <87sisq1uhw.fsf@gmail.com> Message-ID: <59543203684B2244980D7E4057D5FBC1487163E4@DB3EX14MBXC306.europe.corp.microsoft.com> Ha ha! Fixing... | -----Original Message----- | From: Herbert Valerio Riedel [mailto:hvriedel at gmail.com] | Sent: 14 January 2014 14:58 | To: Simon Peyton Jones | Cc: ghc-devs at haskell.org | Subject: Re: ghci verbosity | | Hello Simon, | | On 2014-01-14 at 14:59:55 +0100, Simon Peyton Jones wrote: | > ghc -interactive has just started being more verbose. The | > "linking...done" stuff didn't happen before. Does this ring any bells | > for anyone? I have not investigated at all so far; hoping someone will | > say "oh yes, I know and will fix". | | I won't use exactly those words... however, I can point you to | | | http://git.haskell.org/ghc.git/commitdiff/08a3536e4246e323fbcd8040e0b800 | 01950fe9bc | | as the offending commit if that helps... 
:-) | | Greetings, | hvr From gergo at erdi.hu Wed Jan 15 11:27:29 2014 From: gergo at erdi.hu (Dr. ERDI Gergo) Date: Wed, 15 Jan 2014 19:27:29 +0800 (SGT) Subject: Pattern synonyms for 7.8? In-Reply-To: References: <59543203684B2244980D7E4057D5FBC148707649@DB3EX14MBXC306.europe.corp.microsoft.com> <1389014277.2952.9.camel@kirk> <41B0CF1C-C66D-4DDC-8C36-A691B83CF7E0@cis.upenn.edu> <4BA531AA-0E3E-48AA-91C9-CDD819D349A9@cis.upenn.edu> Message-ID: On Thu, 9 Jan 2014, Austin Seipp wrote: > 3) It seems GHCi does not support declaring pattern synonyms at the > REPL. I'm not sure if it's intentional, but if it goes in like this, > please be sure to document it in the release notes. We can file a > ticket later for supporting pattern synonyms at the REPL. In GHCi, it seems to fail in the parser. So I thought, well that makes sense, isn't the REPL in GHCi supposed to be something like the inside of a 'do' block? But I tried creating a datatype in GHCi and that worked.. so my point is, I am now aware that I am confused about GHCi's behaviour. Given the time constraints, for the initial release I unfortunately have to decide *not* to support the GHCi REPL. I'll put that in the docs when I write them. -- .--= ULLA! =-----------------. \ http://gergo.erdi.hu \ `---= gergo at erdi.hu =-------' I had my car's alignment checked. It's chaotic evil! From simonpj at microsoft.com Wed Jan 15 11:31:13 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 15 Jan 2014 11:31:13 +0000 Subject: Pattern synonyms for 7.8? In-Reply-To: References: <59543203684B2244980D7E4057D5FBC148707649@DB3EX14MBXC306.europe.corp.microsoft.com> <1389014277.2952.9.camel@kirk> <41B0CF1C-C66D-4DDC-8C36-A691B83CF7E0@cis.upenn.edu> <4BA531AA-0E3E-48AA-91C9-CDD819D349A9@cis.upenn.edu> Message-ID: <59543203684B2244980D7E4057D5FBC1487165C0@DB3EX14MBXC306.europe.corp.microsoft.com> I think that's fine. But yes, the REPL does support top-level declarations. Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Dr. | ERDI Gergo | Sent: 15 January 2014 11:27 | To: Austin Seipp | Cc: Joachim Breitner; GHC Devs | Subject: Re: Pattern synonyms for 7.8? | | On Thu, 9 Jan 2014, Austin Seipp wrote: | | > 3) It seems GHCi does not support declaring pattern synonyms at the | > REPL. I'm not sure if it's intentional, but if it goes in like this, | > please be sure to document it in the release notes. We can file a | > ticket later for supporting pattern synonyms at the REPL. | | In GHCi, it seems to fail in the parser. So I thought, well that makes | sense, isn't the REPL in GHCi supposed to be something like the inside | of a 'do' block? But I tried creating a datatype in GHCi and that | worked.. so my point is, I am now aware that I am confused about GHCi's | behaviour. | Given the time constraints, for the initial release I unfortunately have | to decide *not* to support the GHCi REPL. I'll put that in the docs when | I write them. | | -- | | .--= ULLA! =-----------------. | \ http://gergo.erdi.hu \ | `---= gergo at erdi.hu =-------' | I had my car's alignment checked. It's chaotic evil! 
| _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | http://www.haskell.org/mailman/listinfo/ghc-devs From jpmoresmau at gmail.com Wed Jan 15 16:37:36 2014 From: jpmoresmau at gmail.com (JP Moresmau) Date: Wed, 15 Jan 2014 17:37:36 +0100 Subject: [Haskell-cafe] GHC API: runStmt not taking into account reloaded module In-Reply-To: References: <59543203684B2244980D7E4057D5FBC148716293@DB3EX14MBXC306.europe.corp.microsoft.com> Message-ID: While writing a smaller test case, I managed to get a different error, which in turn allowed me to find this: http://www.haskell.org/pipermail/glasgow-haskell-users/2008-May/014841.html. It turns out I had the ghcLink session flag set to NoLink, it works fine with LinkInMemory! I knew it was something stupid! Thanks, there is no need to open a ticket in the end. JP On Wed, Jan 15, 2014 at 10:40 AM, JP Moresmau wrote: > Thanks Simon, I will write a simple test case and see what happens. > > JP > > > On Wed, Jan 15, 2014 at 10:19 AM, Simon Peyton Jones < > simonpj at microsoft.com> wrote: > >> That sounds odd. Can you make a small reproducible test case, open a >> ticket, and attach the test? >> >> >> >> Thanks >> >> >> >> SImon >> >> >> >> *From:* Haskell-Cafe [mailto:haskell-cafe-bounces at haskell.org] *On >> Behalf Of *JP Moresmau >> *Sent:* 14 January 2014 20:14 >> *To:* Haskell Cafe >> *Subject:* [Haskell-cafe] GHC API: runStmt not taking into account >> reloaded module >> >> >> >> It's late here and I'm probably overlooking something stupid, so I'd like >> if somebody could put my nose on it... I'm using the GHC API to evaluate >> statements. I use runStmt to get a RunResult, lookupName to get the ID for >> the bound names, obtainTermFromId to get the term and showTerm to display >> it. So I can call a function from the loaded module with some parameters >> and get the result. Good! >> >> However, if I reload a module and I change the implementation of the >> function, runStmt still returns the old value! I know the reload worked >> because if I added new names, getNamesInScope returns the new names. What >> do I need to do to make sure the new function definitions are used? I've >> perused the source code of InteractiveEval and ghci but nothing stood out. >> I am calling setContext after load. >> >> >> >> Thanks a million! >> >> >> >> -- >> JP Moresmau >> http://jpmoresmau.blogspot.com/ >> > > > > -- > JP Moresmau > http://jpmoresmau.blogspot.com/ > -- JP Moresmau http://jpmoresmau.blogspot.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Wed Jan 15 19:20:51 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 15 Jan 2014 19:20:51 +0000 Subject: Extending fold/build fusion In-Reply-To: References: <59543203684B2244980D7E4057D5FBC148713626@DB3EX14MBXC306.europe.corp.microsoft.com> Message-ID: <59543203684B2244980D7E4057D5FBC1487169CD@DB3EX14MBXC306.europe.corp.microsoft.com> Akio Aha! So you are really talking about replacing the *entire* foldr/build story with a new one, namely a foldW/buildW story. Presumably all producers and consumers (map, filter, take, drop etc) must be redefined using foldW and buildW instead of fold and build. Is that right? That is much more significant than the wiki page describes. If you are serious about this, could you perhaps update the wiki page to describe what you propose? Do you believe that the new story will catch every case that the old one does? (Plus some new ones.) 
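(For readers less familiar with the machinery under discussion: the "old story" here is GHC's standard foldr/build rule, which the proposal would replace wholesale. A rough sketch of it, omitting the INLINE/NOINLINE pragmas and phase control that the real GHC.Base definitions carry:)

    {-# LANGUAGE RankNTypes #-}

    build :: forall a. (forall b. (a -> b -> b) -> b -> b) -> [a]
    build g = g (:) []

    {-# RULES
    "fold/build"  forall k z (g :: forall b. (a -> b -> b) -> b -> b).
                  foldr k z (build g) = g k z
      #-}

Every good producer is written with build and every good consumer with foldr, so the rule can cancel the intermediate list; the proposal asks the same of all producers and consumers, but in terms of foldrW and buildW.
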
Does your data support that? I'm really not sure about your Tree example. I agree that the foldl' style code gives the result that you show. But I tried the more straightforward version: sumT :: Tree -> Int sumT t = foldr (+) 0 (build (toListFB t)) This yielded pretty decent code: FB.$wgo = \ (w_sio :: FB.Tree) (ww_sir :: GHC.Prim.Int#) -> case w_sio of _ { FB.Tip rb_dgM -> GHC.Prim.+# rb_dgM ww_sir; FB.Bin x_af0 y_af1 -> case FB.$wgo y_af1 ww_sir of ww1_siv { __DEFAULT -> FB.$wgo x_af0 ww1_siv } } This builds no thunks. It does build stack equal to the depth of the tree. But your desired go1 code will also do exactly the same; go1 is strict in its second argument and hence will use call-by-value, and hence will build stack equal to the depth of the tree. In short, I'm not yet seeing a benefit. I am probably missing something important. Suggestion: rather than just reply to this email (soon lost in the email stream), it would be easier for others to join in if you updated your wiki page to say (a) what you propose, and (b) how it can yield benefits that the current setup cannot. Then an email reply can say "go look at section 3" or whatever. best wishes Simon From: Akio Takano [mailto:tkn.akio at gmail.com] Sent: 14 January 2014 09:22 To: Simon Peyton Jones Cc: ghc-devs Subject: Re: Extending fold/build fusion Thank you for looking at this! On Tue, Jan 14, 2014 at 1:27 AM, Simon Peyton Jones > wrote: I've hesitated to reply, because I have lots of questions but no time to investigate in. I'm looking at your wiki page https://github.com/takano-akio/ww-fusion * Does your proposed new fold' run faster than the old one? You give no data. No, it runs just equally fast as the old one. At the Core level they are the same. I ran some criterion benchmarks: source: https://github.com/takano-akio/ww-fusion/blob/master/benchmarks.hs results: http://htmlpreview.github.io/?https://github.com/takano-akio/ww-fusion/blob/master/foldl.html The point was not to make foldl' faster, but to make it fuse well with good producers. * The new foldl' is not a "good consumer" in the foldr/build sense, which a big loss. What if you say fold' k z [1..n]; you want the intermediate list to vanish. For my idea to work, enumFromTo and all other good producers need to be redefined in terms of buildW, which fuses with foldrW. The definition of buildW and the relevant rules are here: https://github.com/takano-akio/ww-fusion/blob/master/WWFusion.hs * My brain is too small to truly understand your idea. But since foldrW is non-recursive, what happens if you inline foldrW into fold', and then simplify? I'm betting you get something pretty similar to the old foldl'. Try in by hand, and with GHC and let's see the final optimised code. I checked this and I see the same code as the old foldl', modulo order of arguments. This is what I expected. * Under "motivation" you say "GHC generates something essentially like..." and then give some code. Now, if GHC would only eta-expand 'go' with a second argument, you'd get brilliant code. And maybe that would help lots of programs, not just this one. It's a slight delicate transformation but I've often thought we should try it; c.f #7994, #5809 I agree that it would be generally useful if GHC did this transformation. However I don't think it's good enough for this particular goal of making foldl' fuse well. 
Consider a function that flattens a binary tree into a list: data Tree = Tip {-# UNPACK #-} !Int | Bin Tree Tree toList :: Tree -> [Int] toList tree = build (toListFB tree) {-# INLINE toList #-} toListFB :: Tree -> (Int -> r -> r) -> r -> r toListFB root cons nil = go root nil where go (Tip x) rest = cons x rest go (Bin x y) rest = go x (go y rest) Let's say we want to eliminate the intermediate list in the expression (sum (toList t)). Currently sum is not a good consumer, but if it were, after fusion we'd get something like: sumList :: Tree -> Int sumList root = go0 root id 0 go0 :: Tree -> (Int -> Int) -> Int -> Int go0 (Tip x) k = \m -> k $! (x+m) go0 (Bin x y) k = go0 x (go0 y k) Now, merely eta-expanding go0 is not enough to get efficient code, because the function will still build a partial application every time it sees a Bin constructor. For this recursion to work in an allocation-free way, it must be rather like: go1 :: Tree -> Int -> Int go1 (Tip x) n = x + n go1 (Bin x y) n = go1 y (go1 x n) And this is what we get if we define foldl' and toList in terms of foldrW and buildW. I think a similar problem arises whenever you define a good consumer that traverses a tree-like structure, and you want to use a strict fold to consume a list produced by that producer. Thank you, Takano Akio Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Akio Takano Sent: 09 January 2014 13:25 To: ghc-devs Subject: Re: Extending fold/build fusion Any input on this is appreciated. In particular, I'd like to know: if I implement the idea as a patch to the base package, is there a chance it is considered for merge? -- Takano Akio On Fri, Jan 3, 2014 at 11:20 PM, Akio Takano > wrote: Hi, I have been thinking about how foldl' can be turned into a good consumer, and I came up with something that I thought would work. So I'd like to ask for opinions from the ghc devs: if this idea looks good, if it is a known bad idea, if there is a better way to do it, etc. The main idea is to have an extended version of foldr: -- | A mapping between @a@ and @b at . data Wrap a b = Wrap (a -> b) (b -> a) foldrW :: (forall e. Wrap (f e) (e -> b -> b)) -> (a -> b -> b) -> b -> [a] -> b foldrW (Wrap wrap unwrap) f z0 list0 = wrap go list0 z0 where go = unwrap $ \list z' -> case list of [] -> z' x:xs -> f x $ wrap go xs z' This allows the user to apply an arbitrary "worker-wrapper" transformation to the loop. Using this, foldl' can be defined as newtype Simple b e = Simple { runSimple :: e -> b -> b } foldl' :: (b -> a -> b) -> b -> [a] -> b foldl' f initial xs = foldrW (Wrap wrap unwrap) g id xs initial where wrap (Simple s) e k a = k $ s e a unwrap u = Simple $ \e -> u e id g x next acc = next $! f acc x The wrap and unwrap functions here ensure that foldl' gets compiled into a loop that returns a value of 'b', rather than a function 'b -> b', effectively un-CPS-transforming the loop. I put preliminary code and some more explanation on Github: https://github.com/takano-akio/ww-fusion Thank you, Takano Akio -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From simonpj at microsoft.com Wed Jan 15 20:31:07 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 15 Jan 2014 20:31:07 +0000 Subject: [Haskell-cafe] GHC API: runStmt not taking into account reloaded module In-Reply-To: References: <59543203684B2244980D7E4057D5FBC148716293@DB3EX14MBXC306.europe.corp.microsoft.com> Message-ID: <59543203684B2244980D7E4057D5FBC148717468@DB3EX14MBXC306.europe.corp.microsoft.com> Great. Any chance you could add a section to http://www.haskell.org/haskellwiki/GHC/As_a_library to elucidate this point. It's not an easy API, and we know for sure that this particular point has tripped at least one person up. Simon From: JP Moresmau [mailto:jpmoresmau at gmail.com] Sent: 15 January 2014 16:38 To: Simon Peyton Jones Cc: ghc-devs at haskell.org Subject: Re: [Haskell-cafe] GHC API: runStmt not taking into account reloaded module While writing a smaller test case, I managed to get a different error, which in turn allowed me to find this: http://www.haskell.org/pipermail/glasgow-haskell-users/2008-May/014841.html. It turns out I had the ghcLink session flag set to NoLink, it works fine with LinkInMemory! I knew it was something stupid! Thanks, there is no need to open a ticket in the end. JP On Wed, Jan 15, 2014 at 10:40 AM, JP Moresmau > wrote: Thanks Simon, I will write a simple test case and see what happens. JP On Wed, Jan 15, 2014 at 10:19 AM, Simon Peyton Jones > wrote: That sounds odd. Can you make a small reproducible test case, open a ticket, and attach the test? Thanks SImon From: Haskell-Cafe [mailto:haskell-cafe-bounces at haskell.org] On Behalf Of JP Moresmau Sent: 14 January 2014 20:14 To: Haskell Cafe Subject: [Haskell-cafe] GHC API: runStmt not taking into account reloaded module It's late here and I'm probably overlooking something stupid, so I'd like if somebody could put my nose on it... I'm using the GHC API to evaluate statements. I use runStmt to get a RunResult, lookupName to get the ID for the bound names, obtainTermFromId to get the term and showTerm to display it. So I can call a function from the loaded module with some parameters and get the result. Good! However, if I reload a module and I change the implementation of the function, runStmt still returns the old value! I know the reload worked because if I added new names, getNamesInScope returns the new names. What do I need to do to make sure the new function definitions are used? I've perused the source code of InteractiveEval and ghci but nothing stood out. I am calling setContext after load. Thanks a million! -- JP Moresmau http://jpmoresmau.blogspot.com/ -- JP Moresmau http://jpmoresmau.blogspot.com/ -- JP Moresmau http://jpmoresmau.blogspot.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From ezyang at mit.edu Wed Jan 15 21:34:58 2014 From: ezyang at mit.edu (Edward Z. Yang) Date: Wed, 15 Jan 2014 13:34:58 -0800 Subject: PLT Redex definition of STG as per fast curry paper Message-ID: <1389821447-sup-5356@sabre> For those of you who aren't following the commit list, I've just pushed a PLT Redex definition for an STG-like language as was defined in the fast curry paper. This language is *not* STG. The hope is that this will be a good starting point for actually formalizing STG as it exists today. I've included a hefty comment block at the top of the development summarizing ways in which this could be improved. Take a look if you're interested! 
Cheers, Edward From carter.schonwald at gmail.com Wed Jan 15 22:14:39 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Wed, 15 Jan 2014 17:14:39 -0500 Subject: PLT Redex definition of STG as per fast curry paper In-Reply-To: <1389821447-sup-5356@sabre> References: <1389821447-sup-5356@sabre> Message-ID: Very cool! Thanks for sharing! On Wed, Jan 15, 2014 at 4:34 PM, Edward Z. Yang wrote: > For those of you who aren't following the commit list, I've just pushed > a PLT Redex definition for an STG-like > language as was defined in the fast curry paper. This language is *not* > STG. The hope is that this will be a good starting point for actually > formalizing STG as it exists today. I've included a hefty comment block > at the top of the development summarizing ways in which this could be > improved. Take a look if you're interested! > > Cheers, > Edward > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ekmett at gmail.com Wed Jan 15 22:37:55 2014 From: ekmett at gmail.com (Edward Kmett) Date: Wed, 15 Jan 2014 17:37:55 -0500 Subject: PLT Redex definition of STG as per fast curry paper In-Reply-To: <1389821447-sup-5356@sabre> References: <1389821447-sup-5356@sabre> Message-ID: Neat! I wish we'd had this a couple of years ago when Dylan Lukes was playing around with his STG-like toy. -Edward On Wed, Jan 15, 2014 at 4:34 PM, Edward Z. Yang wrote: > For those of you who aren't following the commit list, I've just pushed > a PLT Redex definition for an STG-like > language as was defined in the fast curry paper. This language is *not* > STG. The hope is that this will be a good starting point for actually > formalizing STG as it exists today. I've included a hefty comment block > at the top of the development summarizing ways in which this could be > improved. Take a look if you're interested! > > Cheers, > Edward > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hvr at gnu.org Thu Jan 16 07:54:34 2014 From: hvr at gnu.org (Herbert Valerio Riedel) Date: Thu, 16 Jan 2014 08:54:34 +0100 Subject: Releasing containers 0.5.3.2 -- before GHC 7.8? In-Reply-To: (Ryan Newton's message of "Tue, 14 Jan 2014 11:41:48 -0500") References: Message-ID: <878uugpdj9.fsf@gnu.org> On 2014-01-14 at 17:41:48 +0100, Ryan Newton wrote: > I'm wondering if we can do a hackage release of 0.5.3.2? That "splitRoot" > function is in there, and my ability to deploy parallel code that uses > containers depends on people getting it! Are there any other changes since > 0.5.3.1? > > Replacing containers seems like a real pain for end users, so it would be > great if 0.5.3.2 could come with GHC 7.8. Currently, it looks like the GHC > repo is up to date in that it includes 0.5.3.1. Done (actually is do.. erm... 
containers-0.5.4.0): http://git.haskell.org/ghc.git/commitdiff/69cf5c4cb8aba309e5c495008b69089e5431a095 Cheers, hvr From marlowsd at gmail.com Thu Jan 16 09:26:35 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Thu, 16 Jan 2014 09:26:35 +0000 Subject: [commit: packages/integer-gmp] master: Allocate initial 1-limb mpz_t on the Stack and introduce MPZ# type (7bdcadd) In-Reply-To: <87y52jpn64.fsf@gmail.com> References: <20140113132526.74D922406B@ghc.haskell.org> <52D4531F.1030907@gmail.com> <87y52jpn64.fsf@gmail.com> Message-ID: <52D7A5CB.5040004@gmail.com> On 13/01/14 21:49, Herbert Valerio Riedel wrote: > On 2014-01-13 at 21:57:03 +0100, Simon Marlow wrote: >> On 13/01/14 13:25, git at git.haskell.org wrote: >>> Repository : ssh://git at git.haskell.org/integer-gmp >>> >>> On branch : master >>> Link : http://ghc.haskell.org/trac/ghc/changeset/7bdcadda7e884edffb1427f0685493f3a2e5c5fa/integer-gmp >>> >>>> --------------------------------------------------------------- >>> >>> commit 7bdcadda7e884edffb1427f0685493f3a2e5c5fa >>> Author: Herbert Valerio Riedel >>> Date: Thu Jan 9 00:19:31 2014 +0100 >>> >>> Allocate initial 1-limb mpz_t on the Stack and introduce MPZ# type >>> >>> We now allocate a 1-limb mpz_t on the stack instead of doing a more >>> expensive heap-allocation (especially if the heap-allocated copy becomes >>> garbage right away); this addresses #8647. >> >> While this is quite cool (turning some J# back into S#), I don't >> understand why you've done it this way. Couldn't it be done in the >> Haskell layer rather than modifying the primops? The ByteArray# has >> already been allocated by GMP, so you don't lose anything by returning >> it to Haskell and checking the size there. Then all the >> DUMMY_BYTEARRAY stuff would go away. > > Actually there isn't always a ByteArray# allocated; the patch got rid of > all mpz_init() calls for the result-mpz_t (which would have allocated > 1-limb ByteArray#s); > > Now instead, the single word-sized limb that would have been > heap-allocated via mpz_init() before calling the actual GMP operation, > is allocated on the stack instead, and only if the GMP routines need to > grow the passed in mpz_t's an actual ByteArray# is allocated. > > That's why I needed a way to return either a single stack-allocated limb > (hence the word#), *or* an heap-allocated 'ByteArray#', which lead to > the MPZ# 3-tuple. Ok, I see now. Thanks for the explanation. Cheers, Simon From johan.tibell at gmail.com Thu Jan 16 15:19:12 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Thu, 16 Jan 2014 07:19:12 -0800 Subject: [PATCH] platformFromTriple: fix to recognize Solaris triple (i386-pc-solaris2.11) In-Reply-To: References: <1388942048-16010-1-git-send-email-karel.gardas@centrum.cz> <8738l29w29.fsf@gmail.com> <52D41ED3.6070802@centrum.cz> <87txd7pkvw.fsf@gmail.com> Message-ID: I've merged the fix into the 1.18. I'm OK making one very last release in the 1.18 once we're done with the GHC 7.8 RC. Herbert promised to let me know when we are. On Tue, Jan 14, 2014 at 4:35 AM, Mikhail Glushenkov < the.dead.shall.rise at gmail.com> wrote: > Hi, > > On Mon, Jan 13, 2014 at 11:38 PM, Herbert Valerio Riedel > wrote: > > > > You'll need to persuade the Cabal devs to make the fix above available > > in a stable branch; if the fix makes it into a Cabal release in time for > > the final GHC 7.8 release, it will most likely be part of 7.8. However, > > I don't know if there's a concrete plan for a Cabal-1.18.1.3 release > > currently. 
> > Johan is the final authority on this, but IIRC we wanted 1.18.1.2 to > be the final 1.18 release. I'll merge that fix into the 1.18 branch. > > -- > () ascii ribbon campaign - against html e-mail > /\ www.asciiribbon.org - against proprietary attachments > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jpmoresmau at gmail.com Fri Jan 17 11:35:52 2014 From: jpmoresmau at gmail.com (JP Moresmau) Date: Fri, 17 Jan 2014 12:35:52 +0100 Subject: [Haskell-cafe] GHC API: runStmt not taking into account reloaded module In-Reply-To: <59543203684B2244980D7E4057D5FBC148717468@DB3EX14MBXC306.europe.corp.microsoft.com> References: <59543203684B2244980D7E4057D5FBC148716293@DB3EX14MBXC306.europe.corp.microsoft.com> <59543203684B2244980D7E4057D5FBC148717468@DB3EX14MBXC306.europe.corp.microsoft.com> Message-ID: Updated the wiki! Thanks On Wed, Jan 15, 2014 at 9:31 PM, Simon Peyton Jones wrote: > Great. > > > > Any chance you could add a section to > > http://www.haskell.org/haskellwiki/GHC/As_a_library > > to elucidate this point. It?s not an easy API, and we know for sure that > this particular point has tripped at least one person up. > > > > Simon > > > > *From:* JP Moresmau [mailto:jpmoresmau at gmail.com] > *Sent:* 15 January 2014 16:38 > *To:* Simon Peyton Jones > *Cc:* ghc-devs at haskell.org > *Subject:* Re: [Haskell-cafe] GHC API: runStmt not taking into account > reloaded module > > > > While writing a smaller test case, I managed to get a different error, > which in turn allowed me to find this: > http://www.haskell.org/pipermail/glasgow-haskell-users/2008-May/014841.html. > It turns out I had the ghcLink session flag set to NoLink, it works fine > with LinkInMemory! I knew it was something stupid! > > > > Thanks, there is no need to open a ticket in the end. > > > > JP > > > > On Wed, Jan 15, 2014 at 10:40 AM, JP Moresmau > wrote: > > Thanks Simon, I will write a simple test case and see what happens. > > > > JP > > > > On Wed, Jan 15, 2014 at 10:19 AM, Simon Peyton Jones < > simonpj at microsoft.com> wrote: > > That sounds odd. Can you make a small reproducible test case, open a > ticket, and attach the test? > > > > Thanks > > > > SImon > > > > *From:* Haskell-Cafe [mailto:haskell-cafe-bounces at haskell.org] *On Behalf > Of *JP Moresmau > *Sent:* 14 January 2014 20:14 > *To:* Haskell Cafe > *Subject:* [Haskell-cafe] GHC API: runStmt not taking into account > reloaded module > > > > It's late here and I'm probably overlooking something stupid, so I'd like > if somebody could put my nose on it... I'm using the GHC API to evaluate > statements. I use runStmt to get a RunResult, lookupName to get the ID for > the bound names, obtainTermFromId to get the term and showTerm to display > it. So I can call a function from the loaded module with some parameters > and get the result. Good! > > However, if I reload a module and I change the implementation of the > function, runStmt still returns the old value! I know the reload worked > because if I added new names, getNamesInScope returns the new names. What > do I need to do to make sure the new function definitions are used? I've > perused the source code of InteractiveEval and ghci but nothing stood out. > I am calling setContext after load. > > > > Thanks a million! 
> > > > -- > JP Moresmau > http://jpmoresmau.blogspot.com/ > > > > > > -- > JP Moresmau > http://jpmoresmau.blogspot.com/ > > > > > > -- > JP Moresmau > http://jpmoresmau.blogspot.com/ > -- JP Moresmau http://jpmoresmau.blogspot.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From gergo at erdi.hu Sat Jan 18 13:36:33 2014 From: gergo at erdi.hu (Dr. ERDI Gergo) Date: Sat, 18 Jan 2014 21:36:33 +0800 (SGT) Subject: Pattern synonyms for 7.8? In-Reply-To: References: <59543203684B2244980D7E4057D5FBC148707649@DB3EX14MBXC306.europe.corp.microsoft.com> <1389014277.2952.9.camel@kirk> <41B0CF1C-C66D-4DDC-8C36-A691B83CF7E0@cis.upenn.edu> <4BA531AA-0E3E-48AA-91C9-CDD819D349A9@cis.upenn.edu> Message-ID: Hi, I've now pushed a re-based version to wip/pattern-synonyms that 1. contains a users_guide entry for -XPatternSynonyms (mostly modelled on the -XViewPatterns docs and the existing Wiki pages for PatternSynonyms) 2. fixes all test failures except for the following two which also occur on master: perf/should_run T5237 [stat not good enough] (normal) th T8633 [bad exit code] (normal) 3. adds a note to the users_guide that pattern synonym declarations don't work in GHCi. Could a native English speaker please read through the documentation changes on wip/pattern-synonyms? Also, I guess this is my official submission of the wip/pattern-synonyms branch for merge into master. Thanks, Gergo On Mon, 13 Jan 2014, Dr. ERDI Gergo wrote: > On Thu, 9 Jan 2014, Austin Seipp wrote: > >> 1) As Richard pointed out, the docs are under docs/users_guide, as >> well as the release notes. Please feel free to elaborate however you >> want on the feature and the bulletpoint for the release notes. > > Hope to get around to these in the weekend. > >> 2) The failures are indeed a result of your code, in particular: >> >> driver T4437 [bad stdout] (normal) >> generics GenDerivOutput [stderr mismatch] (normal) >> generics GenDerivOutput1_0 [stderr mismatch] (normal) >> generics GenDerivOutput1_1 [stderr mismatch] (normal) >> rename/should_compile T7336 [stderr mismatch] (normal) > > Fixed these. > >> 3) It seems GHCi does not support declaring pattern synonyms at the >> REPL. I'm not sure if it's intentional, but if it goes in like this, >> please be sure to document it in the release notes. We can file a >> ticket later for supporting pattern synonyms at the REPL. > > It's definitely not intentional and I have no idea why it would be so. Isn't > GHCi a fairly thin wrapper around the GHC internals? Is there any wiki page > detailing the differences in GHCi vs GHC code paths? > > Thanks, > Gergo > -- .--= ULLA! =-----------------. \ http://gergo.erdi.hu \ `---= gergo at erdi.hu =-------' K?t pont k?z?tt a legr?videbb ?t ?p?t?s alatt ?ll. From pali.gabor at gmail.com Sat Jan 18 13:54:58 2014 From: pali.gabor at gmail.com (=?ISO-8859-1?Q?P=E1li_G=E1bor_J=E1nos?=) Date: Sat, 18 Jan 2014 14:54:58 +0100 Subject: RC Status In-Reply-To: References: <20140114.212338.1936899713892957648.kazu@iij.ad.jp> Message-ID: Hello there, On Tue, Jan 14, 2014 at 5:01 PM, Austin Seipp wrote: > Thanks a lot. I will be branching soon. Please let me know if you need > me to merge anything to the release branch. 
I have not seen a ghc-7.8 branch yet, but here are the hashes for the commits I would merge: 1ad599ea241626f47006fa386e4aaf38dc91fdbb -- Fixes #8451 bcc5c953f80c53732172345639f30974b9862043 -- DYNAMIC_GHC_PROGRAMS=YES for FreeBSD c3b8b3ab27f092c83e08915e3de0bde29321cd31 -- Minor fix in configure (I have been using it in the FreeBSD ports tree) 0d90cbc988af31ff8ea35120203bd9d252d8055e -- Enable the LLVM codegen for FreeBSD/amd64 (also used in the FreeBSD ports tree) I have also run through your 7.8 RC checklist [1] and I would update it with the following information: - FreeBSD (i386): builds clean, validate works, bootstrapping works. - FreeBSD (x86_64): builds clean, validate works, bootstrapping works. - Dynamic GHCi is now enabled and works for both i386 and x86_64 on FreeBSD. [1] https://ghc.haskell.org/trac/ghc/wiki/Status/GHC-7.8 From karel.gardas at centrum.cz Sat Jan 18 22:09:20 2014 From: karel.gardas at centrum.cz (Karel Gardas) Date: Sat, 18 Jan 2014 23:09:20 +0100 Subject: ARM64 cross-compiler and --sysroot GNU C option. Message-ID: <52DAFB90.3050503@centrum.cz> Folks, just for fun I've built GHC cross-compiler for ARM64 platform using LLVM backend. I've documented it here: https://ghcarm.wordpress.com/2014/01/18/unregisterised-ghc-head-build-for-arm64-platform/ and I'm writing here just to ask if you think adding --with-gcc-sysroot option (or kind of it) to GHC's configure may be a good idea or not (i.e. you consider using shell script wrapper which passes --sysroot option to the GNU C cross compiler perfectly adequate for this job -- like I did in blog post above). See also #7754 which shows the same problem. Thanks! Karel From marlowsd at gmail.com Sun Jan 19 08:23:36 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Sun, 19 Jan 2014 08:23:36 +0000 Subject: [commit: ghc] master: Re-work the naming story for the GHCi prompt (Trac #8649) (73c08ab) In-Reply-To: <20140110085221.AFEF92406B@ghc.haskell.org> References: <20140110085221.AFEF92406B@ghc.haskell.org> Message-ID: <52DB8B88.4040109@gmail.com> On 10/01/14 08:52, git at git.haskell.org wrote: > Repository : ssh://git at git.haskell.org/ghc > > On branch : master > Link : http://ghc.haskell.org/trac/ghc/changeset/73c08ab10e4077e18e459a1325996bff110360c3/ghc > >> --------------------------------------------------------------- > > commit 73c08ab10e4077e18e459a1325996bff110360c3 > Author: Simon Peyton Jones > Date: Thu Jan 9 17:58:18 2014 +0000 > > Re-work the naming story for the GHCi prompt (Trac #8649) Thanks for going to the trouble of cleaning this up. I was never happy with how this all worked. The prefix colon in the name (":Interactive") was supposed to avoid the possibility of clashing with a user-defined module, rather like the ":Main" pseudo-module. Could that be a problem with "Ghci1" and so on? What about the "interactive" package? Perhaps these ought to be renamed to something that can't be user-defined? Cheers, Simon From simonpj at microsoft.com Sun Jan 19 21:12:52 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Sun, 19 Jan 2014 21:12:52 +0000 Subject: Pattern synonyms for 7.8? In-Reply-To: References: <59543203684B2244980D7E4057D5FBC148707649@DB3EX14MBXC306.europe.corp.microsoft.com> <1389014277.2952.9.camel@kirk> <41B0CF1C-C66D-4DDC-8C36-A691B83CF7E0@cis.upenn.edu> <4BA531AA-0E3E-48AA-91C9-CDD819D349A9@cis.upenn.edu> Message-ID: <59543203684B2244980D7E4057D5FBC14872934D@DB3EX14MBXC306.europe.corp.microsoft.com> Austin, I'm away now, so I think you can go ahead and merge. 
(I guess the two failures below need attention though.) Thanks Gergo Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Dr. | ERDI Gergo | Sent: 18 January 2014 13:37 | To: Austin Seipp | Cc: Joachim Breitner; GHC Devs | Subject: Re: Pattern synonyms for 7.8? | | Hi, | | I've now pushed a re-based version to wip/pattern-synonyms that | | 1. contains a users_guide entry for -XPatternSynonyms (mostly modelled | on | the -XViewPatterns docs and the existing Wiki pages for | PatternSynonyms) | | 2. fixes all test failures except for the following two which also | occur | on master: | | perf/should_run T5237 [stat not good enough] (normal) | th T8633 [bad exit code] (normal) | | 3. adds a note to the users_guide that pattern synonym declarations | don't | work in GHCi. | | Could a native English speaker please read through the documentation | changes on wip/pattern-synonyms? | | Also, I guess this is my official submission of the wip/pattern- | synonyms | branch for merge into master. | | Thanks, | Gergo | | | On Mon, 13 Jan 2014, Dr. ERDI Gergo wrote: | | > On Thu, 9 Jan 2014, Austin Seipp wrote: | > | >> 1) As Richard pointed out, the docs are under docs/users_guide, as | >> well as the release notes. Please feel free to elaborate however you | >> want on the feature and the bulletpoint for the release notes. | > | > Hope to get around to these in the weekend. | > | >> 2) The failures are indeed a result of your code, in particular: | >> | >> driver T4437 [bad stdout] (normal) | >> generics GenDerivOutput [stderr mismatch] (normal) | >> generics GenDerivOutput1_0 [stderr mismatch] | (normal) | >> generics GenDerivOutput1_1 [stderr mismatch] | (normal) | >> rename/should_compile T7336 [stderr mismatch] (normal) | > | > Fixed these. | > | >> 3) It seems GHCi does not support declaring pattern synonyms at the | >> REPL. I'm not sure if it's intentional, but if it goes in like this, | >> please be sure to document it in the release notes. We can file a | >> ticket later for supporting pattern synonyms at the REPL. | > | > It's definitely not intentional and I have no idea why it would be | so. Isn't | > GHCi a fairly thin wrapper around the GHC internals? Is there any | wiki page | > detailing the differences in GHCi vs GHC code paths? | > | > Thanks, | > Gergo | > | | -- | | .--= ULLA! =-----------------. | \ http://gergo.erdi.hu \ | `---= gergo at erdi.hu =-------' | K?t pont k?z?tt a legr?videbb ?t ?p?t?s alatt ?ll. From mark.lentczner at gmail.com Mon Jan 20 06:45:39 2014 From: mark.lentczner at gmail.com (Mark Lentczner) Date: Sun, 19 Jan 2014 22:45:39 -0800 Subject: unexpected failures expected? Message-ID: Hiho - I just built GHC HEAD on Mac OS X... and make test yields 4 unexpected failures. Is this something I should, er, expect? 
I'm just trying to calibrate myself for building GHC from source as part of Mac Haskell Platform packaging, and wondering if this (pre-final release) is something to expect, or something I should chase down: OVERALL SUMMARY for test run started at Sun Jan 19 21:02:36 2014 PST 0:33:45 spent to go through 3866 total tests, which gave rise to 16062 test cases, of which 12536 were skipped 26 had missing libraries 3458 expected passes 38 expected failures 0 caused framework failures 0 unexpected passes 4 unexpected failures Unexpected failures: cabal/cabal04 cabal04 [bad exit code] (normal) concurrent/should_run T5611 [bad stderr] (normal) perf/compiler T4801 [stat not good enough] (normal) th TH_spliceE5_prof [bad exit code] (normal) - Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From jan.stolarek at p.lodz.pl Mon Jan 20 14:52:40 2014 From: jan.stolarek at p.lodz.pl (Jan Stolarek) Date: Mon, 20 Jan 2014 15:52:40 +0100 Subject: RC Status In-Reply-To: References: Message-ID: <201401201552.40073.jan.stolarek@p.lodz.pl> I just filled a bug report for a compile-time crash that happens on HEAD: https://ghc.haskell.org/trac/ghc/ticket/8686 Looks like this is fault of some dynamic stuff. I'm affraid that whatever the cause of this problem is it might have large impact if we release stable GHC with this issue unfixed. Janek Dnia sobota, 18 stycznia 2014, P?li G?bor J?nos napisa?: > Hello there, > > On Tue, Jan 14, 2014 at 5:01 PM, Austin Seipp wrote: > > Thanks a lot. I will be branching soon. Please let me know if you need > > me to merge anything to the release branch. > > I have not seen a ghc-7.8 branch yet, but here are the hashes for the > commits I would merge: > > 1ad599ea241626f47006fa386e4aaf38dc91fdbb -- Fixes #8451 > bcc5c953f80c53732172345639f30974b9862043 -- DYNAMIC_GHC_PROGRAMS=YES for > FreeBSD c3b8b3ab27f092c83e08915e3de0bde29321cd31 -- Minor fix in configure > (I have been using it in the FreeBSD ports tree) > 0d90cbc988af31ff8ea35120203bd9d252d8055e -- Enable the LLVM codegen > for FreeBSD/amd64 (also used in the FreeBSD ports tree) > > I have also run through your 7.8 RC checklist [1] and I would update > it with the following information: > > - FreeBSD (i386): builds clean, validate works, bootstrapping works. > - FreeBSD (x86_64): builds clean, validate works, bootstrapping works. > - Dynamic GHCi is now enabled and works for both i386 and x86_64 on > FreeBSD. > > [1] https://ghc.haskell.org/trac/ghc/wiki/Status/GHC-7.8 > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs From austin at well-typed.com Mon Jan 20 18:22:23 2014 From: austin at well-typed.com (Austin Seipp) Date: Mon, 20 Jan 2014 12:22:23 -0600 Subject: unexpected failures expected? In-Reply-To: References: Message-ID: Hi Mark, I believe some of these are mine from a patch I committed last week (I'm almost certain.) I'll be looking into these shortly. On Mon, Jan 20, 2014 at 12:45 AM, Mark Lentczner wrote: > Hiho - > > I just built GHC HEAD on Mac OS X... and make test yields 4 unexpected > failures. Is this something I should, er, expect? 
> > I'm just trying to calibrate myself for building GHC from source as part of > Mac Haskell Platform packaging, and wondering if this (pre-final release) is > something to expect, or something I should chase down: > > OVERALL SUMMARY for test run started at Sun Jan 19 21:02:36 2014 PST > 0:33:45 spent to go through > 3866 total tests, which gave rise to > 16062 test cases, of which > 12536 were skipped > > 26 had missing libraries > 3458 expected passes > 38 expected failures > > 0 caused framework failures > 0 unexpected passes > 4 unexpected failures > > Unexpected failures: > cabal/cabal04 cabal04 [bad exit code] (normal) > concurrent/should_run T5611 [bad stderr] (normal) > perf/compiler T4801 [stat not good enough] (normal) > th TH_spliceE5_prof [bad exit code] (normal) > > - Mark > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From austin at well-typed.com Mon Jan 20 18:23:15 2014 From: austin at well-typed.com (Austin Seipp) Date: Mon, 20 Jan 2014 12:23:15 -0600 Subject: Pattern synonyms for 7.8? In-Reply-To: <59543203684B2244980D7E4057D5FBC14872934D@DB3EX14MBXC306.europe.corp.microsoft.com> References: <59543203684B2244980D7E4057D5FBC148707649@DB3EX14MBXC306.europe.corp.microsoft.com> <1389014277.2952.9.camel@kirk> <41B0CF1C-C66D-4DDC-8C36-A691B83CF7E0@cis.upenn.edu> <4BA531AA-0E3E-48AA-91C9-CDD819D349A9@cis.upenn.edu> <59543203684B2244980D7E4057D5FBC14872934D@DB3EX14MBXC306.europe.corp.microsoft.com> Message-ID: Hi Gergo, Thanks a bunch. I have a ./validate tree running which is almost done, so I will merge it shortly! On Sun, Jan 19, 2014 at 3:12 PM, Simon Peyton Jones wrote: > Austin, I'm away now, so I think you can go ahead and merge. (I guess the two failures below need attention though.) > > Thanks Gergo > > Simon > > | -----Original Message----- > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Dr. > | ERDI Gergo > | Sent: 18 January 2014 13:37 > | To: Austin Seipp > | Cc: Joachim Breitner; GHC Devs > | Subject: Re: Pattern synonyms for 7.8? > | > | Hi, > | > | I've now pushed a re-based version to wip/pattern-synonyms that > | > | 1. contains a users_guide entry for -XPatternSynonyms (mostly modelled > | on > | the -XViewPatterns docs and the existing Wiki pages for > | PatternSynonyms) > | > | 2. fixes all test failures except for the following two which also > | occur > | on master: > | > | perf/should_run T5237 [stat not good enough] (normal) > | th T8633 [bad exit code] (normal) > | > | 3. adds a note to the users_guide that pattern synonym declarations > | don't > | work in GHCi. > | > | Could a native English speaker please read through the documentation > | changes on wip/pattern-synonyms? > | > | Also, I guess this is my official submission of the wip/pattern- > | synonyms > | branch for merge into master. > | > | Thanks, > | Gergo > | > | > | On Mon, 13 Jan 2014, Dr. ERDI Gergo wrote: > | > | > On Thu, 9 Jan 2014, Austin Seipp wrote: > | > > | >> 1) As Richard pointed out, the docs are under docs/users_guide, as > | >> well as the release notes. Please feel free to elaborate however you > | >> want on the feature and the bulletpoint for the release notes. > | > > | > Hope to get around to these in the weekend. 
> | > > | >> 2) The failures are indeed a result of your code, in particular: > | >> > | >> driver T4437 [bad stdout] (normal) > | >> generics GenDerivOutput [stderr mismatch] (normal) > | >> generics GenDerivOutput1_0 [stderr mismatch] > | (normal) > | >> generics GenDerivOutput1_1 [stderr mismatch] > | (normal) > | >> rename/should_compile T7336 [stderr mismatch] (normal) > | > > | > Fixed these. > | > > | >> 3) It seems GHCi does not support declaring pattern synonyms at the > | >> REPL. I'm not sure if it's intentional, but if it goes in like this, > | >> please be sure to document it in the release notes. We can file a > | >> ticket later for supporting pattern synonyms at the REPL. > | > > | > It's definitely not intentional and I have no idea why it would be > | so. Isn't > | > GHCi a fairly thin wrapper around the GHC internals? Is there any > | wiki page > | > detailing the differences in GHCi vs GHC code paths? > | > > | > Thanks, > | > Gergo > | > > | > | -- > | > | .--= ULLA! =-----------------. > | \ http://gergo.erdi.hu \ > | `---= gergo at erdi.hu =-------' > | K?t pont k?z?tt a legr?videbb ?t ?p?t?s alatt ?ll. > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From austin at well-typed.com Mon Jan 20 18:24:56 2014 From: austin at well-typed.com (Austin Seipp) Date: Mon, 20 Jan 2014 12:24:56 -0600 Subject: RC Status In-Reply-To: <201401201552.40073.jan.stolarek@p.lodz.pl> References: <201401201552.40073.jan.stolarek@p.lodz.pl> Message-ID: Hi Jan, I believe this is my fault (see the email I just sent to Mark about some failures in HEAD.) Sorry about that! On Mon, Jan 20, 2014 at 8:52 AM, Jan Stolarek wrote: > I just filled a bug report for a compile-time crash that happens on HEAD: > > https://ghc.haskell.org/trac/ghc/ticket/8686 > > Looks like this is fault of some dynamic stuff. I'm affraid that whatever the cause of this > problem is it might have large impact if we release stable GHC with this issue unfixed. > > Janek > > Dnia sobota, 18 stycznia 2014, P?li G?bor J?nos napisa?: >> Hello there, >> >> On Tue, Jan 14, 2014 at 5:01 PM, Austin Seipp wrote: >> > Thanks a lot. I will be branching soon. Please let me know if you need >> > me to merge anything to the release branch. >> >> I have not seen a ghc-7.8 branch yet, but here are the hashes for the >> commits I would merge: >> >> 1ad599ea241626f47006fa386e4aaf38dc91fdbb -- Fixes #8451 >> bcc5c953f80c53732172345639f30974b9862043 -- DYNAMIC_GHC_PROGRAMS=YES for >> FreeBSD c3b8b3ab27f092c83e08915e3de0bde29321cd31 -- Minor fix in configure >> (I have been using it in the FreeBSD ports tree) >> 0d90cbc988af31ff8ea35120203bd9d252d8055e -- Enable the LLVM codegen >> for FreeBSD/amd64 (also used in the FreeBSD ports tree) >> >> I have also run through your 7.8 RC checklist [1] and I would update >> it with the following information: >> >> - FreeBSD (i386): builds clean, validate works, bootstrapping works. >> - FreeBSD (x86_64): builds clean, validate works, bootstrapping works. >> - Dynamic GHCi is now enabled and works for both i386 and x86_64 on >> FreeBSD. 
>> >> [1] https://ghc.haskell.org/trac/ghc/wiki/Status/GHC-7.8 >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs > > > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From simonpj at microsoft.com Mon Jan 20 19:20:08 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 20 Jan 2014 19:20:08 +0000 Subject: [commit: ghc] master: Re-work the naming story for the GHCi prompt (Trac #8649) (73c08ab) In-Reply-To: <52DB8B88.4040109@gmail.com> References: <20140110085221.AFEF92406B@ghc.haskell.org> <52DB8B88.4040109@gmail.com> Message-ID: <59543203684B2244980D7E4057D5FBC14872BC37@DB3EX14MBXC306.europe.corp.microsoft.com> Well, the module names can be disambiguated by the package name (it's already possible to have the same module in different packages). It is indeed possible to have a package called "interactive", and I simply assumed that would be unlikely. I'd be ok with changing it to ":interactive" if you want to do that. Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Simon | Marlow | Sent: 19 January 2014 08:24 | To: ghc-devs at haskell.org | Subject: Re: [commit: ghc] master: Re-work the naming story for the | GHCi prompt (Trac #8649) (73c08ab) | | On 10/01/14 08:52, git at git.haskell.org wrote: | > Repository : ssh://git at git.haskell.org/ghc | > | > On branch : master | > Link : | http://ghc.haskell.org/trac/ghc/changeset/73c08ab10e4077e18e459a1325996 | bff110360c3/ghc | > | >> --------------------------------------------------------------- | > | > commit 73c08ab10e4077e18e459a1325996bff110360c3 | > Author: Simon Peyton Jones | > Date: Thu Jan 9 17:58:18 2014 +0000 | > | > Re-work the naming story for the GHCi prompt (Trac #8649) | | Thanks for going to the trouble of cleaning this up. I was never happy | with how this all worked. | | The prefix colon in the name (":Interactive") was supposed to avoid the | possibility of clashing with a user-defined module, rather like the | ":Main" pseudo-module. Could that be a problem with "Ghci1" and so on? | What about the "interactive" package? Perhaps these ought to be | renamed to something that can't be user-defined? | | Cheers, | Simon | | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | http://www.haskell.org/mailman/listinfo/ghc-devs From jan.stolarek at p.lodz.pl Mon Jan 20 19:35:13 2014 From: jan.stolarek at p.lodz.pl (Jan Stolarek) Date: Mon, 20 Jan 2014 20:35:13 +0100 Subject: RC Status In-Reply-To: References: <201401201552.40073.jan.stolarek@p.lodz.pl> Message-ID: <201401202035.13305.jan.stolarek@p.lodz.pl> Hi Austin, Richard pointed out that one of your commits last week solved the problem. I wasn't using the latest HEAD. My bad. I'm closing the ticket. Janek Dnia poniedzia?ek, 20 stycznia 2014, Austin Seipp napisa?: > Hi Jan, > > I believe this is my fault (see the email I just sent to Mark about > some failures in HEAD.) Sorry about that! > > On Mon, Jan 20, 2014 at 8:52 AM, Jan Stolarek wrote: > > I just filled a bug report for a compile-time crash that happens on HEAD: > > > > https://ghc.haskell.org/trac/ghc/ticket/8686 > > > > Looks like this is fault of some dynamic stuff. I'm affraid that whatever > > the cause of this problem is it might have large impact if we release > > stable GHC with this issue unfixed. 
> > > > Janek > > > > Dnia sobota, 18 stycznia 2014, P?li G?bor J?nos napisa?: > >> Hello there, > >> > >> On Tue, Jan 14, 2014 at 5:01 PM, Austin Seipp wrote: > >> > Thanks a lot. I will be branching soon. Please let me know if you need > >> > me to merge anything to the release branch. > >> > >> I have not seen a ghc-7.8 branch yet, but here are the hashes for the > >> commits I would merge: > >> > >> 1ad599ea241626f47006fa386e4aaf38dc91fdbb -- Fixes #8451 > >> bcc5c953f80c53732172345639f30974b9862043 -- DYNAMIC_GHC_PROGRAMS=YES for > >> FreeBSD c3b8b3ab27f092c83e08915e3de0bde29321cd31 -- Minor fix in > >> configure (I have been using it in the FreeBSD ports tree) > >> 0d90cbc988af31ff8ea35120203bd9d252d8055e -- Enable the LLVM codegen > >> for FreeBSD/amd64 (also used in the FreeBSD ports tree) > >> > >> I have also run through your 7.8 RC checklist [1] and I would update > >> it with the following information: > >> > >> - FreeBSD (i386): builds clean, validate works, bootstrapping works. > >> - FreeBSD (x86_64): builds clean, validate works, bootstrapping works. > >> - Dynamic GHCi is now enabled and works for both i386 and x86_64 on > >> FreeBSD. > >> > >> [1] https://ghc.haskell.org/trac/ghc/wiki/Status/GHC-7.8 > >> _______________________________________________ > >> ghc-devs mailing list > >> ghc-devs at haskell.org > >> http://www.haskell.org/mailman/listinfo/ghc-devs From kyrab at mail.ru Tue Jan 21 22:38:55 2014 From: kyrab at mail.ru (kyra) Date: Wed, 22 Jan 2014 02:38:55 +0400 Subject: [commit: ghc] master: Add Windows to NoSharedLibsPlatformList (4af1e76) In-Reply-To: References: <20140113062821.1C7D92406B@ghc.haskell.org> <52D3B968.6020005@mail.ru> <52D3C546.8010307@mail.ru> <52D3C91C.1070608@mail.ru> Message-ID: <52DEF6FF.7000903@mail.ru> On 1/14/2014 14:16, Austin Seipp wrote: > However, dynamic for Windows is the biggest thing holding up the RC, > and we're behind schedule (people are ready to move on) - so in light > of this, the RC will likely move forward shortly with these in the > same state (which is unfortunate, but we decided to punt it in a > decision last week.) It's sad to say that we have so few Windows > hackers, it's hard to hold up for so long on this issue. But you can > help change all of this! > > During the RC period, I would very much welcome fixes for some of > these issues, and be more than willing to assist you where possible to > do that (including detailing what I've learned.) I've posted a patch (https://ghc.haskell.org/trac/ghc/ticket/7134#comment:42) which rather satisfactory fixes things on ghc-7.6.3. Now, when I tried to port it to HEAD I've faced numerous problems because things went long way since 7.6.3. For example, recently introduced .ctors support is definitely broken on x86_64 mingw32 - it barfs "HSinteger-gmp-0.5.1.0.o: can't find section `'". I've disabled .ctors support here and there and this bug disappeared, but another bug popped out immediately, I've fixed it too and now the next bug is runtime linker can't find symbols from 'base' package. I guess, all this is because nobody bothered to test new developments against x86_64 mingw32 since it was broken long ago anyway. Hence, to make things more clear for me: does there exist at least *someone* besides me, who tried to make things working on x86_64 mingw32 recently? 
Regards, Kyra From tkn.akio at gmail.com Wed Jan 22 11:37:31 2014 From: tkn.akio at gmail.com (Akio Takano) Date: Wed, 22 Jan 2014 20:37:31 +0900 Subject: Extending fold/build fusion In-Reply-To: <59543203684B2244980D7E4057D5FBC1487169CD@DB3EX14MBXC306.europe.corp.microsoft.com> References: <59543203684B2244980D7E4057D5FBC148713626@DB3EX14MBXC306.europe.corp.microsoft.com> <59543203684B2244980D7E4057D5FBC1487169CD@DB3EX14MBXC306.europe.corp.microsoft.com> Message-ID: On Thu, Jan 16, 2014 at 4:20 AM, Simon Peyton Jones wrote: > > Akio > > > > Aha! So you are really talking about replacing the *entire* foldr/build story with a new one, namely a foldW/buildW story. Presumably all producers and consumers (map, filter, take, drop etc) must be redefined using foldW and buildW instead of fold and build. Is that right? Yes > > > > That is much more significant than the wiki page describes. If you are serious about this, could you perhaps update the wiki page to describe what you propose? Do you believe that the new story will catch every case that the old one does? (Plus some new ones.) Does your data support that? I updated the file. Please see the section "Will the functions currently fusible continue to fuse well?" https://github.com/takano-akio/ww-fusion#will-the-functions-currently-fusible-continue-to-fuse-well > > > > I?m really not sure about your Tree example. I agree that the foldl? style code gives the result that you show. But I tried the more straightforward version: > > sumT :: Tree -> Int > > sumT t = foldr (+) 0 (build (toListFB t)) > > > > This yielded pretty decent code: > > FB.$wgo = > > \ (w_sio :: FB.Tree) (ww_sir :: GHC.Prim.Int#) -> > > case w_sio of _ { > > FB.Tip rb_dgM -> GHC.Prim.+# rb_dgM ww_sir; > > FB.Bin x_af0 y_af1 -> > > case FB.$wgo y_af1 ww_sir of ww1_siv { __DEFAULT -> > > FB.$wgo x_af0 ww1_siv > > } > > } > > > > This builds no thunks. It does build stack equal to the depth of the tree. But your desired go1 code will also do exactly the same; go1 is strict in its second argument and hence will use call-by-value, and hence will build stack equal to the depth of the tree. I don't think using foldr is a general replacement for foldl', because (1) it is less efficient when the input is a list and (2) it will change the meaning of the code when the operator to fold with is not associative. -- Akio > > > > In short, I?m not yet seeing a benefit. > > I am probably missing something important. > > Suggestion: rather than just reply to this email (soon lost in the email stream), it would be easier for others to join in if you updated your wiki page to say (a) what you propose, and (b) how it can yield benefits that the current setup cannot. Then an email reply can say ?go look at section 3? or whatever. > > > > best wishes > > > > Simon > From carter.schonwald at gmail.com Wed Jan 22 16:29:52 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Wed, 22 Jan 2014 11:29:52 -0500 Subject: m4 / autconf help? (WIP patch for improving the CPP sitch) Message-ID: Hey all, I"ve a WIP patch https://ghc.haskell.org/trac/ghc/ticket/8683 for making it easy to separately specify the CPP program command and associated flags in the ghc settings file. The haskell code part, for handling the dynflags / parsing the settings file etc tests out fine, i'm a bit stumped on how to get autoconf /m4 to work correctly for instantiating the desired values into the settings.intemplate though! (which happens at configure time) any help / wisdom? thanks! 
-Carter -------------- next part -------------- An HTML attachment was scrubbed... URL: From fuuzetsu at fuuzetsu.co.uk Thu Jan 23 14:26:19 2014 From: fuuzetsu at fuuzetsu.co.uk (Mateusz Kowalczyk) Date: Thu, 23 Jan 2014 14:26:19 +0000 Subject: 32-bit Linux perf numbers Message-ID: <52E1268B.7070002@fuuzetsu.co.uk> Hi, I just ran validate on 32-bit Linux with the latest HEAD. Here's the end of the log: > Unexpected results from: > TEST="T1969 haddock.Cabal haddock.base" > > OVERALL SUMMARY for test run started at Thu Jan 23 13:35:12 2014 GMT > 0:18:40 spent to go through > 3881 total tests, which gave rise to > 15164 test cases, of which > 11619 were skipped > > 28 had missing libraries > 3456 expected passes > 58 expected failures > > 3 caused framework failures > 0 unexpected passes > 3 unexpected failures > > Unexpected failures: > perf/compiler T1969 [stat not good enough] (normal) > perf/haddock haddock.Cabal [stat not good enough] (normal) > perf/haddock haddock.base [stat not good enough] (normal) I think that the Haddock numbers were adjusted recently (and no further changes have been made to Haddock since) but it seems that it was not enough. T1969 also seems to need inspection: should I be filing a bug for that? You can see the full log at http://fuuzetsu.co.uk/misc/validate23012014 Perhaps there should be a separate thread about this but what's happening with the nightly builds that would catch these? The builds have been stopped for months now and it doesn't seem like they'll ever resume at this rate. I can offer a 32-bit Linux box to run a validate build daily if finding the machines to build on is a problem. Thanks -- Mateusz K. From carter.schonwald at gmail.com Thu Jan 23 17:49:16 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Thu, 23 Jan 2014 12:49:16 -0500 Subject: feedback on adding CPP program / flags to ghc settings file patch please :) (it works now!) Message-ID: Hey all, the patch for augmenting the ghc settings file to help decouple CPP from the C Compiler choice now works, on both the configure/autconf and ghc builds and uses it correctly fronts! It also results in removing the cRAWCPP variable from Config.hs, which I think is a good thing! (removes a hack / potential source of cruft). I'd love some feedback/ suggestions for this patch please. I'd also like to thank Peter Trommler for helping me debug / fix up my incorrect m4/autoconf component of the patch. the point of this patch is to make it very easy to provide a GHC that by default uses a different program for "Haskell CPP" than for compiling C code (which can only be done currently via a wrapper script hack thats a bit fragile, or via passing -pgmP everywhere! ) *https://ghc.haskell.org/trac/ghc/ticket/8683#comment:11 * thanks -Carter -------------- next part -------------- An HTML attachment was scrubbed... URL: From chak at cse.unsw.edu.au Fri Jan 24 01:38:01 2014 From: chak at cse.unsw.edu.au (Manuel M T Chakravarty) Date: Fri, 24 Jan 2014 12:38:01 +1100 Subject: GHC API: Using runGhc twice or from multiple threads? In-Reply-To: <52D3BC0F.7010000@gmail.com> References: <52D3BC0F.7010000@gmail.com> Message-ID: <6FC4A415-7043-45DE-87A5-DBC6F663A5F2@cse.unsw.edu.au> Simon Marlow : >> And what about this one: >> >> main = do >> forkIO $ runGhc libdir $ do ... >> forkIO $ runGhc libdir $ do ... > > The problem with this is the RTS linker, which is a single piece of shared global state. We could actually fix that if it became important. 
If you?re not running interpreted code, this should be fine (apart from the static flags issue mentioned above). I?m curious, what is the issue with interpreted code? Does the interpreter store interpreter state in the RTS, which would get mixed up between the two instances? If so, wouldn?t the same thing happen if I use forkIO in interpreted code? Manuel From marlowsd at gmail.com Fri Jan 24 09:13:51 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Fri, 24 Jan 2014 09:13:51 +0000 Subject: GHC API: Using runGhc twice or from multiple threads? In-Reply-To: <6FC4A415-7043-45DE-87A5-DBC6F663A5F2@cse.unsw.edu.au> References: <52D3BC0F.7010000@gmail.com> <6FC4A415-7043-45DE-87A5-DBC6F663A5F2@cse.unsw.edu.au> Message-ID: <52E22ECF.4010208@gmail.com> On 24/01/14 01:38, Manuel M T Chakravarty wrote: > Simon Marlow : >>> And what about this one: >>> >>> main = do >>> forkIO $ runGhc libdir $ do ... >>> forkIO $ runGhc libdir $ do ... >> >> The problem with this is the RTS linker, which is a single piece of shared global state. We could actually fix that if it became important. If you?re not running interpreted code, this should be fine (apart from the static flags issue mentioned above). > > I?m curious, what is the issue with interpreted code? Does the interpreter store interpreter state in the RTS, which would get mixed up between the two instances? > > If so, wouldn?t the same thing happen if I use forkIO in interpreted code? It is the linker state that is shared, that is, the mapping from symbol names to object code addresses. So you can certainly do concurrency in an interpreted program, but you can't load two different sets of object files into two instances of GHC running in separate threads. This is true regardless of whether we're using the system linker or the RTS linker. In the RTS linker case it's fixable easily enough, in the system linker case there's really only one global symbol table (populated by RTLD_GLOBAL) so I'm not sure whether there's a way around that. Cheers, Simon From marlowsd at gmail.com Fri Jan 24 10:17:50 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Fri, 24 Jan 2014 10:17:50 +0000 Subject: [commit: ghc] master: Fix more 32 bit performance fallout. (c5088e2) In-Reply-To: <20140122233202.95D0F2406B@ghc.haskell.org> References: <20140122233202.95D0F2406B@ghc.haskell.org> Message-ID: <52E23DCE.8050508@gmail.com> On 22/01/14 23:32, git at git.haskell.org wrote: > Repository : ssh://git at git.haskell.org/ghc > > On branch : master > Link : http://ghc.haskell.org/trac/ghc/changeset/c5088e299a66109346057afc151c33e47b850b92/ghc > >> --------------------------------------------------------------- > > commit c5088e299a66109346057afc151c33e47b850b92 > Author: Austin Seipp > Date: Wed Jan 22 17:30:54 2014 -0600 > > Fix more 32 bit performance fallout. > > Some of these are actually worse than I thought upon inspection, but > after a little digging I haven't found exactly what has caused them. > Some were certainly bitrotted, but others seem updated more recently, so > something has slipped. > > I'll be filing a ticket about these. Please do! These look very bad indeed. Cheers, Simon From marlowsd at gmail.com Fri Jan 24 10:20:25 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Fri, 24 Jan 2014 10:20:25 +0000 Subject: [commit: ghc] master: Fix more 32 bit performance fallout. 
(c5088e2) In-Reply-To: <52E23DCE.8050508@gmail.com> References: <20140122233202.95D0F2406B@ghc.haskell.org> <52E23DCE.8050508@gmail.com> Message-ID: <52E23E69.4040406@gmail.com> On 24/01/14 10:17, Simon Marlow wrote: > On 22/01/14 23:32, git at git.haskell.org wrote: >> Repository : ssh://git at git.haskell.org/ghc >> >> On branch : master >> Link : >> http://ghc.haskell.org/trac/ghc/changeset/c5088e299a66109346057afc151c33e47b850b92/ghc >> >> >>> --------------------------------------------------------------- >> >> commit c5088e299a66109346057afc151c33e47b850b92 >> Author: Austin Seipp >> Date: Wed Jan 22 17:30:54 2014 -0600 >> >> Fix more 32 bit performance fallout. >> >> Some of these are actually worse than I thought upon inspection, but >> after a little digging I haven't found exactly what has caused them. >> Some were certainly bitrotted, but others seem updated more >> recently, so >> something has slipped. >> >> I'll be filing a ticket about these. > > Please do! These look very bad indeed. Sorry - I see you fixed these later, ignore me. Cheers, Simon From jan.stolarek at p.lodz.pl Fri Jan 24 10:45:30 2014 From: jan.stolarek at p.lodz.pl (Jan Stolarek) Date: Fri, 24 Jan 2014 11:45:30 +0100 Subject: fPIC issues Message-ID: <201401241145.30093.jan.stolarek@p.lodz.pl> A couple of days ago I realized that I can't compile latest HEAD on my Debian Squeeze laptop. Some -fPIC issues prevented compilation of integer-gmp library. I reported this as #8666. Today I got another PIC-related error on a different machine with openSUSE 11.4: /usr/lib64/gcc/x86_64-suse-linux/4.5/../../../../x86_64-suse-linux/bin/ld: dist/build/compile/compile-tmp/Data/Singletons/Core.dyn_o: re location R_X86_64_PC32 against undefined symbol `DataziSingletonsziTypes_Proved_con_info' can not be used when making a shared object; recompile with -fPIC /usr/lib64/gcc/x86_64-suse-linux/4.5/../../../../x86_64-suse-linux/bin/ld: final link failed: Bad value collect2: ld returned 1 exit status This happened with HEAD when I tried to compile testsuite configured via cabal file (on 7.6.3 all is fine): test-suite compile type: exitcode-stdio-1.0 ghc-options: -Wall -O0 -main-is Test.Main default-language: Haskell2010 main-is: Test/Main.hs Before I fil in another bug report could someone offer me a straightforward explanation of what is this whole -fPIC thing? Why does it break my code? Is this a known issue? Is there any kind of workaround for this? Janek From carter.schonwald at gmail.com Fri Jan 24 14:12:55 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Fri, 24 Jan 2014 09:12:55 -0500 Subject: fPIC issues In-Reply-To: <201401241145.30093.jan.stolarek@p.lodz.pl> References: <201401241145.30093.jan.stolarek@p.lodz.pl> Message-ID: What version of cabal-install are you using? On Friday, January 24, 2014, Jan Stolarek wrote: > A couple of days ago I realized that I can't compile latest HEAD on my > Debian Squeeze laptop. > Some -fPIC issues prevented compilation of integer-gmp library. I reported > this as #8666. 
Today I > got another PIC-related error on a different machine with openSUSE 11.4: > > /usr/lib64/gcc/x86_64-suse-linux/4.5/../../../../x86_64-suse-linux/bin/ld: > dist/build/compile/compile-tmp/Data/Singletons/Core.dyn_o: re > location R_X86_64_PC32 against undefined symbol > `DataziSingletonsziTypes_Proved_con_info' can not > be used when making a shared object; recompile with -fPIC > /usr/lib64/gcc/x86_64-suse-linux/4.5/../../../../x86_64-suse-linux/bin/ld: > final link failed: Bad > value > collect2: ld returned 1 exit status > > This happened with HEAD when I tried to compile testsuite configured via > cabal file (on 7.6.3 all > is fine): > > test-suite compile > type: exitcode-stdio-1.0 > ghc-options: -Wall -O0 -main-is Test.Main > default-language: Haskell2010 > main-is: Test/Main.hs > > Before I fil in another bug report could someone offer me a > straightforward explanation of what is > this whole -fPIC thing? Why does it break my code? Is this a known issue? > Is there any kind of > workaround for this? > > Janek > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jan.stolarek at p.lodz.pl Fri Jan 24 15:24:03 2014 From: jan.stolarek at p.lodz.pl (Jan Stolarek) Date: Fri, 24 Jan 2014 16:24:03 +0100 Subject: fPIC issues In-Reply-To: References: <201401241145.30093.jan.stolarek@p.lodz.pl> Message-ID: <201401241624.03902.jan.stolarek@p.lodz.pl> > What version of cabal-install are you using? [killy at xerxes : ~] cabal --version cabal-install version 1.18.0.2 using version 1.18.1.3 of the Cabal library Janek From fuuzetsu at fuuzetsu.co.uk Sun Jan 26 02:22:03 2014 From: fuuzetsu at fuuzetsu.co.uk (Mateusz Kowalczyk) Date: Sun, 26 Jan 2014 02:22:03 +0000 Subject: Nightlies Message-ID: <52E4714B.5060905@fuuzetsu.co.uk> Hi all, I'd just like to query the status of the nightly builds. Is anything happening in that area? [1] is right on the front page of the GHC Trac even though no builds were ran for ~5 months. Perhaps it should be moved out of the way if there's no plan to resume these in the near future. Does anything specific need doing to get these to run again? [1]: https://ghc.haskell.org/trac/ghc/wiki/Builder -- Mateusz K. From austin at well-typed.com Sun Jan 26 03:29:11 2014 From: austin at well-typed.com (Austin Seipp) Date: Sat, 25 Jan 2014 21:29:11 -0600 Subject: Nightlies In-Reply-To: <52E4714B.5060905@fuuzetsu.co.uk> References: <52E4714B.5060905@fuuzetsu.co.uk> Message-ID: As of right now, Pali's FreeBSD builds seem to be the only nightly that is still consistently running (and thanks to him for that!) The build infrastructure in its current status is mainly just 'unmaintained'. Furthermore there's not really a good roster of machines that were/were not part of the system AFAIK aside from the old list, and it's unclear what the status of many of those machines are (as you said, many haven't checked in in a while.) There is much interest in a better nightly infrastructure and people have asked me several times about setting one up on IRC. We have historically had some problems with the nightly infrastructure, mainly things like network disconnectivity or firewalling policies, since most people aren't running dedicated internet facing machines (or even a dedicated machine at all. Firewalls have been a problem for places like MSR from what I understand.) 
Several individual people run Jenkins individually, and I like it, but I'm not sure how well it does when spread across the globe in terms of networking (and realistically builders will look like that, as we can't possibly have a dedicated farm somewhere.) I was also at one point worried about the size of such a tool on systems like ARM machines where resources are at a premium, but in hindsight this looks OK. I'd like any opinions on this if people have deployed things in these highly distributed scenarios. I have had some ideas for an extremely-minimal nightly build infrastructure that would ideally require minimal setup and let clients have power over choosing how and when to build, but I have yet to find the time to finish the basic implementation to try it. On Sat, Jan 25, 2014 at 8:22 PM, Mateusz Kowalczyk wrote: > Hi all, > > I'd just like to query the status of the nightly builds. Is anything > happening in that area? [1] is right on the front page of the GHC Trac > even though no builds were ran for ~5 months. Perhaps it should be > moved out of the way if there's no plan to resume these in the near > future. > > Does anything specific need doing to get these to run again? > > [1]: https://ghc.haskell.org/trac/ghc/wiki/Builder > -- > Mateusz K. > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From fuuzetsu at fuuzetsu.co.uk Sun Jan 26 03:50:10 2014 From: fuuzetsu at fuuzetsu.co.uk (Mateusz Kowalczyk) Date: Sun, 26 Jan 2014 03:50:10 +0000 Subject: Nightlies In-Reply-To: References: <52E4714B.5060905@fuuzetsu.co.uk> Message-ID: <52E485F2.8010101@fuuzetsu.co.uk> On 26/01/14 03:29, Austin Seipp wrote: > As of right now, Pali's FreeBSD builds seem to be the only nightly > that is still consistently running (and thanks to him for that!) > > The build infrastructure in its current status is mainly just > 'unmaintained'. Furthermore there's not really a good roster of > machines that were/were not part of the system AFAIK aside from the > old list, and it's unclear what the status of many of those machines > are (as you said, many haven't checked in in a while.) > > There is much interest in a better nightly infrastructure and people > have asked me several times about setting one up on IRC. We have > historically had some problems with the nightly infrastructure, mainly > things like network disconnectivity or firewalling policies, since > most people aren't running dedicated internet facing machines (or even > a dedicated machine at all. Firewalls have been a problem for places > like MSR from what I understand.) Why not simply have the clients post the results once a night? If the builds are nightly, is there really any need to have an open daemon listening? From what I can tell from http://darcs.haskell.org/ghcBuilder/builders/ it is simply the matter of building once a day/night and then posting the results in an e-mail to the list and uploading the binaries and test results elsewhere. Could we not simply have a wrapper script around GHC build process that in the end posts all these results to relevant places? The clients could simply have a nightly cron job and it'd be up to the slave owner to keep these builds going as often or as rarely as they want. The only downside is that you guys can't tell the clients precisely when to run but looking at build times, it's only once a day anyway. 
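To make that concrete, the wrapper would not have to be much more than the
sketch below (illustrative only -- it assumes it is run from a GHC checkout
with the usual ./validate script, and how the log actually gets posted is
left to the slave owner):

import System.Process (readProcessWithExitCode)
import System.Exit (ExitCode (..))

-- Run ./validate in the current GHC checkout, keep the full log, and
-- write a one-line summary. A nightly cron entry would run this and
-- then mail/upload the two files however the machine's owner prefers.
main :: IO ()
main = do
  (code, out, err) <- readProcessWithExitCode "./validate" [] ""
  writeFile "validate.log" (out ++ err)
  writeFile "summary.txt" $ case code of
    ExitSuccess   -> "validate: OK\n"
    ExitFailure n -> "validate: FAILED (exit code " ++ show n ++ ")\n"

Everything else (a git pull beforehand, timestamps in the file names, retry
policy) would just be local convention on each builder.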
> Several individual people run Jenkins individually, and I like it, but > I'm not sure how well it does when spread across the globe in terms of > networking (and realistically builders will look like that, as we > can't possibly have a dedicated farm somewhere.) I was also at one > point worried about the size of such a tool on systems like ARM > machines where resources are at a premium, but in hindsight this looks > OK. I'd like any opinions on this if people have deployed things in > these highly distributed scenarios. > > I have had some ideas for an extremely-minimal nightly build > infrastructure that would ideally require minimal setup and let > clients have power over choosing how and when to build, but I have yet > to find the time to finish the basic implementation to try it. > > > On Sat, Jan 25, 2014 at 8:22 PM, Mateusz Kowalczyk > wrote: >> Hi all, >> >> I'd just like to query the status of the nightly builds. Is anything >> happening in that area? [1] is right on the front page of the GHC Trac >> even though no builds were ran for ~5 months. Perhaps it should be >> moved out of the way if there's no plan to resume these in the near >> future. >> >> Does anything specific need doing to get these to run again? >> >> [1]: https://ghc.haskell.org/trac/ghc/wiki/Builder >> -- >> Mateusz K. >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> > > > -- Mateusz K. From mail at joachim-breitner.de Sun Jan 26 11:16:09 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Sun, 26 Jan 2014 11:16:09 +0000 Subject: Nightlies In-Reply-To: <52E4714B.5060905@fuuzetsu.co.uk> References: <52E4714B.5060905@fuuzetsu.co.uk> Message-ID: <1390734969.2515.6.camel@kirk> Hi, Am Sonntag, den 26.01.2014, 02:22 +0000 schrieb Mateusz Kowalczyk: > I'd just like to query the status of the nightly builds. Is anything > happening in that area? [1] is right on the front page of the GHC Trac > even though no builds were ran for ~5 months. Perhaps it should be > moved out of the way if there's no plan to resume these in the near > future. just to clarify: For what purpose do you want the nightlies? To check whether GHC validates cleanly, to compare performance numbers, or to get hold of up-to-date binary distributions? For the first, I?d really really like to see something that runs before a change enters master, so that non-validating mistakes like http://git.haskell.org/ghc.git/commitdiff/b26e2f92c5c6f77fe361293a128da637e728959c (without the corresponding change in http://git.haskell.org/ghc.git/commitdiff/59f491a933ec7380698b776e14c3753c2a318a89) do not reach master in the first place. I?m happy to help setting up such an infrastructure, including designing the precise workflow. For the second and third, a build farm like the builders would of course be great. I actually once got a Igloo snowboall from Linaro for that purpose, but never finished setting it up properly. So once the builders are going to be revived, I?d like to finally do that. Greetings, Joachim -- Joachim ?nomeata? Breitner mail at joachim-breitner.de ? http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0x4743206C Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 181 bytes Desc: This is a digitally signed message part URL: From pali.gabor at gmail.com Sun Jan 26 12:19:16 2014 From: pali.gabor at gmail.com (=?ISO-8859-1?Q?P=E1li_G=E1bor_J=E1nos?=) Date: Sun, 26 Jan 2014 13:19:16 +0100 Subject: Nightlies In-Reply-To: <1390734969.2515.6.camel@kirk> References: <52E4714B.5060905@fuuzetsu.co.uk> <1390734969.2515.6.camel@kirk> Message-ID: On Sun, Jan 26, 2014 at 12:16 PM, Joachim Breitner wrote: > just to clarify: For what purpose do you want the nightlies? To check > whether GHC validates cleanly, to compare performance numbers, or to get > hold of up-to-date binary distributions? Well, I run those clients primarily because that is (was?) one of the primary requirements for Tier-1 platforms :-) And this indeed greatly helps me to see if something has gone wrong on FreeBSD -- so I can track down the problems and fix them gradually continuously, therefore birthing a new release becomes a bit easier. But yes, I also feel useful to offer daily snapshots for the interested parties as a side effect. > For the first, I?d really really like to see something that runs before > a change enters master I am afraid that you may not want to pass each change through all the supported platforms before moving it to master. Of course, that is the ideal case, but it adds some operational cost, and can easily frustrate developers who do not have access to the given platform where it fails. > For the second and third, a build farm like the builders would of course > be great. I believe Ian's original project (the builder-server I use) [1] was to have a distributed farm of builders where anybody is allowed to dedicate a machine. Therefore GHC may be built on various platforms while the cost maintenance is shared between the operators of the respective platforms. I think it worked pretty well until the disappearance of the coordinator machine. We also used the binary tarballs produced by the builders for the latest releases -- Ian just set the release flag, waited for the next day, picked the release tarballs and published them, without any further interaction. [1] https://ghc.haskell.org/trac/ghc/wiki/Builder From chak at cse.unsw.edu.au Sun Jan 26 12:21:13 2014 From: chak at cse.unsw.edu.au (Manuel M T Chakravarty) Date: Sun, 26 Jan 2014 23:21:13 +1100 Subject: GHC API: Using runGhc twice or from multiple threads? In-Reply-To: <52E22ECF.4010208@gmail.com> References: <52D3BC0F.7010000@gmail.com> <6FC4A415-7043-45DE-87A5-DBC6F663A5F2@cse.unsw.edu.au> <52E22ECF.4010208@gmail.com> Message-ID: I should have thought of that. Thanks for the clarification. Cheers, Manuel Simon Marlow : > On 24/01/14 01:38, Manuel M T Chakravarty wrote: >> Simon Marlow : >>>> And what about this one: >>>> >>>> main = do >>>> forkIO $ runGhc libdir $ do ... >>>> forkIO $ runGhc libdir $ do ... >>> >>> The problem with this is the RTS linker, which is a single piece of shared global state. We could actually fix that if it became important. If you?re not running interpreted code, this should be fine (apart from the static flags issue mentioned above). >> >> I?m curious, what is the issue with interpreted code? Does the interpreter store interpreter state in the RTS, which would get mixed up between the two instances? >> >> If so, wouldn?t the same thing happen if I use forkIO in interpreted code? > > It is the linker state that is shared, that is, the mapping from symbol names to object code addresses. 
So you can certainly do concurrency in an interpreted program, but you can't load two different sets of object files into two instances of GHC running in separate threads. This is true regardless of whether we're using the system linker or the RTS linker. In the RTS linker case it's fixable easily enough, in the system linker case there's really only one global symbol table (populated by RTLD_GLOBAL) so I'm not sure whether there's a way around that. > > Cheers, > Simon > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs From fuuzetsu at fuuzetsu.co.uk Mon Jan 27 00:35:35 2014 From: fuuzetsu at fuuzetsu.co.uk (Mateusz Kowalczyk) Date: Mon, 27 Jan 2014 00:35:35 +0000 Subject: Nightlies In-Reply-To: <1390734969.2515.6.camel@kirk> References: <52E4714B.5060905@fuuzetsu.co.uk> <1390734969.2515.6.camel@kirk> Message-ID: <52E5A9D7.1000302@fuuzetsu.co.uk> On 26/01/14 11:16, Joachim Breitner wrote: > Hi, > > Am Sonntag, den 26.01.2014, 02:22 +0000 schrieb Mateusz Kowalczyk: >> I'd just like to query the status of the nightly builds. Is anything >> happening in that area? [1] is right on the front page of the GHC Trac >> even though no builds were ran for ~5 months. Perhaps it should be >> moved out of the way if there's no plan to resume these in the near >> future. > > just to clarify: For what purpose do you want the nightlies? To check > whether GHC validates cleanly, to compare performance numbers, or to get > hold of up-to-date binary distributions? Personally it's to see what validates that day and while we're at it, I don't see the reason to not use this to get the nightly binaries as well. I agree with pretty much everything that P?li said in his reply. If we can get the validate results from other people's machines, at the very least we have a sanity check: does it only fail for me or for everyone else too? I think that if we have a list of platforms with angry red everywhere, accessible to everyone, people are more likely to react to build failures and we're less likely to have e-mails on ghc-devs from people going ?is it just me or is it failing for everyone??. > For the first, I?d really really like to see something that runs before > a change enters master, so that non-validating mistakes like > http://git.haskell.org/ghc.git/commitdiff/b26e2f92c5c6f77fe361293a128da637e728959c > (without the corresponding change in > http://git.haskell.org/ghc.git/commitdiff/59f491a933ec7380698b776e14c3753c2a318a89) > do not reach master in the first place. > > I?m happy to help setting up such an infrastructure, including designing > the precise workflow. I think doing a per-commit validate before something enters master would be difficult simply because one would have to wait a long time before their commit is allowed in. Even on the fast boxes, the quick validate from clean checkout seems to take about an hour at best. > For the second and third, a build farm like the builders would of course > be great. I actually once got a Igloo snowboall from Linaro for that > purpose, but never finished setting it up properly. So once the builders > are going to be revived, I?d like to finally do that. > > > Greetings, > Joachim > -- Mateusz K. 
From austin at well-typed.com Mon Jan 27 00:57:02 2014 From: austin at well-typed.com (Austin Seipp) Date: Sun, 26 Jan 2014 18:57:02 -0600 Subject: Nightlies In-Reply-To: <1390734969.2515.6.camel@kirk> References: <52E4714B.5060905@fuuzetsu.co.uk> <1390734969.2515.6.camel@kirk> Message-ID: On Sun, Jan 26, 2014 at 5:16 AM, Joachim Breitner wrote: > just to clarify: For what purpose do you want the nightlies? To check > whether GHC validates cleanly, to compare performance numbers, or to get > hold of up-to-date binary distributions? In practice: all three. Developers want logs to see what went wrong. Users want snapshot consistent distribution of snapshots to test against. Both are legitimate uses that are covered by such infrastructure. > For the first, I?d really really like to see something that runs before > a change enters master, so that non-validating mistakes like > http://git.haskell.org/ghc.git/commitdiff/b26e2f92c5c6f77fe361293a128da637e728959c > (without the corresponding change in > http://git.haskell.org/ghc.git/commitdiff/59f491a933ec7380698b776e14c3753c2a318a89) > do not reach master in the first place. > > I?m happy to help setting up such an infrastructure, including designing > the precise workflow. This is doable, but the question is to what extent? There are literally dozens of build configurations that could break with any given patch, without others breaking: * Profiling could break. Or profiling GHC (but not other smaller things) could break. * Dynamic linking could break. * Rarer configurations could break but only for some cases, e.g. threaded + profiling. Or LLVM + Profiling, or LLVM + dynamic linking, etc etc. * Static linking for GHCi could break on platforms that now use dynamic linking by default (as we saw happen when I broke it.) * GHC may only expose certain faulty behavior at certain optimization levels (both in bootstrapping itself and in the tests - so maybe ./validate looks mostly OK, but -O2 is not.) * Bootstrapping the build with different compilers may break (i.e. an unintentional backwards incompatible change is introduced in the stage1 build) * Any of these could theoretically break depending on things like the host platform. * The testsuite runs 'fast' by default. It would need to run slowly to potentially uncover more problems, but this greatly increases the runtime. * Not all machines are equal, and some will take dramatically longer or shorter amounts of time to build (and subsequently) uncover these problems. In my experience, all of the above are absolutely possible scenarios for something wrong to happen. Also, in practice, a lot of these things either need an incredible amount of cross-communication to fix (between the bot runner and the developer,) or require direct access to the machine in order to debug. Not everyone has that hardware, and not everyone will even be willing to give access (for legitimate reasons - some people have offered to run build bots, but behind corporate infrastructure at places like IBM.) And with the amount of time that many configurations requires, the turnaround time for some things could become incredibly large and frustrating. I think if we were to introduce pre-push validation, the only thing it could reasonably test would be ./validate and nothing else. And even then, e.g. on high-powered ARM platforms, this will still seriously take *hours*, and that's a significantly longer time-to-wait than most people are used to. > For the second and third, a build farm like the builders would of course > be great. 
I actually once got a Igloo snowboall from Linaro for that > purpose, but never finished setting it up properly. So once the builders > are going to be revived, I?d like to finally do that. If you're willing to contribute ARM builders, both Ben Gamari and I would be very happy to have you do so (me and him are the only people actively doing lots of ARM work, and frankly, Ben is doing most of it.) > Greetings, > Joachim > > -- > Joachim ?nomeata? Breitner > mail at joachim-breitner.de ? http://www.joachim-breitner.de/ > Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0x4743206C > Debian Developer: nomeata at debian.org > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From austin at well-typed.com Mon Jan 27 01:04:44 2014 From: austin at well-typed.com (Austin Seipp) Date: Sun, 26 Jan 2014 19:04:44 -0600 Subject: Nightlies In-Reply-To: <52E485F2.8010101@fuuzetsu.co.uk> References: <52E4714B.5060905@fuuzetsu.co.uk> <52E485F2.8010101@fuuzetsu.co.uk> Message-ID: FWIW, this is pretty much what it was going to do. Except it also needs to host things like publicly accessible binary snapshots, as people want to use them. And that's a no-go for firewalls or otherwise non-controllable infrastructure that people may have bots on, so it must have a central place for the results to be located on haskell.org. It can then send summarized reports to the list based e.g. on a cron job (asking bot runners to manage their own emails for bots is annoying and painful, and doesn't scale nicely for them as we add more.) And also, of course, there needs to be some aspect of non-repudiation to the results, so that people know builds and emails are legitimate (i.e. signed by an GPG pubkey and verified on the server.) On Sat, Jan 25, 2014 at 9:50 PM, Mateusz Kowalczyk wrote: > On 26/01/14 03:29, Austin Seipp wrote: >> As of right now, Pali's FreeBSD builds seem to be the only nightly >> that is still consistently running (and thanks to him for that!) >> >> The build infrastructure in its current status is mainly just >> 'unmaintained'. Furthermore there's not really a good roster of >> machines that were/were not part of the system AFAIK aside from the >> old list, and it's unclear what the status of many of those machines >> are (as you said, many haven't checked in in a while.) >> >> There is much interest in a better nightly infrastructure and people >> have asked me several times about setting one up on IRC. We have >> historically had some problems with the nightly infrastructure, mainly >> things like network disconnectivity or firewalling policies, since >> most people aren't running dedicated internet facing machines (or even >> a dedicated machine at all. Firewalls have been a problem for places >> like MSR from what I understand.) > > Why not simply have the clients post the results once a night? If the > builds are nightly, is there really any need to have an open daemon > listening? From what I can tell from > http://darcs.haskell.org/ghcBuilder/builders/ it is simply the matter of > building once a day/night and then posting the results in an e-mail to > the list and uploading the binaries and test results elsewhere. Could we > not simply have a wrapper script around GHC build process that in the > end posts all these results to relevant places? 
The clients could simply > have a nightly cron job and it'd be up to the slave owner to keep these > builds going as often or as rarely as they want. The only downside is > that you guys can't tell the clients precisely when to run but looking > at build times, it's only once a day anyway. > >> Several individual people run Jenkins individually, and I like it, but >> I'm not sure how well it does when spread across the globe in terms of >> networking (and realistically builders will look like that, as we >> can't possibly have a dedicated farm somewhere.) I was also at one >> point worried about the size of such a tool on systems like ARM >> machines where resources are at a premium, but in hindsight this looks >> OK. I'd like any opinions on this if people have deployed things in >> these highly distributed scenarios. >> >> I have had some ideas for an extremely-minimal nightly build >> infrastructure that would ideally require minimal setup and let >> clients have power over choosing how and when to build, but I have yet >> to find the time to finish the basic implementation to try it. >> >> >> On Sat, Jan 25, 2014 at 8:22 PM, Mateusz Kowalczyk >> wrote: >>> Hi all, >>> >>> I'd just like to query the status of the nightly builds. Is anything >>> happening in that area? [1] is right on the front page of the GHC Trac >>> even though no builds were ran for ~5 months. Perhaps it should be >>> moved out of the way if there's no plan to resume these in the near >>> future. >>> >>> Does anything specific need doing to get these to run again? >>> >>> [1]: https://ghc.haskell.org/trac/ghc/wiki/Builder >>> -- >>> Mateusz K. >>> _______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://www.haskell.org/mailman/listinfo/ghc-devs >>> >> >> >> > > > -- > Mateusz K. > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From austin at well-typed.com Mon Jan 27 01:08:39 2014 From: austin at well-typed.com (Austin Seipp) Date: Sun, 26 Jan 2014 19:08:39 -0600 Subject: Nightlies In-Reply-To: <52E5A9D7.1000302@fuuzetsu.co.uk> References: <52E4714B.5060905@fuuzetsu.co.uk> <1390734969.2515.6.camel@kirk> <52E5A9D7.1000302@fuuzetsu.co.uk> Message-ID: On Sun, Jan 26, 2014 at 6:35 PM, Mateusz Kowalczyk wrote: > If we can get the validate results from other people's machines, at the > very least we have a sanity check: does it only fail for me or for > everyone else too? I think that if we have a list of platforms with > angry red everywhere, accessible to everyone, people are more likely to > react to build failures and we're less likely to have e-mails on > ghc-devs from people going ?is it just me or is it failing for everyone??. And for the record, I do agree with this. I think a historic problem is the results have never been public enough to most developers, and unfortunately not everyone is trained to respond to just the emails sent to ghc-builds at haskell.org to diagnose a problem. Seeing a gigantic angry red build failure that blames you directly is likely much easier to for most people as opposed to sorting through emails from bots every morning. 
-- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From austin at well-typed.com Mon Jan 27 01:18:25 2014 From: austin at well-typed.com (Austin Seipp) Date: Sun, 26 Jan 2014 19:18:25 -0600 Subject: Nightlies In-Reply-To: References: <52E4714B.5060905@fuuzetsu.co.uk> <1390734969.2515.6.camel@kirk> Message-ID: On Sun, Jan 26, 2014 at 6:57 PM, Austin Seipp wrote: > * Profiling could break. Or profiling GHC (but not other smaller > things) could break. > * Dynamic linking could break. > * Rarer configurations could break but only for some cases, e.g. > threaded + profiling. Or LLVM + Profiling, or LLVM + dynamic linking, > etc etc. > * Static linking for GHCi could break on platforms that now use > dynamic linking by default (as we saw happen when I broke it.) > * GHC may only expose certain faulty behavior at certain optimization > levels (both in bootstrapping itself and in the tests - so maybe > ./validate looks mostly OK, but -O2 is not.) > * Bootstrapping the build with different compilers may break (i.e. an > unintentional backwards incompatible change is introduced in the > stage1 build) > * Any of these could theoretically break depending on things like the > host platform. > * The testsuite runs 'fast' by default. It would need to run slowly > to potentially uncover more problems, but this greatly increases the > runtime. Disregard all this, upon closer inspection I see you only wanted ./validate anyway.* :) * But it still will hurt more when you add in low-powered builders. -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From karel.gardas at centrum.cz Mon Jan 27 07:59:38 2014 From: karel.gardas at centrum.cz (Karel Gardas) Date: Mon, 27 Jan 2014 08:59:38 +0100 Subject: Nightlies In-Reply-To: References: <52E4714B.5060905@fuuzetsu.co.uk> Message-ID: <52E611EA.9000109@centrum.cz> Austin, On 01/26/14 04:29 AM, Austin Seipp wrote: > As of right now, Pali's FreeBSD builds seem to be the only nightly > that is still consistently running (and thanks to him for that!) > > The build infrastructure in its current status is mainly just > 'unmaintained'. Furthermore there's not really a good roster of > machines that were/were not part of the system AFAIK aside from the > old list, and it's unclear what the status of many of those machines > are (as you said, many haven't checked in in a while.) honestly speaking, last message from Ian was that builder server waits for "abbot" update. That's IIRC. So my i.MX/ARM buildbot and solaris buildbot waits for abbot to be update to connect again. > There is much interest in a better nightly infrastructure and people > have asked me several times about setting one up on IRC. We have > historically had some problems with the nightly infrastructure, mainly > things like network disconnectivity or firewalling policies, since I got those disconnectivity issue on builder client v2, I've not seen them on v1, but this may be just a coincidence. > Several individual people run Jenkins individually, and I like it, but > I'm not sure how well it does when spread across the globe in terms of > networking (and realistically builders will look like that, as we > can't possibly have a dedicated farm somewhere.) I was also at one > point worried about the size of such a tool on systems like ARM > machines where resources are at a premium, but in hindsight this looks > OK. I'd like any opinions on this if people have deployed things in > these highly distributed scenarios. 
ARM is all right, at least cortex-Ax boards usually provide 1GB and
sometimes even more. Using NFS or an attached drive I've been able to
perform a GHC build as dictated by the builder server in several days
(4-5 IIRC). Pandaboard would be a lot faster (2 days IIRC) but is not
that stable, and I don't have modern cortex-a15 boards or quad A9s here;
those would be even faster. Anyway, if you do not require a build every
night, then this is doable on one board. If you require better coverage,
then more than one board will be needed.

> I have had some ideas for an extremely-minimal nightly build
> infrastructure that would ideally require minimal setup and let
> clients have power over choosing how and when to build, but I have yet
> to find the time to finish the basic implementation to try it.

Why waste your precious time on something which was basically done
several times already in the past, and whose last incarnation, done by
Ian, worked quite well? Just please start the venerable builder server
and let's see people connect again and buildbots running...

Thanks!
Karel

From mail at joachim-breitner.de  Mon Jan 27 09:47:24 2014
From: mail at joachim-breitner.de (Joachim Breitner)
Date: Mon, 27 Jan 2014 09:47:24 +0000
Subject: Nightlies
In-Reply-To:
References: <52E4714B.5060905@fuuzetsu.co.uk> <1390734969.2515.6.camel@kirk>
Message-ID: <1390816044.2549.3.camel@kirk>

Good morning,

On Sunday, 26 January 2014, at 19:18 -0600, Austin Seipp wrote:
> Disregard all this, upon closer inspection I see you only wanted
> ./validate anyway.* :)
>
> * But it still will hurt more when you add in low-powered builders.

right, I did not want to hijack the thread with my wishlist. And even
then I would not want low-powered builders, but rather one strong,
"most typical" setup.

I think it would already be a big win if we ensured statically (heh)
that every change to master has been validated completely once
somewhere. And if, for changes like the one I linked (removing dead
code), as a developer I don't have to laboriously enforce this
invariant (which I obviously then didn't do), but can rely on the
safety nets of the infrastructure.

Greetings,
Joachim

--
Joachim "nomeata" Breitner
mail at joachim-breitner.de • http://www.joachim-breitner.de/
Jabber: nomeata at joachim-breitner.de • GPG-Key: 0x4743206C
Debian Developer: nomeata at debian.org
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 181 bytes
Desc: This is a digitally signed message part
URL:

From marlowsd at gmail.com  Tue Jan 28 11:03:29 2014
From: marlowsd at gmail.com (Simon Marlow)
Date: Tue, 28 Jan 2014 11:03:29 +0000
Subject: fPIC issues
In-Reply-To:
References: <201401241145.30093.jan.stolarek@p.lodz.pl>
Message-ID: <52E78E81.6060707@gmail.com>

Why is the installed version of cabal-install relevant?  We don't use it
in the GHC build.

On 24/01/14 14:12, Carter Schonwald wrote:
> What version of cabal-install are you using?
>
> On Friday, January 24, 2014, Jan Stolarek wrote:
>
> A couple of days ago I realized that I can't compile latest HEAD on
> my Debian Squeeze laptop.
> Some -fPIC issues prevented compilation of integer-gmp library. I
> reported this as #8666.
Today I > got another PIC-related error on a different machine with openSUSE 11.4: > > /usr/lib64/gcc/x86_64-suse-linux/4.5/../../../../x86_64-suse-linux/bin/ld: > dist/build/compile/compile-tmp/Data/Singletons/Core.dyn_o: re > location R_X86_64_PC32 against undefined symbol > `DataziSingletonsziTypes_Proved_con_info' can not > be used when making a shared object; recompile with -fPIC > /usr/lib64/gcc/x86_64-suse-linux/4.5/../../../../x86_64-suse-linux/bin/ld: > final link failed: Bad > value > collect2: ld returned 1 exit status > > This happened with HEAD when I tried to compile testsuite configured > via cabal file (on 7.6.3 all > is fine): > > test-suite compile > type: exitcode-stdio-1.0 > ghc-options: -Wall -O0 -main-is Test.Main > default-language: Haskell2010 > main-is: Test/Main.hs > > Before I fil in another bug report could someone offer me a > straightforward explanation of what is > this whole -fPIC thing? Why does it break my code? Is this a known > issue? Is there any kind of > workaround for this? > > Janek > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > From jan.stolarek at p.lodz.pl Tue Jan 28 11:24:59 2014 From: jan.stolarek at p.lodz.pl (Jan Stolarek) Date: Tue, 28 Jan 2014 12:24:59 +0100 Subject: fPIC issues In-Reply-To: <52E78E81.6060707@gmail.com> References: <201401241145.30093.jan.stolarek@p.lodz.pl> <52E78E81.6060707@gmail.com> Message-ID: <201401281224.59052.jan.stolarek@p.lodz.pl> Someone reported similar problem on Trac: https://ghc.haskell.org/trac/ghc/ticket/8696 So I'm not the only one affected. Janek Dnia wtorek, 28 stycznia 2014, Simon Marlow napisa?: > Why is the installed version of cabal-install relevant? We don't use it > in the GHC build. > > On 24/01/14 14:12, Carter Schonwald wrote: > > What version of cabal-install are you using? > > > > On Friday, January 24, 2014, Jan Stolarek > > wrote: > > > > A couple of days ago I realized that I can't compile latest HEAD on > > my Debian Squeeze laptop. > > Some -fPIC issues prevented compilation of integer-gmp library. I > > reported this as #8666. Today I > > got another PIC-related error on a different machine with openSUSE > > 11.4: > > > > > > /usr/lib64/gcc/x86_64-suse-linux/4.5/../../../../x86_64-suse-linux/bin/ld > >: dist/build/compile/compile-tmp/Data/Singletons/Core.dyn_o: re location > > R_X86_64_PC32 against undefined symbol > > `DataziSingletonsziTypes_Proved_con_info' can not > > be used when making a shared object; recompile with -fPIC > > > > /usr/lib64/gcc/x86_64-suse-linux/4.5/../../../../x86_64-suse-linux/bin/ld > >: final link failed: Bad > > value > > collect2: ld returned 1 exit status > > > > This happened with HEAD when I tried to compile testsuite configured > > via cabal file (on 7.6.3 all > > is fine): > > > > test-suite compile > > type: exitcode-stdio-1.0 > > ghc-options: -Wall -O0 -main-is Test.Main > > default-language: Haskell2010 > > main-is: Test/Main.hs > > > > Before I fil in another bug report could someone offer me a > > straightforward explanation of what is > > this whole -fPIC thing? Why does it break my code? Is this a known > > issue? Is there any kind of > > workaround for this? 
> > > > Janek > > > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://www.haskell.org/mailman/listinfo/ghc-devs > > > > > > > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://www.haskell.org/mailman/listinfo/ghc-devs From mail at joachim-breitner.de Tue Jan 28 18:06:36 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Tue, 28 Jan 2014 18:06:36 +0000 Subject: Extending fold/build fusion In-Reply-To: References: Message-ID: <1390932396.2641.46.camel@kirk> Dear Akio, Am Freitag, den 03.01.2014, 23:20 +0900 schrieb Akio Takano: > I have been thinking about how foldl' can be turned into a good > consumer, and I came up with something that I thought would work. So > I'd like to ask for opinions from the ghc devs: if this idea looks > good, if it is a known bad idea, if there is a better way to do it, > etc. I?d like to evaluate your approach, but let me first note that I had been working on #7994 (make foldl a good consumer), and with my patches the compiler is smart enough to eta-expand go in all cases covered by nofib, using the existing foldr/build-fusion. That said, I do like your idea of making the worker/wrapper a bit more explicit, instead of relying on the compiler to do the transformation for us. So let?s see in what ways your proposal surpasses a smarter GHC. The Tree example is a good one, because there any form of eta expansion, just as you write, will not help. And I find that that Simons?s solution of using a foldr-based sum for Trees unsatisfying: We should indeed aim for ?sum $ toList tree? to produce good results. Given that Data.Map is a tree, and that is a common data structure and it?s toList a good producer, this is relevant. Can you implement build via buildW, so that existing code like "map" [~1] forall f xs. map f xs = build (\c n -> foldr (mapFB c f) n xs) can be used unmodified? But probably not... but that would mean a noticeable incompatibility and a burden on library authors using list fusion. In any case, I suggest you just dig in, create a branch of libraries/base and replace everything related to foldr/builder with your approach. First, do not actually change the definition of foldl. Then compare the nofib testruns (probably best with two separate working repo clones, starting from "make distclean"): Do the results differ? A lot of work went into foldr/build-fusion, so we want to be sure that we are not losing anything anywhere (or if we are, we want to know why). Then make foldl and foldl' a good consumer, as in the patch at the beginning of #7994. How large are the gains? How do they compare with the gains from the smarter GHC (numbers also in the ticket). If by then we have not found any regression, things look promising. Greetings, and I hope the delayed responses do not lesen your motivation, Joachim PS: I?m subscribed to the mailinglist, no need to CC me explicitly. -- Joachim ?nomeata? Breitner mail at joachim-breitner.de ? http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0x4743206C Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 181 bytes Desc: This is a digitally signed message part URL: From austin at well-typed.com Wed Jan 29 09:49:24 2014 From: austin at well-typed.com (Austin Seipp) Date: Wed, 29 Jan 2014 03:49:24 -0600 Subject: 7.8 branch is created, HEAD is now open, and a note on merges Message-ID: Hello all, I've just created the 7.8 branch after tying off some of the final loose ends. In its current state, I expect the branch as it is now to become RC1 within the day. I plan on starting builds for the following soon: - OS X 10.7 and OS X 10.9 - Linux i386/amd64 (likely based on Debian Wheezey) - Windows i386/amd64 (many thanks to Kyrill Briantsev for the heroic last-minute linker fixes!) I'll send a (GPG-signed) email containing SHA1 hashes when they're done. Two systems I won't make builds for RC1 by default (but could be persuaded to if nobody else does, and people want it): - Older glibc-2.5 based systems (e.g. CentOS, - a few users have talked about this wrt binary releases, where I don't think GHC works.) - FreeBSD - Pali, if you'd like to do this, feel free, and let me know. This means I'll (mostly) be waiting around today, so feel free to shoot questions. As of now, this means HEAD is now version 7.9, and you're free to push wacky experiments or changes now, if you've been holding off. You'll probably want to clean your whole tree, since this means the interface file versions etc will change. Finally, we picked up a good amount of new committers this year, so let's remind people of the merging policy: what happens if you need to merge something you did to the 7.8 branch? There are two main avenues for this to happen: * Someone reports a bug against the 7.8 RC on Trac. You decide to fix it and do so. Now what? 1) Please commit the bug to master, and confirm it's a fix. 2) Go to the bug, and instead of closing it, change the ticket status to 'merge'. 3) I will cherry-pick it over to the 7.8 branch for you - nothing else needed. * There's not a recorded bug, but you do push a change, and you think it should be merged (maybe a typo or something.) In this case, I'd ask you please CC me on the email sent to ghc-commits at haskell.org which is related to your commit, and just say "Please merge" or somesuch. I'll come over the commits with such a response. This goes for all changes - submodule updates, typos, real fixes, etc. It's likely me and Herbert will restrict the Gitolite permissions to only allow the two of us to touch the ghc-7.8 branch. So it's really important you put us in the loop, ASAP. If you don't do one of these two things, it's highly likely I will miss it, and not merge it. If you have questions, please ask me or Herbert. If there's a merge conflict, we can discuss it. Thanks -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From p.k.f.holzenspies at utwente.nl Wed Jan 29 09:54:48 2014 From: p.k.f.holzenspies at utwente.nl (p.k.f.holzenspies at utwente.nl) Date: Wed, 29 Jan 2014 09:54:48 +0000 Subject: Request: export runTcInteractive from TcRnDriver Message-ID: Dear GHC-devs, Is there a reason why, in HEAD, TcRnDriver does *not* export runTcInteractive? If not, can it please be added? (I considered sending a patch with this email, but it's so trivial a change that the check of the patch is more work than manually adding runTcInteractive to the export list.) I'm developing against the GHC API of 7.6.3 and it would have saved me hours of work to have precisely that function. 
Seeing it's in HEAD, but not being exported seems a shame ;)

Regards,
Philip
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mail at joachim-breitner.de  Wed Jan 29 09:58:08 2014
From: mail at joachim-breitner.de (Joachim Breitner)
Date: Wed, 29 Jan 2014 09:58:08 +0000
Subject: Unit tests for GHC code?
Message-ID: <1390989488.2560.10.camel@kirk>

Hi,

I am currently working on a piece of code (an analysis to solve #7994)
where I'd like to make sure that my changes do not regress over what I
had before. But I find it unnecessarily hard to write tests for it in
our usual test-case styles:

* I'd like to test against very small Core that does not involve
anything unnecessary. But it is hard to write Haskell that still has
this shape by the time it hits my analysis. It requires lots of
{-# NOINLINE #-} and other tricks.

* To test the result, I could write a performance test, but it is not
always easy to come up with a program where the gains are massive
enough to make a reliable test. It is possible, but it is work, and
doing it maybe half a dozen times for various inputs is tricky.

* Alternatively, I can dump the Core and add that to the test cases.
But then other changes to the compiler can easily cause my test case
to fail.

So I thought about writing a test case that simply imports my module
from the ghc library, generates artificial, minimal Core, and checks
the output for precisely what I want (in my case, some fields of the
IdInfo of various binders).

I don't see any examples of that in the test suite. Is that just
because no one has done it before, or is there something inherently bad
about this approach, such that we do _not_ want to do that?

Also, we don't have a parser for Core, so I'll have to build my syntax
trees using the stuff from MkCore et al, right?

Thanks,
Joachim

--
Joachim "nomeata" Breitner
mail at joachim-breitner.de • http://www.joachim-breitner.de/
Jabber: nomeata at joachim-breitner.de • GPG-Key: 0x4743206C
Debian Developer: nomeata at debian.org
-------------- next part --------------
A non-text attachment was scrubbed...
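To make the MkCore question above concrete, a hand-built "minimal Core"
input could look roughly like the sketch below. This is only an
illustration: the module and function names are quoted from memory of the
7.8-era GHC API and should be treated as assumptions to double-check, not
as the definitive way to write such a test.

import CoreSyn (CoreExpr, Expr (Var))
import MkCore (mkCoreApps, mkCoreLams)
import Id (Id, mkSysLocal)
import Unique (mkBuiltinUnique)
import FastString (fsLit)
import TysWiredIn (intTy)
import Type (mkFunTy)

-- \f x -> f x, monomorphised to Int so that no type variables or
-- dictionaries get in the way of the analysis under test.
exampleExpr :: CoreExpr
exampleExpr = mkCoreLams [f, x] (mkCoreApps (Var f) [Var x])
  where
    f, x :: Id
    f = mkSysLocal (fsLit "f") (mkBuiltinUnique 1) (mkFunTy intTy intTy)
    x = mkSysLocal (fsLit "x") (mkBuiltinUnique 2) intTy

A test in this style would then run the analysis over exampleExpr and
inspect the IdInfo of f and x directly, instead of matching on dumped Core.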
no custom build.mk at all, just a regular boot+configure+make+binary-dist.) Right now I'm going over a few final touch-ups with Herbert before I start building everything (we might cherry-pick one or two minor things to base here momentarily.) After that I'll fingerprint and send it out. Also, for RCs, I believe we traditionally keep RELEASE=NO, so the version number doesn't come off as "7.8" but as "7.8." instead - indicative of it being "not the final version". So you shouldn't need to tweak anything - just use the fingerprint and build. On Wed, Jan 29, 2014 at 6:47 AM, P?li G?bor J?nos wrote: > On Wed, Jan 29, 2014 at 10:49 AM, Austin Seipp wrote: >> Two systems I won't make builds for RC1 by default (but could be >> persuaded to if nobody else does, and people want it): > [..] >> - FreeBSD - Pali, if you'd like to do this, feel free, and let me know. > > Sure, I can do it. > >> This means I'll (mostly) be waiting around today, so feel free to >> shoot questions. > > I guess it would useful to know exactly which version to build. That > is, is it enough to do release builds (by setting RELEASE to "yes") > with the HEAD of the ghc-7.8 branch (of today)...? > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From pali.gabor at gmail.com Wed Jan 29 13:16:49 2014 From: pali.gabor at gmail.com (=?ISO-8859-1?Q?P=E1li_G=E1bor_J=E1nos?=) Date: Wed, 29 Jan 2014 14:16:49 +0100 Subject: 7.8 branch is created, HEAD is now open, and a note on merges In-Reply-To: References: Message-ID: On Wed, Jan 29, 2014 at 2:13 PM, Austin Seipp wrote: > Right now I'm going over a few final touch-ups with Herbert before I > start building everything (we might cherry-pick one or two minor > things to base here momentarily.) Okay. By the way, are you aware of this error (and there may be similar ones -- it is due to the version bump, so it happens with ghc-7.8): utils/haddock/src/Haddock/InterfaceFile.hs:85:2: error: #error Unsupported GHC version gmake[1]: *** [utils/haddock/dist/build/.depend.haskell] Error 1 gmake: *** [all] Error 2 > So you shouldn't need to tweak anything - just use the fingerprint and build. Excellent! From austin at well-typed.com Wed Jan 29 13:18:58 2014 From: austin at well-typed.com (Austin Seipp) Date: Wed, 29 Jan 2014 07:18:58 -0600 Subject: 7.8 branch is created, HEAD is now open, and a note on merges In-Reply-To: References: Message-ID: Blargh, I'll fix this shortly. Thanks Pali. On Wed, Jan 29, 2014 at 7:16 AM, P?li G?bor J?nos wrote: > On Wed, Jan 29, 2014 at 2:13 PM, Austin Seipp wrote: >> Right now I'm going over a few final touch-ups with Herbert before I >> start building everything (we might cherry-pick one or two minor >> things to base here momentarily.) > > Okay. By the way, are you aware of this error (and there may be > similar ones -- it is due to the version bump, so it happens with > ghc-7.8): > > utils/haddock/src/Haddock/InterfaceFile.hs:85:2: > error: #error Unsupported GHC version > gmake[1]: *** [utils/haddock/dist/build/.depend.haskell] Error 1 > gmake: *** [all] Error 2 > >> So you shouldn't need to tweak anything - just use the fingerprint and build. > > Excellent! 
> -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From kazu at iij.ad.jp Wed Jan 29 14:04:27 2014 From: kazu at iij.ad.jp (Kazu Yamamoto (=?iso-2022-jp?B?GyRCOzNLXE9CSScbKEI=?=)) Date: Wed, 29 Jan 2014 23:04:27 +0900 (JST) Subject: 7.8 branch is created, HEAD is now open, and a note on merges In-Reply-To: References: Message-ID: <20140129.230427.1626186367262598672.kazu@iij.ad.jp> Austin, > Blargh, I'll fix this shortly. Thanks Pali. Please do. I also hit upon this bug. I could build GHC head yesterday but not today. P.S. Even if this is fixed, "validate" does not work on my Mac recently due to a haddock problem of the xhtml library. Does anyone see this problem? --Kazu From fuuzetsu at fuuzetsu.co.uk Wed Jan 29 14:05:29 2014 From: fuuzetsu at fuuzetsu.co.uk (Mateusz Kowalczyk) Date: Wed, 29 Jan 2014 14:05:29 +0000 Subject: 7.8 branch is created, HEAD is now open, and a note on merges In-Reply-To: <20140129.230427.1626186367262598672.kazu@iij.ad.jp> References: <20140129.230427.1626186367262598672.kazu@iij.ad.jp> Message-ID: <52E90AA9.6020000@fuuzetsu.co.uk> On 29/01/14 14:04, Kazu Yamamoto (????) wrote: > Austin, > >> Blargh, I'll fix this shortly. Thanks Pali. > > Please do. I also hit upon this bug. I could build GHC head yesterday > but not today. > > P.S. > > Even if this is fixed, "validate" does not work on my Mac recently due > to a haddock problem of the xhtml library. Does anyone see this > problem? > > --Kazu > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > I'll try building now. What's the error? -- Mateusz K. From kazu at iij.ad.jp Wed Jan 29 14:14:02 2014 From: kazu at iij.ad.jp (Kazu Yamamoto (=?iso-2022-jp?B?GyRCOzNLXE9CSScbKEI=?=)) Date: Wed, 29 Jan 2014 23:14:02 +0900 (JST) Subject: 7.8 branch is created, HEAD is now open, and a note on merges In-Reply-To: <52E90AA9.6020000@fuuzetsu.co.uk> References: <20140129.230427.1626186367262598672.kazu@iij.ad.jp> <52E90AA9.6020000@fuuzetsu.co.uk> Message-ID: <20140129.231402.29577024173614644.kazu@iij.ad.jp> Hi Mateusz, > I'll try building now. What's the error? Not building but "validate". "validate" stops due to an error from haddock in the xhtml library. --Kazu From austin at well-typed.com Wed Jan 29 14:20:08 2014 From: austin at well-typed.com (Austin Seipp) Date: Wed, 29 Jan 2014 08:20:08 -0600 Subject: 7.8 branch is created, HEAD is now open, and a note on merges In-Reply-To: <20140129.230427.1626186367262598672.kazu@iij.ad.jp> References: <20140129.230427.1626186367262598672.kazu@iij.ad.jp> Message-ID: Also sending to the list. --------------------------- Kazu, I fixed the __GLASGOW_HASKELL__ check on both HEAD and the 7.8 branch. As for the bug you are talking of - yes, I made a release note about this the other day. The problem is that due to a bad interaction with Clang, somehow the module downsweep gets broken in an odd way (presumably due to something being preprocessed incorrectly.) This does not affect every library actually - but xhtml is one of them (for example, 'text' works fine on Mavericks using Clang.) I also haven't yet pinned down why for example GHC is fine, but Haddock is not (considering they should mostly do the same thing.) Unfortunately I do not have any more time to fix this, because keeping the RC delayed continuously has a cost (and it's been delayed a lot.) 
I will reapproach this problem shortly after the first RC is done, as it will undoubtedly cause some issues. But I just don't have time at this exact moment to fix it (which may mean integrating cpphs into GHC. I'm not sure yet.) For the record, this error occurs fairly late in the ./validate script - it's for testing the in-place binary distribution after a build, but before the testsuite is run. In the mean time, you can just do 'cd testsuite && make fast' to run all your tests after you have seen this error. I simply do not have a sensible, easy fix right now. I sincerely apologize since this is annoying. But I really am out of time right now and I think we'll have to eat this one for the RC. (I should also note that, in the Real World, this will only make a difference in documentation building, not package installation. The xhtml library actually installs just fine - it is the doc generation which fails.) On Wed, Jan 29, 2014 at 8:04 AM, Kazu Yamamoto wrote: > Austin, > >> Blargh, I'll fix this shortly. Thanks Pali. > > Please do. I also hit upon this bug. I could build GHC head yesterday > but not today. > > P.S. > > Even if this is fixed, "validate" does not work on my Mac recently due > to a haddock problem of the xhtml library. Does anyone see this > problem? > > --Kazu > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From kazu at iij.ad.jp Wed Jan 29 14:28:45 2014 From: kazu at iij.ad.jp (Kazu Yamamoto (=?iso-2022-jp?B?GyRCOzNLXE9CSScbKEI=?=)) Date: Wed, 29 Jan 2014 23:28:45 +0900 (JST) Subject: 7.8 branch is created, HEAD is now open, and a note on merges In-Reply-To: References: <20140129.230427.1626186367262598672.kazu@iij.ad.jp> Message-ID: <20140129.232845.1934103935159898965.kazu@iij.ad.jp> Austin, > For the record, this error occurs fairly late in the ./validate script > - it's for testing the in-place binary distribution after a build, but > before the testsuite is run. In the mean time, you can just do 'cd > testsuite && make fast' to run all your tests after you have seen this > error. I simply do not have a sensible, easy fix right now. OK. I understand. P.S. I obtained a Mac Pro. Building and validating GHC on the Mac Pro are very quick. So, if necessary, please feel free to ask me to check building or validating GHC on Mac anytime. --Kazu From eir at cis.upenn.edu Wed Jan 29 15:12:27 2014 From: eir at cis.upenn.edu (Richard Eisenberg) Date: Wed, 29 Jan 2014 10:12:27 -0500 Subject: Unit tests for GHC code? In-Reply-To: <1390989488.2560.10.camel@kirk> References: <1390989488.2560.10.camel@kirk> Message-ID: <95778709-E468-4A5C-B37E-09C9446C42AE@cis.upenn.edu> Let me take a different slice at this question, inspired more by Joachim's subject line than his text: On a number of occasions I've wanted to write unit tests against a certain function or set of functions. The role inference algorithm is a prime example, but it's happened elsewhere, too. The testsuite only performs end-to-end testing. Sometimes it's easy/possible to build a test that gets at what I want, but sometimes it's very hard. (Case in point: I revised the varSetElemsKvsFirst function on a branch -- it's really hard to test that thoroughly in an end-to-end test!) So, is there a way / does someone know how to make a way to do proper unit testing?
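What I picture is roughly the shape below: an ordinary Haskell program, compiled with -package ghc, that calls the function under test directly and compares against an expected value. (The function exercised here is a deliberately trivial stand-in, just to show the shape; a real test would target something like varSetElemsKvsFirst.)

    import OccName (mkVarOcc, occNameString)
    import Control.Monad (unless)
    import System.Exit (exitFailure)

    -- Check one pure compiler function against a known-good result;
    -- the testsuite would only need to look at the output/exit code.
    main :: IO ()
    main = do
        let actual   = occNameString (mkVarOcc "x")
            expected = "x"
        unless (actual == expected) $ do
            putStrLn ("expected " ++ show expected ++ ", got " ++ show actual)
            exitFailure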
The ability to do such tests is treated as a key virtue of (pure) functional programming, and yet we don't do it! :) For my varSetElemsKvsFirst problem, I ended up copying the code to a new file, writing dummy data structures to get it to compile, and then ran unit tests. I fixed my bug, but there was no way to integrate the testing work into a regression test, sadly. Any thoughts? Thanks, Richard On Jan 29, 2014, at 4:58 AM, Joachim Breitner wrote: > Hi, > > I am currently working on a piece of code (an analysis to solve #7994) > where I'd like to make sure that my changes do not regress over what I > had before. But I find it unnecessarily hard to write our usual > test-case styles for them: > * I'd like to test against very small Core that does not involve > anything unnecessary. But it is hard to write Haskell that has, > when it hits my analysis, this shape. It requires lots of {-# > NOINLINE #-} and other tricks. > * To test the result, I either have to write a performance test, > but it is not always easy to come up with a program where the > gains are massive enough to become a reliable test. It is > possible, but work, and doing it maybe half a dozen times for > various inputs is tricky. > * Alternative, I can dump the Core and add that to the test cases. > But now other changes to the compiler can easily trigger my test > case failing. > > So I thought about writing a test case that simply imports my module > from the ghc library, generates artificial, minimal core, and checks the > output for precisely what I want (in my case, some fields of the IdInfo > of various binders). > > I don't see any examples for that in the test suite. Is that just > because noone has done that before, or is there inherently bad about > this approach that we do _not_ want to that? > > Also, we don't have a parser for Core, so I'll have to build my syntax > trees using the stuff from MkCore et al, right? > > Thanks, > Joachim > > > -- > Joachim “nomeata” Breitner > mail at joachim-breitner.de • http://www.joachim-breitner.de/ > Jabber: nomeata at joachim-breitner.de • GPG-Key: 0x4743206C > Debian Developer: nomeata at debian.org > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs From mail at joachim-breitner.de Wed Jan 29 15:20:55 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Wed, 29 Jan 2014 15:20:55 +0000 Subject: Unit tests for GHC code? In-Reply-To: <95778709-E468-4A5C-B37E-09C9446C42AE@cis.upenn.edu> References: <1390989488.2560.10.camel@kirk> <95778709-E468-4A5C-B37E-09C9446C42AE@cis.upenn.edu> Message-ID: <1391008855.2560.14.camel@kirk> Hi, Am Mittwoch, den 29.01.2014, 10:12 -0500 schrieb Richard Eisenberg: > So, is there a way / does someone know how to make a way to do proper > unit testing? The ability to do such tests is treated as a key virtue > of (pure) functional programming, and yet we don't do it! :) I'm now doing this: http://git.haskell.org/ghc.git/commitdiff/aa970ca1e81118bbf37386b8833a01a3791cee62 (Patch “Add a unit test for CallArity” on branch wip/T7994, in case the hash id is invalid later) which I believe is okayish. Greetings, Joachim PS: I'm subscribed to the list, no need to send a copy to my private address. -- Joachim “nomeata” Breitner mail at joachim-breitner.de • http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de •
GPG-Key: 0x4743206C Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 181 bytes Desc: This is a digitally signed message part URL: From hvr at gnu.org Wed Jan 29 16:32:46 2014 From: hvr at gnu.org (Herbert Valerio Riedel) Date: Wed, 29 Jan 2014 17:32:46 +0100 Subject: GHC boot-library package changelogs & release-notes Message-ID: <87bnyubvf5.fsf@gnu.org> Hello fellow GHC devs, As some of you might have noticed, I added a changelog.md file to libraries/base: https://github.com/ghc/packages-base/blob/c8634027d4e3315a2276fb1be8168c486419785a/changelog.md (please feel free to fix any typos/omissions/whatever you notice) My hope/motivation is that since Hackage gained the ability to display changelog files, the rather extensive changes in `base` might be a bit more easily/conveniently accessible on Hackage. I chose to use Markdown format, as I believe it may be more convenient to maintain/edit the `base` changelog as plain text rather than having to edit XML after each noteworthy change in `base`. And as the release-notes typically only exploit a subset of the Docbook facilities, the conversion to Docbook XML could be performed semi-automatically shortly before a release. Moreover, the release notes from previous major GHC release (which in the past contained the major changes in `base` et al.) are usually removed again. While a separate changelog file would usually retain (more) version history. Therefore, I'd propose to switch from editing the user's guide release note for library release notes to using Hackage-changelog files in Markdown format (following a common structural convention) and make it the release-manager's responsibility to integrate the respective package's changelog content into the user's guide. Any comments? Cheers, hvr From rarash at student.chalmers.se Wed Jan 29 17:21:05 2014 From: rarash at student.chalmers.se (Arash Rouhani) Date: Wed, 29 Jan 2014 18:21:05 +0100 Subject: GHC boot-library package changelogs & release-notes In-Reply-To: <87bnyubvf5.fsf@gnu.org> References: <87bnyubvf5.fsf@gnu.org> Message-ID: <52E93881.1060201@student.chalmers.se> Hi Herbert, So who should add to the changelog? If I'm committing a new feature to the base library, should my commit include a small addition to the changelog describing my change? Good idea btw! :) Best, Arash On 2014-01-29 17:32, Herbert Valerio Riedel wrote: > Hello fellow GHC devs, > > As some of you might have noticed, I added a changelog.md file to > libraries/base: > > https://github.com/ghc/packages-base/blob/c8634027d4e3315a2276fb1be8168c486419785a/changelog.md > > (please feel free to fix any typos/omissions/whatever you notice) > > My hope/motivation is that since Hackage gained the ability to display > changelog files, the rather extensive changes in `base` might be a bit > more easily/conveniently accessible on Hackage. > > I chose to use Markdown format, as I believe it may be more convenient > to maintain/edit the `base` changelog as plain text rather than having > to edit XML after each noteworthy change in `base`. And as the > release-notes typically only exploit a subset of the Docbook facilities, > the conversion to Docbook XML could be performed semi-automatically > shortly before a release. > > Moreover, the release notes from previous major GHC release (which in > the past contained the major changes in `base` et al.) are usually > removed again. 
While a separate changelog file would usually retain > (more) version history. > > Therefore, I'd propose to switch from editing the user's guide release > note for library release notes to using Hackage-changelog files in > Markdown format (following a common structural convention) and make it > the release-manager's responsibility to integrate the respective > package's changelog content into the user's guide. > > Any comments? > > Cheers, > hvr > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs From hvriedel at gmail.com Wed Jan 29 17:40:33 2014 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Wed, 29 Jan 2014 18:40:33 +0100 Subject: GHC boot-library package changelogs & release-notes In-Reply-To: <52E93881.1060201@student.chalmers.se> (Arash Rouhani's message of "Wed, 29 Jan 2014 18:21:05 +0100") References: <87bnyubvf5.fsf@gnu.org> <52E93881.1060201@student.chalmers.se> Message-ID: <877g9ibsa6.fsf@gmail.com> Hello Arash, On 2014-01-29 at 18:21:05 +0100, Arash Rouhani wrote: > So who should add to the changelog? If I'm committing a new feature to > the base library, should my commit include a small addition to the > changelog describing my change? If you deem your modification release-note-worthy, then yes. The idea is to keep the code-change and the changelog modification more localized to each other. With the current scheme, you have to make two commits, as the the release-notes file is usually in a different repository. Cheers, hvr From lukexipd at gmail.com Wed Jan 29 21:26:01 2014 From: lukexipd at gmail.com (Luke Iannini) Date: Wed, 29 Jan 2014 13:26:01 -0800 Subject: 7.8 branch is created, HEAD is now open, and a note on merges In-Reply-To: <20140129.232845.1934103935159898965.kazu@iij.ad.jp> References: <20140129.230427.1626186367262598672.kazu@iij.ad.jp> <20140129.232845.1934103935159898965.kazu@iij.ad.jp> Message-ID: Hi Austin, all, FYI: I'm building GHC iOS binaries for RC1 right now. It would be awesome to release them along side if possible! Cheers Luke On Wed, Jan 29, 2014 at 6:28 AM, Kazu Yamamoto wrote: > Austin, > > > For the record, this error occurs fairly late in the ./validate script > > - it's for testing the in-place binary distribution after a build, but > > before the testsuite is run. In the mean time, you can just do 'cd > > testsuite && make fast' to run all your tests after you have seen this > > error. I simply do not have a sensible, easy fix right now. > > OK. I understand. > > P.S. > > I obtained Mac pro. Building and validating GHC on Mac pro are very > quick. So, if necessary, please feel free to ask me to check building > or validating GHC on Mac anytime. > > --Kazu > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From djsamperi at gmail.com Wed Jan 29 21:59:23 2014 From: djsamperi at gmail.com (Dominick Samperi) Date: Wed, 29 Jan 2014 16:59:23 -0500 Subject: 7.8 branch is created, HEAD is now open, and a note on merges In-Reply-To: References: <20140129.230427.1626186367262598672.kazu@iij.ad.jp> <20140129.232845.1934103935159898965.kazu@iij.ad.jp> Message-ID: That is great Luke, I hope your work gets rolled into 7.8. Is this just "experimental," or is it possible to actually develop an app that will pass Apple's QA and can be hosted on the app store? 
Are there any examples currently hosted on the App Store? Thanks, Dominick On Wed, Jan 29, 2014 at 4:26 PM, Luke Iannini wrote: > Hi Austin, all, > > FYI: I'm building GHC iOS binaries for RC1 right now. It would be awesome to > release them along side if possible! > > Cheers > Luke > > > On Wed, Jan 29, 2014 at 6:28 AM, Kazu Yamamoto wrote: >> >> Austin, >> >> > For the record, this error occurs fairly late in the ./validate script >> > - it's for testing the in-place binary distribution after a build, but >> > before the testsuite is run. In the mean time, you can just do 'cd >> > testsuite && make fast' to run all your tests after you have seen this >> > error. I simply do not have a sensible, easy fix right now. >> >> OK. I understand. >> >> P.S. >> >> I obtained Mac pro. Building and validating GHC on Mac pro are very >> quick. So, if necessary, please feel free to ask me to check building >> or validating GHC on Mac anytime. >> >> --Kazu >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > From lukexipd at gmail.com Wed Jan 29 22:06:28 2014 From: lukexipd at gmail.com (Luke Iannini) Date: Wed, 29 Jan 2014 14:06:28 -0800 Subject: 7.8 branch is created, HEAD is now open, and a note on merges In-Reply-To: References: <20140129.230427.1626186367262598672.kazu@iij.ad.jp> <20140129.232845.1934103935159898965.kazu@iij.ad.jp> Message-ID: Hi Dominick, I've been using it every day for about 2 years on my primary project with great success -- I don't know if anyone's actually released anything into the store with it yet but I don't anticipate any problems whatsoever with Apple (I know many still think Apple doesn't allow other languages than ObjC/C/C++ but they actually lifted that policy years ago -- if the app works, they're happy : )). Cheers Luke On Wed, Jan 29, 2014 at 1:59 PM, Dominick Samperi wrote: > That is great Luke, > > I hope your work gets rolled into 7.8. > > Is this just "experimental," or is it possible to actually develop > an app that will pass Apple's QA and can be hosted on the > app store? Are there any examples currently hosted on the App Store? > > Thanks, > Dominick > > > On Wed, Jan 29, 2014 at 4:26 PM, Luke Iannini wrote: > > Hi Austin, all, > > > > FYI: I'm building GHC iOS binaries for RC1 right now. It would be > awesome to > > release them along side if possible! > > > > Cheers > > Luke > > > > > > On Wed, Jan 29, 2014 at 6:28 AM, Kazu Yamamoto wrote: > >> > >> Austin, > >> > >> > For the record, this error occurs fairly late in the ./validate script > >> > - it's for testing the in-place binary distribution after a build, but > >> > before the testsuite is run. In the mean time, you can just do 'cd > >> > testsuite && make fast' to run all your tests after you have seen this > >> > error. I simply do not have a sensible, easy fix right now. > >> > >> OK. I understand. > >> > >> P.S. > >> > >> I obtained Mac pro. Building and validating GHC on Mac pro are very > >> quick. So, if necessary, please feel free to ask me to check building > >> or validating GHC on Mac anytime. 
> >> --Kazu > >> _______________________________________________ > >> ghc-devs mailing list > >> ghc-devs at haskell.org > >> http://www.haskell.org/mailman/listinfo/ghc-devs > > > > > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://www.haskell.org/mailman/listinfo/ghc-devs > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jan.stolarek at p.lodz.pl Wed Jan 29 22:18:58 2014 From: jan.stolarek at p.lodz.pl (Jan Stolarek) Date: Wed, 29 Jan 2014 23:18:58 +0100 Subject: Unit tests for GHC code? In-Reply-To: <1391008855.2560.14.camel@kirk> References: <1390989488.2560.10.camel@kirk> <95778709-E468-4A5C-B37E-09C9446C42AE@cis.upenn.edu> <1391008855.2560.14.camel@kirk> Message-ID: <201401292318.58699.jan.stolarek@p.lodz.pl> Having an easy way to write unit tests would be great. I made an attempt to do this during my internship (this wasn't merged): https://github.com/jstolarek/testsuite/blob/0829be3a00eecfab0d7026c9a5dc18dc1d669a07/tests/ghc-api/CmmCopyPropagationTest.hs https://github.com/jstolarek/testsuite/blob/0829be3a00eecfab0d7026c9a5dc18dc1d669a07/tests/ghc-api/CmmCopyPropagationTest.stdout But that is far from a real unit test: results of many unit tests coupled together in one stdout file, lots of boilerplate and reinventing the wheel. I would love to see the testsuite extended with an easy way to write unit tests but I guess that might require some substantial effort. > PS: I'm subscribed to the list, no need to send a copy to my private address. This is typically done to alert someone that he/she has been addressed directly in a discussion. I for example have my filters set in such a way that all ghc-devs mails are automatically marked as read unless I am CC'd. Janek From mail at joachim-breitner.de Wed Jan 29 22:47:18 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Wed, 29 Jan 2014 22:47:18 +0000 Subject: Reply etiquette In-Reply-To: <201401292318.58699.jan.stolarek@p.lodz.pl> References: <1390989488.2560.10.camel@kirk> <95778709-E468-4A5C-B37E-09C9446C42AE@cis.upenn.edu> <1391008855.2560.14.camel@kirk> <201401292318.58699.jan.stolarek@p.lodz.pl> Message-ID: <1391035638.3029.17.camel@kirk> Hi, Am Mittwoch, den 29.01.2014, 23:18 +0100 schrieb Jan Stolarek: > > PS: I'm subscribed to the list, no need to send a copy to my private address. > > This is typically done to alert someone that he/she has been addressed directly in a discussion. I > for example have my filters set in such a way that all ghc-devs mails are automatically marked as > read unless I am CC'd. if someone really needs urgent attention from me, putting me in CC is fine: Mail directed to me will cause popups and land in my Inbox. But doing so carelessly makes this distinction useless; for example with the recent pattern synonym thread, I once made a minor comment and got a dozen mails explicitly sent to me. This alerting thing is clearly not working, and I am tempted to do the opposite of what you do: Automatically delete any mail reaching my inbox that also goes to ghc-devs (and stop whining here). But before doing that, I'll try using the Reply-To header; let's see if that works better. BTW, does everyone know about Reply-To-List (sometimes called Group Reply, Ctrl-L in Evolution) instead of Reply-To-All? But I heard rumors that Outlook does not support that, and, unlike in the Debian community, that would be a problem. I guess Reply-To can help then. 
Greetings, Joachim -- Joachim “nomeata” Breitner mail at joachim-breitner.de • http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de • GPG-Key: 0x4743206C Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 181 bytes Desc: This is a digitally signed message part URL: From fuuzetsu at fuuzetsu.co.uk Wed Jan 29 23:40:50 2014 From: fuuzetsu at fuuzetsu.co.uk (Mateusz Kowalczyk) Date: Wed, 29 Jan 2014 23:40:50 +0000 Subject: Pattern synonyms for 7.8? In-Reply-To: References: Message-ID: <52E99182.4040208@fuuzetsu.co.uk> On 05/01/14 12:16, Dr. ÉRDI Gergő wrote: > Hi, > > When I started working on pattern synonyms (#5144) back in August, it > seemed the GHC 7.8 freeze was imminent, so I was planning for a > first version in 7.10/8.0 (whatever it will be called). However, since not > much has happened re: 7.8 since then (at least not much publicly visible), > and on the other hand, my implementation of pattern synonyms is ready, I > am now starting to wonder if it could be squeezed into 7.8. What are your > thoughts on this? > > Thanks, > Gergo > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > Hi again, We've run into some trouble over at #ghc regarding Haddock updates to do with PatternSynonyms. You have updated Haddock accordingly but at the same time, you haven't checked that you haven't broken it. This means that when we were trying to quickly fix a bug today, the Haddock tests came up as failing. There are at least two faults that I've spotted: * Single space added in front of every function name. This isn't visible to the user but is visible to the test-suite and would require that we update every test file for no good reason. After a long while, I narrowed it down to the line “leader <+> ppTypeSig summary occnames pp_typ unicode”, as it seems that “leader” is empty a lot of the time and the (<+>) function adds a single space. It'd be an easy fix if it was only this, but… * Data types using infix notation are now parenthesised. Haddock now renders “data a :- b” as “data a (:-) b”. This is a problem. I don't know what else is broken but I can't go on trying to fix this because you haven't added any tests for the features you put in! I have no idea what I'm breaking in PatternSynonyms when making changes. For now we have to revert some of the Haddock changes, namely the XHtml back-end stuff you added. The proposed revert is currently at [1] and will probably be put into the 7.8 RC very soon because the documentation for “base” has to be generated. Please have a look and see what you can fix in the XHtml back-end for your feature. This includes making sure that the existing tests pass (you do this by running “cabal test”; just running validate for GHC is _not_ enough) and adding new tests for the things you add (you're going to be interested in adding test cases in html-test/src and adding the expected test results in html-test/ref). Thanks [1]: https://github.com/Fuuzetsu/haddock/tree/codeblockfix -- Mateusz K. From gergo at erdi.hu Wed Jan 29 23:44:09 2014 From: gergo at erdi.hu (=?UTF-8?B?RHIuIMOJUkRJIEdlcmfFkQ==?=) Date: Thu, 30 Jan 2014 07:44:09 +0800 Subject: Pattern synonyms for 7.8? In-Reply-To: <52E99182.4040208@fuuzetsu.co.uk> References: <52E99182.4040208@fuuzetsu.co.uk> Message-ID: Hi, Sorry, I wasn't aware running validate was not enough.
I'll check out the problems in ~10 hours. Bye, Gergo On Jan 30, 2014 7:41 AM, "Mateusz Kowalczyk" wrote: > On 05/01/14 12:16, Dr. ERDI Gergo wrote: > > Hi, > > > > When I started working on pattern synonyms (#5144) back in August, it > > seemed the GHC 7.8 freeze was imminent, so I was planning for a > > first version in 7.10/8.0 (whatever it will be called). However, since > not > > much has happened re: 7.8 since then (at least not much publicly > visible), > > and on the other hand, my implementation of pattern synonyms is ready, I > > am now starting to wonder if it could be squeezed into 7.8. What are your > > thoughts on this? > > > > Thanks, > > Gergo > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://www.haskell.org/mailman/listinfo/ghc-devs > > > > Hi again, > > We've ran into some trouble over at #ghc regarding Haddock updates to > do with PatternSynonyms. You have updated Haddock accordingly but at > the same time, you haven't checked that you haven't broken it. This > means that when we were trying to quickly fix a bug today, the Haddock > tests came up as failing. There at least two faults that I've spotted: > > * Single space added in front of every function name. > > This isn't visible by the user but is visible by the test-suite and > would require that we update every test file for no good reason. > After a long while, I narrowed it down to the line > "leader <+> ppTypeSig summary occnames pp_typ unicode" as it seems > that 'leader' is empty for a lot of time and the (<+>) function adds > a single space. It'd be an easy fix if it was only this but... > > * Data types using infix notations are now parenthesised > > Haddock now renders 'data a :- b' as 'data a (:-) b'. This is a > problem. > > I don't know what else is broken but I can't go on trying to fix this > because you haven't added any tests for the features you put in! I > have no idea what I'm breaking in PatternSynonyms when making changes. > For now we have to revert some of the Haddock changes, namely the > XHtml back-end stuff you added. The proposed revert is currently at > [1] and will probably be put into the 7.8 RC very soon because the > documentation for 'base' has to be generated. > > Please have a look and see what you can fix in the XHtml back-end for > your feature. This includes making sure that the existing tests pass > (you do this by running 'cabal test', just running validate for GHC is > _not_ enough) and adding new tests for the things you add (you're > going to be interested in adding test cases in html-test/src and > adding the expected test results in html-test/ref). > > Thanks > > [1]: https://github.com/Fuuzetsu/haddock/tree/codeblockfix > > -- > Mateusz K. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fuuzetsu at fuuzetsu.co.uk Wed Jan 29 23:53:42 2014 From: fuuzetsu at fuuzetsu.co.uk (Mateusz Kowalczyk) Date: Wed, 29 Jan 2014 23:53:42 +0000 Subject: Reply etiquette In-Reply-To: <1391035638.3029.17.camel@kirk> References: <1390989488.2560.10.camel@kirk> <95778709-E468-4A5C-B37E-09C9446C42AE@cis.upenn.edu> <1391008855.2560.14.camel@kirk> <201401292318.58699.jan.stolarek@p.lodz.pl> <1391035638.3029.17.camel@kirk> Message-ID: <52E99486.8030907@fuuzetsu.co.uk> On 29/01/14 22:47, Joachim Breitner wrote: > Hi, > > Am Mittwoch, den 29.01.2014, 23:18 +0100 schrieb Jan Stolarek: >>> PS: I?m subscribed to the list, no need to send a copy to my private address. 
>> >> This is typically done to alert someone that he/she has been addressed directly in a discussion. I >> for example have my filters set in such a way that all ghc-devs mails are automatically marked as >> read unless I am CC'd. > > if someone really needs urgent attention from me, putting me in CC is > fine: Mail directed to me will cause popups and land in my Inbox. But > doing so carelessly makes this distinction useless; for example with the > recent pattern synonym thread, I once made a minor comment and got a > dozend mails explicitly sent to me. This is alerting thing is clearly > not working ? and I am tempted to the opposite of what you do: > Automatically delete any mail reaching my inbox that also goes to > ghc-dev (and stop whining here). > > But before doing that, I?ll try using the Reply-To header, let?s see if > that works better. > > BTW, does everyone know about Reply-To-List (sometimes calle Group > Reply, Ctrl-L in evolution) instead of Reply-To-All? But I heard rumors > that Outlook does not support that, and ? unlike in the Debian community > ? that would be a problem. I guess Reply-To can help then. Thunderbird has this, Reply List. I think the problem is that the reply headers sometimes get messed up: I always use Reply List but even then, Thunderbird often either replies to the person in question and CCs the list or the other way around. For this e-mail, I am only sending to the list, using Reply List. I think it adds the person by default if they address mail e-mail and similar thing probably happens for others. I try to remove such occurrences manually but I'm sure it sometimes slips by. I think that as long as you read ghc-devs, it's fine to set up your client to not show you things in your inbox. > > > Greetings, > Joachim > > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -- Mateusz K. From kazu at iij.ad.jp Thu Jan 30 01:05:39 2014 From: kazu at iij.ad.jp (Kazu Yamamoto (=?iso-2022-jp?B?GyRCOzNLXE9CSScbKEI=?=)) Date: Thu, 30 Jan 2014 10:05:39 +0900 (JST) Subject: 7.8 branch is created, HEAD is now open, and a note on merges In-Reply-To: References: Message-ID: <20140130.100539.175538283497902324.kazu@iij.ad.jp> Hi Austin, It seems to me that the patch for Cabal in ticket 8266 is still missing: https://ghc.haskell.org/trac/ghc/ticket/8266 diff --git a/Cabal/Distribution/Simple/GHC.hs b/Cabal/Distribution/Simple/GHC.hs index c7ea633..78cdcbb 100644 --- a/Cabal/Distribution/Simple/GHC.hs +++ b/Cabal/Distribution/Simple/GHC.hs @@ -867,11 +867,6 @@ buildOrReplLib forRepl verbosity pkg_descr lbi lib clbi = do ghcOptDynLinkMode = toFlag GhcDynamicOnly, ghcOptInputFiles = dynamicObjectFiles, ghcOptOutputFile = toFlag sharedLibFilePath, - -- For dynamic libs, Mac OS/X needs to know the install location - -- at build time. - ghcOptDylibName = if buildOS == OSX - then toFlag sharedLibInstallPath - else mempty, ghcOptPackageName = toFlag pkgid, ghcOptNoAutoLinkPackages = toFlag True, ghcOptPackageDBs = withPackageDB lbi, If Duncan is busy at this moment, can you take over the merge job? --Kazu > Hello all, > > I've just created the 7.8 branch after tying off some of the final loose ends. > > In its current state, I expect the branch as it is now to become RC1 > within the day. 
I plan on starting builds for the following soon: > > - OS X 10.7 and OS X 10.9 > - Linux i386/amd64 (likely based on Debian Wheezey) > - Windows i386/amd64 (many thanks to Kyrill Briantsev for the heroic > last-minute linker fixes!) > > I'll send a (GPG-signed) email containing SHA1 hashes when they're done. > > Two systems I won't make builds for RC1 by default (but could be > persuaded to if nobody else does, and people want it): > > - Older glibc-2.5 based systems (e.g. CentOS, - a few users have > talked about this wrt binary releases, where I don't think GHC works.) > - FreeBSD - Pali, if you'd like to do this, feel free, and let me know. > > This means I'll (mostly) be waiting around today, so feel free to > shoot questions. > > As of now, this means HEAD is now version 7.9, and you're free to push > wacky experiments or changes now, if you've been holding off. You'll > probably want to clean your whole tree, since this means the interface > file versions etc will change. > > Finally, we picked up a good amount of new committers this year, so > let's remind people of the merging policy: what happens if you need to > merge something you did to the 7.8 branch? There are two main avenues > for this to happen: > > * Someone reports a bug against the 7.8 RC on Trac. You decide to fix > it and do so. Now what? > > 1) Please commit the bug to master, and confirm it's a fix. > 2) Go to the bug, and instead of closing it, change the ticket > status to 'merge'. > 3) I will cherry-pick it over to the 7.8 branch for you - nothing > else needed. > > * There's not a recorded bug, but you do push a change, and you think > it should be merged (maybe a typo or something.) In this case, I'd ask > you please CC me on the email sent to ghc-commits at haskell.org which is > related to your commit, and just say "Please merge" or somesuch. I'll > come over the commits with such a response. > > This goes for all changes - submodule updates, typos, real fixes, etc. > It's likely me and Herbert will restrict the Gitolite permissions to > only allow the two of us to touch the ghc-7.8 branch. So it's really > important you put us in the loop, ASAP. > > If you don't do one of these two things, it's highly likely I will > miss it, and not merge it. If you have questions, please ask me or > Herbert. If there's a merge conflict, we can discuss it. > > Thanks > > -- > Regards, > > Austin Seipp, Haskell Consultant > Well-Typed LLP, http://www.well-typed.com/ > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs From eir at cis.upenn.edu Thu Jan 30 02:15:26 2014 From: eir at cis.upenn.edu (Richard Eisenberg) Date: Wed, 29 Jan 2014 21:15:26 -0500 Subject: Reply etiquette In-Reply-To: <1391035638.3029.17.camel@kirk> References: <1390989488.2560.10.camel@kirk> <95778709-E468-4A5C-B37E-09C9446C42AE@cis.upenn.edu> <1391008855.2560.14.camel@kirk> <201401292318.58699.jan.stolarek@p.lodz.pl> <1391035638.3029.17.camel@kirk> Message-ID: Ah, yes, the reply-to worked great. I use the Mac Mail app, which lacks a reply-to-list feature (as far as I can tell). So, without a reply-to in the header, I have two choices without manual override: reply sender (omits the list) or reply all (includes both sender and list). Of course, I can tailor my headers before I send, but it honestly never occurred to me that this mattered. Now that you point out your filtering strategy, it's obvious. In any case, by putting in the reply-to, all is better. 
Keeping in mind that others are likely using a similar filtering strategy, I will try to pay more attention to this in the future. Richard On Jan 29, 2014, at 5:47 PM, Joachim Breitner wrote: > Hi, > > Am Mittwoch, den 29.01.2014, 23:18 +0100 schrieb Jan Stolarek: >>> PS: I?m subscribed to the list, no need to send a copy to my private address. >> >> This is typically done to alert someone that he/she has been addressed directly in a discussion. I >> for example have my filters set in such a way that all ghc-devs mails are automatically marked as >> read unless I am CC'd. > > if someone really needs urgent attention from me, putting me in CC is > fine: Mail directed to me will cause popups and land in my Inbox. But > doing so carelessly makes this distinction useless; for example with the > recent pattern synonym thread, I once made a minor comment and got a > dozend mails explicitly sent to me. This is alerting thing is clearly > not working ? and I am tempted to the opposite of what you do: > Automatically delete any mail reaching my inbox that also goes to > ghc-dev (and stop whining here). > > But before doing that, I?ll try using the Reply-To header, let?s see if > that works better. > > BTW, does everyone know about Reply-To-List (sometimes calle Group > Reply, Ctrl-L in evolution) instead of Reply-To-All? But I heard rumors > that Outlook does not support that, and ? unlike in the Debian community > ? that would be a problem. I guess Reply-To can help then. > > > Greetings, > Joachim > > > -- > Joachim ?nomeata? Breitner > mail at joachim-breitner.de ? http://www.joachim-breitner.de/ > Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0x4743206C > Debian Developer: nomeata at debian.org > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs From carter.schonwald at gmail.com Thu Jan 30 03:13:10 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Wed, 29 Jan 2014 22:13:10 -0500 Subject: Reply etiquette In-Reply-To: <1391035638.3029.17.camel@kirk> References: <1390989488.2560.10.camel@kirk> <95778709-E468-4A5C-B37E-09C9446C42AE@cis.upenn.edu> <1391008855.2560.14.camel@kirk> <201401292318.58699.jan.stolarek@p.lodz.pl> <1391035638.3029.17.camel@kirk> Message-ID: Ironically, this is the first ghc-devs email to ever get sent to my bulk folder. :) I can't speak for others, but I try to read most of the email in devs and related lists (for good or for ill). Though this certainly gets in the way of getting work done sometimes :) On Wednesday, January 29, 2014, Joachim Breitner wrote: > Hi, > > Am Mittwoch, den 29.01.2014, 23:18 +0100 schrieb Jan Stolarek: > > > PS: I?m subscribed to the list, no need to send a copy to my private > address. > > > > This is typically done to alert someone that he/she has been addressed > directly in a discussion. I > > for example have my filters set in such a way that all ghc-devs mails > are automatically marked as > > read unless I am CC'd. > > if someone really needs urgent attention from me, putting me in CC is > fine: Mail directed to me will cause popups and land in my Inbox. But > doing so carelessly makes this distinction useless; for example with the > recent pattern synonym thread, I once made a minor comment and got a > dozend mails explicitly sent to me. This is alerting thing is clearly > not working ? 
and I am tempted to the opposite of what you do: > Automatically delete any mail reaching my inbox that also goes to > ghc-dev (and stop whining here). > > But before doing that, I?ll try using the Reply-To header, let?s see if > that works better. > > BTW, does everyone know about Reply-To-List (sometimes calle Group > Reply, Ctrl-L in evolution) instead of Reply-To-All? But I heard rumors > that Outlook does not support that, and ? unlike in the Debian community > ? that would be a problem. I guess Reply-To can help then. > > > Greetings, > Joachim > > > -- > Joachim ?nomeata? Breitner > mail at joachim-breitner.de ? http://www.joachim-breitner.de/ > Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0x4743206C > Debian Developer: nomeata at debian.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lukexipd at gmail.com Thu Jan 30 05:27:34 2014 From: lukexipd at gmail.com (Luke Iannini) Date: Wed, 29 Jan 2014 21:27:34 -0800 Subject: GHC for iOS (arm) 7.8 Preview Build ready for upload Message-ID: Hi folks! I've just finished a preview build of GHC for iOS off the 7.8 branch, but could use a place to upload it. And once that's done, if anyone would like to help me test it that has Xcode 5 and a device, that would be excellent! Cheers Luke -------------- next part -------------- An HTML attachment was scrubbed... URL: From lukexipd at gmail.com Thu Jan 30 05:39:46 2014 From: lukexipd at gmail.com (Luke Iannini) Date: Wed, 29 Jan 2014 21:39:46 -0800 Subject: GHC for iOS (arm) 7.8 Preview Build ready for upload In-Reply-To: References: Message-ID: OK, it's up here now: https://github.com/ghc-ios/ghc-ios-scripts/releases/tag/7.8RC1Preview1 You'll need the scripts from https://github.com/ghc-ios/ghc-ios-scripts in your path along with LLVM 3.0 (it's the only version that works so far). On Wed, Jan 29, 2014 at 9:27 PM, Luke Iannini wrote: > Hi folks! > > I've just finished a preview build of GHC for iOS off the 7.8 branch, but > could use a place to upload it. > > And once that's done, if anyone would like to help me test it that has > Xcode 5 and a device, that would be excellent! > > Cheers > Luke > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hvriedel at gmail.com Thu Jan 30 08:43:18 2014 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Thu, 30 Jan 2014 09:43:18 +0100 Subject: 7.8 branch is created, HEAD is now open, and a note on merges In-Reply-To: <20140130.100539.175538283497902324.kazu@iij.ad.jp> ("Kazu Yamamoto \=\?utf-8\?B\?KOWxseacrOWSjOW9pikiJ3M\=\?\= message of "Thu, 30 Jan 2014 10:05:39 +0900 (JST)") References: <20140130.100539.175538283497902324.kazu@iij.ad.jp> Message-ID: <8738k5yi55.fsf@gmail.com> Hello Kazu, ..as this is a Cabal issue, this needs to be handled upstream; could you please file an issue at https://github.com/haskell/cabal/issues/new and mention there that we need that to be cherry-picked into the `1.18` branch as well -- as soon as it's in the 1.18 branch, we can update the GHC source tree to include that fix. Thanks, hvr On 2014-01-30 at 02:05:39 +0100, Kazu Yamamoto (????) 
wrote: > Hi Austin, > > It seems to me that the patch for Cabal in ticket 8266 is still missing: > > https://ghc.haskell.org/trac/ghc/ticket/8266 > > diff --git a/Cabal/Distribution/Simple/GHC.hs b/Cabal/Distribution/Simple/GHC.hs > index c7ea633..78cdcbb 100644 > --- a/Cabal/Distribution/Simple/GHC.hs > +++ b/Cabal/Distribution/Simple/GHC.hs > @@ -867,11 +867,6 @@ buildOrReplLib forRepl verbosity pkg_descr lbi lib clbi = do > ghcOptDynLinkMode = toFlag GhcDynamicOnly, > ghcOptInputFiles = dynamicObjectFiles, > ghcOptOutputFile = toFlag sharedLibFilePath, > - -- For dynamic libs, Mac OS/X needs to know the install location > - -- at build time. > - ghcOptDylibName = if buildOS == OSX > - then toFlag sharedLibInstallPath > - else mempty, > ghcOptPackageName = toFlag pkgid, > ghcOptNoAutoLinkPackages = toFlag True, > ghcOptPackageDBs = withPackageDB lbi, > > If Duncan is busy at this moment, can you take over the merge job? From kazu at iij.ad.jp Thu Jan 30 10:31:57 2014 From: kazu at iij.ad.jp (Kazu Yamamoto (=?iso-2022-jp?B?GyRCOzNLXE9CSScbKEI=?=)) Date: Thu, 30 Jan 2014 19:31:57 +0900 (JST) Subject: 7.8 branch is created, HEAD is now open, and a note on merges In-Reply-To: <8738k5yi55.fsf@gmail.com> References: <20140130.100539.175538283497902324.kazu@iij.ad.jp> <8738k5yi55.fsf@gmail.com> Message-ID: <20140130.193157.1138127216634278028.kazu@iij.ad.jp> Hello Herbert, > Hello Kazu, > > ..as this is a Cabal issue, this needs to be handled upstream; could you > please file an issue at > > https://github.com/haskell/cabal/issues/new Done. https://github.com/haskell/cabal/issues/1660 --Kazu From nicolas.frisby at gmail.com Thu Jan 30 21:07:33 2014 From: nicolas.frisby at gmail.com (Nicolas Frisby) Date: Thu, 30 Jan 2014 15:07:33 -0600 Subject: workaround to get both domain-specific errors and also multi-modal type inference? Message-ID: Hi all. I have a question for those savvy to the type-checker's internal workings. For uses of the following function, can anyone suggest a means of forcing GHC to attempt to solve C a b even if a~b fails? > dslAsTypeOf :: (C a b,a~b) => a -> b -> a > dslAsTypeOf x _ = x > > class C a b -- no methods Just for concreteness, the following are indicative of the variety of instances that I expect C to have. (I don't think this actually affects the question above.) > instance C DSLType1 DSLType1 > instance C DSLType2 DSLType2 > instance C x y => C (DSLType3 x) (DSLType3 y) > > instance IndicativeErrorMessage1 => C DSLType1 DSLType2 > instance IndicativeErrorMessage2 => C DSLType2 (DSLType3 y) > > class IndicativeErrorMessage1 -- will never have instances > class IndicativeErrorMessage2 -- will never have instances Thank you for your time. =================================== Keep reading for the "short story", the "long story", and two ideas for a small GHC patch that would enable my technique outlined above. ===== short story ===== The motivation of dslAsTypeOf is to provide improved error messages when a and b are not equal. Unfortunately, the user will never see IndicativeErrorMessage. It appears that GHC does not attempt to solve C a b if a~b fails. That's understandable, since the solution of C a b almost certainly depends on the "value" of its arguments... Hence, the question at the top of this email. ===== long story ===== A lot of the modern type-level programming capabilities can be put to great use in creating type systems for embedded domain specific languages. These type systems can enforce some impressive properties. 
However, the error messages you get when one of those properties is not satisfied can be pretty abysmal. In my particular case, I have a phantom type where the tag carries all the domain-specific information. > newtype U (tag :: [Info]) a = U a and I have a binary operation that requires the tag to be equivalent for all three types. > plus :: Num a => U tag a -> U tag a -> U tag a > plus (U x) (U y) = U $ x + y This effectively enforces the property I want for U values. Unfortunately, the error messages can seem dimwitted. > ill_typed = plus (U 1 :: U [Foo,Bar,Baz] Int) (U 2 :: U [Foo,Baz] Int) The `ill_typed` value gives an error message (in GHC 7.8) saying Bar : Baz : [] is not equal to Baz : [] (It's worse in GHC 7.4. I don't have access to a 7.6 at the moment.) I'd much rather have it say that "Bar is missing from the first summand's list." And I can define a class that effectively conveys that information in a "domain-specific error message" which is actually a "no instance of tag1 tag2" message. > friendlier_plus :: (FriendlyEqCheck tag1 tag2 tag3,Num a) => U tag1 a -> U tag2 a -> U tag3 a The `friendlier_plus' function gives useful error messages if tag1, tag2, and tag3 are all fully monomorphic. However, it does not support inference: > zero :: Num a => U tag a > zero = U 0 > > x = U 4 :: U [Foo,Baz] Int > > -- both of these are ill-typed :( > should_work1 = (friendlier_plus zero x) `asTypeOf` x -- tag1 is unconstrained > should_work2 = friendlier_plus x x -- tag3 is unconstrained Neither of those terms type-check, since FriendlyEqCheck can't determine if the unconstrained `tag' variable is equal to the other tags. So, can we get the best of both worlds? > best_plus :: > ( FriendlyEqCheck tag1 tag2 tag3 , tag1 ~ tag2, tag2 ~ tag3, Num a) => U tag1 a -> U tag2 a -> U tag3 a No, unfortunately not. Now the `should_work*' functions type-check, but an ill-typed use of `best_plus' gives the same poor error message that `plus' would give. Hence, the question at the top of this email. ===== two ideas ===== If my question at the top of this email cannot be answered in the affirmative, perhaps a small patch to GHC would make this sort of approach a viable lightweight workaround for improving domain-specific error messages. (I cannot estimate how difficult this would be to implement in the type-checker.) Two alternative ideas: 1) An "ALWAYS_ATTEMPT" PRAGMA that you can place on the class C so that it is attempted even if a related ~ constraint fails. 2) An OrElse constraint former, offering *very* constrained back-tracking. > possibleAsTypeOf :: ((a ~ b) `OrElse` C a b) => a -> b -> a > possibleAsTypeOf x _ = x Requirements: C must have no superclasses, no methods, and no fundeps. Specification: If (a~b) fails and (C a b) is satisfiable, then the original inequality error message would be shown to the user. Else, C's error message is used. =================================== You made it to the bottom of the email! Thanks again. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicolas.frisby at gmail.com Thu Jan 30 21:09:58 2014 From: nicolas.frisby at gmail.com (Nicolas Frisby) Date: Thu, 30 Jan 2014 15:09:58 -0600 Subject: workaround to get both domain-specific errors and also multi-modal type inference? In-Reply-To: References: Message-ID: Also, on the topic of patching GHC for domain-specific error messages, why not add a simple means to emit a custom error message? It would beat piggy-backing on the "no instance" messages as I currently plan to. 
This seems safe and straight-forward: > -- wired-in, cannot be instantiated > class GHC.Exts.PrintError (msg :: Symbol) (args :: [k]) Consider the class C fromy previous email. It's possible these two instances are now sufficient. > instance C a b > instance PrintError ("You used %1 on the left and %2 on the right!" :: Symbol) [a,b] => C a b And let's not forget warnings! > -- wired-in, cannot be instantiated > class GHC.Exts.PrintWarn (msg :: Symbol) (args :: '[k]) Thanks again. On Thu, Jan 30, 2014 at 3:07 PM, Nicolas Frisby wrote: > Hi all. I have a question for those savvy to the type-checker's internal > workings. > > For uses of the following function, can anyone suggest a means of forcing > GHC to attempt to solve C a b even if a~b fails? > > > dslAsTypeOf :: (C a b,a~b) => a -> b -> a > > dslAsTypeOf x _ = x > > > > class C a b -- no methods > > Just for concreteness, the following are indicative of the variety of > instances that I expect C to have. (I don't think this actually affects the > question above.) > > > instance C DSLType1 DSLType1 > > instance C DSLType2 DSLType2 > > instance C x y => C (DSLType3 x) (DSLType3 y) > > > > instance IndicativeErrorMessage1 => C DSLType1 DSLType2 > > instance IndicativeErrorMessage2 => C DSLType2 (DSLType3 y) > > > > class IndicativeErrorMessage1 -- will never have instances > > class IndicativeErrorMessage2 -- will never have instances > > Thank you for your time. > > =================================== > > Keep reading for the "short story", the "long story", and two ideas for a > small GHC patch that would enable my technique outlined above. > > ===== short story ===== > > The motivation of dslAsTypeOf is to provide improved error messages when a > and b are not equal. > > Unfortunately, the user will never see IndicativeErrorMessage. It appears > that GHC does not attempt to solve C a b if a~b fails. That's > understandable, since the solution of C a b almost certainly depends on the > "value" of its arguments... > > Hence, the question at the top of this email. > > ===== long story ===== > > A lot of the modern type-level programming capabilities can be put to > great use in creating type systems for embedded domain specific languages. > These type systems can enforce some impressive properties. > > However, the error messages you get when one of those property is not > satisfied can be pretty abysmal. > > In my particular case, I have a phantom type where the tag carries all the > domain-specific information. > > > newtype U (tag :: [Info]) a = U a > > and I have an binary operation that requires the tag to be equivalent for > all three types. > > > plus :: Num a => U tag a -> U tag a -> U tag a > > plus (U x) (U y) = U $ x + y > > This effectively enforces the property I want for U values. Unfortunately, > the error messages can seem dimwitted. > > > ill_typed = plus (U 1 :: U [Foo,Bar,Baz] Int) (U 2 :: U [Foo,Baz] Int) > > The `ill_typed` value gives an error message (in GHC 7.8) saying > > Bar : Baz : [] is not equal to Baz : [] > > (It's worse in GHC 7.4. I don't have access to a 7.6 at the moment.) > > I'd much rather have it say that "Bar is missing from the first summand's > list." And I can define a class that effectively conveys that information > in a "domain-specific error message" which is actually a "no instance of > tag1 tag2" message. 
> > > friendlier_plus :: (FriendlyEqCheck tag1 tag2 tag3,Num a) => U tag1 a -> > U tag2 a -> U tag3 a > > The `friendlier_plus' function gives useful error messages if tag1, tag2, > and tag3 are all fully monomorphic. > > However, it does not support inference: > > > zero :: Num a => U tag a > > zero = U 0 > > > > x = U 4 :: U [Foo,Baz] Int > > > > -- both of these are ill-typed :( > > should_work1 = (friendlier_plus zero x) `asTypeOf` x -- tag1 is > unconstrained > > should_work2 = friendlier_plus x x -- tag3 is unconstrained > > Neither of those terms type-check, since FriendlyEqCheck can't determine > if the unconstrained `tag' variable is equal to the other tags. > > So, can we get the best of both worlds? > > > best_plus :: > > ( FriendlyEqCheck tag1 tag2 tag3 > , tag1 ~ tag2, tag2 ~ tag3, Num a) => U tag1 a -> U tag2 a -> U tag3 a > > No, unfortunately not. Now the `should_work*' functions type-check, but an > ill-typed use of `best_plus' gives the same poor error message that `plus' > would give. > > Hence, the question at the top of this email. > > ===== two ideas ===== > > If my question at the top of this email cannot be answered in the > affirmative, perhaps a small patch to GHC would make this sort of approach > a viable lightweight workaround for improving domain-specific error > messages. > > (I cannot estimate how difficult this would be to implement in the > type-checker.) > > Two alternative ideas. > > 1) An "ALWAYS_ATTEMPT" PRAGMA that you can place on the class C so that it > is attempted even if a related ~ constraint fails. > > 2) An OrElse constraint former, offering *very* constrained back-tracking. > > > possibleAsTypeOf :: ((a ~ b) `OrElse` C a b) => a -> b -> a > > possibleAsTypeOf x _ = x > > Requirements: C must have no superclasses, no methods, and no fundeps. > > Specification: If (a~b) fails and (C a b) is satisfiable, then the > original inequality error message would be shown to the user. Else, C's > error message is used. > > =================================== > > You made it to the bottom of the email! Thanks again. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From austin at well-typed.com Thu Jan 30 23:03:22 2014 From: austin at well-typed.com (Austin Seipp) Date: Thu, 30 Jan 2014 17:03:22 -0600 Subject: 7.8 branch is created, HEAD is now open, and a note on merges In-Reply-To: References: <20140130.100539.175538283497902324.kazu@iij.ad.jp> <8738k5yi55.fsf@gmail.com> <20140130.193157.1138127216634278028.kazu@iij.ad.jp> Message-ID: (Grr, resending to list...) Hello all, The 7.8 branch is officially ready for RC1 (after some final Haddock bugs got quickly squashed by Gergo and Mateusz.) Pali, Luke - this is specifically for you two as you have offered to make the FreeBSD and iOS builds (Luke - 7.8 should contain both the fix for __thread and the perf-cross flavor, so it should work out of the box for you.) Attached is a fingerprint file for the GHC repository. 
You can restore it with: $ ./utils/fingerprint/fingerprint.py restore -f ghc-7.8-rc1.fingerprint See here for more details - https://ghc.haskell.org/trac/ghc/wiki/Building/GettingTheSources#Trackingthefullrepositorystate Alternatively, simply checking out to the 'ghc-7.8' branch will result in the same thing - no new commits will go in until after RC1: $ git clone -b ghc-7.8 git://git.haskell.org/ghc ghc-7.8 $ cd ghc-7.8 $ ./sync-all get -b ghc-7.8 --extra --nofib Afterwords, just build and make the binaries as you normally would: $ ./boot; ./configure $ make $ make binary-dist Please let me know when the builds are done and somewhere to obtain them, and I'll upload them to haskell.org for the RC. I'll begin my builds now too... On Thu, Jan 30, 2014 at 5:02 PM, Austin Seipp wrote: > Hello all, > > The 7.8 branch is officially ready for RC1 (after some final Haddock > bugs got quickly squashed by Gergo and Mateusz.) > > Pali, Luke - this is specifically for you two as you have offered to > make the FreeBSD and iOS builds (Luke - 7.8 should contain both the > fix for __thread and the perf-cross flavor, so it should work out of > the box for you.) > > Attached is a fingerprint file for the GHC repository. You can restore it with: > > $ ./utils/fingerprint/fingerprint.py restore -f ghc-7.8-rc1.fingerprint > > See here for more details - > https://ghc.haskell.org/trac/ghc/wiki/Building/GettingTheSources#Trackingthefullrepositorystate > > Alternatively, simply checking out to the 'ghc-7.8' branch will result > in the same thing - no new commits will go in until after RC1: > > $ git clone -b ghc-7.8 git://git.haskell.org/ghc ghc-7.8 > $ cd ghc-7.8 > $ ./sync-all get -b ghc-7.8 --extra --nofib > > Afterwords, just build and make the binaries as you normally would: > > $ ./boot; ./configure > $ make > $ make binary-dist > > Please let me know when the builds are done and somewhere to obtain > them, and I'll upload them to haskell.org for the RC. I'll begin my > builds now too... > > > On Thu, Jan 30, 2014 at 4:31 AM, Kazu Yamamoto wrote: >> Hello Herbert, >> >>> Hello Kazu, >>> >>> ..as this is a Cabal issue, this needs to be handled upstream; could you >>> please file an issue at >>> >>> https://github.com/haskell/cabal/issues/new >> >> Done. >> >> https://github.com/haskell/cabal/issues/1660 >> >> --Kazu >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> > > > > -- > Regards, > Austin - PGP: 4096R/0x91384671 -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ -------------- next part -------------- A non-text attachment was scrubbed... Name: ghc-7.8-rc1.fingerprint Type: application/octet-stream Size: 2278 bytes Desc: not available URL: From lukexipd at gmail.com Thu Jan 30 23:15:27 2014 From: lukexipd at gmail.com (Luke Iannini) Date: Thu, 30 Jan 2014 15:15:27 -0800 Subject: 7.8 branch is created, HEAD is now open, and a note on merges In-Reply-To: References: <20140130.100539.175538283497902324.kazu@iij.ad.jp> <8738k5yi55.fsf@gmail.com> <20140130.193157.1138127216634278028.kazu@iij.ad.jp> Message-ID: Hi Austin, Awesome. Builds are underway now. Cheers Luke On Thu, Jan 30, 2014 at 3:03 PM, Austin Seipp wrote: > (Grr, resending to list...) > > Hello all, > > The 7.8 branch is officially ready for RC1 (after some final Haddock > bugs got quickly squashed by Gergo and Mateusz.) 
> > Pali, Luke - this is specifically for you two as you have offered to > make the FreeBSD and iOS builds (Luke - 7.8 should contain both the > fix for __thread and the perf-cross flavor, so it should work out of > the box for you.) > > Attached is a fingerprint file for the GHC repository. You can restore it > with: > > $ ./utils/fingerprint/fingerprint.py restore -f ghc-7.8-rc1.fingerprint > > See here for more details - > > https://ghc.haskell.org/trac/ghc/wiki/Building/GettingTheSources#Trackingthefullrepositorystate > > Alternatively, simply checking out to the 'ghc-7.8' branch will result > in the same thing - no new commits will go in until after RC1: > > $ git clone -b ghc-7.8 git://git.haskell.org/ghc ghc-7.8 > $ cd ghc-7.8 > $ ./sync-all get -b ghc-7.8 --extra --nofib > > Afterwords, just build and make the binaries as you normally would: > > $ ./boot; ./configure > $ make > $ make binary-dist > > Please let me know when the builds are done and somewhere to obtain > them, and I'll upload them to haskell.org for the RC. I'll begin my > builds now too... > > On Thu, Jan 30, 2014 at 5:02 PM, Austin Seipp wrote: > > Hello all, > > > > The 7.8 branch is officially ready for RC1 (after some final Haddock > > bugs got quickly squashed by Gergo and Mateusz.) > > > > Pali, Luke - this is specifically for you two as you have offered to > > make the FreeBSD and iOS builds (Luke - 7.8 should contain both the > > fix for __thread and the perf-cross flavor, so it should work out of > > the box for you.) > > > > Attached is a fingerprint file for the GHC repository. You can restore > it with: > > > > $ ./utils/fingerprint/fingerprint.py restore -f ghc-7.8-rc1.fingerprint > > > > See here for more details - > > > https://ghc.haskell.org/trac/ghc/wiki/Building/GettingTheSources#Trackingthefullrepositorystate > > > > Alternatively, simply checking out to the 'ghc-7.8' branch will result > > in the same thing - no new commits will go in until after RC1: > > > > $ git clone -b ghc-7.8 git://git.haskell.org/ghc ghc-7.8 > > $ cd ghc-7.8 > > $ ./sync-all get -b ghc-7.8 --extra --nofib > > > > Afterwords, just build and make the binaries as you normally would: > > > > $ ./boot; ./configure > > $ make > > $ make binary-dist > > > > Please let me know when the builds are done and somewhere to obtain > > them, and I'll upload them to haskell.org for the RC. I'll begin my > > builds now too... > > > > > > On Thu, Jan 30, 2014 at 4:31 AM, Kazu Yamamoto wrote: > >> Hello Herbert, > >> > >>> Hello Kazu, > >>> > >>> ..as this is a Cabal issue, this needs to be handled upstream; could > you > >>> please file an issue at > >>> > >>> https://github.com/haskell/cabal/issues/new > >> > >> Done. > >> > >> https://github.com/haskell/cabal/issues/1660 > >> > >> --Kazu > >> _______________________________________________ > >> ghc-devs mailing list > >> ghc-devs at haskell.org > >> http://www.haskell.org/mailman/listinfo/ghc-devs > >> > > > > > > > > -- > > Regards, > > Austin - PGP: 4096R/0x91384671 > > > > -- > Regards, > > Austin Seipp, Haskell Consultant > Well-Typed LLP, http://www.well-typed.com/ > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pali.gabor at gmail.com Thu Jan 30 23:48:01 2014 From: pali.gabor at gmail.com (=?ISO-8859-1?Q?P=E1li_G=E1bor_J=E1nos?=) Date: Fri, 31 Jan 2014 00:48:01 +0100 Subject: 7.8 branch is created, HEAD is now open, and a note on merges In-Reply-To: References: <20140130.100539.175538283497902324.kazu@iij.ad.jp> <8738k5yi55.fsf@gmail.com> <20140130.193157.1138127216634278028.kazu@iij.ad.jp> Message-ID: On Fri, Jan 31, 2014 at 12:02 AM, Austin Seipp wrote: > The 7.8 branch is officially ready for RC1 (after some final Haddock > bugs got quickly squashed by Gergo and Mateusz.) Excellent, folks! > Alternatively, simply checking out to the 'ghc-7.8' branch will result > in the same thing - no new commits will go in until after RC1: > > $ git clone -b ghc-7.8 git://git.haskell.org/ghc ghc-7.8 > $ cd ghc-7.8 > $ ./sync-all get -b ghc-7.8 --extra --nofib For some reason, I get this for the last command: == running git clone git://git.haskell.org/libffi-tarballs.git libffi-tarballs -b ghc-7.8 --extra --nofib error: unknown option `extra' From austin at well-typed.com Thu Jan 30 23:50:17 2014 From: austin at well-typed.com (Austin Seipp) Date: Thu, 30 Jan 2014 17:50:17 -0600 Subject: 7.8 branch is created, HEAD is now open, and a note on merges In-Reply-To: References: <20140130.100539.175538283497902324.kazu@iij.ad.jp> <8738k5yi55.fsf@gmail.com> <20140130.193157.1138127216634278028.kazu@iij.ad.jp> Message-ID: Whoops, I typo'd that. You need to specify '--extra --nofib' before the 'get', not after! On Thursday, January 30, 2014, P?li G?bor J?nos wrote: > On Fri, Jan 31, 2014 at 12:02 AM, Austin Seipp > > wrote: > > The 7.8 branch is officially ready for RC1 (after some final Haddock > > bugs got quickly squashed by Gergo and Mateusz.) > > Excellent, folks! > > > Alternatively, simply checking out to the 'ghc-7.8' branch will result > > in the same thing - no new commits will go in until after RC1: > > > > $ git clone -b ghc-7.8 git://git.haskell.org/ghc ghc-7.8 > > $ cd ghc-7.8 > > $ ./sync-all get -b ghc-7.8 --extra --nofib > > For some reason, I get this for the last command: > > == running git clone git://git.haskell.org/libffi-tarballs.git > libffi-tarballs -b ghc-7.8 --extra --nofib > error: unknown option `extra' > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From pali.gabor at gmail.com Fri Jan 31 02:29:33 2014 From: pali.gabor at gmail.com (=?ISO-8859-1?Q?P=E1li_G=E1bor_J=E1nos?=) Date: Fri, 31 Jan 2014 03:29:33 +0100 Subject: 7.8 branch is created, HEAD is now open, and a note on merges In-Reply-To: References: <20140130.100539.175538283497902324.kazu@iij.ad.jp> <8738k5yi55.fsf@gmail.com> <20140130.193157.1138127216634278028.kazu@iij.ad.jp> Message-ID: On Fri, Jan 31, 2014 at 12:02 AM, Austin Seipp wrote: > Please let me know when the builds are done and somewhere to obtain > them, and I'll upload them to haskell.org for the RC. I'll begin my > builds now too... All right, I have put the 32-bit and 64-bit FreeBSD builds here: http://haskell.inf.elte.hu/ghc/ Note that I included the corresponding SHA-256 checksum as well. Also, I composed a brief README for the users on how to install and use the binary distributions. Let me know if there is anything else I can do. 
From lukexipd at gmail.com Fri Jan 31 03:06:04 2014 From: lukexipd at gmail.com (Luke Iannini) Date: Thu, 30 Jan 2014 19:06:04 -0800 Subject: 7.8 branch is created, HEAD is now open, and a note on merges In-Reply-To: References: <20140130.100539.175538283497902324.kazu@iij.ad.jp> <8738k5yi55.fsf@gmail.com> <20140130.193157.1138127216634278028.kazu@iij.ad.jp> Message-ID: And I've placed the iOS simulator and device builds here: https://github.com/ghc-ios/ghc-ios-scripts/releases https://github.com/ghc-ios/ghc-ios-scripts/releases/download/7.8RC1Preview1/ghc-7.8.20140129-arm-apple-ios.tar.bz2 and https://github.com/ghc-ios/ghc-ios-scripts/releases/download/7.8RC1Previewi386/ghc-7.8.20140130-i386-apple-ios.tar.bz2 README coming up shortly. On Thu, Jan 30, 2014 at 6:29 PM, P?li G?bor J?nos wrote: > On Fri, Jan 31, 2014 at 12:02 AM, Austin Seipp wrote: > > Please let me know when the builds are done and somewhere to obtain > > them, and I'll upload them to haskell.org for the RC. I'll begin my > > builds now too... > > All right, I have put the 32-bit and 64-bit FreeBSD builds here: > > http://haskell.inf.elte.hu/ghc/ > > Note that I included the corresponding SHA-256 checksum as well. > Also, I composed a brief README for the users on how to install and > use the binary distributions. > > Let me know if there is anything else I can do. > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lukexipd at gmail.com Fri Jan 31 05:03:22 2014 From: lukexipd at gmail.com (Luke Iannini) Date: Thu, 30 Jan 2014 21:03:22 -0800 Subject: 7.8 branch is created, HEAD is now open, and a note on merges In-Reply-To: References: <20140130.100539.175538283497902324.kazu@iij.ad.jp> <8738k5yi55.fsf@gmail.com> <20140130.193157.1138127216634278028.kazu@iij.ad.jp> Message-ID: Hm, these don't seem to have come together correctly. It looks like "make binary-dist" isn't ready for a stage1/cross-compiler... has anyone tried that before? Maybe someone can see what's wrong from the binaries. I'll start digging in now... Cheers Luke On Thu, Jan 30, 2014 at 7:06 PM, Luke Iannini wrote: > And I've placed the iOS simulator and device builds here: > https://github.com/ghc-ios/ghc-ios-scripts/releases > > > https://github.com/ghc-ios/ghc-ios-scripts/releases/download/7.8RC1Preview1/ghc-7.8.20140129-arm-apple-ios.tar.bz2 > > and > > > https://github.com/ghc-ios/ghc-ios-scripts/releases/download/7.8RC1Previewi386/ghc-7.8.20140130-i386-apple-ios.tar.bz2 > > README coming up shortly. > > > On Thu, Jan 30, 2014 at 6:29 PM, P?li G?bor J?nos wrote: > >> On Fri, Jan 31, 2014 at 12:02 AM, Austin Seipp wrote: >> > Please let me know when the builds are done and somewhere to obtain >> > them, and I'll upload them to haskell.org for the RC. I'll begin my >> > builds now too... >> >> All right, I have put the 32-bit and 64-bit FreeBSD builds here: >> >> http://haskell.inf.elte.hu/ghc/ >> >> Note that I included the corresponding SHA-256 checksum as well. >> Also, I composed a brief README for the users on how to install and >> use the binary distributions. >> >> Let me know if there is anything else I can do. >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From karel.gardas at centrum.cz Fri Jan 31 07:45:29 2014 From: karel.gardas at centrum.cz (Karel Gardas) Date: Fri, 31 Jan 2014 08:45:29 +0100 Subject: 7.8 branch is created, HEAD is now open, and a note on merges In-Reply-To: References: <20140130.100539.175538283497902324.kazu@iij.ad.jp> <8738k5yi55.fsf@gmail.com> <20140130.193157.1138127216634278028.kazu@iij.ad.jp> Message-ID: <52EB5499.8020302@centrum.cz> On 01/31/14 12:03 AM, Austin Seipp wrote: > Afterwords, just build and make the binaries as you normally would: > > $ ./boot; ./configure > $ make > $ make binary-dist > > Please let me know when the builds are done and somewhere to obtain > them, and I'll upload them to haskell.org for the RC. I've done that for i386-solaris2, the build is here: https://app.box.com/s/ppty9fhjvo4p82dcn6bj but you will also need a newer gmp lib installed in /opt. This is provided in https://app.box.com/s/t6tmyiew7jj2yrsgcs67 Thanks! Karel From tkn.akio at gmail.com Fri Jan 31 07:54:04 2014 From: tkn.akio at gmail.com (Akio Takano) Date: Fri, 31 Jan 2014 16:54:04 +0900 Subject: Extending fold/build fusion In-Reply-To: <1390932396.2641.46.camel@kirk> References: <1390932396.2641.46.camel@kirk> Message-ID: Hi Joachim, On Wed, Jan 29, 2014 at 3:06 AM, Joachim Breitner wrote: > Dear Akio, > > Am Freitag, den 03.01.2014, 23:20 +0900 schrieb Akio Takano: >> I have been thinking about how foldl' can be turned into a good >> consumer, and I came up with something that I thought would work. So >> I'd like to ask for opinions from the ghc devs: if this idea looks >> good, if it is a known bad idea, if there is a better way to do it, >> etc. > > I'd like to evaluate your approach, but let me first note that I had > been working on #7994 (make foldl a good consumer), and with my patches > the compiler is smart enough to eta-expand go in all cases covered by > nofib, using the existing foldr/build-fusion. Nice. > > That said, I do like your idea of making the worker/wrapper a bit more > explicit, instead of relying on the compiler to do the transformation > for us. So let's see in what ways your proposal surpasses a smarter GHC. > > > The Tree example is a good one, because there any form of eta expansion, > just as you write, will not help. And I find that that Simons's solution > of using a foldr-based sum for Trees unsatisfying: We should indeed aim > for "sum $ toList tree" to produce good results. Given that Data.Map is > a tree, and that is a common data structure and it's toList a good > producer, this is relevant. I agree. In fact, my original motivation was that I wanted to efficiently serialize a IntMap into a ByteString. > > > Can you implement build via buildW, so that existing code like > "map" [~1] forall f xs. map f xs = build (\c n -> foldr (mapFB c f) n xs) > can be used unmodified? But probably not... but that would mean a > noticeable incompatibility and a burden on library authors using list > fusion. You can implement build in terms of buildW. However any list producer defined using that definition of build would produce good code if the final consumer is a left fold. The resulting code will be in CPS. On the other hand, I imagine that if we also annotate foldl with oneShot, this problem may become less severe. > > > In any case, I suggest you just dig in, create a branch of > libraries/base and replace everything related to foldr/builder with your > approach. First, do not actually change the definition of foldl. 
Then > compare the nofib testruns (probably best with two separate working repo > clones, starting from "make distclean"): Do the results differ? A lot of > work went into foldr/build-fusion, so we want to be sure that we are not > losing anything anywhere (or if we are, we want to know why). > > Then make foldl and foldl' a good consumer, as in the patch at the > beginning of #7994. How large are the gains? How do they compare with > the gains from the smarter GHC (numbers also in the ticket). > > If by then we have not found any regression, things look promising. Thank you for the advice, I'll have a try. - Akio > > Greetings, and I hope the delayed responses do not lesen your > motivation, > Joachim > > PS: I'm subscribed to the mailinglist, no need to CC me explicitly. > > -- > Joachim "nomeata" Breitner > mail at joachim-breitner.de * http://www.joachim-breitner.de/ > Jabber: nomeata at joachim-breitner.de * GPG-Key: 0x4743206C > Debian Developer: nomeata at debian.org From mail at joachim-breitner.de Fri Jan 31 09:18:17 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Fri, 31 Jan 2014 09:18:17 +0000 Subject: Extending fold/build fusion In-Reply-To: References: <1390932396.2641.46.camel@kirk> Message-ID: <1391159897.3184.10.camel@kirk> Dear Akio, Am Freitag, den 31.01.2014, 16:54 +0900 schrieb Akio Takano: > > Can you implement build via buildW, so that existing code like > > "map" [~1] forall f xs. map f xs = build (\c n -> foldr (mapFB c f) n xs) > > can be used unmodified? But probably not... but that would mean a > > noticeable incompatibility and a burden on library authors using list > > fusion. > > You can implement build in terms of buildW. However any list producer > defined using that definition of build would produce good code if the > final consumer is a left fold. The resulting code will be in CPS. On > the other hand, I imagine that if we also annotate foldl with oneShot, > this problem may become less severe. Hmm, I guess my question was not precise enough. Let me rephrase: To what extend can you provide the exsting foldr/build API _without_ losing the advantages of your approach? Or put differently: Could you add a section to the wiki that serves as a migration guide to those who want to port their producers and consumers to your system, without having to fully understand what?s going on? Another thing that would be very interesting: Your framework seems to be quite general: Are there other useful worker-wrapper-transformations that one would possibly want to apply to a fused computations, besides the one that makes foldl work well? Other examples of w/w-transformations in GHC include * Unboxing of parameters * Unboxing of return values, returning multiple values but maybe you can think of other interesting examples. Am I right that the _consumer_ of a fused computation decides which worker-wrapper pair to use? I still quite like the approach, mostly because it does so well for lists. I still have to fully grok it, though :-) Greetings, Joachim -- Joachim Breitner e-Mail: mail at joachim-breitner.de Homepage: http://www.joachim-breitner.de Jabber-ID: nomeata at joachim-breitner.de -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 181 bytes Desc: This is a digitally signed message part URL: From mail at joachim-breitner.de Fri Jan 31 15:17:26 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Fri, 31 Jan 2014 15:17:26 +0000 Subject: Extending fold/build fusion In-Reply-To: <1391159897.3184.10.camel@kirk> References: <1390932396.2641.46.camel@kirk> <1391159897.3184.10.camel@kirk> Message-ID: <1391181446.3184.25.camel@kirk> Dar Akio, I just noticed that even with your approach, the code for foldl-as-foldr is not automatically beautiful. Consider this: I modified the eft function to do to some heavy work at each step (or at least to look like that): myEft :: Int -> Int -> [Int] myEft = \from to -> buildW (myEftFB from to) {-# INLINE myEft #-} expensive :: Int -> Int expensive = (1+) {-# NOINLINE expensive #-} myEftFB :: Int -> Int -> (Wrap f r) -> (Int -> r -> r) -> r -> r myEftFB from to (Wrap wrap unwrap) cons nil = wrap go from nil where go = unwrap $ \i rest -> if i <= to then cons i $ wrap go (expensive i) rest else rest {-# INLINE[0] myEftFB #-} Then I wanted to see if "sum [f..t]" using this code is good: sumUpTo :: Int -> Int -> Int sumUpTo f t = WW.foldl' (+) 0 (myEft f t) And this is the core I get for the inner loop: letrec { $wa :: GHC.Prim.Int# -> GHC.Types.Int -> GHC.Types.Int [LclId, Arity=1, Str=DmdType L] $wa = \ (ww2 :: GHC.Prim.Int#) -> case GHC.Prim.<=# ww2 ww1 of _ { GHC.Types.False -> GHC.Base.id @ GHC.Types.Int; GHC.Types.True -> let { e [Dmd=Just D(L)] :: GHC.Types.Int [LclId, Str=DmdType] e = F.expensive (GHC.Types.I# ww2) } in \ (acc :: GHC.Types.Int) -> case acc of _ { GHC.Types.I# x -> case e of _ { GHC.Types.I# ww3 -> $wa ww3 (GHC.Types.I# (GHC.Prim.+# x ww2)) } } }; } in $wa ww F.sumUpTo1 (GHC 7.6.3, -O). See how it is still building up partial applications. So I am a bit confused now: I thought the (or one) motivation for your proposal is to produce good code in these cases. Or am I using your code wrongly? Greetings, Joachim -- Joachim ?nomeata? Breitner mail at joachim-breitner.de ? http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0x4743206C Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 181 bytes Desc: This is a digitally signed message part URL: From ggreif at gmail.com Fri Jan 31 21:21:09 2014 From: ggreif at gmail.com (Gabor Greif) Date: Fri, 31 Jan 2014 22:21:09 +0100 Subject: [commit: haddock] master: Update tests (18e9417) In-Reply-To: <20140130162318.0CDFB2406B@ghc.haskell.org> References: <20140130162318.0CDFB2406B@ghc.haskell.org> Message-ID: Unless I missed something big style, -XTypeHoles has been renamed to -XTypedHoles and not removed at all. Are you sure this patch is valid? Cheers, Gabor On 1/30/14, git at git.haskell.org wrote: > Repository : ssh://git at git.haskell.org/haddock > > On branch : master > Link : > http://git.haskell.org/haddock.git/commitdiff/18e9417edcda21dd23edf675b41f46ab336d773f > >>--------------------------------------------------------------- > > commit 18e9417edcda21dd23edf675b41f46ab336d773f > Author: Mateusz Kowalczyk > Date: Wed Jan 29 21:41:58 2014 +0000 > > Update tests > > This updates tests due to Haddock Trac #271 fix and due to removal of > TypeHoles as an extension from GHC. 
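The partial applications in that Core are what the textbook foldl-via-foldr
encoding produces whenever the compiler cannot eta-expand the continuation
away. As a rough, generic illustration (this is not the buildW/Wrap code
from the proposal, and the name is made up):

-- A left fold written as a right fold: each element contributes a closure
-- \acc -> k (f acc x), so the accumulator is threaded through a chain of
-- functions instead of a tight loop.
foldlViaFoldr :: (b -> a -> b) -> b -> [a] -> b
foldlViaFoldr f z xs = foldr (\x k acc -> k (f acc x)) id xs z

-- For example:
--   foldlViaFoldr (+) 0 [1,2,3]
--     = (\a1 -> (\a2 -> (\a3 -> id (a3 + 3)) (a2 + 2)) (a1 + 1)) 0
--     = 6
-- Unless the continuation is eta-expanded (the point of the arity/eta
-- analysis and the oneShot annotation mentioned earlier in the thread),
-- those intermediate closures are exactly the partial applications that
-- show up in the Core above.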
> > >>--------------------------------------------------------------- > > 18e9417edcda21dd23edf675b41f46ab336d773f > html-test/ref/Extensions.html | 2 +- > html-test/ref/Test.html | 2 +- > html-test/src/Extensions.hs | 2 +- > 3 files changed, 3 insertions(+), 3 deletions(-) > > diff --git a/html-test/ref/Extensions.html b/html-test/ref/Extensions.html > index 82fd732..382083c 100644 > --- a/html-test/ref/Extensions.html > +++ b/html-test/ref/Extensions.html > @@ -47,7 +47,7 @@ window.onload = function () > {pageLoad();setSynopsis("mini_Extensions.html");}; > > >Extensions > - >TypeHoles + >ExplicitForAll > > >

diff --git a/html-test/ref/Test.html b/html-test/ref/Test.html > index 0214662..bd447ea 100644 > --- a/html-test/ref/Test.html > +++ b/html-test/ref/Test.html > @@ -41,7 +41,7 @@ window.onload = function () > {pageLoad();setSynopsis("mini_Test.html");}; > > >License > - >(c) Simon Marlow 2002 + >BSD-style > > > diff --git a/html-test/src/Extensions.hs b/html-test/src/Extensions.hs > index 6b3535c..61eac21 100644 > --- a/html-test/src/Extensions.hs > +++ b/html-test/src/Extensions.hs > @@ -1,4 +1,4 @@ > -{-# LANGUAGE Haskell2010, TypeHoles, MonomorphismRestriction #-} > +{-# LANGUAGE Haskell2010, ExplicitForAll, MonomorphismRestriction #-} > {-# OPTIONS_HADDOCK show-extensions #-} > module Extensions where > > > _______________________________________________ > ghc-commits mailing list > ghc-commits at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-commits > From hvriedel at gmail.com Fri Jan 31 21:25:34 2014 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Fri, 31 Jan 2014 22:25:34 +0100 Subject: [commit: haddock] master: Update tests (18e9417) In-Reply-To: (Gabor Greif's message of "Fri, 31 Jan 2014 22:21:09 +0100") References: <20140130162318.0CDFB2406B@ghc.haskell.org> Message-ID: <87a9ebq1wx.fsf@gmail.com> On 2014-01-31 at 22:21:09 +0100, Gabor Greif wrote: > Unless I missed something big style, -XTypeHoles has been > renamed to -XTypedHoles and not removed at all. fyi, http://git.haskell.org/ghc.git/commitdiff/235fd88a9a35a6ca1aed70ff71291d7b433e45e4 From ggreif at gmail.com Fri Jan 31 21:30:00 2014 From: ggreif at gmail.com (Gabor Greif) Date: Fri, 31 Jan 2014 22:30:00 +0100 Subject: [commit: haddock] master: Update tests (18e9417) In-Reply-To: <87a9ebq1wx.fsf@gmail.com> References: <20140130162318.0CDFB2406B@ghc.haskell.org> <87a9ebq1wx.fsf@gmail.com> Message-ID: Aha, I understand. Thanks for the hint! Gabor On 1/31/14, Herbert Valerio Riedel wrote: > On 2014-01-31 at 22:21:09 +0100, Gabor Greif wrote: >> Unless I missed something big style, -XTypeHoles has been >> renamed to -XTypedHoles and not removed at all. > > fyi, > http://git.haskell.org/ghc.git/commitdiff/235fd88a9a35a6ca1aed70ff71291d7b433e45e4 > From mark.lentczner at gmail.com Sun Jan 19 23:14:56 2014 From: mark.lentczner at gmail.com (Mark Lentczner) Date: Sun, 19 Jan 2014 23:14:56 -0000 Subject: A modest proposal (re the Platform) Message-ID: Looks like GHC 7.8 is pretty near release. And while I know that we really like to have a GHC out for a while, and perhaps see the .1 release, before we incorporate it into the Platform, this GHC, while including many new and anticipated things, seems pretty well hammered on. Combine that with the now two-month late (all my fault) HP release for 2013.4.0.0 isn't slated to really have all that much new in it, in part because it is the same GHC as the last HP release. Now - it would really look foolish, and taken poorly (methinks) if we release a HP this month - only to have GHC 7.8 release early Feb. Folks would really be head scratching, and wondering about the platform. SO - I'm proposing ditching the now late 2013.4.0.0 (I admit, I'm finding it hard to get excited by it!) and instead move right to putting out 2014.2.0.0 - aimed for mid-March to mid-April. 
This release would have several big changes: - GHC 7.8 - New shake based build for the Platform - Support for validation via package tests - Support for a "server variant" (no OpenGL or other GUI stuff if we had any) - Automated version info w/historical version matrix page - Several significant packages: I'd like to see Aeson at the very least, updated OpenGL stuff I'd also propose changes for the Mac build (though this is obviously independent): - Built from GHC source, not dist. release. (guarantees consistent release) - Only 64bit (I know, controversial...) Thoughts? -------------- next part -------------- An HTML attachment was scrubbed... URL: From johan.tibell at gmail.com Sun Jan 19 23:19:18 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Sun, 19 Jan 2014 23:19:18 -0000 Subject: A modest proposal (re the Platform) In-Reply-To: References: Message-ID: +1 On Jan 19, 2014 3:15 PM, "Mark Lentczner" wrote: > Looks like GHC 7.8 is pretty near release. > > And while I know that we really like to have a GHC out for a while, and > perhaps see the .1 release, before we incorporate it into the Platform, > this GHC, while including many new and anticipated things, seems pretty > well hammered on. > > Combine that with the now two-month late (all my fault) HP release for > 2013.4.0.0 isn't slated to really have all that much new in it, in part > because it is the same GHC as the last HP release. > > Now - it would really look foolish, and taken poorly (methinks) if we > release a HP this month - only to have GHC 7.8 release early Feb. Folks > would really be head scratching, and wondering about the platform. > > SO - I'm proposing ditching the now late 2013.4.0.0 (I admit, I'm finding > it hard to get excited by it!) and instead move right to putting out > 2014.2.0.0 - aimed for mid-March to mid-April. > > This release would have several big changes: > > - GHC 7.8 > - New shake based build for the Platform > - Support for validation via package tests > - Support for a "server variant" (no OpenGL or other GUI stuff if we > had any) > - Automated version info w/historical version matrix page > - Several significant packages: I'd like to see Aeson at the very > least, updated OpenGL stuff > > I'd also propose changes for the Mac build (though this is obviously > independent): > > - Built from GHC source, not dist. release. (guarantees consistent > release) > - Only 64bit (I know, controversial...) > > Thoughts? > > > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://www.haskell.org/mailman/listinfo/libraries > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bob at redivi.com Sun Jan 19 23:20:35 2014 From: bob at redivi.com (Bob Ippolito) Date: Sun, 19 Jan 2014 23:20:35 -0000 Subject: A modest proposal (re the Platform) In-Reply-To: References: Message-ID: +1 I'm just a user, but I'm very excited about the possibility of getting a GHC 7.8 platform release sooner than later (especially considering Mio and the other great additions). Another release with the same GHC wouldn't do me much good. On Sun, Jan 19, 2014 at 3:14 PM, Mark Lentczner wrote: > Looks like GHC 7.8 is pretty near release. > > And while I know that we really like to have a GHC out for a while, and > perhaps see the .1 release, before we incorporate it into the Platform, > this GHC, while including many new and anticipated things, seems pretty > well hammered on. 
> > Combine that with the now two-month late (all my fault) HP release for > 2013.4.0.0 isn't slated to really have all that much new in it, in part > because it is the same GHC as the last HP release. > > Now - it would really look foolish, and taken poorly (methinks) if we > release a HP this month - only to have GHC 7.8 release early Feb. Folks > would really be head scratching, and wondering about the platform. > > SO - I'm proposing ditching the now late 2013.4.0.0 (I admit, I'm finding > it hard to get excited by it!) and instead move right to putting out > 2014.2.0.0 - aimed for mid-March to mid-April. > > This release would have several big changes: > > - GHC 7.8 > - New shake based build for the Platform > - Support for validation via package tests > - Support for a "server variant" (no OpenGL or other GUI stuff if we > had any) > - Automated version info w/historical version matrix page > - Several significant packages: I'd like to see Aeson at the very > least, updated OpenGL stuff > > I'd also propose changes for the Mac build (though this is obviously > independent): > > - Built from GHC source, not dist. release. (guarantees consistent > release) > - Only 64bit (I know, controversial...) > > Thoughts? > > > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://www.haskell.org/mailman/listinfo/libraries > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pali.gabor at gmail.com Mon Jan 20 11:25:43 2014 From: pali.gabor at gmail.com (=?ISO-8859-1?Q?P=E1li_G=E1bor_J=E1nos?=) Date: Mon, 20 Jan 2014 11:25:43 -0000 Subject: A modest proposal (re the Platform) In-Reply-To: References: Message-ID: On Mon, Jan 20, 2014 at 9:29 AM, Andres Löh wrote: > I can understand the motivation of this proposal, but I'm slightly worried: +1 > (2) Simply because GHC 7.8 is itself so long delayed and so full of > new features, I think it's realistic to assume that quite a few > library glitches will appear even after it's released. Also, GHC bugs > may be found only after formal release (despite all the hammering, the > use of GHC pre release isn't quite comparable with the amount of > testing it gets afterwards; IMHO, there might very well be need for a > GHC 7.8.2). I'm all for trying to get an HP based on GHC 7.8 out as > possible, but how soon would that actually happen, realistically? > Sooner than 6 months from now? I do not think so. Waiting a couple of months after 7.8.1 is released to let the dust settle will not hurt anyway. From juhp at community.haskell.org Thu Jan 23 10:13:44 2014 From: juhp at community.haskell.org (Jens Petersen) Date: Thu, 23 Jan 2014 10:13:44 -0000 Subject: A modest proposal (re the Platform) In-Reply-To: References: Message-ID: On 20 January 2014 17:29, Andres Löh wrote: > (2) Simply because GHC 7.8 is itself so long delayed and so full of > new features, I think it's realistic to assume that quite a few > library glitches will appear even after it's released. Also, GHC bugs > may be found only after formal release (despite all the hammering, the > use of GHC pre release isn't quite comparable with the amount of > testing it gets afterwards; IMHO, there might very well be need for a > GHC 7.8.2). I'm all for trying to get an HP based on GHC 7.8 out as > possible, but how soon would that actually happen, realistically? > I know I already said "+1" but this is also a very valid standpoint. I guess the question is how long does HP want to wait for a stable ghc-7.8 release? 
There were already a number of important library updates planned for the
HP release with ghc-7.6.3 - doing things more incrementally is also good,
I believe. Also, as Andres mentioned, in view of Mavericks.

I know it is difficult but if ghc and haskell-platform could align
their schedules better then things might be easier to plan in the future.

Jens
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From svenpanne at gmail.com Thu Jan 23 10:26:39 2014
From: svenpanne at gmail.com (Sven Panne)
Date: Thu, 23 Jan 2014 10:26:39 -0000
Subject: A modest proposal (re the Platform)
In-Reply-To: 
References: 
Message-ID: 

Just a quick +1 for including GHC 7.8 in the next HP release.
Regarding compiler features, shipping GHC 7.6.3 again would mean that
the HP is still roughly at September 2012 (the first release of GHC
7.6.x). Furthermore, I don't fully buy into the argument that we
should wait for 7.8 to stabilize: Power users will use something near
HEAD, anyway, almost all other users will probably use the HP.

From carter.schonwald at gmail.com Thu Jan 23 13:23:32 2014
From: carter.schonwald at gmail.com (Carter Schonwald)
Date: Thu, 23 Jan 2014 13:23:32 -0000
Subject: A modest proposal (re the Platform)
In-Reply-To: 
References: 
Message-ID: 

Indeed. Perhaps more importantly: many long standing problems, relating
to how ghci linking works on every major platform, and having win64
support, look to be resolved in 7.8. These are HUGE.

Additionally, the cpp issue that's a bother on OS X and BSD systems goes
away for the HP if the next release is using 7.8 (especially if a wee
patch I wrote this week to kill the problem for good gets merged in).

On Thursday, January 23, 2014, Sven Panne wrote:

> Just a quick +1 for including GHC 7.8 in the next HP release.
> Regarding compiler features, shipping GHC 7.6.3 again would mean that
> the HP is still roughly at September 2012 (the first release of GHC
> 7.6.x). Furthermore, I don't fully buy into the argument that we
> should wait for 7.8 to stabilize: Power users will use something near
> HEAD, anyway, almost all other users will probably use the HP.
>
> _______________________________________________
> Haskell-platform mailing list
> Haskell-platform at projects.haskell.org
> http://projects.haskell.org/cgi-bin/mailman/listinfo/haskell-platform
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From juhp at community.haskell.org Fri Jan 24 04:04:39 2014
From: juhp at community.haskell.org (Jens Petersen)
Date: Fri, 24 Jan 2014 04:04:39 -0000
Subject: A modest proposal (re the Platform)
In-Reply-To: 
References: 
Message-ID: 

>
> I know it is difficult but if ghc and haskell-platform could align
> their schedules better then things might be easier to plan in the future.
>

Really I would like to see a HP release now *and* one after 7.8.1! :)
I think HP beta releases should follow each ghc release
and there can be additional point release updates as needed
between major ghc releases. The stable HP release would come
from the latest stable ghc release. So ideally we could have an
updated stable release based on 7.6.3 and an alpha/beta release
after 7.8.1 is released. For such pre-releases the binaries do not
have to be ready on the release day just a source tarball.
RCs with binaries once tested could be promoted to stable releases.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From carter.schonwald at gmail.com Fri Jan 24 05:25:30 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Fri, 24 Jan 2014 05:25:30 -0000 Subject: A modest proposal (re the Platform) In-Reply-To: References: Message-ID: Jens, are you willing to undertake providing support for all the problems in current 7.6 on ALL platforms right now? Are you willing to test the builds on all platforms, and be the person to help everyone who's hitting issues? doing an HP release now will push back the release timeline of an HP version with 7.8 that has full first class support for 10.9 clang quirks + the win64 fixes that are landing in head this past week, along with a whole slew of other ecosystem amazing improvements. I suspect the next HP will be using 7.8.2, because certain final steps of the windows fixes are slated for that release, though maybe things'll move up and get fixed in 7.8.1. I could be wrong mind you. I'm *quite super duper happy* that the next HP will be 7.8. On Thu, Jan 23, 2014 at 11:04 PM, Jens Petersen wrote: > I know it is difficult but if ghc and haskell-platform could align >> their schedules better then things might be easier to plan in the future. >> > > Really I would like to see a HP release now *and* one after 7.8.1! :) > I think HP beta releases should follow each ghc release > and there can be additional point release updates as needed > between major ghc releases. The stable HP release would come > from the latest stable ghc release. So ideally we could have an > updated stable release based on 7.6.3 and an alpha/beta release > after 7.8.1 is released. For such pre-releases the binaries do not > have to be ready on the release day just a source tarball. > RCs with binaries once tested could be promoted to stable releases. > > _______________________________________________ > Haskell-platform mailing list > Haskell-platform at projects.haskell.org > http://projects.haskell.org/cgi-bin/mailman/listinfo/haskell-platform > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark.lentczner at gmail.com Fri Jan 24 06:49:38 2014 From: mark.lentczner at gmail.com (Mark Lentczner) Date: Fri, 24 Jan 2014 06:49:38 -0000 Subject: A modest proposal (re the Platform) In-Reply-To: References: Message-ID: A few specific points: 1) 2013.4.0.0 isn't really "ready to be pushed" - there were delays, and then some rolling updates... and some churn. While there is a proposed set of packages... and it does compile... there is still some work on the Mac version (it needs to incorporate my patch script for Mavericks). 2) If we roll out 2013.4.0.0 - that will mean a fair bit of work for all the packagers... and they (and I) won't be up for doing it again for a few months. 3) For the Mac release, I've really shied away from solutions that have people install a second C compiler. While some solutions for Mavericks had people installing gcc from macports or the like, I think we are better served with a solution that works with the default tool chain for the platform. I have no experience with FreeBSD, but I would think similar considerations apply (though at least there, everyone has ports.) 4) Stability in both GHC and the library eco-system seems (perhaps subjectively) more stable to me now than it did three/four years ago. In particular, many of the package maintainers for packages in the platform are already ready for the 7.8 release. 
Further, several important packages (text, aeson, cabal) work best with
newer versions of core packages (which will be in 7.8) and are a bit hacky
when working with the core shipped with 7.6.

All in all, I'm still seeing this discussion coming down strongly in favor
of delaying for 7.8. Further, I believe everyone involved so far is on
board with the stability aims of the platform.

- Mark
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 