From marlowsd at gmail.com Fri Aug 1 10:47:20 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Fri, 01 Aug 2014 11:47:20 +0100 Subject: Changing the -package dependency resolution algorithm In-Reply-To: <1406816361-sup-7168@sabre> References: <1406210353-sup-3241@sabre> <53D9F604.1000707@gmail.com> <1406816361-sup-7168@sabre> Message-ID: <53DB7038.80204@gmail.com> On 31/07/2014 15:31, Edward Z. Yang wrote: >> We need to rethink the shadowing behaviour. It is designed to handle >> the case where we have the same PackageId (name + version) in two >> different DBs (e.g. global and local). Shadowing takes the topmost one >> of these (e.g. local, or rightmost -package-db flag). We can relax this >> requirement so long as the InstalledPackageIds are different, but what >> if the InstalledPackageIds are the same? Right now that's OK, because >> identical InstalledPackageIds implies identical ABIs, but if we change >> that so that InstalledPackageId is derived from the source and not the >> ABI, we would not be able to assume that two identical >> InstalledPackageIds are compatible. > > I talked to Duncan about this, and he's asserted that under a Nix-like > model, equal InstalledPackageIds really would imply identical ABIs. > I think this would be pretty hard to achieve reliably (and if we get > it wrong, segfault); Simon, perhaps you would know more about this. Right now this is ok, because InstalledPackageIds contain the ABI, but the proposal we were going to follow (I believe) was to have InstalledPackageIds contain a hash of the source code. Since GHC compilation is (still) not deterministic, identical source hashes does not imply identical ABIs. It would be ok if the InstalledPackageId was Hash(Source,ABI), though. (or was there some reason we had to know the InstalledPackageId before compilation?) 
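[Editor's note: the distinction Simon draws here can be made concrete with a toy model. This is only a sketch — the `Package` record, its field names, and the stand-in hash function below are invented for illustration; the real proposal would use a proper cryptographic hash over actual source and ABI descriptions.]

```haskell
import Data.Bits (xor)
import Data.Char (ord)
import Data.List (foldl')

-- Toy stand-in for a real hash function (FNV-1a style, illustrative only).
hashStr :: String -> Int
hashStr = foldl' (\h c -> (h `xor` ord c) * 16777619) 2166136261

-- A hypothetical installed package: its source text, and the ABI the
-- (non-deterministic) compiler happened to produce for it.
data Package = Package { source :: String, abi :: String }

-- Two builds of the same source: identical source text, but compilation
-- non-determinism gave them different ABIs.
build1, build2 :: Package
build1 = Package "f x = x + 1" "abi-0xAAAA"
build2 = Package "f x = x + 1" "abi-0xBBBB"

-- InstalledPackageId as a hash of the source alone: the two
-- ABI-incompatible builds get the same id.
idFromSource :: Package -> Int
idFromSource p = hashStr (source p)

-- Hash(Source, ABI), as suggested above: equal ids now imply equal ABIs.
idFromSourceAndAbi :: Package -> Int
idFromSourceAndAbi p = hashStr (source p ++ ":" ++ abi p)
```

In this model `idFromSource build1 == idFromSource build2` even though the builds are ABI-incompatible, while `idFromSourceAndAbi` tells them apart — which is exactly why equal source-only ids cannot safely be assumed compatible.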
> Duncan (and shout if I understand wrong) is also keen on abolishing the > shadowing algorithm completely when we have package environments and > making a package environment mandatory (defaulting to a global package > environment when nothing available.) The reason for this is that in a > Nix model, we mostly abolish package database stacks and have a single > global package database, which all packages are chucked into. In this > case, the current shadowing really has no idea how to pick between two > packages which shadow each other in the same database. Yes, and I agree with Duncan, provided identical InstalledPackageId implies identical ABI. Cheers, Simon From ezyang at mit.edu Fri Aug 1 17:47:14 2014 From: ezyang at mit.edu (Edward Z. Yang) Date: Fri, 01 Aug 2014 18:47:14 +0100 Subject: Failing ASSERT in ghci044 and ghci047 Message-ID: <1406915224-sup-3729@sabre> CC'd Simon because you were touching these test-cases recently. You'll need to run with -DDEBUG, which is probably why validate didn't catch these. Maybe the ASSERT is out of date? =====> ghci044(ghci) 1719 of 4065 [0, 0, 0] [72/1822] cd ./ghci/scripts && HC='/home/hs01/ezyang/ghc-validate/inplace/bin/ghc-stage2' HC_OPTS='-dcore-lint - dcmm-lint -dno-debug-output -no-user-package-db -rtsopts -fno-ghci-history ' '/home/hs01/ezyang/ghc-va lidate/inplace/bin/ghc-stage2' --interactive -v0 -ignore-dot-ghci -dcore-lint -dcmm-lint -dno-debug-ou tput -no-user-package-db -rtsopts -fno-ghci-history ghci044.run.stdout 2>ghci044. run.stderr Actual stderr output differs from expected: --- ./ghci/scripts/ghci044.stderr 2014-07-31 11:00:16.433141666 -0700 +++ ./ghci/scripts/ghci044.run.stderr 2014-08-01 10:38:17.352234466 -0700 @@ -6,3 +6,12 @@ instance C a => C [a] -- Defined at :8:10 In the expression: f [4 :: Int] In an equation for ?it?: it = f [4 :: Int] +*** Exception: ASSERT failed! file compiler/ghci/Linker.lhs, line 907 +*** Exception: ASSERT failed! 
file compiler/ghci/Linker.lhs, line 907 +*** Exception: ASSERT failed! file compiler/ghci/Linker.lhs, line 907 +*** Exception: ASSERT failed! file compiler/ghci/Linker.lhs, line 907 + +:15:1: + No instance for (C Bool) arising from a use of ‘f’ + In the expression: f [True] + In an equation for ‘it’: it = f [True] Actual stdout output differs from expected: =====> ghci047(ghci) 1723 of 4065 [0, 1, 0] cd ./ghci/scripts && HC='/home/hs01/ezyang/ghc-validate/inplace/bin/ghc-stage2' HC_OPTS='-dcore-lint -dcmm-lint -dno-debug-output -no-user-package-db -rtsopts -fno-ghci-history ' '/home/hs01/ezyang/ghc-validate/inplace/bin/ghc-stage2' --interactive -v0 -ignore-dot-ghci -dcore-lint -dcmm-lint -dno-debug-output -no-user-package-db -rtsopts -fno-ghci-history ghci047.run.stdout 2>ghci047.run.stderr Actual stderr output differs from expected: --- ./ghci/scripts/ghci047.stderr 2014-05-28 15:38:19.608946057 -0700 +++ ./ghci/scripts/ghci047.run.stderr 2014-08-01 10:38:17.658906746 -0700 @@ -1,16 +1,14 @@ +*** Exception: ASSERT failed! file compiler/ghci/Linker.lhs, line 907 +*** Exception: ASSERT failed! file compiler/ghci/Linker.lhs, line 907 Cheers, Edward --- End forwarded message --- From mail at joachim-breitner.de Fri Aug 1 17:48:15 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Fri, 01 Aug 2014 19:48:15 +0200 Subject: Build broken: Forall'd constraint =?UTF-8?Q?=E2=80=98Ix?= =?UTF-8?Q?_i=E2=80=99?= is not bound in RULE lhs Message-ID: <1406915295.31085.1.camel@joachim-breitner.de> Hi SPJ, it seems as if your latest commit “Improve the desugaring of RULES, esp those from SPECIALISE pragmas” broke the build (when building with -Werror and -dcore-lint): libraries/array/Data/Array/Base.hs:476:1: Warning: Forall'd constraint ‘Ix i’ is not bound in RULE lhs showsIArray @ UArray @ i @ e $dIArray_a4L0 $dIx_a4L9 $dShow_a4L2 $dShow_a4L3 : Failing due to -Werror. 
make[1]: *** [libraries/array/dist-install/build/Data/Array/Base.o] Error 1 Full build log at https://api.travis-ci.org/jobs/31427050/log.txt?deansi=true Greetings, Joachim -- Joachim “nomeata” Breitner mail at joachim-breitner.de • http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de • GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From simonpj at microsoft.com Fri Aug 1 18:29:59 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 1 Aug 2014 18:29:59 +0000 Subject: =?utf-8?B?UkU6IEJ1aWxkIGJyb2tlbjogIEZvcmFsbCdkIGNvbnN0cmFpbnQg4oCYSXgg?= =?utf-8?B?aeKAmSBpcyBub3QgYm91bmQgaW4gUlVMRSBsaHM=?= In-Reply-To: <1406915295.31085.1.camel@joachim-breitner.de> References: <1406915295.31085.1.camel@joachim-breitner.de> Message-ID: <618BE556AADD624C9C918AA5D5911BEF22086158@DB3PRD3001MB020.064d.mgd.msft.net> Oh bother, my fault. Fix coming as soon as validate finishes! Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Joachim | Breitner | Sent: 01 August 2014 18:48 | To: ghc-devs | Subject: Build broken: Forall'd constraint ‘Ix i’ is not bound in RULE | lhs | | Hi SPJ, | | it seems as if your latest commit | “Improve the desugaring of RULES, esp those from SPECIALISE pragmas” | broke the build (when building with -Werror and -dcore-lint): | | libraries/array/Data/Array/Base.hs:476:1: Warning: | Forall'd constraint ‘Ix i’ is not bound in RULE lhs | showsIArray | @ UArray @ i @ e $dIArray_a4L0 $dIx_a4L9 $dShow_a4L2 $dShow_a4L3 | | : | Failing due to -Werror. | make[1]: *** [libraries/array/dist-install/build/Data/Array/Base.o] | Error 1 | | Full build log at | https://api.travis-ci.org/jobs/31427050/log.txt?deansi=true | | Greetings, | Joachim | | -- | Joachim “nomeata” 
Breitner | mail at joachim-breitner.de ? http://www.joachim-breitner.de/ | Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F | Debian Developer: nomeata at debian.org From simonpj at microsoft.com Fri Aug 1 18:38:12 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 1 Aug 2014 18:38:12 +0000 Subject: Failing ASSERT in ghci044 and ghci047 In-Reply-To: <1406914719-sup-3160@sabre> References: <1406914719-sup-3160@sabre> Message-ID: <618BE556AADD624C9C918AA5D5911BEF220861B1@DB3PRD3001MB020.064d.mgd.msft.net> Thanks. These are tests that over-ride one instance declaration with another, something that really wasn't working before. I have no idea what is going on in Linker.hs It's the weekend so I'm not going to have a chance to look at this for a bit -- and oddly it seems to work anyway. But asserts should not fail. If someone had time to make it an ASSERT2 and print out the relevant entrails (toplev_only, nms, and the context of ce_in (not the HValue component, obviously)), that would be helpful. Simon | -----Original Message----- | From: Edward Z.Yang [mailto:ezyang at cs.stanford.edu] | Sent: 01 August 2014 18:41 | To: ghc-devs | Cc: Simon Peyton Jones | Subject: Failing ASSERT in ghci044 and ghci047 | | CC'd Simon because you were touching these test-cases recently. | | You'll need to run with -DDEBUG, which is probably why validate didn't | catch these. Maybe the ASSERT is out of date? | | =====> ghci044(ghci) 1719 of 4065 [0, 0, 0] | [72/1822] | cd ./ghci/scripts && HC='/home/hs01/ezyang/ghc-validate/inplace/bin/ghc- | stage2' HC_OPTS='-dcore-lint - | dcmm-lint -dno-debug-output -no-user-package-db -rtsopts -fno-ghci- | history ' '/home/hs01/ezyang/ghc-va | lidate/inplace/bin/ghc-stage2' --interactive -v0 -ignore-dot-ghci -dcore- | lint -dcmm-lint -dno-debug-ou | tput -no-user-package-db -rtsopts -fno-ghci-history ghci044.run.stdout 2>ghci044. 
| run.stderr | Actual stderr output differs from expected: | --- ./ghci/scripts/ghci044.stderr 2014-07-31 11:00:16.433141666 - | 0700 | +++ ./ghci/scripts/ghci044.run.stderr 2014-08-01 10:38:17.352234466 - | 0700 | @@ -6,3 +6,12 @@ | instance C a => C [a] -- Defined at :8:10 | In the expression: f [4 :: Int] | In an equation for ?it?: it = f [4 :: Int] | +*** Exception: ASSERT failed! file compiler/ghci/Linker.lhs, line 907 | +*** Exception: ASSERT failed! file compiler/ghci/Linker.lhs, line 907 | +*** Exception: ASSERT failed! file compiler/ghci/Linker.lhs, line 907 | +*** Exception: ASSERT failed! file compiler/ghci/Linker.lhs, line 907 | + | +:15:1: | + No instance for (C Bool) arising from a use of ?f? | + In the expression: f [True] | + In an equation for ?it?: it = f [True] | Actual stdout output differs from expected: | | =====> ghci047(ghci) 1723 of 4065 [0, 1, 0] | cd ./ghci/scripts && HC='/home/hs01/ezyang/ghc-validate/inplace/bin/ghc- | stage2' HC_OPTS='-dcore-lint - | dcmm-lint -dno-debug-output -no-user-package-db -rtsopts -fno-ghci- | history ' '/home/hs01/ezyang/ghc-va | lidate/inplace/bin/ghc-stage2' --interactive -v0 -ignore-dot-ghci -dcore- | lint -dcmm-lint -dno-debug-ou | tput -no-user-package-db -rtsopts -fno-ghci-history ghci047.run.stdout 2>ghci047. | run.stderr | Actual stderr output | Actual stderr output differs from expected: | --- ./ghci/scripts/ghci047.stderr 2014-05-28 15:38:19.608946057 - | 0700 | +++ ./ghci/scripts/ghci047.run.stderr 2014-08-01 10:38:17.658906746 - | 0700 | @@ -1,16 +1,14 @@ | +*** Exception: ASSERT failed! file compiler/ghci/Linker.lhs, line 907 | +*** Exception: ASSERT failed! 
file compiler/ghci/Linker.lhs, line 907 | | | Cheers, | Edward From simonpj at microsoft.com Fri Aug 1 20:27:39 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 1 Aug 2014 20:27:39 +0000 Subject: =?utf-8?B?UkU6IEJ1aWxkIGJyb2tlbjogIEZvcmFsbCdkIGNvbnN0cmFpbnQg4oCYSXgg?= =?utf-8?B?aeKAmSBpcyBub3QgYm91bmQgaW4gUlVMRSBsaHM=?= In-Reply-To: <1406915295.31085.1.camel@joachim-breitner.de> References: <1406915295.31085.1.camel@joachim-breitner.de> Message-ID: <618BE556AADD624C9C918AA5D5911BEF220862EF@DB3PRD3001MB020.064d.mgd.msft.net> OK fixed I think. Apologies. Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Joachim | Breitner | Sent: 01 August 2014 18:48 | To: ghc-devs | Subject: Build broken: Forall'd constraint ‘Ix i’ is not bound in RULE | lhs | | Hi SPJ, | | it seems as if your latest commit | “Improve the desugaring of RULES, esp those from SPECIALISE pragmas” | broke the build (when building with -Werror and -dcore-lint): | | libraries/array/Data/Array/Base.hs:476:1: Warning: | Forall'd constraint ‘Ix i’ is not bound in RULE lhs | showsIArray | @ UArray @ i @ e $dIArray_a4L0 $dIx_a4L9 $dShow_a4L2 $dShow_a4L3 | | : | Failing due to -Werror. | make[1]: *** [libraries/array/dist-install/build/Data/Array/Base.o] | Error 1 | | Full build log at | https://api.travis-ci.org/jobs/31427050/log.txt?deansi=true | | Greetings, | Joachim | | -- | Joachim “nomeata” Breitner | mail at joachim-breitner.de • http://www.joachim-breitner.de/ | Jabber: nomeata at joachim-breitner.de • 
GPG-Key: 0xF0FBF51F | Debian Developer: nomeata at debian.org From simonpj at microsoft.com Fri Aug 1 20:28:31 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 1 Aug 2014 20:28:31 +0000 Subject: [commit: ghc] master: Bump haddock.base max_bytes_used (8df7fea) In-Reply-To: <20140801175727.61A0A240EA@ghc.haskell.org> References: <20140801175727.61A0A240EA@ghc.haskell.org> Message-ID: <618BE556AADD624C9C918AA5D5911BEF22086308@DB3PRD3001MB020.064d.mgd.msft.net> Urk. It's quite surprising that this particular change would increase allocation significantly. I wonder whether it just pushed it over the threshold. Simon | -----Original Message----- | From: ghc-commits [mailto:ghc-commits-bounces at haskell.org] On Behalf Of | git at git.haskell.org | Sent: 01 August 2014 18:57 | To: ghc-commits at haskell.org | Subject: [commit: ghc] master: Bump haddock.base max_bytes_used (8df7fea) | | Repository : ssh://git at git.haskell.org/ghc | | On branch : master | Link : | http://ghc.haskell.org/trac/ghc/changeset/8df7fea7cf8a32d54ac3d6772432273 | 8595bf421/ghc | | >--------------------------------------------------------------- | | commit 8df7fea7cf8a32d54ac3d67724322738595bf421 | Author: Joachim Breitner | Date: Fri Aug 1 19:55:52 2014 +0200 | | Bump haddock.base max_bytes_used | | It has reliably increased with commit 1ae5fa45, and has been stable | since then, so it does not seem to be a fluke. I did not investigate | why | that commit might have increased this value. 
| | | >--------------------------------------------------------------- | | 8df7fea7cf8a32d54ac3d67724322738595bf421 | testsuite/tests/perf/haddock/all.T | 13 +++++++------ | 1 file changed, 7 insertions(+), 6 deletions(-) | | diff --git a/testsuite/tests/perf/haddock/all.T | b/testsuite/tests/perf/haddock/all.T | index b17d472..49321b9 100644 | --- a/testsuite/tests/perf/haddock/all.T | +++ b/testsuite/tests/perf/haddock/all.T | @@ -17,13 +17,14 @@ test('haddock.base', | # 2014-01-22: 168 (x86/Linux - new haddock) | # 2014-06-29: 156 (x86/Linux) | ,stats_num_field('max_bytes_used', | - [(wordsize(64), 115113864, 10) | - # 2012-08-14: 87374568 (amd64/Linux) | - # 2012-08-21: 86428216 (amd64/Linux) | - # 2012-09-20: 84794136 (amd64/Linux) | - # 2012-11-12: 87265136 (amd64/Linux) | - # 2013-01-29: 96022312 (amd64/Linux) | + [(wordsize(64), 127954488, 10) | + # 2012-08-14: 87374568 (amd64/Linux) | + # 2012-08-21: 86428216 (amd64/Linux) | + # 2012-09-20: 84794136 (amd64/Linux) | + # 2012-11-12: 87265136 (amd64/Linux) | + # 2013-01-29: 96022312 (amd64/Linux) | # 2013-10-18: 115113864 (amd64/Linux) | + # 2014-07-31: 127954488 (amd64/Linux), correlates with | 1ae5fa45 | ,(platform('i386-unknown-mingw32'), 58557136, 10) | # 2013-02-10: 47988488 (x86/Windows) | # 2013-11-13: 58557136 (x86/Windows, | 64bit machine) | | _______________________________________________ | ghc-commits mailing list | ghc-commits at haskell.org | http://www.haskell.org/mailman/listinfo/ghc-commits From mail at joachim-breitner.de Fri Aug 1 21:20:18 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Fri, 01 Aug 2014 23:20:18 +0200 Subject: Build broken: Forall'd constraint =?UTF-8?Q?=E2=80=98Ix?= =?UTF-8?Q?_i=E2=80=99?= is not bound in RULE lhs In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF220862EF@DB3PRD3001MB020.064d.mgd.msft.net> References: <1406915295.31085.1.camel@joachim-breitner.de> <618BE556AADD624C9C918AA5D5911BEF220862EF@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: 
<1406928018.6407.0.camel@joachim-breitner.de> Hi, Am Freitag, den 01.08.2014, 20:27 +0000 schrieb Simon Peyton Jones: > OK fixed I think. Apologies. looks good, all green on https://travis-ci.org/ghc/ghc/builds/31457663 Thanks, Joachim -- Joachim Breitner e-Mail: mail at joachim-breitner.de Homepage: http://www.joachim-breitner.de Jabber-ID: nomeata at joachim-breitner.de -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From mail at joachim-breitner.de Fri Aug 1 21:27:06 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Fri, 01 Aug 2014 23:27:06 +0200 Subject: [commit: ghc] master: Bump haddock.base max_bytes_used (8df7fea) In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF22086308@DB3PRD3001MB020.064d.mgd.msft.net> References: <20140801175727.61A0A240EA@ghc.haskell.org> <618BE556AADD624C9C918AA5D5911BEF22086308@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <1406928426.6407.2.camel@joachim-breitner.de> Hi, Am Freitag, den 01.08.2014, 20:28 +0000 schrieb Simon Peyton Jones: > Urk. It's quite surprising that this particular change would increase allocation significantly. > I wonder whether it just pushed it over the threshold. 
I?m confident it was not just that: ~/logs $ fgrep 'Deviation haddock.base(normal) max_bytes_used' $(cd ghc-master; git log --oneline --first-parent db19c665ec5055c2193b2174519866045aeff09a..HEAD | cut -d\ -f1| (cd ..; while read x ; do test -e $x.log && echo $x.log; done) |tac )|tail -n 25 6fa6caa.log: Deviation haddock.base(normal) max_bytes_used: 2.2 % a0ff1eb.log: Deviation haddock.base(normal) max_bytes_used: -1.0 % 0be7c2c.log: Deviation haddock.base(normal) max_bytes_used: 2.2 % dc7d3c2.log: Deviation haddock.base(normal) max_bytes_used: 2.2 % 7381cee.log: Deviation haddock.base(normal) max_bytes_used: 2.2 % fe2d807.log: Deviation haddock.base(normal) max_bytes_used: 2.2 % bfaa179.log: Deviation haddock.base(normal) max_bytes_used: -0.9 % 1ae5fa4.log: Deviation haddock.base(normal) max_bytes_used: 11.0 % c97f853.log: Deviation haddock.base(normal) max_bytes_used: 11.0 % fd47e26.log: Deviation haddock.base(normal) max_bytes_used: 11.2 % bdf0ef0.log: Deviation haddock.base(normal) max_bytes_used: 11.1 % 58ed1cc.log: Deviation haddock.base(normal) max_bytes_used: 11.0 % 1c1ef82.log: Deviation haddock.base(normal) max_bytes_used: 11.2 % 52188ad.log: Deviation haddock.base(normal) max_bytes_used: 11.0 % 3b9fe0c.log: Deviation haddock.base(normal) max_bytes_used: 11.2 % 6483b8a.log: Deviation haddock.base(normal) max_bytes_used: 11.0 % 9d9a554.log: Deviation haddock.base(normal) max_bytes_used: 11.2 % 028630a.log: Deviation haddock.base(normal) max_bytes_used: 11.2 % aab5937.log: Deviation haddock.base(normal) max_bytes_used: 11.0 % 6c06db1.log: Deviation haddock.base(normal) max_bytes_used: 11.0 % 2989ffd.log: Deviation haddock.base(normal) max_bytes_used: 11.1 % d4d4bef.log: Deviation haddock.base(normal) max_bytes_used: 11.2 % 8df7fea.log: Deviation haddock.base(normal) max_bytes_used: -0.0 % 3faff73.log: Deviation haddock.base(normal) max_bytes_used: -0.0 % 02975c9.log: Deviation haddock.base(normal) max_bytes_used: -0.1 % (If this were a bytes_allocated test 
I could also show you nice graphs like http://ghcspeed-nomeata.rhcloud.com/timeline/?exe=2&base=2%2B68&ben=tests%2Falloc%2FT6048&env=1&revs=50&equid=on but I didn't add the max_bytes_used tests yet.) Interestingly, bytes_allocated did not change a bit! Greetings, Joachim -- Joachim “nomeata” Breitner mail at joachim-breitner.de • http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de • GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From d at davidterei.com Sat Aug 2 00:30:50 2014 From: d at davidterei.com (David Terei) Date: Fri, 1 Aug 2014 17:30:50 -0700 Subject: Overlapping and incoherent instances In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF2207B3A1@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF2207B3A1@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: This is great! It's nice to see something finer-grained. We'd of course like to bring Safe Haskell into the picture though! Our concern with the new design as it stands is the OVERLAPS flag. We'd prefer it to be eliminated in favor of requiring developers to specify both OVERLAPPABLE and OVERLAPS if that truly is their intention. # Why? ## Long Version https://ghc.haskell.org/trac/ghc/wiki/SafeHaskell/NewOverlappingInstances#NewOverlappingInstances--a.k.aInstanceSpecificPragmas ## Short Version (kind-of) The security implications of OVERLAPPABLE vs. OVERLAPPING are fairly different. Remember, in Safe Haskell we apply a policy of only allowing instances from a module M compiled with `-XSafe` to overlap other instances from module M. If it overlaps (and is the most specific overlap) instances from modules other than M then we don't allow this to succeed. 
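[Editor's note: the Safe Haskell overlap policy described above, together with the proposed relaxation in the next paragraphs, can be written down as a small predicate. The `Instance` record and all names below are invented for illustration — this is a toy model of the rule being argued for, not GHC's actual representation.]

```haskell
-- Overlap pragma attached to an instance, per the new per-instance pragmas.
data Overlap = Overlappable | Overlapping | NoPragma
  deriving (Eq, Show)

-- A hypothetical view of an instance: where it was defined, whether its
-- module was compiled with -XSafe, and its overlap pragma.
data Instance = Instance
  { definingModule :: String
  , fromSafeModule :: Bool
  , overlapMode    :: Overlap
  } deriving Show

-- May 'new', chosen as the most specific instance, overlap 'old'?
safeOverlapOK :: Instance -> Instance -> Bool
safeOverlapOK new old
  | not (fromSafeModule new)                 = True   -- trusted code: unrestricted
  | definingModule new == definingModule old = True   -- same module: always allowed
  | overlapMode old == Overlappable          = True   -- proposed relaxation: open instance
  | otherwise                                = False  -- OVERLAPPING elsewhere: closed
```

Under this model an untrusted `-XSafe` instance may override a library's OVERLAPPABLE (open) instance but not its OVERLAPPING (closed) one — the open-vs-closed distinction David argues for below.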
This is done to ensure that untrusted code compiled with `-XSafe` can't alter the behavior of existing code, some of which may be part of the TCB and security critical. Bringing the new finer-grained pragmas into the story we get the following: * OVERLAPPABLE is the programmer communicating that they can be overlapped, an open instance if you will. We want to relax the above restriction and allow instances from `-XSafe` modules to overlap instances from their own module AND instances declared OVERLAPPABLE that reside in any module. * OVERLAPPING is the programmer simply declaring they may overlap less specific instances. We want to keep the above restriction for these instances. That is, an instance I1 from a `-XSafe` module M won't be able to overlap, as the most specific instance, an instance I2 from another module if I2 is marked as OVERLAPPING. This distinction enables new encodings in Safe Haskell by allowing security library authors to distinguish how untrusted code can overlap their instances. In some way giving them open vs closed instances. This distinction is subtle and important. Having a pragma OVERLAPS that implies both glosses over this and will encourage developers to use this without much thought. ## Safe Inference We can also safely infer a module that only has OVERLAPPABLE instances as safe, while ones that contain OVERLAPPING or OVERLAPS instances must be regarded as unsafe since there is a difference in semantics of these pragmas under Safe vs Unsafe. So we also have an advantage if developers are more specific about what they want, than just defaulting to OVERLAPS. Cheers, David On 29 July 2014 02:11, Simon Peyton Jones wrote: > Friends > > One of GHC's more widely-used features is overlapping (and sometimes > incoherent) instances. The user-manual documentation is here. > > The use of overlapping/incoherent instances is controlled by LANGUAGE > pragmas: OverlappingInstances and IncoherentInstances respectively. 
> > However the overlap/incoherent-ness is a property of the *instance > declaration* itself, and has been for a long time. Using LANGUAGE > OverlappingInstances simply sets the “I am an overlapping instance” flag for > every instance declaration in that module. > > This is a Big Hammer. It gives no clue about *which* particular instances > the programmer is expecting to be overlapped, nor which are doing the > overlapping. It brutally applies to every instance in the module. > Moreover, when looking at an instance declaration, there is no nearby clue > that it might be overlapped. The clue might be in the command line that > compiles that module! > > Iavor has recently implemented per-instance-declaration pragmas, so you can > say > > instance {-# OVERLAPPABLE #-} Show a => Show [a] where ... > > instance {-# OVERLAPPING #-} Show [Char] where ... > > This is much more precise (it affects only those specific instances) and it > is much clearer (you see it when you see the instance declaration). > > This new feature will be in GHC 7.10 and I'm sure you will be happy about > that. But I propose also to deprecate the LANGUAGE pragmas > OverlappingInstances and IncoherentInstances, as a way to encourage everyone > to use the new feature instead of the old big hammer. The old LANGUAGE > pragmas will continue to work, of course, for at least another complete > release cycle. We could make that two cycles if it was helpful. > > However, if you want deprecation-free libraries, it will entail a wave of > library updates. > > This email is just to warn you, and to let you yell if you think this is a > bad idea. It would actually not be difficult to retain the old LANGUAGE > pragmas indefinitely – it just seems wrong not to actively push authors in > the right direction. > > These deprecations of course popped up in the test suite, so I've been > replacing them with per-instance pragmas there too. 
Interestingly in some > cases, when looking for which instances needed the pragmas, I found?none. So > OverlappingInstances was entirely unnecessary. Maybe library authors will > find that too! > > Simon > > > _______________________________________________ > Glasgow-haskell-users mailing list > Glasgow-haskell-users at haskell.org > http://www.haskell.org/mailman/listinfo/glasgow-haskell-users > From mark.lentczner at gmail.com Sat Aug 2 02:49:27 2014 From: mark.lentczner at gmail.com (Mark Lentczner) Date: Fri, 1 Aug 2014 22:49:27 -0400 Subject: Release building for Windows Message-ID: Randy Polen, undertook porting the new build of Haskell Platform to Windows. He did a great job... but as this is first time stepping up to such a big release, he has some questions about GHC and Windows, and the choices he had to make. He asked me to forward these to this list, as he's not a member. He's cc'd so you can reply to all and include him... or I can forward as needed. >From Randy: ------------------ I am building the Haskell Platform 2014.2.0.0 on the Windows side. Your advice would be very helpful to make sure the HP 2014 for Windows is as good as possible. There were some issues I worked-around, plus some features that seem to not be available in this particular GHC (7.8.3) on the 32-bit and 64-bit Windows platforms, and I would like to confirm that HP 2014.2.0.0 will be shipping something sensible and as expected for the Windows environment, noting things which are supported on other environments but not on Windows. * GHC 7.8.3 on Windows does not support building Haskell into shared libraries, (GHC ticket #8228) so all packages in HP 2014.2.0.0 for Windows have been built without --enable-shared * GHC 7.8.3 on Windows does not currently support LLVM (GHC ticket #7143) * All Windows HP 2014.2.0.0 packages have been built without --enabled-split-objs, in deference to the GHC 7.8 FAQ * Extra python, etc. 
bits included in the GHC 7.8.3 bindist for 64-bit Windows (GHC issue #9014) are not installed with Windows HP 2014.2.0.0. Is eliding them from the HP 2014.2.0.0 64-bit Windows installation safe and correct (i.e., are they truely not required)? * Missing src/html in GHC packages were worked around by replacing the entire GHC package doc tree of html files with the contents of the "Standard Libraries" tarball (but not for the two packages which are not built for Windows, terminfo and unix). Is this valid to do? Any issues might arise? * ref: http://www.haskell.org/ghc/docs/latest/libraries.html.tar.bz2 Thanks for any advice on these. I do want to make the Windows HP 2014.2.0.0 be as good as it can be. Randy -------------- next part -------------- An HTML attachment was scrubbed... URL: From mc.schroeder at gmail.com Sat Aug 2 06:40:15 2014 From: mc.schroeder at gmail.com (=?UTF-8?Q?Michael_Schr=C3=B6der?=) Date: Sat, 2 Aug 2014 08:40:15 +0200 Subject: GHC.Event.Unique vs Data.Unique Message-ID: Is there a reason GHC.Event.Unique exists, since we also have Data.Unique? Or is this just a historical artifact? It looks like they used to have the same implementation, but have now diverged, with Data.Unique being the more recent one. GHC.Event.Unique still uses STM internally, and is as far as I can see the only part of the base library to do so. Which wouldn't really matter, except that now basic IO stuff like threadDelay and even putStr (sometimes, especially in ghci) cannot be used inside unsafeIOToSTM. Which is somewhat annoying? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mail at joachim-breitner.de Sat Aug 2 14:53:36 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Sat, 02 Aug 2014 16:53:36 +0200 Subject: [commit: ghc] master: Bump haddock.base max_bytes_used (8df7fea) In-Reply-To: <1406928426.6407.2.camel@joachim-breitner.de> References: <20140801175727.61A0A240EA@ghc.haskell.org> <618BE556AADD624C9C918AA5D5911BEF22086308@DB3PRD3001MB020.064d.mgd.msft.net> <1406928426.6407.2.camel@joachim-breitner.de> Message-ID: <1406991216.26248.1.camel@joachim-breitner.de> Hi, Am Freitag, den 01.08.2014, 23:27 +0200 schrieb Joachim Breitner: > Am Freitag, den 01.08.2014, 20:28 +0000 schrieb Simon Peyton Jones: > > Urk. It's quite surprising that this particular change would increase allocation significantly. > > I wonder whether it just pushed it over the threshold. > > I?m confident it was not just that: > > ~/logs $ fgrep 'Deviation haddock.base(normal) max_bytes_used' $(cd ghc-master; git log --oneline --first-parent db19c665ec5055c2193b2174519866045aeff09a..HEAD | cut -d\ -f1| (cd ..; while read x ; do test -e $x.log && echo $x.log; done) |tac )|tail -n 25 > 6fa6caa.log: Deviation haddock.base(normal) max_bytes_used: 2.2 % > a0ff1eb.log: Deviation haddock.base(normal) max_bytes_used: -1.0 % > 0be7c2c.log: Deviation haddock.base(normal) max_bytes_used: 2.2 % > dc7d3c2.log: Deviation haddock.base(normal) max_bytes_used: 2.2 % > 7381cee.log: Deviation haddock.base(normal) max_bytes_used: 2.2 % > fe2d807.log: Deviation haddock.base(normal) max_bytes_used: 2.2 % > bfaa179.log: Deviation haddock.base(normal) max_bytes_used: -0.9 % > 1ae5fa4.log: Deviation haddock.base(normal) max_bytes_used: 11.0 % > c97f853.log: Deviation haddock.base(normal) max_bytes_used: 11.0 % > fd47e26.log: Deviation haddock.base(normal) max_bytes_used: 11.2 % > bdf0ef0.log: Deviation haddock.base(normal) max_bytes_used: 11.1 % [..] > Interestingly, bytes_allocated did not change a bit! your surprise made me investigate further. 
Could this have been caused by this change to how I run the testsuite, which I did roughly around that time? -run make -C testsuite fast VERBOSE=4 THREADS=8 +run make -C testsuite fast VERBOSE=4 THREADS=8 NoFibRuns=15 .... No, no difference. But the value changed again: d4d4bef.log: Deviation haddock.base(normal) max_bytes_used: 11.2 % 8df7fea.log: Deviation haddock.base(normal) max_bytes_used: -0.0 % 3faff73.log: Deviation haddock.base(normal) max_bytes_used: -0.0 % 0336588.log: Deviation haddock.base(normal) max_bytes_used: -0.2 % 02975c9.log: Deviation haddock.base(normal) max_bytes_used: -0.1 % 578fbec.log: Deviation haddock.base(normal) max_bytes_used: -0.2 % e69619e.log: Deviation haddock.base(normal) max_bytes_used: 0.0 % 105602f.log: Deviation haddock.base(normal) max_bytes_used: -0.2 % fbd0586.log: Deviation haddock.base(normal) max_bytes_used: -6.5 % ab90bf2.log: Deviation haddock.base(normal) max_bytes_used: -6.5 % f293931.log: Deviation haddock.base(normal) max_bytes_used: -6.6 % and fbd0586 really looks harmless. Not sure what is going on here. I find the changes to be too big (and within a certain range of commits too deterministic) to be just the consequence of RTS timer noise. Greetings, Joachim -- Joachim “nomeata” Breitner mail at joachim-breitner.de • http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de • GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From andreas.voellmy at gmail.com Sat Aug 2 17:55:35 2014 From: andreas.voellmy at gmail.com (Andreas Voellmy) Date: Sat, 2 Aug 2014 13:55:35 -0400 Subject: Interrupt interruptible foreign calls on HS exit In-Reply-To: <1406761014-sup-6348@sabre> References: <1406761014-sup-6348@sabre> Message-ID: Thanks Edward! Another question... 
deleteThread() calls throwToSingleThreaded(). I can update this so that it
also calls throwToSingleThreaded() in the case of
BlockedOnCCall_Interruptible (currently it explicitly excludes this case),
but this doesn't solve the problem, because throwToSingleThreaded() doesn't
seem to interrupt blocked calls at all. That functionality is in throwTo(),
which is not called by throwToSingleThreaded(). Why are we using
throwToSingleThreaded() in deleteThread() rather than throwTo()? Can I
switch deleteThread() to use throwTo()? Or should I use throwTo() in
deleteThread() only for the special case of BlockedOnCCall_Interruptible?
Or should throwToSingleThreaded() be updated to do the same thing that
throwTo() does for the case of BlockedOnCCall_Interruptible?

Thanks,
Andi

On Wed, Jul 30, 2014 at 6:57 PM, Edward Z. Yang wrote:

> Recalling when I implemented this functionality, I think not
> interrupting threads in the exit sequence was just an oversight,
> and I think we could implement it. Seems reasonable to me.
>
> Edward
>
> Excerpts from Andreas Voellmy's message of 2014-07-30 23:49:24 +0100:
> > Hi GHCers,
> >
> > I've been looking into issue #9284, which boils down to getting certain
> > foreign calls issued by HS threads to finish (i.e. return) in the exit
> > sequence of forkProcess.
> >
> > There are several options for solving the particular problem in #9284; one
> > option is to issue the particular foreign calls causing that issue as
> > "interruptible" and then have the exit sequence interrupt interruptible
> > foreign calls.
> >
> > The exit sequence, starting from hs_exit(), goes through hs_exit_(),
> > exitScheduler(), scheduleDoGC(), deleteAllThreads(), and then
> > deleteThread(), where deleteThread is this:
> >
> > static void
> > deleteThread (Capability *cap STG_UNUSED, StgTSO *tso)
> > {
> >     // NOTE: must only be called on a TSO that we have exclusive
> >     // access to, because we will call throwToSingleThreaded() below.
> >     // The TSO must be on the run queue of the Capability we own, or
> >     // we must own all Capabilities.
> >     if (tso->why_blocked != BlockedOnCCall &&
> >         tso->why_blocked != BlockedOnCCall_Interruptible) {
> >         throwToSingleThreaded(tso->cap,tso,NULL);
> >     }
> > }
> >
> > So it looks like interruptible foreign calls are not interrupted in the
> > exit sequence.
> >
> > Is there a good reason why we have this behavior? Could we change it to
> > interrupt TSO's with why_blocked == BlockedOnCCall_Interruptible in the
> > exit sequence?
> >
> > Thanks,
> > Andi
> >
> > P.S. It looks like this was introduced in commit
> > 83d563cb9ede0ba792836e529b1e2929db926355.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From johan.tibell at gmail.com  Sat Aug  2 18:11:58 2014
From: johan.tibell at gmail.com (Johan Tibell)
Date: Sat, 2 Aug 2014 20:11:58 +0200
Subject: GHC.Event.Unique vs Data.Unique
In-Reply-To: 
References: 
Message-ID: 

Despite having the same name, these two are quite different from what I
remember. The GHC.Event one (which me/Bryan added) is just a wrapped Int,
while the Data one is really more of a mutable state thing.

On Sat, Aug 2, 2014 at 8:40 AM, Michael Schröder wrote:

> Is there a reason GHC.Event.Unique exists, since we also have Data.Unique?
> Or is this just a historical artifact? It looks like they used to have the
> same implementation, but have now diverged, with Data.Unique being the more
> recent one.
>
> GHC.Event.Unique still uses STM internally, and is as far as I can see the
> only part of the base library to do so. Which wouldn't really matter, except
> that now basic IO stuff like threadDelay and even putStr (sometimes,
> especially in ghci) cannot be used inside unsafeIOToSTM. Which is somewhat
> annoying?
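[For concreteness, the two designs described in this thread differ roughly as follows. This is a hedged sketch, not the actual base library code; the names EvUnique, globalSource, and newGlobalUnique are invented for illustration.]

```haskell
import Data.IORef
import System.IO.Unsafe (unsafePerformIO)

-- GHC.Event.Unique-style: a Unique is just a wrapped Int; freshness is
-- the job of whatever source hands the Ints out (in base, a
-- per-event-manager supply).
newtype EvUnique = EvUnique Int
  deriving (Eq, Ord)

-- Data.Unique-style: globally fresh values drawn from one piece of
-- shared mutable state.
globalSource :: IORef Integer
globalSource = unsafePerformIO (newIORef 0)
{-# NOINLINE globalSource #-}

newGlobalUnique :: IO Integer
newGlobalUnique = atomicModifyIORef' globalSource (\n -> (n + 1, n + 1))
```

[The second style has to care about atomicity under contention (hence the atomic modify), which is where the STM-versus-IORef question in this thread comes from.]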
> > _______________________________________________
> ghc-devs mailing list
> ghc-devs at haskell.org
> http://www.haskell.org/mailman/listinfo/ghc-devs
>

From andreas.voellmy at gmail.com  Sat Aug  2 20:28:31 2014
From: andreas.voellmy at gmail.com (Andreas Voellmy)
Date: Sat, 2 Aug 2014 16:28:31 -0400
Subject: Interrupt interruptible foreign calls on HS exit
In-Reply-To: 
References: <1406761014-sup-6348@sabre>
Message-ID: 

I tried to go ahead and call throwTo() instead of throwToSingleThreaded()
for threads in the BlockedOnCCall_Interruptible state during the shutdown
sequence. Unfortunately something goes wrong with this change. I haven't
tracked it down yet, but it looks like the following happens...

hs_exit() eventually results in a call to scheduleDoGC(), which does
acquireAllCapabilities(), and then deleteAllThreads() interrupts the
interruptible foreign calls. Those foreign calls come back and call
waitForReturnCapability() but get stuck here:

    if (!task->wakeup) waitCondition(&task->cond, &task->lock);

I guess scheduleDoGC() is blocking the interrupted Haskell threads from
finishing.

One possible fix is to have the returning foreign call see that we are in
the exit sequence and avoid trying to return to the Haskell caller - I
guess it can just exit. I tried adding some code in resumeThread() to exit
if sched_state is SCHED_INTERRUPTING or SCHED_SHUTTING_DOWN, but this
caused more trouble, so it seems that it's not a simple change.

On Sat, Aug 2, 2014 at 1:55 PM, Andreas Voellmy wrote:

> Thanks Edward! Another question...
>
> deleteThread() calls throwToSingleThreaded(). I can update this so that it
> also calls throwToSingleThreaded() in the case
> of BlockedOnCCall_Interruptible (currently it explicitly excludes this
> case), but this doesn't solve the problem, because throwToSingleThreaded()
> doesn't seem to interrupt blocked calls at all. That functionality is in
> throwTo(), which is not called by throwToSingleThreaded().
Why are we using > throwToSingleThreaded() in deleteThread() rather than throwTo()? Can I > switch deleteThread() to use throwTo()? Or should I use throwTo() in > deleteThread() only for the special case of BlockedOnCCall_Interruptible? > Or should throwToSingleThreaded() be updated to do the same thing that > throwTo does for the case of BlockedOnCCall_Interruptible? > > Thanks, > Andi > > > On Wed, Jul 30, 2014 at 6:57 PM, Edward Z. Yang wrote: > >> Recalling when I implemented this functionality, I think not >> interrupting threads in the exit sequence was just an oversight, >> and I think we could implement it. Seems reasonable to me. >> >> Edward >> >> Excerpts from Andreas Voellmy's message of 2014-07-30 23:49:24 +0100: >> > Hi GHCers, >> > >> > I've been looking into issue #9284, which boils down to getting certain >> > foreign calls issued by HS threads to finish (i.e. return) in the exit >> > sequence of forkProcess. >> > >> > There are several options for solving the particular problem in #9284; >> one >> > option is to issue the particular foreign calls causing that issue as >> > "interruptible" and then have the exit sequence interrupt interruptible >> > foreign calls. >> > >> > The exit sequence, starting from hs_exit(), goes through hs_exit_(), >> > exitScheduler(), scheduleDoGC(), deleteAllThreads(), and then >> > deleteThread(), where deleteThread is this: >> > >> > static void >> > deleteThread (Capability *cap STG_UNUSED, StgTSO *tso) >> > { >> > // NOTE: must only be called on a TSO that we have exclusive >> > // access to, because we will call throwToSingleThreaded() below. >> > // The TSO must be on the run queue of the Capability we own, or >> > // we must own all Capabilities. >> > if (tso->why_blocked != BlockedOnCCall && >> > tso->why_blocked != BlockedOnCCall_Interruptible) { >> > throwToSingleThreaded(tso->cap,tso,NULL); >> > } >> > } >> > >> > So it looks like interruptible foreign calls are not interrupted in the >> > exit sequence. 
>> >
>> > Is there a good reason why we have this behavior? Could we change it to
>> > interrupt TSO's with why_blocked == BlockedOnCCall_Interruptible in the
>> > exit sequence?
>> >
>> > Thanks,
>> > Andi
>> >
>> > P.S. It looks like this was introduced in commit
>> > 83d563cb9ede0ba792836e529b1e2929db926355.
>>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From haskell at benmachine.co.uk  Sat Aug  2 20:54:57 2014
From: haskell at benmachine.co.uk (Ben Millwood)
Date: Sat, 2 Aug 2014 21:54:57 +0100
Subject: Overlapping and incoherent instances
In-Reply-To: <20140802195157.GB119560@srcf.ucam.org>
References: <618BE556AADD624C9C918AA5D5911BEF2207B3A1@DB3PRD3001MB020.064d.mgd.msft.net>
 <618BE556AADD624C9C918AA5D5911BEF2208260F@DB3PRD3001MB020.064d.mgd.msft.net>
 <20140802152714.GA119560@srcf.ucam.org>
 <20140802195157.GB119560@srcf.ucam.org>
Message-ID: <20140802205457.GA157265@srcf.ucam.org>

On Sat, Aug 02, 2014 at 08:51:57PM +0100, Ben Millwood wrote:
>On Sat, Aug 02, 2014 at 04:27:14PM +0100, Ben Millwood wrote:
>>On Thu, Jul 31, 2014 at 07:20:31AM +0000, Simon Peyton Jones wrote:
>>>My main motivation was to signal the proposed deprecation of the
>>>global per-module flag -XOverlappingInstances. Happily people
>>>generally seem fine with this. It is, after all, precisely what
>>>deprecations are for ("the old thing still works for now, but it
>>>won't do so for ever, and you should change as soon as is
>>>convenient").
>>
>>Here's one concern I have with the deprecation of
>>-XOverlappingInstances: I don't like overlapping instances, I find
>>them confusing and weird and prefer to use code that doesn't
>>include them, because they violate my expectations about how type
>>classes work. When there is a single LANGUAGE pragma, that's a
>>simple, easily-checkable signpost of "this code uses techniques
>>that Ben doesn't understand". When it is all controlled by pragmas
>>I basically have to check every instance declaration individually.
>>
>>On a largely unrelated note, here's another thing I don't
>>understand: when is OVERLAPPABLE at one instance declaration
>>preferable to using only OVERLAPPING at the instance declarations
>>that overlap it? In the latter model, as long as none of the
>>instances I write have pragmas, I can be sure none of them overlap.
>>In the former model, any instance I write for an existing typeclass
>>might overlap another instance, even if I don't want it to. Do we
>>have any specific use cases in mind for OVERLAPPABLE?
>>_______________________________________________
>>Libraries mailing list
>>Libraries at haskell.org
>>http://www.haskell.org/mailman/listinfo/libraries
>
>When I originally sent this mail I wasn't subscribed to the GHC
>lists, so I went and fixed that and am now resending.

Good grief, and then I sent from the wrong address. Sorry for the noise.

>Addendum: I was surprised by the behaviour of overlapping instances
>when I went and looked closer at it.
>
>  {-# LANGUAGE FlexibleInstances #-}
>  module M where
>  class C a where f :: a -> a
>  instance C a where f x = x
>  instance C Int where f x = x + 1
>
>I suspect many people have the intuition that NoOverlappingInstances
>should forbid the above, but in fact OverlappingInstances or no only
>controls instance *resolution*. I imagine you all already knew this
>but I did not until I carefully reread things.
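[To make the OVERLAPPABLE-versus-OVERLAPPING question above concrete, here is a sketch using the per-instance pragmas under discussion. Hedged: the syntax is as proposed at the time, and the class and instances are invented for illustration.]

```haskell
{-# LANGUAGE FlexibleInstances #-}
module OverlapSketch where

class C a where f :: a -> a

-- Former model: the author of the generic instance opts in once,
-- declaring it safe to override; later, more specific instances may
-- then overlap it without carrying any pragma of their own.
instance {-# OVERLAPPABLE #-} C a where f x = x
instance C Int where f x = x + 1

-- Latter model (the alternative, shown commented out): leave the
-- generic instance unmarked and put the pragma on each specific
-- instance that deliberately overlaps it:
--
--   instance C a where f x = x
--   instance {-# OVERLAPPING #-} C Int where f x = x + 1
```

[The trade-off in the question falls out of who writes the pragma: OVERLAPPABLE is a choice made once by the generic instance's author, while OVERLAPPING must be repeated at every overlapping instance, which keeps unmarked instances guaranteed overlap-free.]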
>
>As someone who dislikes overlapping type class instances, I am
>finding them harder to avoid than I at first thought :(

From hvriedel at gmail.com  Sun Aug  3 09:31:30 2014
From: hvriedel at gmail.com (Herbert Valerio Riedel)
Date: Sun, 03 Aug 2014 11:31:30 +0200
Subject: Question about BangPatterns semantics/documentation
Message-ID: <877g2px6kt.fsf@gmail.com>

The GHC User's Guide[1] says:

| There is one (apparent) exception to this general rule that a bang
| only makes a difference when it precedes a variable or wild-card: a
| bang at the top level of a let or where binding makes the binding
| strict, regardless of the pattern. (We say "apparent" exception
| because the Right Way to think of it is that the bang at the top of a
| binding is not part of the pattern; rather it is part of the syntax of
| the binding, creating a "bang-pattern binding".) For example:
|
|   let ![x,y] = e in b
|
| is a bang-pattern binding. Operationally, it behaves just like a case
| expression:
|
|   case e of [x,y] -> b

However, the following two functions are not equivalent after
compilation to Core:

  g, h :: (Int -> Int) -> Int -> ()
  g f x = let !y = f x in ()
  h f x = case f x of y -> ()

In fact, compilation results in

  g = \ (f_asi :: Int -> Int)
        (x_asj :: Int) ->
        case f_asi x_asj of _ [Occ=Dead] { I# ipv_sKS -> () }

  h = \ _ [Occ=Dead] _ [Occ=Dead] -> ()

Is the documentation inaccurate/incomplete/I-missed-something or is the
implementation to blame?
Cheers,
  hvr

[1]: http://www.haskell.org/ghc/docs/7.8.3/html/users_guide/bang-patterns.html

From mail at joachim-breitner.de  Sun Aug  3 14:41:13 2014
From: mail at joachim-breitner.de (Joachim Breitner)
Date: Sun, 03 Aug 2014 16:41:13 +0200
Subject: Question about BangPatterns semantics/documentation
In-Reply-To: <877g2px6kt.fsf@gmail.com>
References: <877g2px6kt.fsf@gmail.com>
Message-ID: <1407076873.1744.1.camel@joachim-breitner.de>

Hi Herbert,

On Sunday, 03.08.2014, 11:31 +0200, Herbert Valerio Riedel wrote:
> However, the following two functions are not equivalent after
> compilation to Core:
>
>   g, h :: (Int -> Int) -> Int -> ()
>   g f x = let !y = f x in ()
>   h f x = case f x of y -> ()
>
> In fact, compilation results in
>
>   g = \ (f_asi :: Int -> Int)
>         (x_asj :: Int) ->
>         case f_asi x_asj of _ [Occ=Dead] { I# ipv_sKS -> () }
>
>   h = \ _ [Occ=Dead] _ [Occ=Dead] -> ()
>
> Is the documentation inaccurate/incomplete/I-missed-something or is the
> implementation to blame?

I think that in Haskell (which is not Core!), a "case" does not imply
evaluation – only if the patterns require it. So the example in the docs
is correct (case e of [x,y] -> b requires evaluation of e), but your
example is simply optimized away.

haskell.org is down, so I can't check if the report has anything to say
about that.

Greetings,
Joachim

-- 
Joachim “nomeata” Breitner
mail at joachim-breitner.de • http://www.joachim-breitner.de/
Jabber: nomeata at joachim-breitner.de • GPG-Key: 0xF0FBF51F
Debian Developer: nomeata at debian.org
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From simonpj at microsoft.com Mon Aug 4 07:21:50 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 4 Aug 2014 07:21:50 +0000 Subject: [commit: ghc] master: Bump haddock.base max_bytes_used (8df7fea) In-Reply-To: <1406928426.6407.2.camel@joachim-breitner.de> References: <20140801175727.61A0A240EA@ghc.haskell.org> <618BE556AADD624C9C918AA5D5911BEF22086308@DB3PRD3001MB020.064d.mgd.msft.net> <1406928426.6407.2.camel@joachim-breitner.de> Message-ID: <618BE556AADD624C9C918AA5D5911BEF2208890B@DB3PRD3001MB020.064d.mgd.msft.net> Ha. max_bytes_used is vulnerable to exactly when gc strikes, so I'm disinclined to get stressed about this. I was mis-reading it as bytes-allocated. Interestingly it doesn't happen for me. Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of | Joachim Breitner | Sent: 01 August 2014 22:27 | To: ghc-devs at haskell.org | Subject: Re: [commit: ghc] master: Bump haddock.base max_bytes_used | (8df7fea) | | Hi, | | | Am Freitag, den 01.08.2014, 20:28 +0000 schrieb Simon Peyton Jones: | > Urk. It's quite surprising that this particular change would | increase allocation significantly. | > I wonder whether it just pushed it over the threshold. 
| | I?m confident it was not just that: | | ~/logs $ fgrep 'Deviation haddock.base(normal) max_bytes_used' $(cd | ghc-master; git log --oneline --first-parent | db19c665ec5055c2193b2174519866045aeff09a..HEAD | cut -d\ -f1| (cd ..; | while read x ; do test -e $x.log && echo $x.log; done) |tac )|tail -n | 25 | 6fa6caa.log: Deviation haddock.base(normal) max_bytes_used: | 2.2 % | a0ff1eb.log: Deviation haddock.base(normal) max_bytes_used: - | 1.0 % | 0be7c2c.log: Deviation haddock.base(normal) max_bytes_used: | 2.2 % | dc7d3c2.log: Deviation haddock.base(normal) max_bytes_used: | 2.2 % | 7381cee.log: Deviation haddock.base(normal) max_bytes_used: | 2.2 % | fe2d807.log: Deviation haddock.base(normal) max_bytes_used: | 2.2 % | bfaa179.log: Deviation haddock.base(normal) max_bytes_used: - | 0.9 % | 1ae5fa4.log: Deviation haddock.base(normal) max_bytes_used: | 11.0 % | c97f853.log: Deviation haddock.base(normal) max_bytes_used: | 11.0 % | fd47e26.log: Deviation haddock.base(normal) max_bytes_used: | 11.2 % | bdf0ef0.log: Deviation haddock.base(normal) max_bytes_used: | 11.1 % | 58ed1cc.log: Deviation haddock.base(normal) max_bytes_used: | 11.0 % | 1c1ef82.log: Deviation haddock.base(normal) max_bytes_used: | 11.2 % | 52188ad.log: Deviation haddock.base(normal) max_bytes_used: | 11.0 % | 3b9fe0c.log: Deviation haddock.base(normal) max_bytes_used: | 11.2 % | 6483b8a.log: Deviation haddock.base(normal) max_bytes_used: | 11.0 % | 9d9a554.log: Deviation haddock.base(normal) max_bytes_used: | 11.2 % | 028630a.log: Deviation haddock.base(normal) max_bytes_used: | 11.2 % | aab5937.log: Deviation haddock.base(normal) max_bytes_used: | 11.0 % | 6c06db1.log: Deviation haddock.base(normal) max_bytes_used: | 11.0 % | 2989ffd.log: Deviation haddock.base(normal) max_bytes_used: | 11.1 % | d4d4bef.log: Deviation haddock.base(normal) max_bytes_used: | 11.2 % | 8df7fea.log: Deviation haddock.base(normal) max_bytes_used: - | 0.0 % | 3faff73.log: Deviation haddock.base(normal) 
max_bytes_used: - | 0.0 % | 02975c9.log: Deviation haddock.base(normal) max_bytes_used: - | 0.1 % | | | (If this were a bytes_allocated test I could also show you nice graphs | like http://ghcspeed-nomeata.rhcloud.com/timeline/?exe=2&base=2% | 2B68&ben=tests%2Falloc%2FT6048&env=1&revs=50&equid=on but I didn?t add | the max_bytes_used tests yet.) | | Interestingly, bytes_allocated did not change a bit! | | Greetings, | Joachim | | -- | Joachim ?nomeata? Breitner | mail at joachim-breitner.de ? http://www.joachim-breitner.de/ | Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F | Debian Developer: nomeata at debian.org From simonpj at microsoft.com Mon Aug 4 07:50:19 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 4 Aug 2014 07:50:19 +0000 Subject: [commit: ghc] master: Dont allow hand-written Generic instances in Safe Haskell. (578fbec) In-Reply-To: <20140802020535.213CE2406B@ghc.haskell.org> References: <20140802020535.213CE2406B@ghc.haskell.org> Message-ID: <618BE556AADD624C9C918AA5D5911BEF22088992@DB3PRD3001MB020.064d.mgd.msft.net> David Thanks for doing this. I'm a bit concerned, though, that there is quite a bit of Safe-Haskell special-casing in GHC, but no single place to look for a list of what the choices are, and why they are made. Even the paragraph you wrote as a commit comment would make a helpful Note to accompany the code changes. The worry is that in five years time someone will look at this code and wonder "why exactly is this special case there?". They may look in the paper, but Generics post-dates it. Would it be worth Wiki page to collect the choices? Or more detailed Notes with the individual tests? Thanks Simon | -----Original Message----- | From: ghc-commits [mailto:ghc-commits-bounces at haskell.org] On Behalf Of | git at git.haskell.org | Sent: 02 August 2014 03:06 | To: ghc-commits at haskell.org | Subject: [commit: ghc] master: Dont allow hand-written Generic | instances in Safe Haskell. 
(578fbec) | | Repository : ssh://git at git.haskell.org/ghc | | On branch : master | Link : | http://ghc.haskell.org/trac/ghc/changeset/578fbeca31dd3d755e24e910c3a73 | 27f92bc4ee3/ghc | | >--------------------------------------------------------------- | | commit 578fbeca31dd3d755e24e910c3a7327f92bc4ee3 | Author: David Terei | Date: Thu Dec 5 17:27:17 2013 -0800 | | Dont allow hand-written Generic instances in Safe Haskell. | | While they aren't strictly unsafe, it is a similar situation to | Typeable. There are few instances where a programmer will write | their | own instance, and having compiler assurance that the Generic | implementation is correct brings a lot of benefits. | | | >--------------------------------------------------------------- | | 578fbeca31dd3d755e24e910c3a7327f92bc4ee3 | compiler/prelude/PrelNames.lhs | 3 +++ | compiler/typecheck/TcInstDcls.lhs | 31 +++++++++++++++++++++---------- | 2 files changed, 24 insertions(+), 10 deletions(-) | | diff --git a/compiler/prelude/PrelNames.lhs | b/compiler/prelude/PrelNames.lhs index 2c84e40..b2dec88 100644 | --- a/compiler/prelude/PrelNames.lhs | +++ b/compiler/prelude/PrelNames.lhs | @@ -1084,6 +1084,9 @@ datatypeClassName = clsQual gHC_GENERICS | (fsLit "Datatype") datatypeClassK | constructorClassName = clsQual gHC_GENERICS (fsLit "Constructor") | constructorClassKey | selectorClassName = clsQual gHC_GENERICS (fsLit "Selector") | selectorClassKey | | +genericClassNames :: [Name] | +genericClassNames = [genClassName, gen1ClassName] | + | -- GHCi things | ghciIoClassName, ghciStepIoMName :: Name ghciIoClassName = clsQual | gHC_GHCI (fsLit "GHCiSandboxIO") ghciIoClassKey diff --git | a/compiler/typecheck/TcInstDcls.lhs b/compiler/typecheck/TcInstDcls.lhs | index c3ba825..6ff8a2b 100644 | --- a/compiler/typecheck/TcInstDcls.lhs | +++ b/compiler/typecheck/TcInstDcls.lhs | @@ -51,8 +51,8 @@ import VarEnv | import VarSet | import CoreUnfold ( mkDFunUnfolding ) | import CoreSyn ( Expr(Var, Type), CoreExpr, 
mkTyApps, mkVarApps ) | -import PrelNames ( tYPEABLE_INTERNAL, typeableClassName, | oldTypeableClassNames ) | - | +import PrelNames ( tYPEABLE_INTERNAL, typeableClassName, | + oldTypeableClassNames, genericClassNames ) | import Bag | import BasicTypes | import DynFlags | @@ -415,13 +415,16 @@ tcInstDecls1 tycl_decls inst_decls deriv_decls | -- hand written instances of old Typeable as then unsafe casts | could be | -- performed. Derived instances are OK. | ; dflags <- getDynFlags | - ; when (safeLanguageOn dflags) $ | - mapM_ (\x -> when (typInstCheck x) | - (addErrAt (getSrcSpan $ iSpec x) | typInstErr)) | - local_infos | + ; when (safeLanguageOn dflags) $ forM_ local_infos $ \x -> case | x of | + _ | typInstCheck x -> addErrAt (getSrcSpan $ iSpec x) | (typInstErr x) | + _ | genInstCheck x -> addErrAt (getSrcSpan $ iSpec x) | (genInstErr x) | + _ -> return () | + | -- As above but for Safe Inference mode. | - ; when (safeInferOn dflags) $ | - mapM_ (\x -> when (typInstCheck x) recordUnsafeInfer) | local_infos | + ; when (safeInferOn dflags) $ forM_ local_infos $ \x -> case x | of | + _ | typInstCheck x -> recordUnsafeInfer | + _ | genInstCheck x -> recordUnsafeInfer | + _ -> return () | | ; return ( gbl_env | , bagToList deriv_inst_info ++ local_infos @@ -442,8 | +445,16 @@ tcInstDecls1 tycl_decls inst_decls deriv_decls | else (typeableInsts, i:otherInsts) | | typInstCheck ty = is_cls_nm (iSpec ty) `elem` | oldTypeableClassNames | - typInstErr = ptext $ sLit $ "Can't create hand written instances | of Typeable in Safe" | - ++ " Haskell! 
Can only derive them" | + typInstErr i = hang (ptext (sLit $ "Typeable instances can only be | " | + ++ "derived in Safe Haskell.") $+$ | + ptext (sLit "Replace the following | instance:")) | + 2 (pprInstanceHdr (iSpec i)) | + | + genInstCheck ty = is_cls_nm (iSpec ty) `elem` genericClassNames | + genInstErr i = hang (ptext (sLit $ "Generic instances can only be | " | + ++ "derived in Safe Haskell.") $+$ | + ptext (sLit "Replace the following | instance:")) | + 2 (pprInstanceHdr (iSpec i)) | | instMsg i = hang (ptext (sLit $ "Typeable instances can only be | derived; replace " | ++ "the following instance:")) | | _______________________________________________ | ghc-commits mailing list | ghc-commits at haskell.org | http://www.haskell.org/mailman/listinfo/ghc-commits From marlowsd at gmail.com Mon Aug 4 09:33:40 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Mon, 04 Aug 2014 10:33:40 +0100 Subject: globalRegMaybe and ARM In-Reply-To: <87d2cwhqoa.fsf@gmail.com> References: <87d2cwhqoa.fsf@gmail.com> Message-ID: <53DF5374.3010308@gmail.com> Hi Ben, There has been some confusion here because I accidentally committed a change to this file that then got reverted. The original fix was #9055, making it so that on platforms with no registers, globalRegMaybe would return Nothing (as it should). This wasn't necessary until recently when we started using globalRegMaybe during optimisation in CmmSink. Yes I think you should add MACHREGS_arm to the #if in that file, and anywhere else where we do similar things. Cheers, Simon On 23/07/2014 19:32, Ben Gamari wrote: > > Hello Simon, > > b0534f7 [1] and the subsequent reversion f0fcc41d7 touched > `includes/CodeGen.Platform.hs`, the former removing a panic in the case > of `globalRegMaybe` being undefined for a platform and replacing it with > `Nothing`. > > Recently I've found that my ARM builds (with -fllvm) crash at this panic > whereas they did not as of the 7.8 release. 
Given that b0534f7 was > reverted this is no doubt due to another change that I haven't > identified yet. Do you have any idea what is happening here? > > I'm currently attempting to build with a workaround setting > `globalRegMaybe _ = Nothing`, although this smells suspicously like what > would happen in the unregisterized case. > > My other hypothesis is that MACHREGS_arm should be added to the > > #if MACHREGS_i386 || MACHREGS_x86_64 || MACHREGS_sparc || MACHREGS_powerpc > > which smells more like what a registerised architecture should do and it > seems the requisite macros are defined for ARM in > `stg/MachRegs.h`. Whatever happens for ARM should probably > also happen for AArch64. > > How should `globalRegMaybe` and `freeReg` be defined for platforms that > rely exclusively on the LLVM backend? Both ARM and AArch64 appear to be > doing the wrong thing at present. > > Cheers, > > - Ben > > > [1] https://github.com/ghc/ghc/commit/b0534f78a73f972e279eed4447a5687bd6a8308e#diff-4899eba6e173d5811d08d6c312da7752R741 > From austin at well-typed.com Mon Aug 4 11:01:38 2014 From: austin at well-typed.com (Austin Seipp) Date: Mon, 4 Aug 2014 06:01:38 -0500 Subject: HEADS UP: Applicative-Monad incoming Message-ID: Hi all, The Applicative Monad changes will be landing in HEAD soon, hopefully within a few hours once ./validate finishes and I triple-check everything. Why am I sending this? To warn you that you need to update Happy to *at least* version 1.19.4 before you can build GHC again. This is because Happy emitted parsers that had instances of Monad, and thus Happy needed to be fixed to also emit Applicative instances. Please be sure to upgrade your workstations, buildbots, toasters, etc so you don't get caught by surprise. ./configure will fail you if your Happy is invalid. This message is brought to you by sleep deprivation and almost two days of hunting an infinite loop in my changeset, which was quite annoying to track down. 
I'm sure something will still go wrong during ./validate... -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From ezyang at mit.edu Mon Aug 4 11:02:48 2014 From: ezyang at mit.edu (Edward Z. Yang) Date: Mon, 04 Aug 2014 12:02:48 +0100 Subject: [commit: ghc] master: Bump haddock.base max_bytes_used (8df7fea) In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF2208890B@DB3PRD3001MB020.064d.mgd.msft.net> References: <20140801175727.61A0A240EA@ghc.haskell.org> <618BE556AADD624C9C918AA5D5911BEF22086308@DB3PRD3001MB020.064d.mgd.msft.net> <1406928426.6407.2.camel@joachim-breitner.de> <618BE556AADD624C9C918AA5D5911BEF2208890B@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <1407150134-sup-8316@sabre> Yes, on my box, this test is now failing (because the stat is too good): Expected haddock.base(normal) max_bytes_used: 127954488 +/-10% Lower bound haddock.base(normal) max_bytes_used: 115159039 Upper bound haddock.base(normal) max_bytes_used: 140749937 Actual haddock.base(normal) max_bytes_used: 113167424 Deviation haddock.base(normal) max_bytes_used: -11.6 % Cheers, Edward Excerpts from Simon Peyton Jones's message of 2014-08-04 08:21:50 +0100: > Ha. max_bytes_used is vulnerable to exactly when gc strikes, so I'm disinclined to get stressed about this. I was mis-reading it as bytes-allocated. Interestingly it doesn't happen for me. > > Simon > > | -----Original Message----- > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of > | Joachim Breitner > | Sent: 01 August 2014 22:27 > | To: ghc-devs at haskell.org > | Subject: Re: [commit: ghc] master: Bump haddock.base max_bytes_used > | (8df7fea) > | > | Hi, > | > | > | Am Freitag, den 01.08.2014, 20:28 +0000 schrieb Simon Peyton Jones: > | > Urk. It's quite surprising that this particular change would > | increase allocation significantly. > | > I wonder whether it just pushed it over the threshold. 
> | > | I?m confident it was not just that: > | > | ~/logs $ fgrep 'Deviation haddock.base(normal) max_bytes_used' $(cd > | ghc-master; git log --oneline --first-parent > | db19c665ec5055c2193b2174519866045aeff09a..HEAD | cut -d\ -f1| (cd ..; > | while read x ; do test -e $x.log && echo $x.log; done) |tac )|tail -n > | 25 > | 6fa6caa.log: Deviation haddock.base(normal) max_bytes_used: > | 2.2 % > | a0ff1eb.log: Deviation haddock.base(normal) max_bytes_used: - > | 1.0 % > | 0be7c2c.log: Deviation haddock.base(normal) max_bytes_used: > | 2.2 % > | dc7d3c2.log: Deviation haddock.base(normal) max_bytes_used: > | 2.2 % > | 7381cee.log: Deviation haddock.base(normal) max_bytes_used: > | 2.2 % > | fe2d807.log: Deviation haddock.base(normal) max_bytes_used: > | 2.2 % > | bfaa179.log: Deviation haddock.base(normal) max_bytes_used: - > | 0.9 % > | 1ae5fa4.log: Deviation haddock.base(normal) max_bytes_used: > | 11.0 % > | c97f853.log: Deviation haddock.base(normal) max_bytes_used: > | 11.0 % > | fd47e26.log: Deviation haddock.base(normal) max_bytes_used: > | 11.2 % > | bdf0ef0.log: Deviation haddock.base(normal) max_bytes_used: > | 11.1 % > | 58ed1cc.log: Deviation haddock.base(normal) max_bytes_used: > | 11.0 % > | 1c1ef82.log: Deviation haddock.base(normal) max_bytes_used: > | 11.2 % > | 52188ad.log: Deviation haddock.base(normal) max_bytes_used: > | 11.0 % > | 3b9fe0c.log: Deviation haddock.base(normal) max_bytes_used: > | 11.2 % > | 6483b8a.log: Deviation haddock.base(normal) max_bytes_used: > | 11.0 % > | 9d9a554.log: Deviation haddock.base(normal) max_bytes_used: > | 11.2 % > | 028630a.log: Deviation haddock.base(normal) max_bytes_used: > | 11.2 % > | aab5937.log: Deviation haddock.base(normal) max_bytes_used: > | 11.0 % > | 6c06db1.log: Deviation haddock.base(normal) max_bytes_used: > | 11.0 % > | 2989ffd.log: Deviation haddock.base(normal) max_bytes_used: > | 11.1 % > | d4d4bef.log: Deviation haddock.base(normal) max_bytes_used: > | 11.2 % > | 8df7fea.log: Deviation 
haddock.base(normal) max_bytes_used: - > | 0.0 % > | 3faff73.log: Deviation haddock.base(normal) max_bytes_used: - > | 0.0 % > | 02975c9.log: Deviation haddock.base(normal) max_bytes_used: - > | 0.1 % > | > | > | (If this were a bytes_allocated test I could also show you nice graphs > | like http://ghcspeed-nomeata.rhcloud.com/timeline/?exe=2&base=2% > | 2B68&ben=tests%2Falloc%2FT6048&env=1&revs=50&equid=on but I didn?t add > | the max_bytes_used tests yet.) > | > | Interestingly, bytes_allocated did not change a bit! > | > | Greetings, > | Joachim > | > | -- > | Joachim ?nomeata? Breitner > | mail at joachim-breitner.de ? http://www.joachim-breitner.de/ > | Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F > | Debian Developer: nomeata at debian.org > From austin at well-typed.com Mon Aug 4 11:04:14 2014 From: austin at well-typed.com (Austin Seipp) Date: Mon, 4 Aug 2014 06:04:14 -0500 Subject: [commit: ghc] master: Bump haddock.base max_bytes_used (8df7fea) In-Reply-To: <1407150134-sup-8316@sabre> References: <20140801175727.61A0A240EA@ghc.haskell.org> <618BE556AADD624C9C918AA5D5911BEF22086308@DB3PRD3001MB020.064d.mgd.msft.net> <1406928426.6407.2.camel@joachim-breitner.de> <618BE556AADD624C9C918AA5D5911BEF2208890B@DB3PRD3001MB020.064d.mgd.msft.net> <1407150134-sup-8316@sabre> Message-ID: This is also happening on Phabricator, which is causing the buildbot to choke: https://phabricator.haskell.org/D112#4 On Mon, Aug 4, 2014 at 6:02 AM, Edward Z. Yang wrote: > Yes, on my box, this test is now failing (because the stat is too good): > > Expected haddock.base(normal) max_bytes_used: 127954488 +/-10% > Lower bound haddock.base(normal) max_bytes_used: 115159039 > Upper bound haddock.base(normal) max_bytes_used: 140749937 > Actual haddock.base(normal) max_bytes_used: 113167424 > Deviation haddock.base(normal) max_bytes_used: -11.6 % > > Cheers, > Edward > > Excerpts from Simon Peyton Jones's message of 2014-08-04 08:21:50 +0100: >> Ha. 
max_bytes_used is vulnerable to exactly when gc strikes, so I'm disinclined to get stressed about this. I was mis-reading it as bytes-allocated. Interestingly it doesn't happen for me. >> >> Simon >> >> | -----Original Message----- >> | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of >> | Joachim Breitner >> | Sent: 01 August 2014 22:27 >> | To: ghc-devs at haskell.org >> | Subject: Re: [commit: ghc] master: Bump haddock.base max_bytes_used >> | (8df7fea) >> | >> | Hi, >> | >> | >> | Am Freitag, den 01.08.2014, 20:28 +0000 schrieb Simon Peyton Jones: >> | > Urk. It's quite surprising that this particular change would >> | increase allocation significantly. >> | > I wonder whether it just pushed it over the threshold. >> | >> | I?m confident it was not just that: >> | >> | ~/logs $ fgrep 'Deviation haddock.base(normal) max_bytes_used' $(cd >> | ghc-master; git log --oneline --first-parent >> | db19c665ec5055c2193b2174519866045aeff09a..HEAD | cut -d\ -f1| (cd ..; >> | while read x ; do test -e $x.log && echo $x.log; done) |tac )|tail -n >> | 25 >> | 6fa6caa.log: Deviation haddock.base(normal) max_bytes_used: >> | 2.2 % >> | a0ff1eb.log: Deviation haddock.base(normal) max_bytes_used: - >> | 1.0 % >> | 0be7c2c.log: Deviation haddock.base(normal) max_bytes_used: >> | 2.2 % >> | dc7d3c2.log: Deviation haddock.base(normal) max_bytes_used: >> | 2.2 % >> | 7381cee.log: Deviation haddock.base(normal) max_bytes_used: >> | 2.2 % >> | fe2d807.log: Deviation haddock.base(normal) max_bytes_used: >> | 2.2 % >> | bfaa179.log: Deviation haddock.base(normal) max_bytes_used: - >> | 0.9 % >> | 1ae5fa4.log: Deviation haddock.base(normal) max_bytes_used: >> | 11.0 % >> | c97f853.log: Deviation haddock.base(normal) max_bytes_used: >> | 11.0 % >> | fd47e26.log: Deviation haddock.base(normal) max_bytes_used: >> | 11.2 % >> | bdf0ef0.log: Deviation haddock.base(normal) max_bytes_used: >> | 11.1 % >> | 58ed1cc.log: Deviation haddock.base(normal) max_bytes_used: >> | 11.0 
% >> | 1c1ef82.log: Deviation haddock.base(normal) max_bytes_used: >> | 11.2 % >> | 52188ad.log: Deviation haddock.base(normal) max_bytes_used: >> | 11.0 % >> | 3b9fe0c.log: Deviation haddock.base(normal) max_bytes_used: >> | 11.2 % >> | 6483b8a.log: Deviation haddock.base(normal) max_bytes_used: >> | 11.0 % >> | 9d9a554.log: Deviation haddock.base(normal) max_bytes_used: >> | 11.2 % >> | 028630a.log: Deviation haddock.base(normal) max_bytes_used: >> | 11.2 % >> | aab5937.log: Deviation haddock.base(normal) max_bytes_used: >> | 11.0 % >> | 6c06db1.log: Deviation haddock.base(normal) max_bytes_used: >> | 11.0 % >> | 2989ffd.log: Deviation haddock.base(normal) max_bytes_used: >> | 11.1 % >> | d4d4bef.log: Deviation haddock.base(normal) max_bytes_used: >> | 11.2 % >> | 8df7fea.log: Deviation haddock.base(normal) max_bytes_used: - >> | 0.0 % >> | 3faff73.log: Deviation haddock.base(normal) max_bytes_used: - >> | 0.0 % >> | 02975c9.log: Deviation haddock.base(normal) max_bytes_used: - >> | 0.1 % >> | >> | >> | (If this were a bytes_allocated test I could also show you nice graphs >> | like http://ghcspeed-nomeata.rhcloud.com/timeline/?exe=2&base=2% >> | 2B68&ben=tests%2Falloc%2FT6048&env=1&revs=50&equid=on but I didn?t add >> | the max_bytes_used tests yet.) >> | >> | Interestingly, bytes_allocated did not change a bit! >> | >> | Greetings, >> | Joachim >> | >> | -- >> | Joachim ?nomeata? Breitner >> | mail at joachim-breitner.de ? http://www.joachim-breitner.de/ >> | Jabber: nomeata at joachim-breitner.de ? 
GPG-Key: 0xF0FBF51F >> | Debian Developer: nomeata at debian.org >> > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From mail at joachim-breitner.de Mon Aug 4 11:08:31 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Mon, 04 Aug 2014 13:08:31 +0200 Subject: [commit: ghc] master: Bump haddock.base max_bytes_used (8df7fea) In-Reply-To: <1407150134-sup-8316@sabre> References: <20140801175727.61A0A240EA@ghc.haskell.org> <618BE556AADD624C9C918AA5D5911BEF22086308@DB3PRD3001MB020.064d.mgd.msft.net> <1406928426.6407.2.camel@joachim-breitner.de> <618BE556AADD624C9C918AA5D5911BEF2208890B@DB3PRD3001MB020.064d.mgd.msft.net> <1407150134-sup-8316@sabre> Message-ID: <1407150511.1818.5.camel@joachim-breitner.de> Hi, Am Montag, den 04.08.2014, 12:02 +0100 schrieb Edward Z. Yang: > Yes, on my box, this test is now failing (because the stat is too good): > > Expected haddock.base(normal) max_bytes_used: 127954488 +/-10% > Lower bound haddock.base(normal) max_bytes_used: 115159039 > Upper bound haddock.base(normal) max_bytes_used: 140749937 > Actual haddock.base(normal) max_bytes_used: 113167424 > Deviation haddock.base(normal) max_bytes_used: -11.6 % ugh. What are your compilation settings? Plain "validate"? Looks like the ghcspeed instance settings still don't quite match what validate does... But I don't see anything in mk/validate-settings.mk which would yield different results than echo 'GhcLibHcOpts += -O -dcore-lint' >> mk/build.mk echo 'GhcStage2HcOpts += -O -dcore-lint' >> mk/build.mk I'm starting a plain validate run on that machine, to see if it is compilation settings or some other variable. Greetings, Joachim -- Joachim "nomeata" Breitner mail at joachim-breitner.de • http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de •
GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org From simonpj at microsoft.com Mon Aug 4 11:28:22 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 4 Aug 2014 11:28:22 +0000 Subject: [commit: ghc] master: Bump haddock.base max_bytes_used (8df7fea) In-Reply-To: References: <20140801175727.61A0A240EA@ghc.haskell.org> <618BE556AADD624C9C918AA5D5911BEF22086308@DB3PRD3001MB020.064d.mgd.msft.net> <1406928426.6407.2.camel@joachim-breitner.de> <618BE556AADD624C9C918AA5D5911BEF2208890B@DB3PRD3001MB020.064d.mgd.msft.net> <1407150134-sup-8316@sabre> Message-ID: <618BE556AADD624C9C918AA5D5911BEF22089EEA@DB3PRD3001MB020.064d.mgd.msft.net> OK, so perhaps we should just bump the limit? max_bytes_used is an unreliable measure. | -----Original Message----- | From: mad.one at gmail.com [mailto:mad.one at gmail.com] On Behalf Of Austin | Seipp | Sent: 04 August 2014 12:04 | To: Edward Z. Yang | Cc: Simon Peyton Jones; Joachim Breitner; ghc-devs at haskell.org | Subject: Re: [commit: ghc] master: Bump haddock.base max_bytes_used | (8df7fea) | | This is also happening on Phabricator, which is causing the buildbot to | choke: | | https://phabricator.haskell.org/D112#4 | | On Mon, Aug 4, 2014 at 6:02 AM, Edward Z. Yang wrote: | > Yes, on my box, this test is now failing (because the stat is too | good): | > | > Expected haddock.base(normal) max_bytes_used: 127954488 +/-10% | > Lower bound haddock.base(normal) max_bytes_used: 115159039 | > Upper bound haddock.base(normal) max_bytes_used: 140749937 | > Actual haddock.base(normal) max_bytes_used: 113167424 | > Deviation haddock.base(normal) max_bytes_used: -11.6 % | > | > Cheers, | > Edward | > | > Excerpts from Simon Peyton Jones's message of 2014-08-04 08:21:50 | +0100: | >> Ha.
max_bytes_used is vulnerable to exactly when gc strikes, so I'm | disinclined to get stressed about this. I was mis-reading it as bytes- | allocated. Interestingly it doesn't happen for me. | >> | >> Simon | >> | >> | -----Original Message----- | >> | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of | >> | Joachim Breitner | >> | Sent: 01 August 2014 22:27 | >> | To: ghc-devs at haskell.org | >> | Subject: Re: [commit: ghc] master: Bump haddock.base max_bytes_used | >> | (8df7fea) | >> | | >> | Hi, | >> | | >> | | >> | Am Freitag, den 01.08.2014, 20:28 +0000 schrieb Simon Peyton Jones: | >> | > Urk. It's quite surprising that this particular change would | >> | increase allocation significantly. | >> | > I wonder whether it just pushed it over the threshold. | >> | | >> | I?m confident it was not just that: | >> | | >> | ~/logs $ fgrep 'Deviation haddock.base(normal) max_bytes_used' | $(cd | >> | ghc-master; git log --oneline --first-parent | >> | db19c665ec5055c2193b2174519866045aeff09a..HEAD | cut -d\ -f1| (cd | ..; | >> | while read x ; do test -e $x.log && echo $x.log; done) |tac )|tail - | n | >> | 25 | >> | 6fa6caa.log: Deviation haddock.base(normal) max_bytes_used: | >> | 2.2 % | >> | a0ff1eb.log: Deviation haddock.base(normal) max_bytes_used: | - | >> | 1.0 % | >> | 0be7c2c.log: Deviation haddock.base(normal) max_bytes_used: | >> | 2.2 % | >> | dc7d3c2.log: Deviation haddock.base(normal) max_bytes_used: | >> | 2.2 % | >> | 7381cee.log: Deviation haddock.base(normal) max_bytes_used: | >> | 2.2 % | >> | fe2d807.log: Deviation haddock.base(normal) max_bytes_used: | >> | 2.2 % | >> | bfaa179.log: Deviation haddock.base(normal) max_bytes_used: | - | >> | 0.9 % | >> | 1ae5fa4.log: Deviation haddock.base(normal) max_bytes_used: | >> | 11.0 % | >> | c97f853.log: Deviation haddock.base(normal) max_bytes_used: | >> | 11.0 % | >> | fd47e26.log: Deviation haddock.base(normal) max_bytes_used: | >> | 11.2 % | >> | bdf0ef0.log: Deviation 
haddock.base(normal) max_bytes_used: | >> | 11.1 % | >> | 58ed1cc.log: Deviation haddock.base(normal) max_bytes_used: | >> | 11.0 % | >> | 1c1ef82.log: Deviation haddock.base(normal) max_bytes_used: | >> | 11.2 % | >> | 52188ad.log: Deviation haddock.base(normal) max_bytes_used: | >> | 11.0 % | >> | 3b9fe0c.log: Deviation haddock.base(normal) max_bytes_used: | >> | 11.2 % | >> | 6483b8a.log: Deviation haddock.base(normal) max_bytes_used: | >> | 11.0 % | >> | 9d9a554.log: Deviation haddock.base(normal) max_bytes_used: | >> | 11.2 % | >> | 028630a.log: Deviation haddock.base(normal) max_bytes_used: | >> | 11.2 % | >> | aab5937.log: Deviation haddock.base(normal) max_bytes_used: | >> | 11.0 % | >> | 6c06db1.log: Deviation haddock.base(normal) max_bytes_used: | >> | 11.0 % | >> | 2989ffd.log: Deviation haddock.base(normal) max_bytes_used: | >> | 11.1 % | >> | d4d4bef.log: Deviation haddock.base(normal) max_bytes_used: | >> | 11.2 % | >> | 8df7fea.log: Deviation haddock.base(normal) max_bytes_used: | - | >> | 0.0 % | >> | 3faff73.log: Deviation haddock.base(normal) max_bytes_used: | - | >> | 0.0 % | >> | 02975c9.log: Deviation haddock.base(normal) max_bytes_used: | - | >> | 0.1 % | >> | | >> | | >> | (If this were a bytes_allocated test I could also show you nice | graphs | >> | like http://ghcspeed-nomeata.rhcloud.com/timeline/?exe=2&base=2% | >> | 2B68&ben=tests%2Falloc%2FT6048&env=1&revs=50&equid=on but I didn?t | add | >> | the max_bytes_used tests yet.) | >> | | >> | Interestingly, bytes_allocated did not change a bit! | >> | | >> | Greetings, | >> | Joachim | >> | | >> | -- | >> | Joachim ?nomeata? Breitner | >> | mail at joachim-breitner.de ? http://www.joachim-breitner.de/ | >> | Jabber: nomeata at joachim-breitner.de ? 
GPG-Key: 0xF0FBF51F | >> | Debian Developer: nomeata at debian.org | >> | > _______________________________________________ | > ghc-devs mailing list | > ghc-devs at haskell.org | > http://www.haskell.org/mailman/listinfo/ghc-devs | | | | -- | Regards, | | Austin Seipp, Haskell Consultant | Well-Typed LLP, http://www.well-typed.com/ From austin at well-typed.com Mon Aug 4 11:47:35 2014 From: austin at well-typed.com (Austin Seipp) Date: Mon, 4 Aug 2014 06:47:35 -0500 Subject: I'm going to disable DPH until someone starts maintaining it Message-ID: Everyone, I'm still working on the AMP, and of course, things aren't going as planned. Why is that? I'm attempting to update `vector` to the latest version from GitHub, as it has some fixes we need for AMP. Mostly, it has some instances we need. Unfortunately, this is basically impossible because dph is locked into a vector fork of ours that we have been maintaining for *months*, and doesn't work with the latest vector upstream (0.11) due to changes in the stream representation. This means I can either: 1) Fix everything in DPH to work with vector 0.11, which is probably going to take a lot of work. 2) Merely fix our fork of vector and let things continue working. This is much easier than #1. Now, you might say that #2 is clearly a preferable solution, and it's very easy - so just do that, Austin! But I don't want to do it. You could say this is the straw that has broken the camel's back. Why? Because I'm afraid I just don't have any more patience for DPH, I'm tired of fixing it, and it takes up a lot of extra time to build, and time to maintain. In fact I'm the only person who's committed to it in *months*, and that has only been to fix breakage. The hackage packages are out of date and out of sync with what's in the repository (I can't upload them, nor can anyone else besides Ben I believe). So - why are we still building it, exactly? We had a conversation about this months ago.
The concern was that things would break and we don't want it to fall out of sync. We're at that point right now - things are breaking, it's out of sync, and it's a pain to keep fixing it, and the actual *benefits* we get from doing so are completely unclear to me. It basically just seems like extra work for nothing, honestly. Unless someone speaks[1] up *very* soon, I'm going to disable DPH during ./validate and the regular build. It will be possible to build it with a '--dph' flag (the dual of the current '--no-dph' flag), although it will be broken very soon with these incoming changes. Providing someone starts fixing it, I'm completely, 100% open to re-enabling it in ./validate by default. But I'm personally tired of fixing it. I'm CC'ing Manuel, Geoff & Ben for their inputs. [1] And by 'speak up', I mean I'd like to see someone actively step forward and address my concerns above in a decisive manner. With patches. -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From simonpj at microsoft.com Mon Aug 4 11:55:10 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 4 Aug 2014 11:55:10 +0000 Subject: I'm going to disable DPH until someone starts maintaining it In-Reply-To: References: Message-ID: <618BE556AADD624C9C918AA5D5911BEF22089FB8@DB3PRD3001MB020.064d.mgd.msft.net> Adding Ben, Roman, Gabi Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of | Austin Seipp | Sent: 04 August 2014 12:48 | To: ghc-devs at haskell.org | Cc: Manuel Chakravarty; Geoffrey Mainland | Subject: I'm going to disable DPH until someone starts maintaining it | | Everyone, | | I'm still working on the AMP, and of course, things aren't going as | planned. Why is that? | | I'm attempting to update `vector` to the latest version from GitHub, as | it has some fixes we need for AMP. Mostly, it has some instances we | need.
| | Unfortunately, this is basically impossible because dph is locked into | a vector fork of ours that we have been maintaining for *months*, and | doesn't work with the latest vector upstream (0.11) due to changes in | the stream representation. This means I can either: | | 1) Fix everything in DPH to work with vector 0.11, which is probably | going to take a lot of work. | 2) Merely fix our fork of vector and let things continue working. | This is much easier than #1. | | Now, you might say that #2 is clearly a preferable solution, and it's | very easy - so just do that, Austin! | | But I don't want to do it. You could say this is the straw that has | broken the camel's back. | | Why? Because I'm afraid I just don't have any more patience for DPH, | I'm tired of fixing it, and it takes up a lot of extra time to build, | and time to maintain. | | In fact I'm the only person who's committed to it in *months*, and that | has only been to fix breakage. The hackage packages are out of date and | sync with what's in the repository (I can't upload them, nor can anyone | else besides Ben I believe). | | So - why are we still building it, exactly? | | We had a conversation about this months ago. The concern was that | things would break and we don't want it to fall out of sync. We're at | that point right now - things are breaking, it's out of sync, and it's | a pain to keep fixing it, and the actual *benefits* we get from doing | so are completely unclear to me. It basically just seems like extra | work for nothing, honestly. | | Unless someone speaks[1] up *very* soon, I'm going to disable DPH | during ./validate and the regular build. It will be possible to build | it with a '--dph' flag (the dual of the current '--no-dph' flag), | although it will be broken very soon with these incoming changes. | | Providing someone starts fixing it, I'm completely, 100% open to re- | enabling it in ./validate by default. But I'm personally tired of | fixing it. 
| | I'm CC'ing Manuel, Geoff & Ben for their inputs. | | [1] And by 'speak up', I mean I'd like to see someone actively step | forward address my concerns above in a decisive manner. With patches. | | -- | Regards, | | Austin Seipp, Haskell Consultant | Well-Typed LLP, http://www.well-typed.com/ | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | http://www.haskell.org/mailman/listinfo/ghc-devs From mail at joachim-breitner.de Mon Aug 4 12:13:48 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Mon, 04 Aug 2014 14:13:48 +0200 Subject: [commit: ghc] master: Bump haddock.base max_bytes_used (8df7fea) In-Reply-To: <1407150511.1818.5.camel@joachim-breitner.de> References: <20140801175727.61A0A240EA@ghc.haskell.org> <618BE556AADD624C9C918AA5D5911BEF22086308@DB3PRD3001MB020.064d.mgd.msft.net> <1406928426.6407.2.camel@joachim-breitner.de> <618BE556AADD624C9C918AA5D5911BEF2208890B@DB3PRD3001MB020.064d.mgd.msft.net> <1407150134-sup-8316@sabre> <1407150511.1818.5.camel@joachim-breitner.de> Message-ID: <1407154428.1818.7.camel@joachim-breitner.de> Hi, Am Montag, den 04.08.2014, 13:08 +0200 schrieb Joachim Breitner: > Am Montag, den 04.08.2014, 12:02 +0100 schrieb Edward Z.Yang: > > Yes, on my box, this test is now failing (because the stat is too good): > > > > Expected haddock.base(normal) max_bytes_used: 127954488 +/-10% > > Lower bound haddock.base(normal) max_bytes_used: 115159039 > > Upper bound haddock.base(normal) max_bytes_used: 140749937 > > Actual haddock.base(normal) max_bytes_used: 113167424 > > Deviation haddock.base(normal) max_bytes_used: -11.6 % > > ugh. > > What are your compilation settings? Plain "validate"? > > Looks like the ghcspeed instance settings still don?t quite match what > validate does... 
> > But I don't see anything in > mk/validate-settings.mk > which would yield different results than > echo 'GhcLibHcOpts += -O -dcore-lint' >> mk/build.mk > echo 'GhcStage2HcOpts += -O -dcore-lint' >> mk/build.mk > > I'm starting a plain validate run on that machine, to see if it is > compilation settings or some other variable. validate goes through without a problem. So it seems to be dependent on other things. Are these very flaky measures (max_bytes_used) at all useful? So far, I have only seen friction due to them, and any real problem would likely be caught by either bytes_allocated or nofib measurements (I hope). Maybe we should simply remove them from the test suite, and stop worrying? Greetings, Joachim -- Joachim "nomeata" Breitner mail at joachim-breitner.de • http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de • GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org From benl at ouroborus.net Mon Aug 4 12:18:11 2014 From: benl at ouroborus.net (Ben Lippmeier) Date: Mon, 4 Aug 2014 22:18:11 +1000 Subject: I'm going to disable DPH until someone starts maintaining it In-Reply-To: References: Message-ID: <2F09BA4C-DADD-490F-9C29-D74BBD449A85@ouroborus.net> On 4 Aug 2014, at 21:47 , Austin Seipp wrote: > Why? Because I'm afraid I just don't have any more patience for DPH, > I'm tired of fixing it, and it takes up a lot of extra time to build, > and time to maintain. I'm not going to argue against cutting it loose. > So - why are we still building it, exactly? It can be a good stress test for the simplifier, especially the SpecConstr transform. The fact that it takes so long to build is part of the reason it's a good stress test.
> [1] And by 'speak up', I mean I'd like to see someone actively step > forward and address my concerns above in a decisive manner. With patches. I thought that in the original conversation we agreed that if the DPH code became too much of a burden it was fine to switch it off and let it become unmaintained. I don't have time to maintain it anymore myself. The original DPH project has fractured into a few different research streams, none of which work directly with the implementation in GHC, or with the DPH libraries that are bundled with the GHC build. The short of it is that the array fusion mechanism implemented in DPH (based on stream fusion) is inadequate for the task. A few people are working on replacement fusion systems that aim to solve this problem, but merging this work back into DPH will entail an almost complete rewrite of the backend libraries. If the existing code has become a maintenance burden then it's fine to switch it off. Sorry for the trouble. Ben. From svenpanne at gmail.com Mon Aug 4 12:52:23 2014 From: svenpanne at gmail.com (Sven Panne) Date: Mon, 4 Aug 2014 14:52:23 +0200 Subject: Release building for Windows In-Reply-To: References: Message-ID: 2014-08-02 4:49 GMT+02:00 Mark Lentczner : > [...] * All Windows HP 2014.2.0.0 packages have been built without > --enabled-split-objs, in deference to the GHC 7.8 FAQ [...] Do you have an URL for this FAQ? I can't find it, and I can't remember what's wrong with --enable-split-objs. :-( What is the impact for people using large libraries (like OpenGL/OpenGLRaw/...) where you often use only a small part? Will you get a huge executable then? IIRC that's at least the case for Linux. Cheers, S.
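[Editor's note: the ±10% stat-test arithmetic discussed in the haddock.base thread above can be reproduced in a few lines. The sketch below is illustrative only, written in plain Python rather than the code of GHC's actual testsuite driver; the function name stat_check is hypothetical.]

```python
# Recompute the haddock.base max_bytes_used figures quoted in the thread:
#   Expected 127954488 +/-10%, Actual 113167424, Deviation -11.6 %.
# Illustrative sketch, not the GHC testsuite driver's real code.

def stat_check(expected, tolerance_pct, actual):
    # Bounds are the expected value plus/minus the tolerance percentage.
    lower = round(expected * (1 - tolerance_pct / 100))
    upper = round(expected * (1 + tolerance_pct / 100))
    # Deviation is the relative difference from the expected value.
    deviation = round(100 * (actual - expected) / expected, 1)
    passed = lower <= actual <= upper
    return lower, upper, deviation, passed

lower, upper, dev, ok = stat_check(127954488, 10, 113167424)
print(lower, upper, dev, ok)
# → 115159039 140749937 -11.6 False
```

This matches the reported bounds and shows why the test fails even though the stat improved: the actual value fell below the lower bound, so a "too good" result is flagged just like a regression.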
From austin at well-typed.com Mon Aug 4 12:57:38 2014 From: austin at well-typed.com (Austin Seipp) Date: Mon, 4 Aug 2014 07:57:38 -0500 Subject: Phabricator updates Message-ID: Hello *, I've spent the morning re-jiggering our Phabricator documentation: https://ghc.haskell.org/trac/ghc/wiki/Phabricator It now includes: - More screenshots! - Coverage of core applications, including Owners, and better coverage of Herald - Coverage of the new remarkup syntax - Better tips - Linking issues in Phabricator to Trac issues I particularly think people will find 'Owners' very useful, in combination with Herald. I have already organized a lot of the GHC source tree into 'Owner packages' and assigned owners. If you're working on the compiler, please go tweak those and add yourself as an owner of some part of the compiler! In particular the last one is the one you'll want to note the most. When you run `arc diff` now, you can associate a differential revision with a ticket. Here's the documentation with a nice big screenshot: https://ghc.haskell.org/trac/ghc/wiki/Phabricator#Linkingtracticketsandwikisyntax The TL;DR is when you run `arc diff`, just fill out the new "GHC Trac Issues" field and it'll be linked. As a bonus, it'll automatically show up in commit messages, and be hyperlinked in Phabricator appropriately. Note: Phabricator does not yet comment on Trac still, sorry. I didn't get around to it this week. Please let me know if anything is confusing. I'm sure I need to expand on stuff; in particular I still need to do another pass, adding some screenshots for Audit, and some of the "Philosophy of Phabricator" on how you may want to submit reviews and think about organizing your work. Thanks. 
-- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From the.dead.shall.rise at gmail.com Mon Aug 4 12:59:26 2014 From: the.dead.shall.rise at gmail.com (Mikhail Glushenkov) Date: Mon, 4 Aug 2014 14:59:26 +0200 Subject: Release building for Windows In-Reply-To: References: Message-ID: Hi, On 4 August 2014 14:52, Sven Panne wrote: > Do you have an URL for this FAQ? I can't find it, and I can't remember > what's wrong with --enable-split-objs. :-( What is the impact for > people using large libraries (like OpenGL/OpenGLRaw/...) where you > often use only a small part? Will you get a huge executable then? IIRC > that's at least the case for Linux. https://ghc.haskell.org/trac/ghc/wiki/GHC-7.8-FAQ One of the problems is that split-objs is extremely slow, especially on Windows. I had to disable split-objs for OpenGL-related libraries when building the HP installer in the past because of this. Randy also said that libraries built with split-objs don't work well in ghci on Windows x64. From ezyang at mit.edu Mon Aug 4 13:02:42 2014 From: ezyang at mit.edu (Edward Z. Yang) Date: Mon, 04 Aug 2014 14:02:42 +0100 Subject: [commit: ghc] master: Bump haddock.base max_bytes_used (8df7fea) In-Reply-To: <1407150511.1818.5.camel@joachim-breitner.de> References: <20140801175727.61A0A240EA@ghc.haskell.org> <618BE556AADD624C9C918AA5D5911BEF22086308@DB3PRD3001MB020.064d.mgd.msft.net> <1406928426.6407.2.camel@joachim-breitner.de> <618BE556AADD624C9C918AA5D5911BEF2208890B@DB3PRD3001MB020.064d.mgd.msft.net> <1407150134-sup-8316@sabre> <1407150511.1818.5.camel@joachim-breitner.de> Message-ID: <1407157317-sup-4049@sabre> Yes, plain validate. 
Cheers, Edward Excerpts from Joachim Breitner's message of 2014-08-04 12:08:31 +0100: > Hi, > > Am Montag, den 04.08.2014, 12:02 +0100 schrieb Edward Z.Yang: > > Yes, on my box, this test is now failing (because the stat is too good): > > > > Expected haddock.base(normal) max_bytes_used: 127954488 +/-10% > > Lower bound haddock.base(normal) max_bytes_used: 115159039 > > Upper bound haddock.base(normal) max_bytes_used: 140749937 > > Actual haddock.base(normal) max_bytes_used: 113167424 > > Deviation haddock.base(normal) max_bytes_used: -11.6 % > > ugh. > > What are your compilation settings? Plain "validate"? > > Looks like the ghcspeed instance settings still don?t quite match what > validate does... > > But I don?t see anything in > mk/validate-settings.mk > which would yield different results than > echo 'GhcLibHcOpts += -O -dcore-lint' >> mk/build.mk > echo 'GhcStage2HcOpts += -O -dcore-lint' >> mk/build.mk > > I?m starting a plain validate run on that machine, to see if it is > compilation settings or some other variable. > > Greetings, > Joachim > From mainland at apeiron.net Mon Aug 4 13:49:38 2014 From: mainland at apeiron.net (Geoffrey Mainland) Date: Mon, 04 Aug 2014 09:49:38 -0400 Subject: I'm going to disable DPH until someone starts maintaining it In-Reply-To: <2F09BA4C-DADD-490F-9C29-D74BBD449A85@ouroborus.net> References: <2F09BA4C-DADD-490F-9C29-D74BBD449A85@ouroborus.net> Message-ID: <53DF8F72.7080105@apeiron.net> I have patches for DPH that let it work with vector 0.11 as of a few months ago. I would be happy to submit them via phabricator if that is agreeable (we have to coordinate with the import of vector 0.11 though...I can instead leave them in a wip branch for Austin to merge as he sees fit). I am also willing to commit some time to keep DPH at least working in its current state. Geoff On 8/4/14 8:18 AM, Ben Lippmeier wrote: > On 4 Aug 2014, at 21:47 , Austin Seipp wrote: > >> Why? 
Because I'm afraid I just don't have any more patience for DPH, >> I'm tired of fixing it, and it takes up a lot of extra time to build, >> and time to maintain. > I'm not going to argue against cutting it lose. > > >> So - why are we still building it, exactly? > It can be a good stress test for the simplifier, especially the SpecConstr transform. The fact that it takes so long to build is part of the reason it's a good stress test. > > >> [1] And by 'speak up', I mean I'd like to see someone actively step >> forward address my concerns above in a decisive manner. With patches. > I thought that in the original conversation we agreed that if the DPH code became too much of a burden it was fine to switch it off and let it become unmaintained. I don't have time to maintain it anymore myself. > > The original DPH project has fractured into a few different research streams, none of which work directly with the implementation in GHC, or with the DPH libraries that are bundled with the GHC build. > > The short of it is that the array fusion mechanism implemented in DPH (based on stream fusion) is inadequate for the task. A few people are working on replacement fusion systems that aim to solve this problem, but merging this work back into DPH will entail an almost complete rewrite of the backend libraries. If it the existing code has become a maintenance burden then it's fine to switch it off. > > Sorry for the trouble. > Ben. > From svenpanne at gmail.com Mon Aug 4 13:50:56 2014 From: svenpanne at gmail.com (Sven Panne) Date: Mon, 4 Aug 2014 15:50:56 +0200 Subject: Release building for Windows In-Reply-To: References: Message-ID: 2014-08-04 14:59 GMT+02:00 Mikhail Glushenkov : > https://ghc.haskell.org/trac/ghc/wiki/GHC-7.8-FAQ Hmmm, this isn't very specific, it just says that there are probably bugs, but that's true for almost all code. :-) Are there any concrete issues with --enable-split-objs? 
> One of the problems is that split-objs is extremely slow, especially > on Windows. I had to disable split-objs for OpenGL-related libraries > when building the HP installer in the past because of this. I think it's perfectly fine if the the compilation of the library itself takes ages if it pays off later: You compile the library once, but link against it multiple times. Or do the link times against e.g. OpenGL stuff suffer? My point is: Do we make the right trade-off here? A quick search brought up e.g. https://github.com/gentoo-haskell/gentoo-haskell/issues/169 which seems to be a request to split everything. > Randy also said that libraries built with split-objs don't work well > in ghci on Windows x64. Is there an issue for this? From austin at well-typed.com Mon Aug 4 13:57:41 2014 From: austin at well-typed.com (Austin Seipp) Date: Mon, 4 Aug 2014 08:57:41 -0500 Subject: Release building for Windows In-Reply-To: References: Message-ID: Mark, Randy, Sorry for the delayed reply! On Fri, Aug 1, 2014 at 9:49 PM, Mark Lentczner wrote: > Randy Polen, undertook porting the new build of Haskell Platform to Windows. > He did a great job... but as this is first time stepping up to such a big > release, he has some questions about GHC and Windows, and the choices he had > to make. He asked me to forward these to this list, as he's not a member. > He's cc'd so you can reply to all and include him... or I can forward as > needed. > > From Randy: > ------------------ > I am building the Haskell Platform 2014.2.0.0 on the Windows side. Your > advice > would be very helpful to make sure the HP 2014 for Windows is as good as > possible. 
> > There were some issues I worked-around, plus some features that seem to not > be > available in this particular GHC (7.8.3) on the 32-bit and 64-bit Windows > platforms, and I would like to confirm that HP 2014.2.0.0 will be shipping > something sensible and as expected for the Windows environment, noting > things > which are supported on other environments but not on Windows. > > * GHC 7.8.3 on Windows does not support building Haskell into shared > libraries, > (GHC ticket #8228) so all packages in HP 2014.2.0.0 for Windows have been > built > without --enable-shared That's correct. > * GHC 7.8.3 on Windows does not currently support LLVM (GHC ticket #7143) Correct. > * All Windows HP 2014.2.0.0 packages have been built without > --enabled-split-objs, in deference to the GHC 7.8 FAQ No, this shouldn't be needed. split-objs should work just fine on Windows; the FAQ was referencing the fact that *users* using split-objs in their Cabal configurations will probably get odd behavior (we don't encourage split-objs outside of the packages GHC ships). Sometimes bugs arise but these generally aren't high priority for arbitrary user code. (It will also hurt users since it will dramatically increase link time - it should only be used for GHC libraries!) If you have issues here, please let me know; it's a bug. > * Extra python, etc. bits included in the GHC 7.8.3 bindist for 64-bit > Windows > (GHC issue #9014) are not installed with Windows HP 2014.2.0.0. Is eliding > them from the HP 2014.2.0.0 64-bit Windows installation safe and correct > (i.e., are they truely not required)? Hmmmm, that seems like a total oversight on my part! Yes, deleting them should be fine. Upon review, I think they're just artifacts of our 64 bit MinGW distribution. 
> * Missing src/html in GHC packages were worked around by replacing the > entire > GHC package doc tree of html files with the contents of the "Standard > Libraries" tarball (but not for the two packages which are not built for > Windows, terminfo and unix). Is this valid to do? Might any issues arise? > * ref: http://www.haskell.org/ghc/docs/latest/libraries.html.tar.bz2 This should be just fine. > Thanks for any advice on these. I do want to make the Windows HP > 2014.2.0.0 be as good as it can be. > > Randy > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From austin at well-typed.com Mon Aug 4 13:59:54 2014 From: austin at well-typed.com (Austin Seipp) Date: Mon, 4 Aug 2014 08:59:54 -0500 Subject: Release building for Windows In-Reply-To: References: Message-ID: On Mon, Aug 4, 2014 at 8:50 AM, Sven Panne wrote: > 2014-08-04 14:59 GMT+02:00 Mikhail Glushenkov : >> https://ghc.haskell.org/trac/ghc/wiki/GHC-7.8-FAQ > > Hmmm, this isn't very specific, it just says that there are probably > bugs, but that's true for almost all code. :-) Are there any concrete > issues with --enable-split-objs? Sorry for the confusion; I just meant you *should not* enable split-objs in your cabal configuration - GHC uses it for its libraries, but in general users don't want it for arbitrary code (bugs, huge linking time and memory usage, etc). >> One of the problems is that split-objs is extremely slow, especially >> on Windows. I had to disable split-objs for OpenGL-related libraries >> when building the HP installer in the past because of this. > > I think it's perfectly fine if the compilation of the library > itself takes ages if it pays off later: You compile the library once, > but link against it multiple times. Or do the link times against e.g. > OpenGL stuff suffer?
> My point is: Do we make the right trade-off here? > A quick search brought up e.g. > https://github.com/gentoo-haskell/gentoo-haskell/issues/169 which > seems to be a request to split everything. > >> Randy also said that libraries built with split-objs don't work well >> in ghci on Windows x64. > > Is there an issue for this? Yes, there should be a bug filed for this if there isn't one already. But problems with the GHC build itself are really more of a priority than arbitrary user-facing code. Still, a ticket would be good. > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From austin at well-typed.com Mon Aug 4 14:03:21 2014 From: austin at well-typed.com (Austin Seipp) Date: Mon, 4 Aug 2014 09:03:21 -0500 Subject: I'm going to disable DPH until someone starts maintaining it In-Reply-To: <2F09BA4C-DADD-490F-9C29-D74BBD449A85@ouroborus.net> References: <2F09BA4C-DADD-490F-9C29-D74BBD449A85@ouroborus.net> Message-ID: On Mon, Aug 4, 2014 at 7:18 AM, Ben Lippmeier wrote: > > On 4 Aug 2014, at 21:47 , Austin Seipp wrote: > >> Why? Because I'm afraid I just don't have any more patience for DPH, >> I'm tired of fixing it, and it takes up a lot of extra time to build, >> and time to maintain. > > I'm not going to argue against cutting it loose. > > >> So - why are we still building it, exactly? > > It can be a good stress test for the simplifier, especially the SpecConstr transform. The fact that it takes so long to build is part of the reason it's a good stress test. That's definitely fair. There's also the problem that SpecConstr has seen regressions lately causing explosions in the amount of time needed to compile, which might be making this more problematic (I can't remember the ticket # off hand).
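For readers who haven't met SpecConstr: a tiny, hypothetical example (not from DPH or the GHC sources) of the shape of code it targets -- a recursive loop that rebuilds a constructor argument on every iteration, which GHC at -O2 can specialise so the repeated boxing and pattern matching disappear:

```haskell
-- Sketch only: the kind of loop SpecConstr specialises. The recursive
-- call re-wraps its argument in 'Just' each step, so a specialised
-- 'go' for the 'Just' case can pass the unboxed Int directly.
module Main where

go :: Maybe Int -> Int -> Int
go Nothing  acc = acc
go (Just n) acc
  | n <= 0    = acc
  | otherwise = go (Just (n - 1)) (acc + n)  -- constructor rebuilt every iteration

main :: IO ()
main = print (go (Just 100) 0)  -- sums 100 + 99 + ... + 1
```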
> >> [1] And by 'speak up', I mean I'd like to see someone actively step >> forward and address my concerns above in a decisive manner. With patches. > I thought that in the original conversation we agreed that if the DPH code became too much of a burden it was fine to switch it off and let it become unmaintained. I don't have time to maintain it anymore myself. Oh, I misremembered. It seems we're on the same page then :) > The original DPH project has fractured into a few different research streams, none of which work directly with the implementation in GHC, or with the DPH libraries that are bundled with the GHC build. > > The short of it is that the array fusion mechanism implemented in DPH (based on stream fusion) is inadequate for the task. A few people are working on replacement fusion systems that aim to solve this problem, but merging this work back into DPH will entail an almost complete rewrite of the backend libraries. If the existing code has become a maintenance burden then it's fine to switch it off. I see, thanks. Is there any current roadmap on what might be done? > Sorry for the trouble. > Ben. > No problem. I suppose after dealing with the frustration of tracking a single bug for a few days, this is just an annoyance that tipped me. :) -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From austin at well-typed.com Mon Aug 4 14:07:47 2014 From: austin at well-typed.com (Austin Seipp) Date: Mon, 4 Aug 2014 09:07:47 -0500 Subject: I'm going to disable DPH until someone starts maintaining it In-Reply-To: <53DF8F72.7080105@apeiron.net> References: <2F09BA4C-DADD-490F-9C29-D74BBD449A85@ouroborus.net> <53DF8F72.7080105@apeiron.net> Message-ID: On Mon, Aug 4, 2014 at 8:49 AM, Geoffrey Mainland wrote: > I have patches for DPH that let it work with vector 0.11 as of a few > months ago.
I would be happy to submit them via phabricator if that is > agreeable (we have to coordinate with the import of vector 0.11 > though...I can instead leave them in a wip branch for Austin to merge as > he sees fit). I am also willing to commit some time to keep DPH at least > working in its current state. It would be quite nice if you could submit patches to get it to work! Thanks so much. As we've moved to submodules, having our own forks is becoming less palatable; we'd like to start tracking upstream closely, and having people submit changes there first and foremost. This creates a bit of a lag time between changes, but I think this is acceptable (and most of our maintainers are quite responsive to GHC needs!) It's also great you're willing to help maintain DPH a bit - but based on what Ben said, it seems like a significant rewrite will happen eventually. Geoff, here's my proposal: 1) I'll disable DPH for right now, so it won't pop up during ./validate. This will probably happen today. 2) We can coordinate the update of vector to 0.11, making it track the official master. (Perhaps an email thread or even Skype would work) 3) We can fix DPH at the same time. 4) Afterwards, we can re-enable it for ./validate If you submit Phabricator patches, that would be fantastic - we can add the DPH repository to Phabricator with little issue. In the long run, I think we should sync up with Ben and perhaps Simon & Co to see what will happen long-term for the DPH libraries. > Geoff > > On 8/4/14 8:18 AM, Ben Lippmeier wrote: >> On 4 Aug 2014, at 21:47 , Austin Seipp wrote: >> >>> Why? Because I'm afraid I just don't have any more patience for DPH, >>> I'm tired of fixing it, and it takes up a lot of extra time to build, >>> and time to maintain. >> I'm not going to argue against cutting it loose. >> >> >>> So - why are we still building it, exactly? >> It can be a good stress test for the simplifier, especially the SpecConstr transform.
The fact that it takes so long to build is part of the reason it's a good stress test. >> >> >>> [1] And by 'speak up', I mean I'd like to see someone actively step >>> forward and address my concerns above in a decisive manner. With patches. >> I thought that in the original conversation we agreed that if the DPH code became too much of a burden it was fine to switch it off and let it become unmaintained. I don't have time to maintain it anymore myself. >> >> The original DPH project has fractured into a few different research streams, none of which work directly with the implementation in GHC, or with the DPH libraries that are bundled with the GHC build. >> >> The short of it is that the array fusion mechanism implemented in DPH (based on stream fusion) is inadequate for the task. A few people are working on replacement fusion systems that aim to solve this problem, but merging this work back into DPH will entail an almost complete rewrite of the backend libraries. If the existing code has become a maintenance burden then it's fine to switch it off. >> >> Sorry for the trouble. >> Ben. >> > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From simonpj at microsoft.com Mon Aug 4 14:10:55 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 4 Aug 2014 14:10:55 +0000 Subject: Question about BangPatterns semantics/documentation In-Reply-To: <1407076873.1744.1.camel@joachim-breitner.de> References: <877g2px6kt.fsf@gmail.com> <1407076873.1744.1.camel@joachim-breitner.de> Message-ID: <618BE556AADD624C9C918AA5D5911BEF2208A48F@DB3PRD3001MB020.064d.mgd.msft.net> Yes, Joachim is dead right. In Haskell (case f x of y -> blah) really is equivalent to (let y = f x in blah). Herbert, if you think a reminder of this point, in the documentation or user manual, would be helpful, please suggest what and where.
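To make that equivalence concrete, here is a small self-contained sketch (not from the thread): a plain variable pattern in 'case' never forces the scrutinee, while a bang pattern in 'let' does.

```haskell
{-# LANGUAGE BangPatterns #-}
-- Sketch of the semantics under discussion: 'case e of y -> b' is as
-- lazy as 'let y = e in b'; only a bang (or a constructor pattern)
-- forces evaluation.
module Main where

import Control.Exception (SomeException, evaluate, try)

lazyCase :: Int
lazyCase = case (undefined :: Int) of y -> 42  -- y never demanded; fine

strictLet :: Int
strictLet = let !y = (undefined :: Int) in 42  -- !y forces undefined

main :: IO ()
main = do
  print lazyCase  -- prints 42
  r <- try (evaluate strictLet) :: IO (Either SomeException Int)
  putStrLn (case r of
              Left _  -> "strictLet threw (bang forced undefined)"
              Right _ -> "strictLet returned")
```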
Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of | Joachim Breitner | Sent: 03 August 2014 15:41 | To: ghc-devs at haskell.org | Subject: Re: Question about BangPatterns semantics/documentation | | Hi Herbert, | | On Sunday, 03.08.2014 at 11:31 +0200, Herbert Valerio Riedel wrote: | > However, the following two functions are not equivalent after | > compilation to Core: | > | > g, h :: (Int -> Int) -> Int -> () | > g f x = let !y = f x in () | > h f x = case f x of y -> () | > | > In fact, compilation results in | > | > g = \ (f_asi :: Int -> Int) | > (x_asj :: Int) -> | > case f_asi x_asj of _ [Occ=Dead] { I# ipv_sKS -> () } | > | > h = \ _ [Occ=Dead] _ [Occ=Dead] -> () | > | > Is the documentation inaccurate/incomplete/I-missed-something or is | > the implementation to blame? | | I think that in Haskell (which is not Core!), a "case" does not imply | evaluation -- only if the patterns require it. So the example in the | docs is correct (case e of [x,y] -> b requires evaluation of e), but | your example is simply optimized away. | | haskell.org is down, so I can't check if the report has anything to say | about that. | | Greetings, | Joachim | | -- | Joachim "nomeata" Breitner | mail at joachim-breitner.de • http://www.joachim-breitner.de/ | Jabber: nomeata at joachim-breitner.de • GPG-Key: 0xF0FBF51F | Debian Developer: nomeata at debian.org From the.dead.shall.rise at gmail.com Mon Aug 4 14:26:39 2014 From: the.dead.shall.rise at gmail.com (Mikhail Glushenkov) Date: Mon, 4 Aug 2014 16:26:39 +0200 Subject: Release building for Windows In-Reply-To: References: Message-ID: Hi, On 4 August 2014 15:50, Sven Panne wrote: > > I think it's perfectly fine if the compilation of the library > itself takes ages if it pays off later: You compile the library once, > but link against it multiple times.
Building GL* with -split-objs on Windows is like watching paint dry (and it has to be done multiple times for shared and profiling variants). But I agree in general. Current HP installer uses -split-objs for all libraries except GL*. From simonpj at microsoft.com Mon Aug 4 15:00:53 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 4 Aug 2014 15:00:53 +0000 Subject: Core libraries bug tracker Message-ID: <618BE556AADD624C9C918AA5D5911BEF2208A6B5@DB3PRD3001MB020.064d.mgd.msft.net> Edward, and core library colleagues, This came up in our weekly GHC discussion * Does the Core Libraries Committee have a Trac? Surely, surely you should, else you'll lose track of issues. * Would you like to use GHC's Trac for the purpose? Advantages: o People often report core library issues on GHC's Trac anyway, so telling them to move it somewhere else just creates busy-work --- and maybe they won't bother, which leaves it in our pile. o Several of these libraries are closely coupled to GHC, and you might want to milestone some library tickets with an upcoming GHC release * If so we'd need a canonical way to identify tickets as CLC issues. Perhaps by making "core-libraries" the owner? Or perhaps the "Component" field? * Some core libraries (e.g. random) have a maintainer that isn't the committee. So that maintainer should be the owner of the ticket. Or the CLC might like a particular member to own a ticket. Either way, that suggests using the "Component" field to identify CLC tickets * Or maybe you want a Trac of your own? The underlying issue from our end is that we'd like a way to * filter out tickets that you are dealing with * and be sure you are dealing with them * without losing track of milestones... i.e. when building a release we want to be sure that important tickets are indeed fixed before releasing Simon -------------- next part -------------- An HTML attachment was scrubbed...
URL: From fuuzetsu at fuuzetsu.co.uk Mon Aug 4 16:05:04 2014 From: fuuzetsu at fuuzetsu.co.uk (Mateusz Kowalczyk) Date: Mon, 04 Aug 2014 18:05:04 +0200 Subject: [commit: ghc] master: Bump haddock.base max_bytes_used (8df7fea) In-Reply-To: <1407150511.1818.5.camel@joachim-breitner.de> References: <20140801175727.61A0A240EA@ghc.haskell.org> <618BE556AADD624C9C918AA5D5911BEF22086308@DB3PRD3001MB020.064d.mgd.msft.net> <1406928426.6407.2.camel@joachim-breitner.de> <618BE556AADD624C9C918AA5D5911BEF2208890B@DB3PRD3001MB020.064d.mgd.msft.net> <1407150134-sup-8316@sabre> <1407150511.1818.5.camel@joachim-breitner.de> Message-ID: <53DFAF30.2070500@fuuzetsu.co.uk> On 08/04/2014 01:08 PM, Joachim Breitner wrote: > Hi, > > On Monday, 04.08.2014 at 12:02 +0100, Edward Z. Yang wrote: >> Yes, on my box, this test is now failing (because the stat is too good): >> >> Expected haddock.base(normal) max_bytes_used: 127954488 +/-10% >> Lower bound haddock.base(normal) max_bytes_used: 115159039 >> Upper bound haddock.base(normal) max_bytes_used: 140749937 >> Actual haddock.base(normal) max_bytes_used: 113167424 >> Deviation haddock.base(normal) max_bytes_used: -11.6 % > > ugh. > > What are your compilation settings? Plain "validate"? > > Looks like the ghcspeed instance settings still don't quite match what > validate does... > > But I don't see anything in > mk/validate-settings.mk > which would yield different results than > echo 'GhcLibHcOpts += -O -dcore-lint' >> mk/build.mk > echo 'GhcStage2HcOpts += -O -dcore-lint' >> mk/build.mk > > I'm starting a plain validate run on that machine, to see if it is > compilation settings or some other variable. > > Greetings, > Joachim > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > I'd like to point out that on my 32-bit box, I don't remember the last time Haddock perf numbers passed the validation even if I see commits updating them.
See [1] for an example. [1]: http://haskell.inf.elte.hu/builders/validator1-linux-x86-head/43.html -- Mateusz K. From michael at snoyman.com Mon Aug 4 16:24:35 2014 From: michael at snoyman.com (Michael Snoyman) Date: Mon, 4 Aug 2014 19:24:35 +0300 Subject: [core libraries] Core libraries bug tracker In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF2208A6B5@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF2208A6B5@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: On Mon, Aug 4, 2014 at 6:00 PM, Simon Peyton Jones wrote: > Edward, and core library colleagues, > > This came up in our weekly GHC discussion > > * Does the Core Libraries Committee have a Trac? Surely, surely > you should, else you'll lose track of issues. > > * Would you like to use GHC's Trac for the purpose? Advantages: > > o People often report core library issues on GHC's Trac anyway, so > telling them to move it somewhere else just creates busy-work --- and maybe > they won't bother, which leaves it in our pile. > > o Several of these libraries are closely coupled to GHC, and you might > want to milestone some library tickets with an upcoming GHC release > > * If so we'd need a canonical way to identify tickets as CLC > issues. Perhaps by making "core-libraries" the owner? Or perhaps the > "Component" field? > > * Some core libraries (e.g. random) have a maintainer that isn't > the committee. So that maintainer should be the owner of the ticket. Or > the CLC might like a particular member to own a ticket. Either way, that > suggests using the "Component" field to identify CLC tickets > > * Or maybe you want a Trac of your own? > > The underlying issue from our end is that we'd like a way to > > * filter out tickets that you are dealing with > > * and be sure you are dealing with them > > * without losing track of milestones... i.e.
when building a > release we want to be sure that important tickets are indeed fixed before > releasing > > Simon > > -- > You received this message because you are subscribed to the Google Groups > "haskell-core-libraries" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to haskell-core-libraries+unsubscribe at googlegroups.com. > For more options, visit https://groups.google.com/d/optout. > +1 for the general concept of an issue tracker, and +0.5 on doing it as part of the GHC tracker. That seems like it will be the most useful place to track issues, but I don't feel *that* strongly on it versus other options. Michael -------------- next part -------------- An HTML attachment was scrubbed... URL: From austin at well-typed.com Mon Aug 4 16:51:35 2014 From: austin at well-typed.com (Austin Seipp) Date: Mon, 4 Aug 2014 11:51:35 -0500 Subject: Status updates Message-ID: Hi *, Here's some weekly status updates! - I'm merging Applicative-Monad today after a few more minor fixes. OMG YES. This only occurred after fighting off some nasty bugs that took quite a while to track down. Unfortunately this ate up most of my time this week. - I redid the Phabricator page quite a bit on the wiki, as I sent in my earlier email. Do read over it and let me know what you think, I'll be updating it more soon: https://ghc.haskell.org/trac/ghc/wiki/Phabricator - I'm going to be redoing the Git wiki pages a bit today to streamline them, which hopefully will only take a few hours. - I'm draining the patch queue still, both from Phabricator and Trac, although I need to re-triage a few of the Trac tickets in particular which are in 'patch' status. - We have a new committer afoot, Karel Gardas! Yay! He's been working on Solaris support, so having direct access to commit will surely help speed things up here. 
- DPH will soon be disabled by default in ./validate, although Geoff has stepped up to help alleviate some of the pain (see prior emails from me.) After all that: - I'm going to fix bugs! Yes, I'll actually have time to do that. - Sometime this week I'll also hopefully finish off my Phabricator integrations: better build bot, with accurate logs, and posting to Trac from Phabricator. This is *almost* done, but still needs to be tested/deployed a bit. - I may get around to finally pushing my patches for faster inline copies this week (and -march support) if I still have time. Do let me know if anything is on your mind, or if any of you have questions. -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From bos at serpentine.com Mon Aug 4 17:58:30 2014 From: bos at serpentine.com (Bryan O'Sullivan) Date: Mon, 4 Aug 2014 10:58:30 -0700 Subject: Status updates In-Reply-To: References: Message-ID: Hey Austin -- It's very helpful and informative to know what you're up to. Thanks for taking the time to write these updates; I assure you they're appreciated. On Mon, Aug 4, 2014 at 9:51 AM, Austin Seipp wrote: > Hi *, > > Here's some weekly status updates! > > - I'm merging Applicative-Monad today after a few more minor fixes. > OMG YES. This only occurred after fighting off some nasty bugs that > took quite a while to track down. Unfortunately this ate up most of my > time this week.
> > - We have a new committer afoot, Karel Gardas! Yay! He's been working > on Solaris support, so having direct access to commit will surely help > speed things up here. > > - DPH will soon be disabled by default in ./validate, although Geoff > has stepped up to help alleviate some of the pain (see prior emails > from me.) > > After all that: > > - I'm going to fix bugs! Yes, I'll actually have time to do that. > > - Sometime this week I'll also hopefully finish off my Phabricator > integrations: better build bot, with accurate logs, and posting to > Trac from Phabricator. This is *almost* done, but still needs to be > tested/deployed a bit. > > - I may get around to finally pushing my patches for faster inline > copies this week (and -march support) if I still have time. > > Do let me know if anything is on your mind, or if any of you have > questions. > > -- > Regards, > > Austin Seipp, Haskell Consultant > Well-Typed LLP, http://www.well-typed.com/ > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rarash at student.chalmers.se Mon Aug 4 19:28:32 2014 From: rarash at student.chalmers.se (Arash Rouhani) Date: Mon, 4 Aug 2014 21:28:32 +0200 Subject: Phabricator updates In-Reply-To: References: Message-ID: <53DFDEE0.1020702@student.chalmers.se> Thanks for the nice documentation! :) /Arash On 2014-08-04 14:57, Austin Seipp wrote: > Hello *, > > I've spent the morning re-jiggering our Phabricator documentation: > > https://ghc.haskell.org/trac/ghc/wiki/Phabricator > > It now includes: > > - More screenshots! 
> - Coverage of core applications, including Owners, and better > coverage of Herald > - Coverage of the new remarkup syntax > - Better tips > - Linking issues in Phabricator to Trac issues > > I particularly think people will find 'Owners' very useful, in > combination with Herald. I have already organized a lot of the GHC > source tree into 'Owner packages' and assigned owners. If you're > working on the compiler, please go tweak those and add yourself as an > owner of some part of the compiler! > > In particular the last one is the one you'll want to note the most. > When you run `arc diff` now, you can associate a differential revision > with a ticket. Here's the documentation with a nice big screenshot: > > https://ghc.haskell.org/trac/ghc/wiki/Phabricator#Linkingtracticketsandwikisyntax > > The TL;DR is when you run `arc diff`, just fill out the new "GHC Trac > Issues" field and it'll be linked. As a bonus, it'll automatically > show up in commit messages, and be hyperlinked in Phabricator > appropriately. > > Note: Phabricator does not yet comment on Trac still, sorry. I didn't > get around to it this week. > > Please let me know if anything is confusing. I'm sure I need to expand > on stuff; in particular I still need to do another pass, adding some > screenshots for Audit, and some of the "Philosophy of Phabricator" on > how you may want to submit reviews and think about organizing your > work. > > Thanks. > From austin at well-typed.com Tue Aug 5 00:04:52 2014 From: austin at well-typed.com (Austin Seipp) Date: Mon, 4 Aug 2014 19:04:52 -0500 Subject: Status updates In-Reply-To: References: Message-ID: Thanks, Bryan! On a related note, and to drum up some excitement - we have another new committer (two in one day): Sergei Trofimovich :] On Mon, Aug 4, 2014 at 12:58 PM, Bryan O'Sullivan wrote: > Hey Austin ? > > It's very helpful and informative to know what you're up to. Thanks for > taking the time to write these updates; I assure you they're appreciated. 
> > > On Mon, Aug 4, 2014 at 9:51 AM, Austin Seipp wrote: >> >> Hi *, >> >> Here's some weekly status updates! >> >> - I'm merging Applicative-Monad today after a few more minor fixes. >> OMG YES. This only occurred after fighting off some nasty bugs that >> took quite a while to track down. Unfortunately this ate up most of my >> time this week. >> >> - I redid the Phabricator page quite a bit on the wiki, as I sent in >> my earlier email. Do read over it and let me know what you think, I'll >> be updating it more soon: >> https://ghc.haskell.org/trac/ghc/wiki/Phabricator >> >> - I'm going to be redoing the Git wiki pages a bit today to >> streamline them, which hopefully will only take a few hours. >> >> - I'm draining the patch queue still, both from Phabricator and Trac, >> although I need to re-triage a few of the Trac tickets in particular >> which are in 'patch' status. >> >> - We have a new committer afoot, Karel Gardas! Yay! He's been working >> on Solaris support, so having direct access to commit will surely help >> speed things up here. >> >> - DPH will soon be disabled by default in ./validate, although Geoff >> has stepped up to help alleviate some of the pain (see prior emails >> from me.) >> >> After all that: >> >> - I'm going to fix bugs! Yes, I'll actually have time to do that. >> >> - Sometime this week I'll also hopefully finish off my Phabricator >> integrations: better build bot, with accurate logs, and posting to >> Trac from Phabricator. This is *almost* done, but still needs to be >> tested/deployed a bit. >> >> - I may get around to finally pushing my patches for faster inline >> copies this week (and -march support) if I still have time. >> >> Do let me know if anything is on your mind, or if any of you have >> questions. 
>> >> -- >> Regards, >> >> Austin Seipp, Haskell Consultant >> Well-Typed LLP, http://www.well-typed.com/ >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs > > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From ezyang at mit.edu Tue Aug 5 10:15:45 2014 From: ezyang at mit.edu (Edward Z. Yang) Date: Tue, 05 Aug 2014 11:15:45 +0100 Subject: HEADS UP: linker symbols changed, need to recompile Message-ID: <1407233645-sup-1680@sabre> I've just pushed a set of patches which change the linker symbols GHC chooses for object files; you'll need to make clean and rebuild your tree once you take these changes. Thanks, Edward From svenpanne at gmail.com Tue Aug 5 11:32:43 2014 From: svenpanne at gmail.com (Sven Panne) Date: Tue, 5 Aug 2014 13:32:43 +0200 Subject: [Haskell] ANNOUNCE: GHC version 7.8.3 In-Reply-To: References: <917BB1CE-5AB4-4F67-881A-60B375E1DEEA@serpentine.com> Message-ID: Coming a bit late to the party, but I've just realized this when playing around with the platform release candidate: I've successfully built and installed the 2014 RC3 on x64 Ubuntu Linux 12.04 LTS using ghc-7.8.3-x86_64-unknown-linux-centos65.tar.bz2 from the GHC download page. But somehow loading compiled code into ghci doesn't work, ghci always uses interpreted code. 
To verify this I've followed the simple example at http://www.haskell.org/ghc/docs/7.8.3/html/users_guide/ghci-compiled.html: ------------------------------------------------------------------------------------------------- svenpanne at svenpanne:~/ghci-test$ ll total 16 -rw-r----- 1 svenpanne eng 33 Aug 5 13:01 A.hs -rw-r----- 1 svenpanne eng 24 Aug 5 13:02 B.hs -rw-r----- 1 svenpanne eng 24 Aug 5 13:02 C.hs -rw-r----- 1 svenpanne eng 15 Aug 5 13:02 D.hs svenpanne at svenpanne:~/ghci-test$ more *.hs :::::::::::::: A.hs :::::::::::::: module A where import B import C :::::::::::::: B.hs :::::::::::::: module B where import D :::::::::::::: C.hs :::::::::::::: module C where import D :::::::::::::: D.hs :::::::::::::: module D where svenpanne at svenpanne:~/ghci-test$ ghci-7.6.3 GHCi, version 7.6.3: http://www.haskell.org/ghc/ :? for help Loading package ghc-prim ... linking ... done. Loading package integer-gmp ... linking ... done. Loading package base ... linking ... done. Prelude> :! ghc-7.6.3 -c D.hs Prelude> :load A [2 of 4] Compiling C ( C.hs, interpreted ) [3 of 4] Compiling B ( B.hs, interpreted ) [4 of 4] Compiling A ( A.hs, interpreted ) Ok, modules loaded: D, C, A, B. *A> :show modules D ( D.hs, D.o ) C ( C.hs, interpreted ) A ( A.hs, interpreted ) B ( B.hs, interpreted ) *A> Leaving GHCi. svenpanne at svenpanne:~/ghci-test$ rm *.hi *.o svenpanne at svenpanne:~/ghci-test$ ghci-7.8.3 GHCi, version 7.8.3: http://www.haskell.org/ghc/ :? for help Loading package ghc-prim ... linking ... done. Loading package integer-gmp ... linking ... done. Loading package base ... linking ... done. Prelude> :! ghc-7.8.3 -c D.hs Prelude> :load A [1 of 4] Compiling D ( D.hs, interpreted ) [2 of 4] Compiling C ( C.hs, interpreted ) [3 of 4] Compiling B ( B.hs, interpreted ) [4 of 4] Compiling A ( A.hs, interpreted ) Ok, modules loaded: D, C, A, B. 
*A> :show modules D ( D.hs, interpreted ) C ( C.hs, interpreted ) A ( A.hs, interpreted ) B ( B.hs, interpreted ) *A> Leaving GHCi. ------------------------------------------------------------------------------------------------- Using strace showed that ghci-7.8.3 reads D.hs twice (huh?) and D.hi once, but only "stat"s D.o (never reads its contents): ------------------------------------------------------------------------------------------------- [...] 12124 stat("D.hs", {st_mode=S_IFREG|0640, st_size=15, ...}) = 0 12124 stat("./D.hs", {st_mode=S_IFREG|0640, st_size=15, ...}) = 0 12124 open("./D.hs", O_RDONLY|O_NOCTTY|O_NONBLOCK) = 11 12124 fstat(11, {st_mode=S_IFREG|0640, st_size=15, ...}) = 0 12124 ioctl(11, SNDCTL_TMR_TIMEBASE or TCGETS, 0x7fff90c12938) = -1 ENOTTY (Inappropriate ioctl for device) 12124 fstat(11, {st_mode=S_IFREG|0640, st_size=15, ...}) = 0 12124 fstat(11, {st_mode=S_IFREG|0640, st_size=15, ...}) = 0 12124 lseek(11, 0, SEEK_CUR) = 0 12124 read(11, "module D where\n", 8096) = 15 12124 fstat(11, {st_mode=S_IFREG|0640, st_size=15, ...}) = 0 12124 fstat(11, {st_mode=S_IFREG|0640, st_size=15, ...}) = 0 12124 lseek(11, 0, SEEK_CUR) = 15 12124 close(11) = 0 12124 open("./D.hs", O_RDONLY|O_NOCTTY|O_NONBLOCK) = 11 12124 fstat(11, {st_mode=S_IFREG|0640, st_size=15, ...}) = 0 12124 ioctl(11, SNDCTL_TMR_TIMEBASE or TCGETS, 0x7fff90c12938) = -1 ENOTTY (Inappropriate ioctl for device) 12124 fstat(11, {st_mode=S_IFREG|0640, st_size=15, ...}) = 0 12124 read(11, "module D where\n", 8096) = 15 12124 close(11) = 0 12124 stat("./D.o", {st_mode=S_IFREG|0640, st_size=933, ...}) = 0 12124 stat("Prelude.hs", 0x7f28b26e2b30) = -1 ENOENT (No such file or directory) 12124 stat("Prelude.lhs", 0x7f28b26e2cd0) = -1 ENOENT (No such file or directory) 12124 stat("B.hs", {st_mode=S_IFREG|0640, st_size=24, ...}) = 0 12124 stat("./B.hs", {st_mode=S_IFREG|0640, st_size=24, ...}) = 0 12124 open("./B.hs", O_RDONLY|O_NOCTTY|O_NONBLOCK) = 11 12124 fstat(11, {st_mode=S_IFREG|0640, 
st_size=24, ...}) = 0 12124 ioctl(11, SNDCTL_TMR_TIMEBASE or TCGETS, 0x7fff90c12938) = -1 ENOTTY (Inappropriate ioctl for device) 12124 fstat(11, {st_mode=S_IFREG|0640, st_size=24, ...}) = 0 12124 fstat(11, {st_mode=S_IFREG|0640, st_size=24, ...}) = 0 12124 lseek(11, 0, SEEK_CUR) = 0 12124 read(11, "module B where\nimport D\n", 8096) = 24 12124 fstat(11, {st_mode=S_IFREG|0640, st_size=24, ...}) = 0 12124 fstat(11, {st_mode=S_IFREG|0640, st_size=24, ...}) = 0 12124 lseek(11, 0, SEEK_CUR) = 24 12124 close(11) = 0 12124 open("./B.hs", O_RDONLY|O_NOCTTY|O_NONBLOCK) = 11 12124 fstat(11, {st_mode=S_IFREG|0640, st_size=24, ...}) = 0 12124 ioctl(11, SNDCTL_TMR_TIMEBASE or TCGETS, 0x7fff90c12938) = -1 ENOTTY (Inappropriate ioctl for device) 12124 fstat(11, {st_mode=S_IFREG|0640, st_size=24, ...}) = 0 12124 read(11, "module B where\nimport D\n", 8096) = 24 12124 close(11) = 0 12124 stat("./B.o", 0x7f28b26fa780) = -1 ENOENT (No such file or directory) 12124 stat("Prelude.hs", 0x7f28b26fa900) = -1 ENOENT (No such file or directory) 12124 stat("Prelude.lhs", 0x7f28b26faaa0) = -1 ENOENT (No such file or directory) 12124 mkdir("/tmp/ghc12124_0", 0777) = 0 12124 stat("/tmp/ghc12124_0/ghc12124_1.o", 0x7f28b26facf0) = -1 ENOENT (No such file or directory) 12124 open("./D.hi", O_RDONLY|O_NOCTTY|O_NONBLOCK) = 11 12124 fstat(11, {st_mode=S_IFREG|0640, st_size=500, ...}) = 0 12124 ioctl(11, SNDCTL_TMR_TIMEBASE or TCGETS, 0x7fff90c12938) = -1 ENOTTY (Inappropriate ioctl for device) 12124 fstat(11, {st_mode=S_IFREG|0640, st_size=500, ...}) = 0 12124 read(11, "\1\372\316d\0\0\0\0\0\0\0\0\4\0"..., 8096) = 500 12124 close(11) = 0 12124 select(2, [], [1], NULL, {0, 0}) = 1 (out [1], left {0, 0}) 12124 write(1, "[1 of 4] Compiling D ( D.hs, interpreted )\n", 58) = 58 [...] ------------------------------------------------------------------------------------------------- This looks wrong to me. Did I miss something and/or did something stupid? Known bug? 
From mark.lentczner at gmail.com Tue Aug 5 13:27:20 2014 From: mark.lentczner at gmail.com (Mark Lentczner) Date: Tue, 5 Aug 2014 09:27:20 -0400 Subject: Release building for Windows In-Reply-To: References: Message-ID: Seems to me that given the state of affairs, it is best to just leave it as it is this round: That is, GHC is built split-obj, but the HP libs on Windows are not. In general, split-obj only makes executables smaller at the expense of link time and memory. It is a trade-off, and I'd make the call for faster links and no uncertainty, for now. So, unless there is objection, I'd call RC4 final for Windows. - Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark.lentczner at gmail.com Tue Aug 5 13:30:55 2014 From: mark.lentczner at gmail.com (Mark Lentczner) Date: Tue, 5 Aug 2014 09:30:55 -0400 Subject: last call... for HP 2014.2.0.0 Message-ID: As it stands, I'm planning on calling the final RCs final: - RC2 Mac - RC4 Win - RC3 Src and linux-dist They have a few wibbles, but nothing close to show-stopping. I'll rebuild the web site and upload it tomorrow and announce then as well. - Mark ("still on vacation") L. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ezyang at mit.edu Tue Aug 5 13:36:30 2014 From: ezyang at mit.edu (Edward Z. Yang) Date: Tue, 05 Aug 2014 14:36:30 +0100 Subject: Interrupt interruptible foreign calls on HS exit In-Reply-To: References: <1406761014-sup-6348@sabre> Message-ID: <1407239640-sup-7177@sabre> Hello Andreas, Yes, it seems that deleteAllThreads() is too late. The problem is that you need to defer the major GC until all the logical threads (now including interruptible FFI threads) have terminated themselves.
This happens automatically for Haskell threads, because they are either (1) chilling in a run queue, in which case we don't need to do anything, or (2) being processed by a capability, in which case the GC will wait until the capability is done processing the thread and gets the memo (since it needs to acquire all locks.) However, in the case of safe FFI calls, these give up the capability before going off to C-land, so trying to acquire all the capabilities won't block on the FFI calls coming back. So you'll need to do something else. It's not necessarily clear what the best course of action here is: if an interruptible thread fails to be interrupted, should we keep waiting? (Maybe we should; that's how we handle non-preemptible Haskell threads too.) In any case, you'll need some way of waiting on the FFI calls, which might need some more gyrations as well. Cheers, Edward Excerpts from Andreas Voellmy's message of 2014-08-02 21:28:31 +0100: > I tried to go ahead and call throwTo() instead of throwToSingleThreaded() > for threads BlockedOnCCall_Interruptible state during the shutdown > sequence. Unfortunately something goes wrong with this change. I haven't > tracked it down yet, but it looks like the following happens... > > hs_exit() eventually result in a call to scheduleDoGC(), which does > acquireAllCapabilities() > and then deleteAllThreads() interrupts interruptible foreign calls. Those > foreign calls come back and call waitForReturnCapability() but get stuck > here: > > if (!task->wakeup) waitCondition(&task->cond, &task->lock); > > I guess the scheduleDoGC is blocking the interrupted Haskell threads from > finishing. > > One possible fix is to have the returning foreign call see that we are in > the exit sequence and avoid trying to return to the Haskell caller - I > guess it can just exit. 
I tried adding some code in resumeThread() to exit > if sched_state is SCHED_INTERRUPTING or SCHED_SHUTTING_DOWN, but this > caused more trouble, so it seems that it's not a simple change. > > > > > > On Sat, Aug 2, 2014 at 1:55 PM, Andreas Voellmy > wrote: > > > Thanks Edward! Another question... > > > > deleteThread() calls throwToSingleThreaded(). I can update this so that it > > also calls throwToSingleThreaded() in the case > > of BlockedOnCCall_Interruptible (currently it explicitly excludes this > > case), but this doesn't solve the problem, because throwToSingleThreaded() > > doesn't seem to interrupt blocked calls at all. That functionality is in > > throwTo(), which is not called by throwToSingleThreaded(). Why are we using > > throwToSingleThreaded() in deleteThread() rather than throwTo()? Can I > > switch deleteThread() to use throwTo()? Or should I use throwTo() in > > deleteThread() only for the special case of BlockedOnCCall_Interruptible? > > Or should throwToSingleThreaded() be updated to do the same thing that > > throwTo does for the case of BlockedOnCCall_Interruptible? > > > > Thanks, > > Andi > > > > > > On Wed, Jul 30, 2014 at 6:57 PM, Edward Z. Yang wrote: > > > >> Recalling when I implemented this functionality, I think not > >> interrupting threads in the exit sequence was just an oversight, > >> and I think we could implement it. Seems reasonable to me. > >> > >> Edward > >> > >> Excerpts from Andreas Voellmy's message of 2014-07-30 23:49:24 +0100: > >> > Hi GHCers, > >> > > >> > I've been looking into issue #9284, which boils down to getting certain > >> > foreign calls issued by HS threads to finish (i.e. return) in the exit > >> > sequence of forkProcess. > >> > > >> > There are several options for solving the particular problem in #9284; > >> one > >> > option is to issue the particular foreign calls causing that issue as > >> > "interruptible" and then have the exit sequence interrupt interruptible > >> > foreign calls. 
> >> > > >> > The exit sequence, starting from hs_exit(), goes through hs_exit_(), > >> > exitScheduler(), scheduleDoGC(), deleteAllThreads(), and then > >> > deleteThread(), where deleteThread is this: > >> > > >> > static void > >> > deleteThread (Capability *cap STG_UNUSED, StgTSO *tso) > >> > { > >> > // NOTE: must only be called on a TSO that we have exclusive > >> > // access to, because we will call throwToSingleThreaded() below. > >> > // The TSO must be on the run queue of the Capability we own, or > >> > // we must own all Capabilities. > >> > if (tso->why_blocked != BlockedOnCCall && > >> > tso->why_blocked != BlockedOnCCall_Interruptible) { > >> > throwToSingleThreaded(tso->cap,tso,NULL); > >> > } > >> > } > >> > > >> > So it looks like interruptible foreign calls are not interrupted in the > >> > exit sequence. > >> > > >> > Is there a good reason why we have this behavior? Could we change it to > >> > interrupt TSO's with why_blocked == BlockedOnCCall_Interruptible in the > >> > exit sequence? > >> > > >> > Thanks, > >> > Andi > >> > > >> > P.S. It looks like this was introduced in commit > >> > 83d563cb9ede0ba792836e529b1e2929db926355. > >> > > > > From ggreif at gmail.com Tue Aug 5 20:40:17 2014 From: ggreif at gmail.com (Gabor Greif) Date: Tue, 5 Aug 2014 22:40:17 +0200 Subject: Core Lint warnings Message-ID: Hello all, I see *literally thousands* of these warnings (in yesterday's and) today's bootstraps: {{{ HC [stage 1] libraries/base/dist-install/build/GHC/Base.o *** Core Lint warnings : in result of Desugar (after optimization) *** {-# LINE 261 "libraries/base/GHC/Base.lhs #-}: Warning: [RHS of $c>>_arr :: forall r_agf a_adQ b_adR. (r_agf -> a_adQ) -> (r_agf -> b_adR) -> r_agf -> b_adR] INLINE binder is (non-rule) loop breaker: $c>>_arr {-# LINE 632 "libraries/base/GHC/Base.lhs #-}: Warning: [RHS of $c>>_apH :: forall a_adQ b_adR. 
GHC.Types.IO a_adQ -> GHC.Types.IO b_adR -> GHC.Types.IO b_adR] INLINE binder is (non-rule) loop breaker: $c>>_apH }}} I don't know when this started and it does not seem to lead to a damaged GHC, but nevertheless it does not look very trustworthy... Any idea what this could be and where it came from? Cheers, Gabor From simonpj at microsoft.com Tue Aug 5 21:05:28 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 5 Aug 2014 21:05:28 +0000 Subject: Overlapping and incoherent instances In-Reply-To: <20140802205457.GA157265@srcf.ucam.org> References: <618BE556AADD624C9C918AA5D5911BEF2207B3A1@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF2208260F@DB3PRD3001MB020.064d.mgd.msft.net> <20140802152714.GA119560@srcf.ucam.org> <20140802195157.GB119560@srcf.ucam.org> <20140802205457.GA157265@srcf.ucam.org> Message-ID: <618BE556AADD624C9C918AA5D5911BEF220981A1@DB3PRD3001MB020.064d.mgd.msft.net> | >>Here's one concern I have with the deprecation of | >>-XOverlappingInstances: I don't like overlapping instances, I find | >>them confusing and weird and prefer to use code that doesn't | >>include them, because they violate my expectations about how type | >>classes work. When there is a single LANGUAGE pragma, that's a | >>simple, easily-checkable signpost of "this code uses techniques | >>that Ben doesn't understand". When it is all controlled by pragmas | >>I basically have to check every instance declaration individually. I see your point. Though you could just grep for OVERLAP! I suppose that -XOverlappingInstances could mean "silently honour OVERLAPPABLE/OVERLAPPING pragmas", while lacking it would mean "honour OVERLAPPABLE/OVERLAPPING pragmas, but emit noisy warnings" or even "don't honour them and warn". But that is different to the behaviour today, so we'd need a new LANGUAGE pragma. Perhaps -XHonourOverlappingInstances or something. My sense is that the extra faff is not worth it. 
| >>On a largely unrelated note, here's another thing I don't | >>understand: when is OVERLAPPABLE at one instance declaration | >>preferable to using only OVERLAPPING at the instance declarations | >>that overlap it? It's a user decision. GHC allows - OVERLAPPABLE at the instance that is being overlapped, or - OVERLAPPING at the instance that is doing the overlapping, or - both Another possible choice would be to require both. One or t'other wouldn't do. But the current choice (with the LANGUAGE pragmas -XOverlappingInstances) is the either/or choice, and I had no user pressure to change that. There *is* user pressure for the either/or semantics, so that you can *later* add an un-anticipated OVERLAPPING instance. | > {-# LANGUAGE FlexibleInstances #-} | > module M where | > class C a where f :: a -> a | > instance C a where f x = x | > instance C Int where f x = x + 1 | > | >I suspect many people have the intuition that NoOverlappingInstances | >should forbid the above, but in fact OverlappingInstances or no only | >controls instance *resolution*. I imagine you all already knew this | >but I did not until I carefully reread things. It's pretty clearly stated in the manual, but I'd be delighted to add a paragraph or two, or an example, if you can draft something and say where a good place for it would be (ie where you'd have looked). Thanks Simon From george.colpitts at gmail.com Tue Aug 5 21:09:48 2014 From: george.colpitts at gmail.com (George Colpitts) Date: Tue, 5 Aug 2014 18:09:48 -0300 Subject: Haskell Platform 2014.2.0.0 Release Candidate 2 In-Reply-To: References: Message-ID: Yes, as you stated, not an HP problem. I did a cabal update and got past this problem. 
I still can't install threadscope but I don't believe this is an HP platform problem so I'll follow up with the threadscope people and won't post again to this group Thanks [10 of 35] Compiling GUI.ProgressView ( GUI/ProgressView.hs, dist/build/threadscope/threadscope-tmp/GUI/ProgressView.o ) GUI/ProgressView.hs:92:15: Could not deduce (System.Glib.UTFString.GlibString string0) arising from a use of ?labelNew? from the context (WindowClass win) bound by the type signature for new :: WindowClass win => win -> IO () -> IO ProgressView at GUI/ProgressView.hs:79:8-57 The type variable ?string0? is ambiguous Note: there are several potential instances: instance System.Glib.UTFString.GlibString text-1.1.0.0:Data.Text.Internal.Text -- Defined in ?System.Glib.UTFString? instance System.Glib.UTFString.GlibString [Char] -- Defined in ?System.Glib.UTFString? In a stmt of a 'do' block: progText <- labelNew Nothing In the expression: do { win <- windowNew; set win [containerBorderWidth := 10, windowTitle := "", ....]; progText <- labelNew Nothing; set progText [miscXalign := 0, labelUseMarkup := True]; .... } In an equation for ?new?: new parent cancelAction = do { win <- windowNew; set win [containerBorderWidth := 10, ....]; progText <- labelNew Nothing; .... } cabal: Error: some packages failed to install: threadscope-0.2.4 failed during the building phase. The exception was: ExitFailure 1 On Tue, Jul 29, 2014 at 10:48 AM, Brandon Allbery wrote: > On Tue, Jul 29, 2014 at 7:45 AM, George Colpitts < > george.colpitts at gmail.com> wrote: > >> Installation worked fine. However I encountered a problem that looks like >> a regression, although it may be a problem with new versions of the package >> I am trying to install: > > > It's not an H-P problem; Apple's started using their BLOCKS C extension in > system headers, and gtk2hsc2hs doesn't understand it. (And possibly is not > using cpp properly when processing headers.) You'll need to take this up > with the gtk2hs folks. 
> > #ifdef __BLOCKS__ > int scandir_b(const char *, struct dirent ***, > int (^)(const struct dirent *), int (^)(const struct dirent **, > const struct > dirent **)) __DARWIN_INODE64(scandir_b) > __OSX_AVAILABLE_STARTING(__MAC_10_6, __ > IPHONE_3_2); > #endif /* __BLOCKS__ */ > > -- > brandon s allbery kf8nh sine nomine > associates > allbery.b at gmail.com > ballbery at sinenomine.net > unix, openafs, kerberos, infrastructure, xmonad > http://sinenomine.net > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chak at cse.unsw.edu.au Tue Aug 5 23:48:36 2014 From: chak at cse.unsw.edu.au (Manuel M T Chakravarty) Date: Wed, 6 Aug 2014 09:48:36 +1000 Subject: I'm going to disable DPH until someone starts maintaining it In-Reply-To: References: <2F09BA4C-DADD-490F-9C29-D74BBD449A85@ouroborus.net> <53DF8F72.7080105@apeiron.net> Message-ID: <222439DC-9FF0-455E-A968-6FA43415D902@cse.unsw.edu.au> Sounds good to me. Thanks, Geoff, for doing this! Manuel Austin Seipp : > On Mon, Aug 4, 2014 at 8:49 AM, Geoffrey Mainland wrote: >> I have patches for DPH that let it work with vector 0.11 as of a few >> months ago. I would be happy to submit them via phabricator if that is >> agreeable (we have to coordinate with the import of vector 0.11 >> though...I can instead leave them in a wip branch for Austin to merge as >> he sees fit). I am also willing to commit some time to keep DPH at least >> working in its current state. > > That would be quite nice if you could submit patches to get it to > work! Thanks so much. > > As we've moved to submodules, having our own forks is becoming less > palatable; we'd like to start tracking upstream closely, and having > people submit changes there first and foremost. This creates a bit of > a lag time between changes, but I think this is acceptable (and most > of our maintainers are quite responsive to GHC needs!) 
> > It's also great you're willing to help maintain DPH a bit - but based > on what Ben said, it seems like a significant rewrite will happen > eventually. > > Geoff, here's my proposal: > > 1) I'll disable DPH for right now, so it won't pop up during > ./validate. This will probably happen today. > 2) We can coordinate the update of vector to 0.11, making it track > the official master. (Perhaps an email thread or even Skype would > work) > 3) We can fix DPH at the same time. > 4) Afterwards, we can re-enable it for ./validate > > If you submit Phabricator patches, that would be fantastic - we can > add the DPH repository to Phabricator with little issue. > > In the long run, I think we should sync up with Ben and perhaps Simon > & Co to see what will happen long-term for the DPH libraries. > >> Geoff >> >> On 8/4/14 8:18 AM, Ben Lippmeier wrote: >>> On 4 Aug 2014, at 21:47 , Austin Seipp wrote: >>> >>>> Why? Because I'm afraid I just don't have any more patience for DPH, >>>> I'm tired of fixing it, and it takes up a lot of extra time to build, >>>> and time to maintain. >>> I'm not going to argue against cutting it loose. >>> >>> >>>> So - why are we still building it, exactly? >>> It can be a good stress test for the simplifier, especially the SpecConstr transform. The fact that it takes so long to build is part of the reason it's a good stress test. >>> >>> >>>> [1] And by 'speak up', I mean I'd like to see someone actively step >>>> forward and address my concerns above in a decisive manner. With patches. >>> I thought that in the original conversation we agreed that if the DPH code became too much of a burden it was fine to switch it off and let it become unmaintained. I don't have time to maintain it anymore myself. >>> >>> The original DPH project has fractured into a few different research streams, none of which work directly with the implementation in GHC, or with the DPH libraries that are bundled with the GHC build.
>>> >>> The short of it is that the array fusion mechanism implemented in DPH (based on stream fusion) is inadequate for the task. A few people are working on replacement fusion systems that aim to solve this problem, but merging this work back into DPH will entail an almost complete rewrite of the backend libraries. If the existing code has become a maintenance burden then it's fine to switch it off. >>> >>> Sorry for the trouble. >>> Ben. >>> >> > > > > -- > Regards, > > Austin Seipp, Haskell Consultant > Well-Typed LLP, http://www.well-typed.com/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 496 bytes Desc: Message signed with OpenPGP using GPGMail URL: From mail at joachim-breitner.de Wed Aug 6 07:30:45 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Wed, 06 Aug 2014 09:30:45 +0200 Subject: Perf regression: ghc --make: add nicer names to RTS threads (threaded IO manager, make workers) (f686682) In-Reply-To: <20140804131313.1D834240EA@ghc.haskell.org> References: <20140804131313.1D834240EA@ghc.haskell.org> Message-ID: <1407310245.1760.1.camel@joachim-breitner.de> Hi, the attached commit seems to have regressed the scs nofib benchmark by ~3%: http://ghcspeed-nomeata.rhcloud.com/timeline/?ben=nofib/time/scs&env=1#/?exe=2&base=2+68&ben=nofib/time/scs&env=1&revs=50&equid=on The graph unfortunately is in the wrong order, as the tool gets confused by timezones and by commits with identical CommitDate, e.g. due to rebasing. This needs to be fixed; I manually verified that the commit below is the first that shows the above-noise-level increase of runtime. (Other benchmarks seem to be unaffected.) Is this regression expected and intended or unexpected? Is it fixable? Or is this simply inexplicable?
Thanks, Joachim Am Montag, den 04.08.2014, 13:13 +0000 schrieb git at git.haskell.org: > Repository : ssh://git at git.haskell.org/ghc > > On branch : master > Link : http://ghc.haskell.org/trac/ghc/changeset/f6866824ce5cdf5359f0cad78c49d65f6d43af12/ghc > > >--------------------------------------------------------------- > > commit f6866824ce5cdf5359f0cad78c49d65f6d43af12 > Author: Sergei Trofimovich > Date: Mon Aug 4 08:10:33 2014 -0500 > > ghc --make: add nicer names to RTS threads (threaded IO manager, make workers) > > Summary: > The patch names most of RTS threads > and ghc (the tool) threads. > > It makes nicer debug and eventlog output for ghc itself. > > Signed-off-by: Sergei Trofimovich > > Test Plan: ran debugged ghc under '+RTS -Ds' > > Reviewers: simonmar, austin > > Reviewed By: austin > > Subscribers: phaskell, simonmar, relrod, ezyang, carter > > Differential Revision: https://phabricator.haskell.org/D101 > > > >--------------------------------------------------------------- > > f6866824ce5cdf5359f0cad78c49d65f6d43af12 > compiler/main/GhcMake.hs | 14 ++++++++++++++ > libraries/base/GHC/Event/Thread.hs | 8 ++++++-- > 2 files changed, 20 insertions(+), 2 deletions(-) > > diff --git a/compiler/main/GhcMake.hs b/compiler/main/GhcMake.hs > index 33f163c..0c63203 100644 > --- a/compiler/main/GhcMake.hs > +++ b/compiler/main/GhcMake.hs > @@ -63,6 +63,7 @@ import qualified Data.Set as Set > import qualified FiniteMap as Map ( insertListWith ) > > import Control.Concurrent ( forkIOWithUnmask, killThread ) > +import qualified GHC.Conc as CC > import Control.Concurrent.MVar > import Control.Concurrent.QSem > import Control.Exception > @@ -80,6 +81,11 @@ import System.IO.Error ( isDoesNotExistError ) > > import GHC.Conc ( getNumProcessors, getNumCapabilities, setNumCapabilities ) > > +label_self :: String -> IO () > +label_self thread_name = do > + self_tid <- CC.myThreadId > + CC.labelThread self_tid thread_name > + > -- 
----------------------------------------------------------------------------- > -- Loading the program > > @@ -744,10 +750,18 @@ parUpsweep n_jobs old_hpt stable_mods cleanup sccs = do > | ((ms,mvar,_),idx) <- comp_graph_w_idx ] > > > + liftIO $ label_self "main --make thread" > -- For each module in the module graph, spawn a worker thread that will > -- compile this module. > let { spawnWorkers = forM comp_graph_w_idx $ \((mod,!mvar,!log_queue),!mod_idx) -> > forkIOWithUnmask $ \unmask -> do > + liftIO $ label_self $ unwords > + [ "worker --make thread" > + , "for module" > + , show (moduleNameString (ms_mod_name mod)) > + , "number" > + , show mod_idx > + ] > -- Replace the default log_action with one that writes each > -- message to the module's log_queue. The main thread will > -- deal with synchronously printing these messages. > diff --git a/libraries/base/GHC/Event/Thread.hs b/libraries/base/GHC/Event/Thread.hs > index 6e991bf..dcfa32a 100644 > --- a/libraries/base/GHC/Event/Thread.hs > +++ b/libraries/base/GHC/Event/Thread.hs > @@ -39,6 +39,7 @@ import GHC.Event.Manager (Event, EventManager, evtRead, evtWrite, loop, > import qualified GHC.Event.Manager as M > import qualified GHC.Event.TimerManager as TM > import GHC.Num ((-), (+)) > +import GHC.Show (showSignedInt) > import System.IO.Unsafe (unsafePerformIO) > import System.Posix.Types (Fd) > > @@ -244,11 +245,14 @@ startIOManagerThreads = > forM_ [0..high] (startIOManagerThread eventManagerArray) > writeIORef numEnabledEventManagers (high+1) > > +show_int :: Int -> String > +show_int i = showSignedInt 0 i "" > + > restartPollLoop :: EventManager -> Int -> IO ThreadId > restartPollLoop mgr i = do > M.release mgr > !t <- forkOn i $ loop mgr > - labelThread t "IOManager" > + labelThread t ("IOManager on cap " ++ show_int i) > return t > > startIOManagerThread :: IOArray Int (Maybe (ThreadId, EventManager)) > @@ -258,7 +262,7 @@ startIOManagerThread eventManagerArray i = do > let create = do > !mgr <- new 
True > !t <- forkOn i $ loop mgr > - labelThread t "IOManager" > + labelThread t ("IOManager on cap " ++ show_int i) > writeIOArray eventManagerArray i (Just (t,mgr)) > old <- readIOArray eventManagerArray i > case old of > > _______________________________________________ > ghc-commits mailing list > ghc-commits at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-commits > -- Joachim ?nomeata? Breitner mail at joachim-breitner.de ? http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From karel.gardas at centrum.cz Wed Aug 6 09:16:20 2014 From: karel.gardas at centrum.cz (Karel Gardas) Date: Wed, 06 Aug 2014 11:16:20 +0200 Subject: linker_unload validate related issue (how to duplicate that?). Message-ID: <53E1F264.70009@centrum.cz> Folks, I've noted that validate is failing on Linux recently due to issue in linker_unload. As I've submitted some patch to this test case recently which fixes this on Solaris I'm kind of curious if I broke it or not. Anyway, strange thing is: when I configure ghc and run the test by (g)make TEST=linker_unload on both Linux and Solaris I get no failure. When I validate on Linux (validate is not working on Solaris yet), then I get failure in linker_unload: Wrong exit code (expected 0 , actual 2 ) Stdout: Stderr: /bin/sh: 1: Syntax error: Unterminated quoted string make[3]: *** [linker_unload] Error 2 *** unexpected failure for linker_unload(normal) when I try to run: cd testsuite make TEST=linker_unload inside this validation tree I again get no failure in this test: [...] 
=====> linker_unload(normal) 2522 of 4082 [0, 0, 0] cd ./rts && $MAKE -s --no-print-directory linker_unload linker_unload.run.stdout 2>linker_unload.run.stderr OVERALL SUMMARY for test run started at Wed Aug 6 10:55:17 2014 CEST 0:00:08 spent to go through 4082 total tests, which gave rise to 13459 test cases, of which 13458 were skipped 0 had missing libraries 1 expected passes 0 expected failures 0 caused framework failures 0 unexpected passes 0 unexpected failures make[1]: Leaving directory `/home/karel/src/validate-test/testsuite/tests' I've also noted that this test case fails on Solaris builders with strange error: =====> linker_unload(normal) 170 of 4082 [0, 0, 1] cd ./rts && $MAKE -s --no-print-directory linker_unload linker_unload.run.stdout 2>linker_unload.run.stderr Wrong exit code (expected 0 , actual 2 ) Stdout: Stderr: linker_unload: internal error: loadObj: can't read `/buildbot/gabor-ghc-head-builder/builder/tempbuild/build/bindisttest/install/libHSinteg_BcPVjqcazPNGsNFG4agFty.a' (GHC version 7.9.20140806 for i386_unknown_solaris2) Please report this as a GHC bug: http://www.haskell.org/ghc/reportabug gmake[3]: *** [linker_unload] Abort (core dumped) So the question is: why validate fails and why builder fails on this particular test and why my common testing on both Solaris and Linux is not able to duplicate the issue? What's so different between validate and builders and between my common: perl boot; ./configure ; gmake -j12; cd testsuite; gmake THREADS=12 fast ? Thanks! Karel From ezyang at mit.edu Wed Aug 6 10:04:14 2014 From: ezyang at mit.edu (Edward Z. Yang) Date: Wed, 06 Aug 2014 11:04:14 +0100 Subject: linker_unload validate related issue (how to duplicate that?). In-Reply-To: <53E1F264.70009@centrum.cz> References: <53E1F264.70009@centrum.cz> Message-ID: <1407319413-sup-2223@sabre> Austin and I chatted about it, and it's probably because the test is not creating ghcconfig.h early enough. I haven't looked further on how to fix it though. 
Edward Excerpts from Karel Gardas's message of 2014-08-06 10:16:20 +0100: > > Folks, > > I've noted that validate is failing on Linux recently due to issue in > linker_unload. As I've submitted some patch to this test case recently > which fixes this on Solaris I'm kind of curious if I broke it or not. > Anyway, strange thing is: when I configure ghc and run the test by > (g)make TEST=linker_unload on both Linux and Solaris I get no failure. > When I validate on Linux (validate is not working on Solaris yet), then > I get failure in linker_unload: > > Wrong exit code (expected 0 , actual 2 ) > Stdout: > > Stderr: > /bin/sh: 1: Syntax error: Unterminated quoted string > make[3]: *** [linker_unload] Error 2 > > *** unexpected failure for linker_unload(normal) > > > when I try to run: > > cd testsuite > make TEST=linker_unload > > inside this validation tree I again get no failure in this test: > > [...] > =====> linker_unload(normal) 2522 of 4082 [0, 0, 0] > cd ./rts && $MAKE -s --no-print-directory linker_unload >linker_unload.run.stdout 2>linker_unload.run.stderr > > OVERALL SUMMARY for test run started at Wed Aug 6 10:55:17 2014 CEST > 0:00:08 spent to go through > 4082 total tests, which gave rise to > 13459 test cases, of which > 13458 were skipped > > 0 had missing libraries > 1 expected passes > 0 expected failures > > 0 caused framework failures > 0 unexpected passes > 0 unexpected failures > > make[1]: Leaving directory `/home/karel/src/validate-test/testsuite/tests' > > I've also noted that this test case fails on Solaris builders with > strange error: > > =====> linker_unload(normal) 170 of 4082 [0, 0, 1] > cd ./rts && $MAKE -s --no-print-directory linker_unload >linker_unload.run.stdout 2>linker_unload.run.stderr > Wrong exit code (expected 0 , actual 2 ) > Stdout: > Stderr: > linker_unload: internal error: loadObj: can't read > `/buildbot/gabor-ghc-head-builder/builder/tempbuild/build/bindisttest/install/libHSinteg_BcPVjqcazPNGsNFG4agFty.a' > (GHC 
version 7.9.20140806 for i386_unknown_solaris2) > Please report this as a GHC bug: http://www.haskell.org/ghc/reportabug > gmake[3]: *** [linker_unload] Abort (core dumped) > > > So the question is: why validate fails and why builder fails on this > particular test and why my common testing on both Solaris and Linux is > not able to duplicate the issue? What's so different between validate > and builders and between my common: perl boot; ./configure <params>; gmake -j12; cd testsuite; gmake THREADS=12 fast > ? > > Thanks! > Karel From karel.gardas at centrum.cz Wed Aug 6 20:13:33 2014 From: karel.gardas at centrum.cz (Karel Gardas) Date: Wed, 06 Aug 2014 22:13:33 +0200 Subject: linker_unload validate related issue (how to duplicate that?). In-Reply-To: <53E1F264.70009@centrum.cz> References: <53E1F264.70009@centrum.cz> Message-ID: <53E28C6D.6010801@centrum.cz> Just for the record: validate fails since it is using a ghc installed into the bindisttest/install dir/ subdirectory. The space here is a really nasty test, as my fix to linker_unload did not account for the possibility of having ghc installed in such a location (cut -d ' ' ... does the wrong thing in this case). So yes, it was me who broke validate, but this should already be fixed by the revert of the problematic patch. Sorry for that, Karel On 08/ 6/14 11:16 AM, Karel Gardas wrote: > So the question is: why validate fails and why builder fails on this > particular test and why my common testing on both Solaris and Linux is > not able to duplicate the issue? What's so different between validate > and builders and between my common: perl boot; ./configure <params>; gmake -j12; cd testsuite; gmake THREADS=12 fast > ?
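The failure mode Karel describes is easy to demonstrate in isolation; a minimal sketch (the exact path below is an assumption — only the embedded space in "bindisttest/install dir" matters):

```shell
# The validate tree installs GHC under "bindisttest/install dir/" -- note the
# embedded space. `cut -d ' '` treats every space as a field separator, so a
# field extracted from such a path is silently truncated at the space:
path='bindisttest/install dir/bin/ghc-stage2'
printf '%s\n' "$path" | cut -d ' ' -f 1   # → bindisttest/install
```

Quoting the whole path (`"$path"`) or splitting on a character that cannot occur in the path avoids the truncation.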
From mail at joachim-breitner.de Wed Aug 6 20:35:32 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Wed, 06 Aug 2014 22:35:32 +0200 Subject: Perf regression: ghc --make: add nicer names to RTS threads (threaded IO manager, make workers) (f686682) In-Reply-To: <20140806221534.1a5a922a@sf> References: <20140804131313.1D834240EA@ghc.haskell.org> <1407310245.1760.1.camel@joachim-breitner.de> <20140806221534.1a5a922a@sf> Message-ID: <1407357332.23222.1.camel@joachim-breitner.de> Hi Sergei, Am Mittwoch, den 06.08.2014, 22:15 +0300 schrieb Sergei Trofimovich: > On Wed, 06 Aug 2014 09:30:45 +0200 Joachim Breitner wrote: > > the attached commit seems to have regressed the scs nofib benchmark by > > ~3%: > > http://ghcspeed-nomeata.rhcloud.com/timeline/?ben=nofib/time/scs&env=1#/?exe=2&base=2+68&ben=nofib/time/scs&env=1&revs=50&equid=on > > That's a test of compiled binary performance, not the compiler, right? Correct. > > The graph unfortunately is in the wrong order, as the tool gets confused > > by timezones and by commits with identical CommitDate, e.g. due to > > rebasing. This needs to be fixed; I manually verified that the commit > > below is the first that shows the above-noise-level increase of runtime. > > > > (Other benchmarks seem to be unaffected.) > > > > Is this regression expected and intended or unexpected? Is it fixable? > > Or is this simply inexplicable? > > The graph looks mysterious (18 ms bump). Benchmark does not use Haskell > threads at all. Yes, I was surprised by that as well. > I'll try to reproduce degradation locally and will investigate. Thanks!
> The only runtime part affected by the patch only renames threads > (the renamer gets called once for each created thread): > > > > diff --git a/libraries/base/GHC/Event/Thread.hs b/libraries/base/GHC/Event/Thread.hs > > > index 6e991bf..dcfa32a 100644 > > > --- a/libraries/base/GHC/Event/Thread.hs > > > +++ b/libraries/base/GHC/Event/Thread.hs > > > @@ -39,6 +39,7 @@ import GHC.Event.Manager (Event, EventManager, evtRead, evtWrite, loop, > > > import qualified GHC.Event.Manager as M > > > import qualified GHC.Event.TimerManager as TM > > > import GHC.Num ((-), (+)) > > > +import GHC.Show (showSignedInt) > > > import System.IO.Unsafe (unsafePerformIO) > > > import System.Posix.Types (Fd) > > > > > > @@ -244,11 +245,14 @@ startIOManagerThreads = > > > forM_ [0..high] (startIOManagerThread eventManagerArray) > > > writeIORef numEnabledEventManagers (high+1) > > > > > > +show_int :: Int -> String > > > +show_int i = showSignedInt 0 i "" > > > + > > > restartPollLoop :: EventManager -> Int -> IO ThreadId > > > restartPollLoop mgr i = do > > > M.release mgr > > > !t <- forkOn i $ loop mgr > > > - labelThread t "IOManager" > > > + labelThread t ("IOManager on cap " ++ show_int i) > > > return t > > > > > > startIOManagerThread :: IOArray Int (Maybe (ThreadId, EventManager)) > > > @@ -258,7 +262,7 @@ startIOManagerThread eventManagerArray i = do > > > let create = do > > > !mgr <- new True > > > !t <- forkOn i $ loop mgr > > > - labelThread t "IOManager" > > > + labelThread t ("IOManager on cap " ++ show_int i) > > > writeIOArray eventManagerArray i (Just (t,mgr)) > > > old <- readIOArray eventManagerArray i > > > case old of It does replace a reference to the string ("IOManager") by something involving allocation and computation. I guess that could have a measurable effect. What happens to programs relying on very cheap threads? Do we have benchmarks for this class of programs at all? Greetings, Joachim -- Joachim “nomeata” Breitner mail at joachim-breitner.de ?
http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From slyich at gmail.com Wed Aug 6 20:40:36 2014 From: slyich at gmail.com (Sergei Trofimovich) Date: Wed, 6 Aug 2014 23:40:36 +0300 Subject: Perf regression: ghc --make: add nicer names to RTS threads (threaded IO manager, make workers) (f686682) In-Reply-To: <20140806221534.1a5a922a@sf> References: <20140804131313.1D834240EA@ghc.haskell.org> <1407310245.1760.1.camel@joachim-breitner.de> <20140806221534.1a5a922a@sf> Message-ID: <20140806234036.12e6f758@sf> On Wed, 6 Aug 2014 22:15:34 +0300 Sergei Trofimovich wrote: > On Wed, 06 Aug 2014 09:30:45 +0200 > Joachim Breitner wrote: > > > Hi, > > > > the attached commit seems to have regressed the scs nofib benchmark by > > ~3%: > > http://ghcspeed-nomeata.rhcloud.com/timeline/?ben=nofib/time/scs&env=1#/?exe=2&base=2+68&ben=nofib/time/scs&env=1&revs=50&equid=on > > That's a test of compiled binary performance, not the compiler, right? > > > The graph unfortunately is in the wrong order, as the tool gets confused > > by timezones and by commits with identical CommitDate, e.g. due to > > rebasing. This needs to be fixed; I manually verified that the commit > > below is the first that shows the above-noise-level increase of runtime. > > > > (Other benchmarks seem to be unaffected.) > > > > Is this regression expected and intended, or unexpected? Is it fixable? > > Or is this simply inexplicable? > > The graph looks mysterious (18 ms bump). The benchmark does not use Haskell threads at all. > > I'll try to reproduce the degradation locally and will investigate. I think I know what happens. 
According to perf the benchmark spends 34%+ of time in garbage collection ('perf record -- $args'/'perf report'): 27,91% test test [.] evacuate 9,29% test test [.] s9Lz_info 7,46% test test [.] scavenge_block And the whole benchmark runs a tiny bit more than 300ms. It is exactly in line with the major GC timer (0.3s). If we run $ time ./test inverter 345 10n 4u 1>/dev/null multiple times there is heavy instability in there (with my patch reverted): real 0m0.319s real 0m0.305s real 0m0.307s real 0m0.373s real 0m0.381s which is +/- 80ms drift! Let's try to kick major GC earlier instead of running right at runtime shutdown time: $ time ./test inverter 345 10n 4u +RTS -I0.1 1>/dev/null real 0m0.304s real 0m0.308s real 0m0.302s real 0m0.304s real 0m0.308s real 0m0.306s real 0m0.305s real 0m0.312s which is way more stable behaviour. Thus my theory is that my change stepped from "90% of time 1 GC run per run" to "90% of time 2 GC runs per run". -- Sergei -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 181 bytes Desc: not available URL: From mail at joachim-breitner.de Wed Aug 6 20:44:54 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Wed, 06 Aug 2014 22:44:54 +0200 Subject: Perf regression: ghc --make: add nicer names to RTS threads (threaded IO manager, make workers) (f686682) In-Reply-To: <20140806234036.12e6f758@sf> References: <20140804131313.1D834240EA@ghc.haskell.org> <1407310245.1760.1.camel@joachim-breitner.de> <20140806221534.1a5a922a@sf> <20140806234036.12e6f758@sf> Message-ID: <1407357894.23222.5.camel@joachim-breitner.de> Hi, On Wednesday, 06.08.2014, at 23:40 +0300, Sergei Trofimovich wrote: > I think I know what happens. According to perf the benchmark spends 34%+ > of time in garbage collection ('perf record -- $args'/'perf report'): > > 27,91% test test [.] evacuate > 9,29% test test [.] s9Lz_info > 7,46% test test [.] 
scavenge_block > > And the whole benchmark runs a tiny bit more than 300ms. > It is exactly in line with the major GC timer (0.3s). > > If we run > $ time ./test inverter 345 10n 4u 1>/dev/null > multiple times there is heavy instability in there (with my patch reverted): > real 0m0.319s > real 0m0.305s > real 0m0.307s > real 0m0.373s > real 0m0.381s > which is +/- 80ms drift! > > Let's try to kick major GC earlier instead of running right at runtime > shutdown time: > $ time ./test inverter 345 10n 4u +RTS -I0.1 1>/dev/null > > real 0m0.304s > real 0m0.308s > real 0m0.302s > real 0m0.304s > real 0m0.308s > real 0m0.306s > real 0m0.305s > real 0m0.312s > which is way more stable behaviour. > > Thus my theory is that my change stepped from > "90% of time 1 GC run per run" > to > "90% of time 2 GC runs per run" Great analysis, thanks. I think in this case we should not worry about it. From a QA perspective we are already doing well if we consider the apparent regressions. If we can explain them and consider them acceptable, then it's fine. Greetings, Joachim -- Joachim 'nomeata' Breitner mail at joachim-breitner.de http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From karel.gardas at centrum.cz Thu Aug 7 08:08:17 2014 From: karel.gardas at centrum.cz (Karel Gardas) Date: Thu, 07 Aug 2014 10:08:17 +0200 Subject: linker_unload validate related issue (how to duplicate that?). In-Reply-To: <1407319413-sup-2223@sabre> References: <53E1F264.70009@centrum.cz> <1407319413-sup-2223@sabre> Message-ID: <53E333F1.7070200@centrum.cz> Hi Edward, thanks for dealing with this. I've found a slightly different reason. 
validate runs with ghc installed in bindisttest/install dir/ and the test invokes ghc-pkg to get the gmp library path. The dir above is then returned quoted: "...../install dir/....". What I did in my linker_unload fix was to use cut -d ' ' -f 1-1 (IIRC), which returned "..../install and nothing more (the closing quote was missing). The shell then complains with the message below. Anyway, this is already fixed by reverting the patch, and I already validated and pushed another version of it which correctly uses head -1 to get the intended first directory name... Hopefully this time I've not broken anything. Thanks! Karel On 08/ 6/14 12:04 PM, Edward Z. Yang wrote: > Austin and I chatted about it, and it's probably because the test is not > creating ghcconfig.h early enough. I haven't looked further on how to > fix it though. > > Edward > > Excerpts from Karel Gardas's message of 2014-08-06 10:16:20 +0100: >> >> Folks, >> >> I've noted that validate is failing on Linux recently due to an issue in >> linker_unload. As I've submitted a patch to this test case recently >> which fixes this on Solaris, I'm kind of curious if I broke it or not. >> Anyway, the strange thing is: when I configure ghc and run the test by >> (g)make TEST=linker_unload on both Linux and Solaris I get no failure. >> When I validate on Linux (validate is not working on Solaris yet), then >> I get a failure in linker_unload: >> >> Wrong exit code (expected 0 , actual 2 ) >> Stdout: >> >> Stderr: >> /bin/sh: 1: Syntax error: Unterminated quoted string >> make[3]: *** [linker_unload] Error 2 >> >> *** unexpected failure for linker_unload(normal) >> >> >> when I try to run: >> >> cd testsuite >> make TEST=linker_unload >> >> inside this validation tree I again get no failure in this test: >> >> [...] 
>> =====> linker_unload(normal) 2522 of 4082 [0, 0, 0] >> cd ./rts && $MAKE -s --no-print-directory linker_unload >linker_unload.run.stdout 2>linker_unload.run.stderr >> OVERALL SUMMARY for test run started at Wed Aug 6 10:55:17 2014 CEST >> 0:00:08 spent to go through >> 4082 total tests, which gave rise to >> 13459 test cases, of which >> 13458 were skipped >> >> 0 had missing libraries >> 1 expected passes >> 0 expected failures >> >> 0 caused framework failures >> 0 unexpected passes >> 0 unexpected failures >> >> make[1]: Leaving directory `/home/karel/src/validate-test/testsuite/tests' >> >> I've also noted that this test case fails on Solaris builders with a >> strange error: >> >> =====> linker_unload(normal) 170 of 4082 [0, 0, 1] >> cd ./rts && $MAKE -s --no-print-directory linker_unload >linker_unload.run.stdout 2>linker_unload.run.stderr >> Wrong exit code (expected 0 , actual 2 ) >> Stdout: >> Stderr: >> linker_unload: internal error: loadObj: can't read >> `/buildbot/gabor-ghc-head-builder/builder/tempbuild/build/bindisttest/install/libHSinteg_BcPVjqcazPNGsNFG4agFty.a' >> (GHC version 7.9.20140806 for i386_unknown_solaris2) >> Please report this as a GHC bug: http://www.haskell.org/ghc/reportabug >> gmake[3]: *** [linker_unload] Abort (core dumped) >> >> >> So the question is: why does validate fail, why does the builder fail on this >> particular test, and why is my common testing on both Solaris and Linux >> not able to duplicate the issue? What's so different between validate >> and builders and my common: perl boot; ./configure <params>; gmake -j12; cd testsuite; gmake THREADS=12 fast >> ? >> >> Thanks! 
>> Karel > From johan.tibell at gmail.com Thu Aug 7 11:10:37 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Thu, 7 Aug 2014 13:10:37 +0200 Subject: Improving the Int/Word story inside GHC Message-ID: Inside GHC we mostly use Int instead of Word, even when we want to represent non-negative values, such as sizes of things or indices into things. This is now causing some grief in https://ghc.haskell.org/trac/ghc/ticket/9416, where an allocation boundary case test fails with a segfault because an n < m Int comparison overflows. I tried to fix the issue by changing the type of maxInlineAllocSize, which is used on one side of the above comparison, to Word. However, that unravels a bunch of other issues, such as the fact that wordsToBytes, ByteOff, etc. are all Int-valued quantities. I could perhaps work around these problems by judicious use of fromIntegral in StgCmmPrim, but I'm a bit unhappy about it because it 1) makes the code uglier and 2) needs to be done in quite a few places. How much work would it be to try to switch the codegen to use Word for most of these quantities instead? -- Johan -------------- next part -------------- An HTML attachment was scrubbed... URL: From ezyang at mit.edu Thu Aug 7 11:16:44 2014 From: ezyang at mit.edu (Edward Z. Yang) Date: Thu, 07 Aug 2014 12:16:44 +0100 Subject: Improving the Int/Word story inside GHC In-Reply-To: References: Message-ID: <1407410184-sup-8595@sabre> If it's strictly just in the codegen (and not affecting user code), seems fine to me. Edward Excerpts from Johan Tibell's message of 2014-08-07 12:10:37 +0100: > Inside GHC we mostly use Int instead of Word, even when we want to > represent non-negative values, such as sizes of things or indices into > things. This is now causing some grief in > https://ghc.haskell.org/trac/ghc/ticket/9416, where an allocation boundary > case test fails with a segfault because an n < m Int comparison overflows. 
> > I tried to fix the issue by changing the type of maxInlineAllocSize, which > is used on one side of the above comparison, to Word. However, that > unravels a bunch of other issues, such as wordsToBytes, ByteOff, etc are > all Int-valued quantities. > > I could perhaps work around these problems by judicious use of fromIntegral > in StgCmmPrim, but I'm a bit unhappy about it because it 1) makes the code > uglier and 2) needs to be done in quite a few places. > > How much work would it be to try to switch the codegen to use Word for most > of these quantities instead? > > -- Johan From johan.tibell at gmail.com Thu Aug 7 11:21:09 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Thu, 7 Aug 2014 13:21:09 +0200 Subject: Improving the Int/Word story inside GHC In-Reply-To: <1407410184-sup-8595@sabre> References: <1407410184-sup-8595@sabre> Message-ID: Simon M, is the intention of ByteOff and WordOff that they should be able to represent negative quantities as well? If so we might need to split it into ByteOff (still an Int) and ByteIndex (a Word) to have a type for indexing into arrays. On Thu, Aug 7, 2014 at 1:16 PM, Edward Z. Yang wrote: > If it's strictly just in the codegen (and not affecting user code), > seems fine to me. > > Edward > > Excerpts from Johan Tibell's message of 2014-08-07 12:10:37 +0100: > > Inside GHC we mostly use Int instead of Word, even when we want to > > represent non-negative values, such as sizes of things or indices into > > things. This is now causing some grief in > > https://ghc.haskell.org/trac/ghc/ticket/9416, where an allocation > boundary > > case test fails with a segfault because a n < m Int comparison overflows. > > > > I tried to fix the issue by changing the type of maxInlineAllocSize, > which > > is used on one side of the above comparison, to Word. However, that > > unravels a bunch of other issues, such as wordsToBytes, ByteOff, etc are > > all Int-valued quantities. 
> > > I could perhaps work around these problems by judicious use of > fromIntegral > > in StgCmmPrim, but I'm a bit unhappy about it because it 1) makes the > code > > uglier and 2) needs to be done in quite a few places. > > > > How much work would it be to try to switch the codegen to use Word for > most > > of these quantities instead? > > > > -- Johan > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Thu Aug 7 11:49:28 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 7 Aug 2014 11:49:28 +0000 Subject: Improving the Int/Word story inside GHC In-Reply-To: References: <1407410184-sup-8595@sabre> Message-ID: <618BE556AADD624C9C918AA5D5911BEF22197351@DBXPRD3001MB024.064d.mgd.msft.net> I'm all for it! I believe that ByteOff/WordOff are always 0 or positive. At least, they were when I introduced them! Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Johan Tibell Sent: 07 August 2014 12:21 To: Simon Marlow Cc: ghc-devs at haskell.org Subject: Re: Improving the Int/Word story inside GHC Simon M, is the intention of ByteOff and WordOff that they should be able to represent negative quantities as well? If so we might need to split it into ByteOff (still an Int) and ByteIndex (a Word) to have a type for indexing into arrays. On Thu, Aug 7, 2014 at 1:16 PM, Edward Z. Yang > wrote: If it's strictly just in the codegen (and not affecting user code), seems fine to me. Edward Excerpts from Johan Tibell's message of 2014-08-07 12:10:37 +0100: > Inside GHC we mostly use Int instead of Word, even when we want to > represent non-negative values, such as sizes of things or indices into > things. This is now causing some grief in > https://ghc.haskell.org/trac/ghc/ticket/9416, where an allocation > boundary > case test fails with a segfault because an n < m Int comparison overflows. 
> > I tried to fix the issue by changing the type of maxInlineAllocSize, which > is used on one side of the above comparison, to Word. However, that > unravels a bunch of other issues, such as wordsToBytes, ByteOff, etc are > all Int-valued quantities. > > I could perhaps work around these problems by judicious use of fromIntegral > in StgCmmPrim, but I'm a bit unhappy about it because it 1) makes the code > uglier and 2) needs to be done in quite a few places. > > How much work would it be to try to switch the codegen to use Word for most > of these quantities instead? > > -- Johan -------------- next part -------------- An HTML attachment was scrubbed... URL: From johan.tibell at gmail.com Thu Aug 7 13:45:43 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Thu, 7 Aug 2014 15:45:43 +0200 Subject: Improving the Int/Word story inside GHC In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF22197351@DBXPRD3001MB024.064d.mgd.msft.net> References: <1407410184-sup-8595@sabre> <618BE556AADD624C9C918AA5D5911BEF22197351@DBXPRD3001MB024.064d.mgd.msft.net> Message-ID: I'm hacking on this now. I'm not 100% sure that ByteOff isn't used for negative values though, see for example mkTaggedObjectLoad :: DynFlags -> LocalReg -> LocalReg -> ByteOff -> DynTag -> CmmAGraph -- (loadTaggedObjectField reg base off tag) generates assignment -- reg = bitsK[ base + off - tag ] -- where K is fixed by 'reg' mkTaggedObjectLoad dflags reg base offset tag = mkAssign (CmmLocal reg) (CmmLoad (cmmOffsetB dflags (CmmReg (CmmLocal base)) (offset - tag)) (localRegType reg)) from StgCmmUtils. Wouldn't it be possible that the offset in cmmOffsetB (which is of type ByteOff) could be negative? On Thu, Aug 7, 2014 at 1:49 PM, Simon Peyton Jones wrote: > I?m all for it! > > > > I believe that ByteOff/WordOff are always 0 or positive. At least, they > were when I introduced them! 
> > > > SImon > > > > *From:* ghc-devs [mailto:ghc-devs-bounces at haskell.org] *On Behalf Of *Johan > Tibell > *Sent:* 07 August 2014 12:21 > *To:* Simon Marlow > *Cc:* ghc-devs at haskell.org > *Subject:* Re: Improving the Int/Word story inside GHC > > > > Simon M, is the intention of ByteOff and WordOff that they should be able > to represent negative quantities as well? If so we might need to split it > into ByteOff (still an Int) and ByteIndex (a Word) to have a type for > indexing into arrays. > > > > On Thu, Aug 7, 2014 at 1:16 PM, Edward Z. Yang wrote: > > If it's strictly just in the codegen (and not affecting user code), > seems fine to me. > > Edward > > Excerpts from Johan Tibell's message of 2014-08-07 12:10:37 +0100: > > > Inside GHC we mostly use Int instead of Word, even when we want to > > represent non-negative values, such as sizes of things or indices into > > things. This is now causing some grief in > > https://ghc.haskell.org/trac/ghc/ticket/9416, where an allocation > boundary > > case test fails with a segfault because a n < m Int comparison overflows. > > > > I tried to fix the issue by changing the type of maxInlineAllocSize, > which > > is used on one side of the above comparison, to Word. However, that > > unravels a bunch of other issues, such as wordsToBytes, ByteOff, etc are > > all Int-valued quantities. > > > > I could perhaps work around these problems by judicious use of > fromIntegral > > in StgCmmPrim, but I'm a bit unhappy about it because it 1) makes the > code > > uglier and 2) needs to be done in quite a few places. > > > > How much work would it be to try to switch the codegen to use Word for > most > > of these quantities instead? > > > > -- Johan > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ezyang at mit.edu Thu Aug 7 13:49:04 2014 From: ezyang at mit.edu (Edward Z. 
Yang) Date: Thu, 07 Aug 2014 14:49:04 +0100 Subject: Improving the Int/Word story inside GHC In-Reply-To: References: <1407410184-sup-8595@sabre> <618BE556AADD624C9C918AA5D5911BEF22197351@DBXPRD3001MB024.064d.mgd.msft.net> Message-ID: <1407419323-sup-1892@sabre> Yes, in particular if the offset is zero. Morally, however, we're just doing this to clear the tag bit. Edward Excerpts from Johan Tibell's message of 2014-08-07 14:45:43 +0100: > I'm hacking on this now. I'm not 100% sure that ByteOff isn't used for > negative values though, see for example > > mkTaggedObjectLoad > :: DynFlags -> LocalReg -> LocalReg -> ByteOff -> DynTag -> CmmAGraph > -- (loadTaggedObjectField reg base off tag) generates assignment > -- reg = bitsK[ base + off - tag ] > -- where K is fixed by 'reg' > mkTaggedObjectLoad dflags reg base offset tag > = mkAssign (CmmLocal reg) > (CmmLoad (cmmOffsetB dflags > (CmmReg (CmmLocal base)) > (offset - tag)) > (localRegType reg)) > > from StgCmmUtils. > > Wouldn't it be possible that the offset in cmmOffsetB (which is of type > ByteOff) could be negative? > > > > On Thu, Aug 7, 2014 at 1:49 PM, Simon Peyton Jones > wrote: > > > I?m all for it! > > > > > > > > I believe that ByteOff/WordOff are always 0 or positive. At least, they > > were when I introduced them! > > > > > > > > SImon > > > > > > > > *From:* ghc-devs [mailto:ghc-devs-bounces at haskell.org] *On Behalf Of *Johan > > Tibell > > *Sent:* 07 August 2014 12:21 > > *To:* Simon Marlow > > *Cc:* ghc-devs at haskell.org > > *Subject:* Re: Improving the Int/Word story inside GHC > > > > > > > > Simon M, is the intention of ByteOff and WordOff that they should be able > > to represent negative quantities as well? If so we might need to split it > > into ByteOff (still an Int) and ByteIndex (a Word) to have a type for > > indexing into arrays. > > > > > > > > On Thu, Aug 7, 2014 at 1:16 PM, Edward Z. 
Yang wrote: > > > > If it's strictly just in the codegen (and not affecting user code), > > seems fine to me. > > > > Edward > > > > Excerpts from Johan Tibell's message of 2014-08-07 12:10:37 +0100: > > > > > Inside GHC we mostly use Int instead of Word, even when we want to > > > represent non-negative values, such as sizes of things or indices into > > > things. This is now causing some grief in > > > https://ghc.haskell.org/trac/ghc/ticket/9416, where an allocation > > boundary > > > case test fails with a segfault because a n < m Int comparison overflows. > > > > > > I tried to fix the issue by changing the type of maxInlineAllocSize, > > which > > > is used on one side of the above comparison, to Word. However, that > > > unravels a bunch of other issues, such as wordsToBytes, ByteOff, etc are > > > all Int-valued quantities. > > > > > > I could perhaps work around these problems by judicious use of > > fromIntegral > > > in StgCmmPrim, but I'm a bit unhappy about it because it 1) makes the > > code > > > uglier and 2) needs to be done in quite a few places. > > > > > > How much work would it be to try to switch the codegen to use Word for > > most > > > of these quantities instead? > > > > > > -- Johan > > > > > > From johan.tibell at gmail.com Thu Aug 7 13:53:05 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Thu, 7 Aug 2014 15:53:05 +0200 Subject: Improving the Int/Word story inside GHC In-Reply-To: <1407419323-sup-1892@sabre> References: <1407410184-sup-8595@sabre> <618BE556AADD624C9C918AA5D5911BEF22197351@DBXPRD3001MB024.064d.mgd.msft.net> <1407419323-sup-1892@sabre> Message-ID: I guess this example, from mk_switch in StgCmmUtils, is the same return (mkSwitch (cmmOffset dflags tag_expr (- real_lo_tag)) arms) ? (This is clearly a negative offset and I don't know the implications of the Cmm code we output if we switch to ByteOff = Word) On Thu, Aug 7, 2014 at 3:49 PM, Edward Z. Yang wrote: > Yes, in particular if the offset is zero. 
Morally, however, we're > just doing this to clear the tag bit. > > Edward > > Excerpts from Johan Tibell's message of 2014-08-07 14:45:43 +0100: > > I'm hacking on this now. I'm not 100% sure that ByteOff isn't used for > > negative values though, see for example > > > > mkTaggedObjectLoad > > :: DynFlags -> LocalReg -> LocalReg -> ByteOff -> DynTag -> CmmAGraph > > -- (loadTaggedObjectField reg base off tag) generates assignment > > -- reg = bitsK[ base + off - tag ] > > -- where K is fixed by 'reg' > > mkTaggedObjectLoad dflags reg base offset tag > > = mkAssign (CmmLocal reg) > > (CmmLoad (cmmOffsetB dflags > > (CmmReg (CmmLocal base)) > > (offset - tag)) > > (localRegType reg)) > > > > from StgCmmUtils. > > > > Wouldn't it be possible that the offset in cmmOffsetB (which is of type > > ByteOff) could be negative? > > > > > > > > On Thu, Aug 7, 2014 at 1:49 PM, Simon Peyton Jones < > simonpj at microsoft.com> > > wrote: > > > > > I?m all for it! > > > > > > > > > > > > I believe that ByteOff/WordOff are always 0 or positive. At least, > they > > > were when I introduced them! > > > > > > > > > > > > SImon > > > > > > > > > > > > *From:* ghc-devs [mailto:ghc-devs-bounces at haskell.org] *On Behalf Of > *Johan > > > Tibell > > > *Sent:* 07 August 2014 12:21 > > > *To:* Simon Marlow > > > *Cc:* ghc-devs at haskell.org > > > *Subject:* Re: Improving the Int/Word story inside GHC > > > > > > > > > > > > Simon M, is the intention of ByteOff and WordOff that they should be > able > > > to represent negative quantities as well? If so we might need to split > it > > > into ByteOff (still an Int) and ByteIndex (a Word) to have a type for > > > indexing into arrays. > > > > > > > > > > > > On Thu, Aug 7, 2014 at 1:16 PM, Edward Z. Yang wrote: > > > > > > If it's strictly just in the codegen (and not affecting user code), > > > seems fine to me. 
> > > > > > Edward > > > > > > Excerpts from Johan Tibell's message of 2014-08-07 12:10:37 +0100: > > > > > > > Inside GHC we mostly use Int instead of Word, even when we want to > > > > represent non-negative values, such as sizes of things or indices > into > > > > things. This is now causing some grief in > > > > https://ghc.haskell.org/trac/ghc/ticket/9416, where an allocation > > > boundary > > > > case test fails with a segfault because a n < m Int comparison > overflows. > > > > > > > > I tried to fix the issue by changing the type of maxInlineAllocSize, > > > which > > > > is used on one side of the above comparison, to Word. However, that > > > > unravels a bunch of other issues, such as wordsToBytes, ByteOff, etc > are > > > > all Int-valued quantities. > > > > > > > > I could perhaps work around these problems by judicious use of > > > fromIntegral > > > > in StgCmmPrim, but I'm a bit unhappy about it because it 1) makes the > > > code > > > > uglier and 2) needs to be done in quite a few places. > > > > > > > > How much work would it be to try to switch the codegen to use Word > for > > > most > > > > of these quantities instead? > > > > > > > > -- Johan > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ezyang at mit.edu Thu Aug 7 13:56:46 2014 From: ezyang at mit.edu (Edward Z. Yang) Date: Thu, 07 Aug 2014 14:56:46 +0100 Subject: New docs about tracking down regressions in GHC Message-ID: <1407419578-sup-1002@sabre> I recently spent some time debugging a performance regression in Haddock, and came up with some useful tips and tricks for tracking these things down in GHC. I wrote them up here: https://ghc.haskell.org/trac/ghc/wiki/Debugging/ProfilingGhc Please take a look. 
Thanks, Edward From johan.tibell at gmail.com Thu Aug 7 14:15:16 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Thu, 7 Aug 2014 16:15:16 +0200 Subject: Improving the Int/Word story inside GHC In-Reply-To: References: <1407410184-sup-8595@sabre> <618BE556AADD624C9C918AA5D5911BEF22197351@DBXPRD3001MB024.064d.mgd.msft.net> <1407419323-sup-1892@sabre> Message-ID: I've uploaded https://phabricator.haskell.org/D125 to give an idea of what such a change might look like. I'm not quite done with the change (it's quite a chore), but the commit gives a sense of its overall shape. I'm still not convinced that making ByteOff a Word is the right thing; there seem to be several cases where it's used to represent a negative offset. On Thu, Aug 7, 2014 at 3:53 PM, Johan Tibell wrote: > I guess this example, from mk_switch in StgCmmUtils, is the same > > return (mkSwitch (cmmOffset dflags tag_expr (- real_lo_tag)) arms) > > ? > > (This is clearly a negative offset and I don't know the implications of > the Cmm code we output if we switch to ByteOff = Word) > > > On Thu, Aug 7, 2014 at 3:49 PM, Edward Z. Yang wrote: > >> Yes, in particular if the offset is zero. Morally, however, we're >> just doing this to clear the tag bit. >> >> Edward >> >> Excerpts from Johan Tibell's message of 2014-08-07 14:45:43 +0100: >> > I'm hacking on this now. I'm not 100% sure that ByteOff isn't used for >> > negative values though, see for example >> > >> > mkTaggedObjectLoad >> > :: DynFlags -> LocalReg -> LocalReg -> ByteOff -> DynTag -> CmmAGraph >> > -- (loadTaggedObjectField reg base off tag) generates assignment >> > -- reg = bitsK[ base + off - tag ] >> > -- where K is fixed by 'reg' >> > mkTaggedObjectLoad dflags reg base offset tag >> > = mkAssign (CmmLocal reg) >> > (CmmLoad (cmmOffsetB dflags >> > (CmmReg (CmmLocal base)) >> > (offset - tag)) >> > (localRegType reg)) >> > >> > from StgCmmUtils. 
>> > >> > Wouldn't it be possible that the offset in cmmOffsetB (which is of type >> > ByteOff) could be negative? >> > >> > >> > >> > On Thu, Aug 7, 2014 at 1:49 PM, Simon Peyton Jones < >> simonpj at microsoft.com> >> > wrote: >> > >> > > I?m all for it! >> > > >> > > >> > > >> > > I believe that ByteOff/WordOff are always 0 or positive. At least, >> they >> > > were when I introduced them! >> > > >> > > >> > > >> > > SImon >> > > >> > > >> > > >> > > *From:* ghc-devs [mailto:ghc-devs-bounces at haskell.org] *On Behalf Of >> *Johan >> > > Tibell >> > > *Sent:* 07 August 2014 12:21 >> > > *To:* Simon Marlow >> > > *Cc:* ghc-devs at haskell.org >> > > *Subject:* Re: Improving the Int/Word story inside GHC >> > > >> > > >> > > >> > > Simon M, is the intention of ByteOff and WordOff that they should be >> able >> > > to represent negative quantities as well? If so we might need to >> split it >> > > into ByteOff (still an Int) and ByteIndex (a Word) to have a type for >> > > indexing into arrays. >> > > >> > > >> > > >> > > On Thu, Aug 7, 2014 at 1:16 PM, Edward Z. Yang >> wrote: >> > > >> > > If it's strictly just in the codegen (and not affecting user code), >> > > seems fine to me. >> > > >> > > Edward >> > > >> > > Excerpts from Johan Tibell's message of 2014-08-07 12:10:37 +0100: >> > > >> > > > Inside GHC we mostly use Int instead of Word, even when we want to >> > > > represent non-negative values, such as sizes of things or indices >> into >> > > > things. This is now causing some grief in >> > > > https://ghc.haskell.org/trac/ghc/ticket/9416, where an allocation >> > > boundary >> > > > case test fails with a segfault because a n < m Int comparison >> overflows. >> > > > >> > > > I tried to fix the issue by changing the type of maxInlineAllocSize, >> > > which >> > > > is used on one side of the above comparison, to Word. However, that >> > > > unravels a bunch of other issues, such as wordsToBytes, ByteOff, >> etc are >> > > > all Int-valued quantities. 
>> > > > >> > > > I could perhaps work around these problems by judicious use of >> > > fromIntegral >> > > > in StgCmmPrim, but I'm a bit unhappy about it because it 1) makes >> the >> > > code >> > > > uglier and 2) needs to be done in quite a few places. >> > > > >> > > > How much work would it be to try to switch the codegen to use Word >> for >> > > most >> > > > of these quantities instead? >> > > > >> > > > -- Johan >> > > >> > > >> > > >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marlowsd at gmail.com Thu Aug 7 14:25:39 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Thu, 07 Aug 2014 15:25:39 +0100 Subject: Improving the Int/Word story inside GHC In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF22197351@DBXPRD3001MB024.064d.mgd.msft.net> References: <1407410184-sup-8595@sabre> <618BE556AADD624C9C918AA5D5911BEF22197351@DBXPRD3001MB024.064d.mgd.msft.net> Message-ID: <53E38C63.3040901@gmail.com> Hmm, surely these are used for negative offsets a lot? All Hp-relative indices are negative (but virtual Hp offsets are positive), and Sp-relative indices can be both negative and positive. On 07/08/2014 12:49, Simon Peyton Jones wrote: > I?m all for it! > > I believe that ByteOff/WordOff are always 0 or positive. At least, > they were when I introduced them! > > SImon > > *From:*ghc-devs [mailto:ghc-devs-bounces at haskell.org] *On Behalf Of > *Johan Tibell > *Sent:* 07 August 2014 12:21 > *To:* Simon Marlow > *Cc:* ghc-devs at haskell.org > *Subject:* Re: Improving the Int/Word story inside GHC > > Simon M, is the intention of ByteOff and WordOff that they should be > able to represent negative quantities as well? If so we might need to > split it into ByteOff (still an Int) and ByteIndex (a Word) to have a > type for indexing into arrays. > > On Thu, Aug 7, 2014 at 1:16 PM, Edward Z. Yang > wrote: > > If it's strictly just in the codegen (and not affecting user code), > seems fine to me. 
> > Edward > > Excerpts from Johan Tibell's message of 2014-08-07 12:10:37 +0100: > > > Inside GHC we mostly use Int instead of Word, even when we want to > > represent non-negative values, such as sizes of things or indices > into > > things. This is now causing some grief in > > https://ghc.haskell.org/trac/ghc/ticket/9416, where an allocation > boundary > > case test fails with a segfault because a n < m Int comparison > overflows. > > > > I tried to fix the issue by changing the type of > maxInlineAllocSize, which > > is used on one side of the above comparison, to Word. However, that > > unravels a bunch of other issues, such as wordsToBytes, ByteOff, > etc are > > all Int-valued quantities. > > > > I could perhaps work around these problems by judicious use of > fromIntegral > > in StgCmmPrim, but I'm a bit unhappy about it because it 1) makes > the code > > uglier and 2) needs to be done in quite a few places. > > > > How much work would it be to try to switch the codegen to use > Word for most > > of these quantities instead? > > > > -- Johan > From marlowsd at gmail.com Thu Aug 7 14:36:27 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Thu, 07 Aug 2014 15:36:27 +0100 Subject: Improving the Int/Word story inside GHC In-Reply-To: References: Message-ID: <53E38EEB.8000400@gmail.com> On 07/08/2014 12:10, Johan Tibell wrote: > Inside GHC we mostly use Int instead of Word, even when we want to > represent non-negative values, such as sizes of things or indices into > things. This is now causing some grief in > https://ghc.haskell.org/trac/ghc/ticket/9416, where an allocation > boundary case test fails with a segfault because a n < m Int comparison > overflows. > > I tried to fix the issue by changing the type of maxInlineAllocSize, > which is used on one side of the above comparison, to Word. However, > that unravels a bunch of other issues, such as wordsToBytes, ByteOff, > etc are all Int-valued quantities. 
> > I could perhaps work around these problems by judicious use of > fromIntegral in StgCmmPrim, but I'm a bit unhappy about it because it 1) > makes the code uglier and 2) needs to be done in quite a few places. I think doing the comparison with Integer is the right fix. Relying on Word being big enough for these things is technically wrong because we might be cross-compiling from a smaller word size. Cheers, Simon From marlowsd at gmail.com Thu Aug 7 14:42:07 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Thu, 07 Aug 2014 15:42:07 +0100 Subject: Perf regression: ghc --make: add nicer names to RTS threads (threaded IO manager, make workers) (f686682) In-Reply-To: <20140806234036.12e6f758@sf> References: <20140804131313.1D834240EA@ghc.haskell.org> <1407310245.1760.1.camel@joachim-breitner.de> <20140806221534.1a5a922a@sf> <20140806234036.12e6f758@sf> Message-ID: <53E3903F.40804@gmail.com> On 06/08/2014 21:40, Sergei Trofimovich wrote: > I think I know what happens. According to perf the benchmark spends 34%+ > of time in garbage collection ('perf record -- $args'/'perf report'): > > 27,91% test test [.] evacuate > 9,29% test test [.] s9Lz_info > 7,46% test test [.] scavenge_block > > And the whole benchmark runs a tiny bit more than 300ms. > It is exactly in line with major GC timer (0.3s). 0.3s is the *idle* GC timer, it has no effect when the program is running normally. There's no timed GC or anything like that. It sometimes happens that a tiny change somewhere tips a program over into doing one more major GC, though. > If we run > $ time ./test inverter 345 10n 4u 1>/dev/null > multiple times there is heavy instability in there (with my patch reverted): > real 0m0.319s > real 0m0.305s > real 0m0.307s > real 0m0.373s > real 0m0.381s > which is +/- 80ms drift! 
> > Let's try to kick major GC earlier instead of running right at runtime > shutdown time: > $ time ./test inverter 345 10n 4u +RTS -I0.1 1>/dev/null > > real 0m0.304s > real 0m0.308s > real 0m0.302s > real 0m0.304s > real 0m0.308s > real 0m0.306s > real 0m0.305s > real 0m0.312s > which is way more stable behaviour. > > Thus my theory is that my changed stepped from > "90% of time 1 GC run per run" > to > "90% of time 2 GC runs per run" Is this program idle? I have no idea why this might be happening! If the program is busy computing stuff, the idle GC should not be firing. If it is, that's a bug. Cheers, Simon From simonpj at microsoft.com Thu Aug 7 14:45:27 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 7 Aug 2014 14:45:27 +0000 Subject: Improving the Int/Word story inside GHC In-Reply-To: <53E38C63.3040901@gmail.com> References: <1407410184-sup-8595@sabre> <618BE556AADD624C9C918AA5D5911BEF22197351@DBXPRD3001MB024.064d.mgd.msft.net> <53E38C63.3040901@gmail.com> Message-ID: <618BE556AADD624C9C918AA5D5911BEF22198726@DBXPRD3001MB024.064d.mgd.msft.net> When I introduced them in the first place they were used for positive offsets within StackAreas and heap objects. Both are organised with the zeroth byte of the stack area or heap object being at the lowest address. It's true that a positive offset from the beginning of a block of contiguous freshly-allocated heap objects will turn into a negative displacement from the actual, physical heap pointer. If ByteOff is used for both purpose then yes there will be negative ones. More than that I cannot say. They may well be being used for other purposes by now. One thought is that the profiling word appears just *before* the start of a heap object, so that might need a negative offset, but it seems like a rather special case. 
Simon | -----Original Message----- | From: Simon Marlow [mailto:marlowsd at gmail.com] | Sent: 07 August 2014 15:26 | To: Simon Peyton Jones; Johan Tibell | Cc: ghc-devs at haskell.org | Subject: Re: Improving the Int/Word story inside GHC | | Hmm, surely these are used for negative offsets a lot? All Hp-relative | indices are negative (but virtual Hp offsets are positive), and Sp- | relative indices can be both negative and positive. | | On 07/08/2014 12:49, Simon Peyton Jones wrote: | > I?m all for it! | > | > I believe that ByteOff/WordOff are always 0 or positive. At least, | > they were when I introduced them! | > | > SImon | > | > *From:*ghc-devs [mailto:ghc-devs-bounces at haskell.org] *On Behalf Of | > *Johan Tibell | > *Sent:* 07 August 2014 12:21 | > *To:* Simon Marlow | > *Cc:* ghc-devs at haskell.org | > *Subject:* Re: Improving the Int/Word story inside GHC | > | > Simon M, is the intention of ByteOff and WordOff that they should be | > able to represent negative quantities as well? If so we might need to | > split it into ByteOff (still an Int) and ByteIndex (a Word) to have a | > type for indexing into arrays. | > | > On Thu, Aug 7, 2014 at 1:16 PM, Edward Z. Yang > wrote: | > | > If it's strictly just in the codegen (and not affecting user | code), | > seems fine to me. | > | > Edward | > | > Excerpts from Johan Tibell's message of 2014-08-07 12:10:37 | +0100: | > | > > Inside GHC we mostly use Int instead of Word, even when we | want to | > > represent non-negative values, such as sizes of things or | indices | > into | > > things. This is now causing some grief in | > > https://ghc.haskell.org/trac/ghc/ticket/9416, where an | allocation | > boundary | > > case test fails with a segfault because a n < m Int comparison | > overflows. | > > | > > I tried to fix the issue by changing the type of | > maxInlineAllocSize, which | > > is used on one side of the above comparison, to Word. 
However, | that | > > unravels a bunch of other issues, such as wordsToBytes, | ByteOff, | > etc are | > > all Int-valued quantities. | > > | > > I could perhaps work around these problems by judicious use of | > fromIntegral | > > in StgCmmPrim, but I'm a bit unhappy about it because it 1) | makes | > the code | > > uglier and 2) needs to be done in quite a few places. | > > | > > How much work would it be to try to switch the codegen to use | > Word for most | > > of these quantities instead? | > > | > > -- Johan | > From singpolyma at singpolyma.net Thu Aug 7 14:47:21 2014 From: singpolyma at singpolyma.net (Stephen Paul Weber) Date: Thu, 7 Aug 2014 09:47:21 -0500 Subject: Overlapping and incoherent instances In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF220981A1@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF2207B3A1@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF2208260F@DB3PRD3001MB020.064d.mgd.msft.net> <20140802152714.GA119560@srcf.ucam.org> <20140802195157.GB119560@srcf.ucam.org> <20140802205457.GA157265@srcf.ucam.org> <618BE556AADD624C9C918AA5D5911BEF220981A1@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <20140807144721.GB2003@singpolyma-liberty> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 >I suppose that -XOverlappingInstances could mean "silently honour >OVERLAPPABLE/OVERLAPPING pragmas", while lacking it would mean "honour >OVERLAPPABLE/OVERLAPPING pragmas, but emit noisy warnings" or even "don't >honour them and warn". > >But that is different to the behaviour today, so we'd need a new LANGUAGE >pragma. Perhaps -XHonourOverlappingInstances or something. This would be a reasonable alternative to the keyword-based solution. At least for now. 
-----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.11 (GNU/Linux) iQIcBAEBCAAGBQJT45F5AAoJENEcKRHOUZzewwwQALUiK60sbIC6j5nbXHPCeau8 bN80pmhUL11lAyI7OCSQyV8S9AU7ikgaUPyKwk/cx/TtKQGLwghWFZlRCoCRrplk dmGsSCw2LN4GElHL22EcyELPcx74lknz3tmQuZg4oAUXm1h2w8774L3iHOE4Ompg 9gYzuOo8Ii3oHJ6fjIL0DwGWp92F8NrTPeQmziBOPHAgQYVFO8QMncXcOGtdIZUb PQr0szr9HLkcwKSrYoxvvZBqEj6AM0xj+KhX87NEnQ08EaY6FO/Dhm4mi7X/A5Ca khwsjwId0qR6C4vs7QyYnaK3yiUFlZMUlXdUhpRG3wWFHseb3m9tX2a2ewE30jPl TgnoU5NYphijMWXpc3p06D5Zj0lT6L++Y3Ez8CS+0QpBPG9c0CJnLqXFQ5CNbl+9 Ryzpg7ltg5vhtq7BLwcz+V87lzc4KYm/tyqEVvxN59W6XLuLLe4tW6NL9vLDl9Rg /3vMRYKZBa+0we7jAi/wYbdks7g+9sqpiLqke5m73a5F2j/TTl4BrkIj2vG0KrP+ G9eJC1aieUbfPXXwXWSJh8binJNp//qWs7rILDsx4r0o1QR245uT5WzwkwapXmE8 ZYuXsVcc1/mS1ZneOP6zn0QOlRcUJ0A5dFxQg69hsXpD+MM226r/57P3eI9XhLRN W+DNBoXbr9mp8eNGjl6O =CFkA -----END PGP SIGNATURE----- From johan.tibell at gmail.com Thu Aug 7 15:01:42 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Thu, 7 Aug 2014 17:01:42 +0200 Subject: Improving the Int/Word story inside GHC In-Reply-To: <53E38EEB.8000400@gmail.com> References: <53E38EEB.8000400@gmail.com> Message-ID: On Thu, Aug 7, 2014 at 4:36 PM, Simon Marlow wrote: > I think doing the comparison with Integer is the right fix. Relying on Word > being big enough for these things is technically wrong because we might be > cross-compiling from a smaller word size. That sounds like an easier fix and I will try that. Unfortunately working with Integers means lots of our convenience functions, such as wordsToBytes, go out the window, as the all work on Byte/WordOff. Here's an example that now gets more annoying: shouldInlinePrimOp dflags NewArrayOp [(CmmLit (CmmInt n _)), init] | wordsToBytes dflags (fromInteger n) <= maxInlineAllocSize dflags = Most of our array primops are likely* still wrong, as the code that generates them uses Int everywhere. Still also sounds like a problem for cross-compiling. 
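[Editorial note: the wraparound discussed here is easy to demonstrate in isolation. Below is a standalone sketch — `bytesViaInt` and `fitsInlineAlloc` are made-up names, not GHC internals, and the 8-byte word size is an assumption — showing how an Int-typed size check can silently succeed after overflow, while routing the comparison through Integer gives the honest answer:]

```haskell
-- Illustrative only: these are not GHC's actual functions; 8 bytes
-- per word is an assumed host word size.
wordSizeBytes :: Int
wordSizeBytes = 8

-- Int arithmetic: the multiplication can wrap around for large n.
bytesViaInt :: Int -> Int
bytesViaInt n = n * wordSizeBytes

-- Integer arithmetic is exact, so the bound check cannot be fooled.
fitsInlineAlloc :: Integer -> Integer -> Bool
fitsInlineAlloc n maxBytes = n * toInteger wordSizeBytes <= maxBytes

main :: IO ()
main = do
  let n = maxBound `div` 2 + 1 :: Int       -- 2^62 on a 64-bit host
  print (bytesViaInt n <= 128)              -- True: the product wrapped around
  print (fitsInlineAlloc (toInteger n) 128) -- False: the real byte count is huge
```

The same shape of fix — lifting both sides of the `n < m` check to Integer before comparing — avoids the wraparound regardless of the host or target word size.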
* In some cases we might be lucky and the Int is never introspected and we just look at the bits (i.e. pretend it's a Word) when we generate code. From omeragacan at gmail.com Thu Aug 7 15:29:04 2014 From: omeragacan at gmail.com (Ömer Sinan Ağacan) Date: Thu, 7 Aug 2014 18:29:04 +0300 Subject: biographical profiling is broken? Message-ID: Hi all, I'm trying to use the LDV profiling features of GHC but I'm failing. Here's what I try: (I'm using GHC 7.8.2) * I'm compiling my app with `-prof` and I'm also using `-fprof-auto` just to be sure. * I'm running my app using `+RTS -hbdrag,void` as described in the docs. (https://www.haskell.org/ghc/docs/latest/html/users_guide/prof-heap.html#biography-prof) This always generates an empty MyApp.hp file. There's only this header in the generated file: JOB "MyApp +RTS -hd -hbdrag,void,lag" DATE "Thu Aug 7 18:14 2014" SAMPLE_UNIT "seconds" VALUE_UNIT "bytes" BEGIN_SAMPLE 0.00 END_SAMPLE 0.00 BEGIN_SAMPLE 0.10 END_SAMPLE 0.10 I tried different programs, from "hello world" to a complex language interpreter. I always get the same file with only a header. * I also tried adding more arguments like `-hc`, `-hm`, `-hr` etc. but I got the same results. I feel like the feature is broken. I checked the test suite to find some working LDV profiling programs. But as far as I can see we don't have any tests for LDV stuff. There's a `bio001.stdout` which I believe is related to "biographical profiling" (which means LDV) but again AFAICS it's not used. (I'm not seeing any different behavior or exceptions while running programs with LDV RTS arguments.) Can anyone help me with this? Is anyone using this feature? Am I right that this feature is not tested? Thanks.
--- Ömer Sinan Ağacan http://osa1.net From eir at cis.upenn.edu Thu Aug 7 18:34:19 2014 From: eir at cis.upenn.edu (Richard Eisenberg) Date: Thu, 7 Aug 2014 14:34:19 -0400 Subject: arc diff linter looping / stuck Message-ID: Hi all, I've prepared a bunch of commits to fix several tickets. After pushing these commits to branch wip/rae (to save my place and to get validate running on Travis), I then `git checkout`ed back to a point where `git diff origin/master` gave me a patch for precisely one bug (instead of the several unrelated ones I had fixed). I wanted to post to Differential. `arc diff` allowed me to fill out a description message (which mentioned, in its comments, the right set of commits), but then hung on the "linting..." stage. I suppose I could skip the linter, but it's more likely I've done something wrong here... Any advice? In the meantime, please consider this to be a request for feedback on everything in wip/rae! The bugs fixed are #9200, #9415, #9404, and #9371. Thanks! Richard From marlowsd at gmail.com Thu Aug 7 20:55:10 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Thu, 07 Aug 2014 21:55:10 +0100 Subject: Improving the Int/Word story inside GHC In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF22198726@DBXPRD3001MB024.064d.mgd.msft.net> References: <1407410184-sup-8595@sabre> <618BE556AADD624C9C918AA5D5911BEF22197351@DBXPRD3001MB024.064d.mgd.msft.net> <53E38C63.3040901@gmail.com> <618BE556AADD624C9C918AA5D5911BEF22198726@DBXPRD3001MB024.064d.mgd.msft.net> Message-ID: <53E3E7AE.8070806@gmail.com> On 07/08/14 15:45, Simon Peyton Jones wrote: > When I introduced them in the first place they were used for positive offsets within StackAreas and heap objects. Both are organised with the zeroth byte of the stack area or heap object being at the lowest address. > > It's true that a positive offset from the beginning of a block of contiguous freshly-allocated heap objects will turn into a negative displacement from the actual, physical heap pointer.
If ByteOff is used for both purpose then yes there will be negative ones. > > More than that I cannot say. They may well be being used for other purposes by now. I'm hazy on the history but I'm sure you're right. In any case I'm pretty sure I've used these types in lots of places. > One thought is that the profiling word appears just *before* the start of a heap object, so that might need a negative offset, but it seems like a rather special case. Hmmm... the profiling word is the second word of the object, after the info pointer. Cheers, Simon > Simon > > | -----Original Message----- > | From: Simon Marlow [mailto:marlowsd at gmail.com] > | Sent: 07 August 2014 15:26 > | To: Simon Peyton Jones; Johan Tibell > | Cc: ghc-devs at haskell.org > | Subject: Re: Improving the Int/Word story inside GHC > | > | Hmm, surely these are used for negative offsets a lot? All Hp-relative > | indices are negative (but virtual Hp offsets are positive), and Sp- > | relative indices can be both negative and positive. > | > | On 07/08/2014 12:49, Simon Peyton Jones wrote: > | > I?m all for it! > | > > | > I believe that ByteOff/WordOff are always 0 or positive. At least, > | > they were when I introduced them! > | > > | > SImon > | > > | > *From:*ghc-devs [mailto:ghc-devs-bounces at haskell.org] *On Behalf Of > | > *Johan Tibell > | > *Sent:* 07 August 2014 12:21 > | > *To:* Simon Marlow > | > *Cc:* ghc-devs at haskell.org > | > *Subject:* Re: Improving the Int/Word story inside GHC > | > > | > Simon M, is the intention of ByteOff and WordOff that they should be > | > able to represent negative quantities as well? If so we might need to > | > split it into ByteOff (still an Int) and ByteIndex (a Word) to have a > | > type for indexing into arrays. > | > > | > On Thu, Aug 7, 2014 at 1:16 PM, Edward Z. Yang | > > wrote: > | > > | > If it's strictly just in the codegen (and not affecting user > | code), > | > seems fine to me. 
> | > > | > Edward > | > > | > Excerpts from Johan Tibell's message of 2014-08-07 12:10:37 > | +0100: > | > > | > > Inside GHC we mostly use Int instead of Word, even when we > | want to > | > > represent non-negative values, such as sizes of things or > | indices > | > into > | > > things. This is now causing some grief in > | > > https://ghc.haskell.org/trac/ghc/ticket/9416, where an > | allocation > | > boundary > | > > case test fails with a segfault because a n < m Int comparison > | > overflows. > | > > > | > > I tried to fix the issue by changing the type of > | > maxInlineAllocSize, which > | > > is used on one side of the above comparison, to Word. However, > | that > | > > unravels a bunch of other issues, such as wordsToBytes, > | ByteOff, > | > etc are > | > > all Int-valued quantities. > | > > > | > > I could perhaps work around these problems by judicious use of > | > fromIntegral > | > > in StgCmmPrim, but I'm a bit unhappy about it because it 1) > | makes > | > the code > | > > uglier and 2) needs to be done in quite a few places. > | > > > | > > How much work would it be to try to switch the codegen to use > | > Word for most > | > > of these quantities instead? > | > > > | > > -- Johan > | > > From marlowsd at gmail.com Thu Aug 7 20:56:33 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Thu, 07 Aug 2014 21:56:33 +0100 Subject: Improving the Int/Word story inside GHC In-Reply-To: References: <53E38EEB.8000400@gmail.com> Message-ID: <53E3E801.1060807@gmail.com> On 07/08/14 16:01, Johan Tibell wrote: > On Thu, Aug 7, 2014 at 4:36 PM, Simon Marlow wrote: >> I think doing the comparison with Integer is the right fix. Relying on Word >> being big enough for these things is technically wrong because we might be >> cross-compiling from a smaller word size. > > That sounds like an easier fix and I will try that. 
Unfortunately > working with Integers means lots of our convenience functions, such as > wordsToBytes, go out the window, as the all work on Byte/WordOff. > Here's an example that now gets more annoying: > > shouldInlinePrimOp dflags NewArrayOp [(CmmLit (CmmInt n _)), init] > | wordsToBytes dflags (fromInteger n) <= maxInlineAllocSize dflags = Maybe wordsToBytes should be overloaded on Integral (with specialisations). Cheers, Simon > Most of our array primops are likely* still wrong, as the code that > generates them uses Int everywhere. Still also sounds like a problem > for cross-compiling. > > * In some cases we might be lucky and the Int is never introspected > any we just look at the bits (i.e. pretend it's a Word) when we > generate code. > From simonpj at microsoft.com Thu Aug 7 21:37:06 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 7 Aug 2014 21:37:06 +0000 Subject: Improving the Int/Word story inside GHC In-Reply-To: <53E3E7AE.8070806@gmail.com> References: <1407410184-sup-8595@sabre> <618BE556AADD624C9C918AA5D5911BEF22197351@DBXPRD3001MB024.064d.mgd.msft.net> <53E38C63.3040901@gmail.com> <618BE556AADD624C9C918AA5D5911BEF22198726@DBXPRD3001MB024.064d.mgd.msft.net> <53E3E7AE.8070806@gmail.com> Message-ID: <618BE556AADD624C9C918AA5D5911BEF22198D95@DBXPRD3001MB024.064d.mgd.msft.net> | > One thought is that the profiling word appears just *before* the start | of a heap object, so that might need a negative offset, but it seems like | a rather special case. | | Hmmm... the profiling word is the second word of the object, after the | info pointer. Oh, OK, I'm mis-remembering that; apols. 
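[Editorial note: Simon's suggestion above — overloading wordsToBytes on Integral with specialisations — could be sketched as below. These signatures are hypothetical: the real GHC function also takes a DynFlags argument to learn the target word size, and the word size is passed as a plain Int here only for illustration.]

```haskell
-- Hypothetical sketch of an Integral-overloaded conversion; not
-- GHC's actual wordsToBytes, which consults DynFlags for word size.
wordsToBytes :: Integral a => Int -> a -> a
wordsToBytes wordSize n = n * fromIntegral wordSize
{-# SPECIALISE wordsToBytes :: Int -> Int -> Int #-}
{-# SPECIALISE wordsToBytes :: Int -> Integer -> Integer #-}

main :: IO ()
main = do
  print (wordsToBytes 8 (3 :: Int))      -- 24
  print (wordsToBytes 8 (3 :: Integer))  -- 24, but exact at any magnitude
```

Call sites that need overflow-free arithmetic instantiate it at Integer, while the common Int paths keep their specialised, unboxed-friendly code.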
Simon From simonpj at microsoft.com Thu Aug 7 21:51:40 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 7 Aug 2014 21:51:40 +0000 Subject: arc diff linter looping / stuck In-Reply-To: References: Message-ID: <618BE556AADD624C9C918AA5D5911BEF22198DDC@DBXPRD3001MB024.064d.mgd.msft.net> I'm off on holiday for a week, but you and I have discussed most of these changes, some at length. If you are happy with your implementation, then go ahead and commit, from my pov. I did take a quick look though. For #9200 and TcTyClsDecls, I think you have implemented "Possible new strategy" on https://ghc.haskell.org/trac/ghc/wiki/GhcKinds/KindInference, but not "A possible variation" (same page). correct? If so, worth a note in the source code. And actually I'd transfer the algorithm itself, including the definition of CUSK, into the code. kcStrategy seems a very odd name for a predicate on HsDecls that is just a Bool saying whether or not it has a CUSK. Also odd is that every call to kcHsTyVarBndrs has a corresponding call to kcStrategy, and both functions are in TcHsType; why not just combine them into one? Thanks for doing this Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Richard | Eisenberg | Sent: 07 August 2014 19:34 | To: ghc-devs | Subject: arc diff linter looping / stuck | | Hi all, | | I've prepared a bunch of commits to fix several tickets. After pushing | these commits to branch wip/rae (to save my place and to get validate | running on Travis), I then `git checkout`ed back to a point where `git | diff origin/master` gave me a patch for precisely one bug (instead of the | several unrelated ones I had fixed). I wanted to post to Differential. | `arc diff` allowed me to fill out a description message (which mentioned, | in its comments, the right set of commits), but then hung on the | "linting..." stage. 
I suppose I could skip the linter, but it's more | likely I've done something wrong here... | | Any advice? | | In the meantime, please consider this to be a request for feedback on | everything in wip/rae! The bugs fixed are #9200, #9415, #9404, and #9371. | | Thanks! | Richard | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | http://www.haskell.org/mailman/listinfo/ghc-devs From lukexipd at gmail.com Thu Aug 7 23:58:24 2014 From: lukexipd at gmail.com (Luke Iannini) Date: Thu, 7 Aug 2014 16:58:24 -0700 Subject: ARM64 Task Force In-Reply-To: References: Message-ID: Hi all, An update on this -- I've made a bit of progress thanks to Karel and Colin's start at ARM64 support https://ghc.haskell.org/trac/ghc/ticket/7942 With a few tweaks*, that let me build a GHC that builds ARM64 binaries and load them onto my iPad Air, which is great! But of course they don't work yet since LLVM doesn't have the ARM64/GHC calling convention in. Happily I was able to use LLVM HEAD to do this, which means we don't need to be bound to Xcode's release schedules. I'm now studying David's patches to LLVM to learn how to add the ARM64/GHC calling convention to LLVM. *including Ben Gamari's patches to get LLVM HEAD working https://github.com/bgamari/ghc/tree/llvm-3.5-new Best Luke On Mon, Jul 7, 2014 at 11:06 PM, Luke Iannini wrote: > Howdy all, > > Would anyone like to team up on getting ARM64 support into GHC? > > Cheers > Luke > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fuuzetsu at fuuzetsu.co.uk Fri Aug 8 03:14:46 2014 From: fuuzetsu at fuuzetsu.co.uk (Mateusz Kowalczyk) Date: Fri, 08 Aug 2014 05:14:46 +0200 Subject: Building GHC under Wine? 
In-Reply-To: <1405464922.2694.29.camel@kirk> References: <1405464922.2694.29.camel@kirk> Message-ID: <53E440A6.4050203@fuuzetsu.co.uk> On 07/16/2014 12:55 AM, Joachim Breitner wrote: > Hi, > > I feel sorry for Simon always repeatedly stuck with an unbuildable tree, > and an idea crossed my mind: Can we build¹ GHC under Wine? If so, is it > likely to catch the kind of problems that Simon is getting? If so, maybe > it runs fast enough to be also tested by travis on every commit? > > (This mail is to find out if people have tried it before. If not, I'll > give it a quick shot.) > > Greetings, > Joachim > > ¹ we surely can use it: http://www.haskell.org/haskellwiki/GHC_under_Wine > > Perhaps this is a bit off-tangent but a few months ago there were some commits landing in the nix package manager which allow you to run tests in a Windows VM. It was created to run tests for things like cross-compiled packages but it probably could be adapted. If you don't mind actually installing Windows (in a VM) and have nix already/plan on using it then that might be a more preferable workflow: create a nix expression that builds and validates GHC in the VM and spits out the result. It's just something I thought I should mention in case anyone was interested. -- Mateusz K. From alexander.kjeldaas at gmail.com Fri Aug 8 05:21:35 2014 From: alexander.kjeldaas at gmail.com (Alexander Kjeldaas) Date: Fri, 8 Aug 2014 07:21:35 +0200 Subject: Building GHC under Wine? In-Reply-To: <53E440A6.4050203@fuuzetsu.co.uk> References: <1405464922.2694.29.camel@kirk> <53E440A6.4050203@fuuzetsu.co.uk> Message-ID: Microsoft has free VMs for testing purposes. It expires after 90 days and the only relevant limitation that I see is that it's not licensed for a "live operating environment". That might or might not exclude Travis, but scripting a test that developers can run personally should be allowed.
https://www.modern.ie/en-us/virtualization-tools Alexander On Aug 8, 2014 5:14 AM, "Mateusz Kowalczyk" wrote: > On 07/16/2014 12:55 AM, Joachim Breitner wrote: > > Hi, > > > > I feel sorry for Simon always repeatedly stuck with an unbuildable tree, > > and an idea crossed my mind: Can we build? GHC under Wine? If so, is it > > likely to catch the kind of problems that Simon is getting? If so, maybe > > it runs fast enough to be also tested by travis on every commit? > > > > (This mail is to find out if people have tried it before. If not, I?ll > > give it a quick shot.) > > > > Greetings, > > Joachim > > > > ? we surely can use it: > http://www.haskell.org/haskellwiki/GHC_under_Wine > > > > > > Perhaps this is a bit off-tangent but few months ago there were some > commits landing to the nix package manager which allow you to run tests > in a Windows VM. It was created to run tests for things like > cross-compiled packages but it probably could be adapted. > > If you don't mind actually installing Windows (in a VM) and have nix > already/plan on using it then that might be a more preferable workflow: > create a nix expression that builds a validates GHC in the VM and spits > out the result. > > It's just something I thought I should mention in case anyone was > interested. > > -- > Mateusz K. > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fuuzetsu at fuuzetsu.co.uk Fri Aug 8 05:25:01 2014 From: fuuzetsu at fuuzetsu.co.uk (Mateusz Kowalczyk) Date: Fri, 08 Aug 2014 07:25:01 +0200 Subject: Moving Haddock *development* out of GHC tree Message-ID: <53E45F2D.9000806@fuuzetsu.co.uk> Hello, A slightly long e-mail but I ask that you voice your opinion if you ever changed GHC API. 
You can skim over the details; simply know that it saves me a vast amount of time, allows me to try and find contributors, and doesn't impact GHC negatively. It seems like a win-win scenario for GHC and Haddock. The GHC team's workflow does not change and will not require any new commitment: I do all the work, and I think it's a 1-line change in sync-all when the transition is ready. Here it is: It is no secret that many core Haskell projects lack developer hands and Haddock is no exception: the current maintainers are Simon Hengel and myself. Simon does not have much time, so currently all the issues and updates are up to me. Ideally I would like it if some more people could come and hack on Haddock, but there are a couple of problems with trying to recruit folk for this: 1. Interacting with the GHC API is not the easiest thing. This is Haddock's problem but I thought I'd mention it here. 2. Haddock resides directly in the GHC tree and it is currently *required* that it compiles with GHC HEAD. This is a huge barrier to entry for anyone: today I wanted to make a fairly simple change but it still took me 3 validate runs to be at least somewhat confident that I didn't break much in GHC. On top of this I had help from Edward Z. Yang on IRC and information from him on what the issue exactly was. If I were to do everything alone it would have taken even more validates. A validate is not fast on any machine by any means; it takes an hour or two. Here is what I want to do unless there are major objections: I want to move the active development away from the GHC tree. Below is how it would work. For simplicity please imagine that we have *just* released 7.8.3. * Haddock development would concentrate on supporting the last public release of GHC: I stop developing against GHC HEAD and currently would develop against 7.8.3. * GHC itself checks out Haddock as a submodule as it does now. The only difference is that it points at whatever commit worked last.
Let us assume it is the Haddock 2.14.3 release commit. The vital difference from the current state is that GHC will no longer track changes in the master branch. * Now when the GHC API changes, things proceed as they normally do: whoever is responsible for the changes pops into the Haddock submodule and applies the patches necessary for Haddock to build with HEAD, and everyone is happy. What does *not* happen is these patches going into master: I ignore them and keep working with 7.8.3. * When a GHC release rolls around, I update Haddock to work with the new API so that people with the new release can still use it. Once it works against the new API, GHC can start tracking from that commit onwards and proceed as usual. Here are the advantages: * I don't have to work against GHC HEAD. This means I don't have to build GHC HEAD and I don't need to worry about GHC API changes. I don't waste 2-4 hours building before hacking and validating after hacking to make any minor changes and to make sure I haven't broken anything. * More importantly, anyone who wants to write a patch for Haddock can now do so easily; all they need is a recent compiler rather than being forced to build HEAD. Building and validating against HEAD is a **huge** barrier to entry. * I only have to care about GHC API changes once per release and not twice a week. I think PatternSynonyms has changed 4 times in a month, but the end result at release time is the same and that's what people care about. * It is less work for anyone changing the GHC API: they only have to deal with their own changes and not my changes which add features or whatever. * If I break something in Haddock HEAD, GHC is not affected. * If Haddock's binary interface doesn't change, we may even allow more versions of GHC to be compatible through CPP and other such trickery. If we were to do it today, it would be an increased burden on the GHC team to deal with those. * I can release as often as I want against the same compiler version.
Currently doing this requires backporting features (see the v2.14 branch), which is a massive pain. I no longer have to tell the users "yes, your bug is fixed but to get it you need to compile GHC HEAD or wait 6-12 months until the next GHC release". I have to do this a lot. Here are the disadvantages and why I think they don't make a big difference: * GHC HEAD doesn't get any new-and-cool features that we might implement. I say this doesn't matter because no one uses varying GHC HEAD versions to develop actual software, documentation and all. What I mean to say is that the only user of the Haddock that's developed in the GHC tree is GHC itself. The only case where GHC actually used in-tree Haddock was when Herbert generated documentation for base-4.7 early for me to eye before the release. Even this doesn't matter because so close to the release I'll already have the existing GHC API integrated anyway. Again, it does not matter if GHC HEAD itself doesn't get pretty operator rendering or whatever right when I implement it, because no one cares about it until release time. I know that many people simply set HADDOCK_DOCS=NO to save time. The actual users only care about a Haddock that works with 7.6.x, 7.8.x, 7.10.x; only GHC cares about the in-betweens, and only for the purpose of being able to build and validate. * The GHC team can't easily contribute features and get them back immediately. In part it doesn't matter because of the previous point, and in the last year or so there were no features contributed directly from GHC except those necessary to keep Haddock compiling. This just means there's no demand for such a close relationship. * Haddock-affecting changes in the GHC parser don't "take effect" straight away. This is my loss and, considering the infrequency at which such changes happen, it's a tiny price to pay to have to wait until release. * ...that's it, no other disadvantages that I can think of, but that's why I'm sending it to the list to review!
What's worth mentioning is that the no-external-dependencies thing still applies because even though we no longer need to compile against HEAD, we still need to compile against the tree at release time. In summary: My life gets easier because I stop wasting it on playing with the whole GHC tree, and the GHC team's life gets easier because they don't have to deal with the changes I make. My life gets even easier because I only have to make big API updates once a release. I can actually start looking for contributors. When a release rolls around, GHC and Haddock 'meet up', we make sure it all works, the release happens, GHC starts tracking from that point and we part ways until the next release. What do you think? If there are no major objections in one week then I will assume I am good to go with this. Transition from current setup: If I receive some patches I was promised, I will make a 2.14.4 bugfix/compat release, make sure that master is up to date, and then create something like a GHC-tracking branch from master and track that. I will then abandon that branch and not push to it unless it is GHC release time. The next commit in master will bring Haddock to a state where it works with 7.8.3: yes, this means removing all new API stuff until 7.10 or 7.8.4 or whatever. GHC API changes go onto GHC-tracking while all the stuff I write goes into master. When GHC makes a release or is about to, I make master work with that and make GHC-tracking point to that instead. Thanks! -- Mateusz K. From fuuzetsu at fuuzetsu.co.uk Fri Aug 8 05:27:43 2014 From: fuuzetsu at fuuzetsu.co.uk (Mateusz Kowalczyk) Date: Fri, 08 Aug 2014 07:27:43 +0200 Subject: Building GHC under Wine? In-Reply-To: References: <1405464922.2694.29.camel@kirk> <53E440A6.4050203@fuuzetsu.co.uk> Message-ID: <53E45FCF.7060009@fuuzetsu.co.uk> On 08/08/2014 07:21 AM, Alexander Kjeldaas wrote: > Microsoft has free VMs for testing purposes.
It expires after 90 days and > the only relevant limitation that I see is that it's not licensed for a > "live operating environment". > > That might or might not exclude Travis, but scripting a test that > developers can run personally should be allowed. > > https://www.modern.ie/en-us/virtualization-tools > > Alexander This seems to be a VM dedicated to running Internet Explorer; is it actually a fully-featured environment? The site doesn't show much info. > On Aug 8, 2014 5:14 AM, "Mateusz Kowalczyk" wrote: > >> On 07/16/2014 12:55 AM, Joachim Breitner wrote: >>> Hi, >>> >>> I feel sorry for Simon always repeatedly stuck with an unbuildable tree, >>> and an idea crossed my mind: Can we build¹ GHC under Wine? If so, is it >>> likely to catch the kind of problems that Simon is getting? If so, maybe >>> it runs fast enough to be also tested by travis on every commit? >>> >>> (This mail is to find out if people have tried it before. If not, I'll >>> give it a quick shot.) >>> >>> Greetings, >>> Joachim >>> >>> ¹ we surely can use it: >> http://www.haskell.org/haskellwiki/GHC_under_Wine >>> >>> >> >> Perhaps this is a bit off-tangent but a few months ago there were some >> commits landing in the nix package manager which allow you to run tests >> in a Windows VM. It was created to run tests for things like >> cross-compiled packages but it probably could be adapted. >> >> If you don't mind actually installing Windows (in a VM) and have nix >> already/plan on using it then that might be a more preferable workflow: >> create a nix expression that builds and validates GHC in the VM and spits >> out the result. >> >> It's just something I thought I should mention in case anyone was >> interested. >> >> -- >> Mateusz K. >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> > -- Mateusz K.
From stegeman at gmail.com Fri Aug 8 05:32:23 2014 From: stegeman at gmail.com (Luite Stegeman) Date: Fri, 8 Aug 2014 07:32:23 +0200 Subject: Building GHC under Wine? In-Reply-To: <53E45FCF.7060009@fuuzetsu.co.uk> References: <1405464922.2694.29.camel@kirk> <53E440A6.4050203@fuuzetsu.co.uk> <53E45FCF.7060009@fuuzetsu.co.uk> Message-ID: Yes it's a regular Windows installation, it just comes with an "IEUser" account preinstalled. I've been using it to test GHCJS on Windows (but not for automatic builds yet, just manual test runs). luite On Fri, Aug 8, 2014 at 7:27 AM, Mateusz Kowalczyk wrote: > On 08/08/2014 07:21 AM, Alexander Kjeldaas wrote: > > Microsoft has free VMs for testing purposes. It expires after 90 days and > > the only relevant limitation that i see is that it's not licensed for a > > "live operating environment". > > > > That might or might not exclude Travis, but scripting a test that > > developers can run personally should be allowed. > > > > https://www.modern.ie/en-us/virtualization-tools > > > > Alexander > > This seems to be a VM dedicated for running Internet Explorer, is it > actually a fully-featured environment? The site doesn't show much info. > > > On Aug 8, 2014 5:14 AM, "Mateusz Kowalczyk" > wrote: > > > >> On 07/16/2014 12:55 AM, Joachim Breitner wrote: > >>> Hi, > >>> > >>> I feel sorry for Simon always repeatedly stuck with an unbuildable > tree, > >>> and an idea crossed my mind: Can we build¹ GHC under Wine? If so, is it > >>> likely to catch the kind of problems that Simon is getting? If so, > maybe > >>> it runs fast enough to be also tested by travis on every commit? > >>> > >>> (This mail is to find out if people have tried it before. If not, I'll > >>> give it a quick shot.) > >>> > >>> Greetings, > >>> Joachim > >>> > >>> ¹
we surely can use it: > >> http://www.haskell.org/haskellwiki/GHC_under_Wine > >>> > >>> > >> > >> Perhaps this is a bit off-tangent but few months ago there were some > >> commits landing to the nix package manager which allow you to run tests > >> in a Windows VM. It was created to run tests for things like > >> cross-compiled packages but it probably could be adapted. > >> > >> If you don't mind actually installing Windows (in a VM) and have nix > >> already/plan on using it then that might be a more preferable workflow: > >> create a nix expression that builds a validates GHC in the VM and spits > >> out the result. > >> > >> It's just something I thought I should mention in case anyone was > >> interested. > >> > >> -- > >> Mateusz K. > >> _______________________________________________ > >> ghc-devs mailing list > >> ghc-devs at haskell.org > >> http://www.haskell.org/mailman/listinfo/ghc-devs > >> > > > > > -- > Mateusz K. > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexander.kjeldaas at gmail.com Fri Aug 8 05:32:28 2014 From: alexander.kjeldaas at gmail.com (Alexander Kjeldaas) Date: Fri, 8 Aug 2014 07:32:28 +0200 Subject: Building GHC under Wine? In-Reply-To: <53E45FCF.7060009@fuuzetsu.co.uk> References: <1405464922.2694.29.camel@kirk> <53E440A6.4050203@fuuzetsu.co.uk> <53E45FCF.7060009@fuuzetsu.co.uk> Message-ID: On Aug 8, 2014 7:27 AM, "Mateusz Kowalczyk" wrote: > > On 08/08/2014 07:21 AM, Alexander Kjeldaas wrote: > > Microsoft has free VMs for testing purposes. It expires after 90 days and > > the only relevant limitation that i see is that it's not licensed for a > > "live operating environment". > > > > That might or might not exclude Travis, but scripting a test that > > developers can run personally should be allowed. 
> > > > https://www.modern.ie/en-us/virtualization-tools > > > > Alexander > > This seems to be a VM dedicated for running Internet Explorer, is it > actually a fully-featured environment? The site doesn't show much info. I don't know as I haven't used it. However, developing for any browser these days can include native code, GPGPU etc, so I don't expect it to be severely crippled. Alexander > > > On Aug 8, 2014 5:14 AM, "Mateusz Kowalczyk" wrote: > > > >> On 07/16/2014 12:55 AM, Joachim Breitner wrote: > >>> Hi, > >>> > >>> I feel sorry for Simon always repeatedly stuck with an unbuildable tree, > >>> and an idea crossed my mind: Can we build¹ GHC under Wine? If so, is it > >>> likely to catch the kind of problems that Simon is getting? If so, maybe > >>> it runs fast enough to be also tested by travis on every commit? > >>> > >>> (This mail is to find out if people have tried it before. If not, I'll > >>> give it a quick shot.) > >>> > >>> Greetings, > >>> Joachim > >>> > >>> ¹ we surely can use it: > >> http://www.haskell.org/haskellwiki/GHC_under_Wine > >>> > >>> > >> > >> Perhaps this is a bit off-tangent but few months ago there were some > >> commits landing to the nix package manager which allow you to run tests > >> in a Windows VM. It was created to run tests for things like > >> cross-compiled packages but it probably could be adapted. > >> > >> If you don't mind actually installing Windows (in a VM) and have nix > >> already/plan on using it then that might be a more preferable workflow: > >> create a nix expression that builds a validates GHC in the VM and spits > >> out the result. > >> > >> It's just something I thought I should mention in case anyone was > >> interested. > >> > >> -- > >> Mateusz K.
> _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From karel.gardas at centrum.cz Fri Aug 8 05:58:09 2014 From: karel.gardas at centrum.cz (Karel Gardas) Date: Fri, 08 Aug 2014 07:58:09 +0200 Subject: ARM64 Task Force In-Reply-To: References: Message-ID: <53E466F1.90201@centrum.cz> On 08/ 8/14 01:58 AM, Luke Iannini wrote: > I'm now studying David's patches to LLVM to learn how to add the > ARM64/GHC calling convention to LLVM. Here is also original ARM/GHC calling convention submission. It's always good to have more examples as reference... http://lists.cs.uiuc.edu/pipermail/llvmdev/2011-October/044173.html Good luck with the ARM64/GHC porting work! Karel From carter.schonwald at gmail.com Fri Aug 8 06:02:23 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Fri, 8 Aug 2014 02:02:23 -0400 Subject: Improving the Int/Word story inside GHC In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF22198D95@DBXPRD3001MB024.064d.mgd.msft.net> References: <1407410184-sup-8595@sabre> <618BE556AADD624C9C918AA5D5911BEF22197351@DBXPRD3001MB024.064d.mgd.msft.net> <53E38C63.3040901@gmail.com> <618BE556AADD624C9C918AA5D5911BEF22198726@DBXPRD3001MB024.064d.mgd.msft.net> <53E3E7AE.8070806@gmail.com> <618BE556AADD624C9C918AA5D5911BEF22198D95@DBXPRD3001MB024.064d.mgd.msft.net> Message-ID: would this result in evolving how vector/array indexing works internally to using Words rather than Ints? On Thu, Aug 7, 2014 at 5:37 PM, Simon Peyton Jones wrote: > > | > One thought is that the profiling word appears just *before* the start > | of a heap object, so that might need a negative offset, but it seems like > | a rather special case. > | > | Hmmm... the profiling word is the second word of the object, after the > | info pointer. > > Oh, OK, I'm mis-remembering that; apols. 
> > Simon > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hvriedel at gmail.com Fri Aug 8 07:00:21 2014 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Fri, 08 Aug 2014 09:00:21 +0200 Subject: Moving Haddock *development* out of GHC tree In-Reply-To: <53E45F2D.9000806@fuuzetsu.co.uk> (Mateusz Kowalczyk's message of "Fri, 08 Aug 2014 07:25:01 +0200") References: <53E45F2D.9000806@fuuzetsu.co.uk> Message-ID: <87oavvjwje.fsf@gmail.com> Hi Mateusz, I'm mostly interested in understanding the Git-level/workflow changes, so here's a few questions to improve my understanding of what's changing related to Git: On 2014-08-08 at 07:25:01 +0200, Mateusz Kowalczyk wrote: [...] > I do all the work and I think it's a 1 line change in sync-all when > transition is ready. What change in ./sync-all are you thinking about specifically? (or alternatively: what about those not using ./sync-all anymore?) [...] > * GHC itself checks out Haddock as a submodule as it does now. The only > difference is that it points at whatever commit worked last. Let us > assume it is the Haddock 2.14.3 release commit. The vital difference > from current state is that GHC will no longer track changes in master > branch. > > * Now when GHC API changes things proceed as they normally do: whoever > is responsible for the changes, pops into the Haddock submodule applies > the patches necessary for Haddock to build with HEAD and everyone is > happy. What does *not* happen is these patches don't go into master: I > ignore them and keep working with 7.8.3. Just to clarify, as the last sentence contains a double-negation: GHC devs continue pushing to github.com/haddock.git's `master` branch to keep Haddock building with GHC HEAD? 
It's just that the Haddock development proper happens in a branch other than `master` from now on? If I get this right, there will be a branch (`master`?) that's kept compatible with GHC HEAD, then there's a branch where new Haddock features are implemented (name?), and then there are stable branches for past releases (in the spirit of the current `v2.14`) So the only new thing would be a new `haddock-next` (or whatever you'd call that) branch, and `master` will just be on life-support for GHC HEAD until the next major GHC release is around the corner? [...] > If I receive some patches I was promised then I will then make a 2.14.4 > bugfix/compat release make sure that master is up to date and then > create something like GHC-tracking branch from master and track that. I > will then abandon that branch and not push to it unless it is GHC > release time. The next commit in master will bring Haddock to a state > where it works with 7.8.3: yes, this means removing all new API stuff > until 7.10 or 7.8.4 or whatever. GHC API changes go onto GHC-tracking > while all the stuff I write goes master. When GHC makes a release or is > about to, I make master work with that and make GHC-tracking point to > that instead. This paragraph confuses me a bit about which haddock branch is used for what. Can you maybe enumerate all haddock branches in the new scheme with their purpose? Cheers, hvr From sol at typeful.net Fri Aug 8 07:42:14 2014 From: sol at typeful.net (Simon Hengel) Date: Fri, 8 Aug 2014 15:42:14 +0800 Subject: Moving Haddock *development* out of GHC tree In-Reply-To: <87oavvjwje.fsf@gmail.com> References: <53E45F2D.9000806@fuuzetsu.co.uk> <87oavvjwje.fsf@gmail.com> Message-ID: <20140808074214.GD3649@x200> On Fri, Aug 08, 2014 at 09:00:21AM +0200, Herbert Valerio Riedel wrote: > Just to clarify, as the last sentence contains a double-negation: GHC > devs continue pushing to github.com/haddock.git's `master` branch to > keep Haddock building with GHC HEAD? 
It's just that the Haddock > development proper happens in a branch other than `master` from now on? From my perspective I would prefer to use `master` for Haddock development and use a branch with some other name for GHC development. My main motivation here is that as a contributor to Haddock "I expect the latest code to be on `master`, and I would use it as a base when developing new features". Alternatively, maybe use `master` for both Haddock and GHC development, but push to different remotes (say use http://git.haskell.org/haddock.git for GHC development and https://github.com/haskell/haddock for Haddock development). I think this is what we already do for e.g. `containers`. Cheers, Simon From simonpj at microsoft.com Fri Aug 8 07:48:35 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 8 Aug 2014 07:48:35 +0000 Subject: Moving Haddock *development* out of GHC tree In-Reply-To: <53E45F2D.9000806@fuuzetsu.co.uk> References: <53E45F2D.9000806@fuuzetsu.co.uk> Message-ID: <618BE556AADD624C9C918AA5D5911BEF221A14B9@DB3PRD3001MB020.064d.mgd.msft.net> Mateusz What you say makes sense to me. For me, the big thing is that we can make, and push, changes to Haddock in the GHC private branch, without having to negotiate. (Haddock reaches very deep into GHC's internals, so many many changes to GHC have some knock-on effect in Haddock.) You seem OK with this, so I am too. One concern: if you and Simon pay no attention to the GHC HEAD fork of Haddock, there is no guarantee that it works at all. Presumably it compiles (because GHC's build system will build it, forcing us to fix type errors) but it might not actually work! So it would probably pay for you to watch what is happening, to ensure that the patch-ups that ignorant GHC developers apply to Haddock do indeed have the desired effect. Some of these patch-ups might even be panics --- "I don't know how to make Haddock render new construct ". That might be quite reasonable.
But in general, thumbs up from me Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of | Mateusz Kowalczyk | Sent: 08 August 2014 06:25 | To: ghc-devs at haskell.org | Cc: Simon Hengel | Subject: Moving Haddock *development* out of GHC tree | | Hello, | | A slightly long e-mail but I ask that you voice your opinion if you | ever changed GHC API. You can skim over the details, simply know that | it saves me vast amount of time, allows me to try and find contributors | and doesn't impact GHC negatively. It seems like a win-win scenario for | GHC and Haddock. GHC team's workflow does not change and will not | require any new commitment: I do all the work and I think it's a 1 line | change in sync-all when transition is ready. Here it is: | | | It is no secret that many core Haskell projects lack developer hands | and Haddock is no exception: the current maintainers are Simon Hengel | and myself. Simon does not have much time so currently all the issues | and updates are up to me. Ideally I would like if some more people | could come and hack on Haddock but there are a couple of problems with | trying to recruit folk for this: | | 1. Interacting with GHC API is not the easiest thing. This is Haddock's | problem but I thought I'd mention it here. | | 2. Haddock resides directly in the GHC tree and it is currently | *required* that it compiles with GHC HEAD. This is a huge barrier of | entry for anyone: today I wanted to make a fairly simple change but it | still took me 3 validate runs to be at least somewhat confident that I | didn't break much in GHC. On top of this I had help from Edward Z. Yang | on IRC and information from him on what the issue exactly was. If I was | to do everything alone it would have taken even more validates. A | validate is not fast on machine by any means, it takes an hour or two. 
| | Here is what I want to do unless there are major objections: I want to | move the active development away from GHC tree. Below is how it would | work. For simplicity please imagine that we have *just* released 7.8.3. | | * Haddock development would concentrate on supporting the last public | release of GHC: I stop developing against GHC HEAD and currently would | develop against 7.8.3. | | * GHC itself checks out Haddock as a submodule as it does now. The only | difference is that it points at whatever commit worked last. Let us | assume it is the Haddock 2.14.3 release commit. The vital difference | from current state is that GHC will no longer track changes in master | branch. | | * Now when GHC API changes things proceed as they normally do: whoever | is responsible for the changes, pops into the Haddock submodule applies | the patches necessary for Haddock to build with HEAD and everyone is | happy. What does *not* happen is these patches don't go into master: I | ignore them and keep working with 7.8.3. | | * When a GHC release rolls around, I update Haddock to work with the | new API so that people with new release can still use it. Once it works | against new API, GHC can start tracking from that commit onwards and | proceed as usual. | | Here are the advantages: | | * I don't have to work against GHC HEAD. This means I don't have to | build GHC HEAD and I don't need to worry about GHC API changes. I don't | waste 2-4 hours building before hacking and validating after hacking to | make any minor changes and to make sure I haven't broken anything. | | * More importantly, anyone who wants to write a patch for Haddock can | now do so easily, all they need is recent compiler rather than being | forced to build HEAD. Building and validating against HEAD is a | **huge** barrier of entry. | | * I only have to care about GHC API changes once a release and not | twice a week. 
I think PatternSynonyms have changed 4 times in a month | but the end result at release time is the same and that's what people | care about. | | * It is less work for anyone changing GHC API: they only have to deal | with their own changes and not my changes which add features or | whatever. | | * If I break something in Haddock HEAD, GHC is not affected. | | * If Haddock's binary interface doesn't change, we may even allow more | versions of GHC be compatible through CPP and other such trickery. If | we were to do it today, it would be an increased burden on the GHC team | to deal with those. | | * I can release as often as I want against the same compiler version. | Currently doing this requires backporting features (see v2.14 branch) | which is a massive pain. I no longer have to tell the users 'yes, your | bug is fixed but to get it you need to compile GHC HEAD or wait 6-12 | months until next GHC release'. I have to do this a lot. | | Here are the disadvantages and why I think they don't make a big | difference: | | * GHC HEAD doesn't get any new-and-cool features that we might | implement. I say this doesn't matter because no one uses varying GHC | HEAD versions to develop actual software, documentation and all. What I | mean to say is that the only user of the Haddock that's developed in | GHC tree is GHC itself. The only case where GHC actually used in-tree | Haddock was when Herbert generated documentation for base-4.7 early for | me to eye before the release. Even this doesn't matter because so close | to the release I'll already have the existing GHC API integrated | anyway. | Again, it does not matter if GHC HEAD itself doesn't get pretty | operator rendering or whatever right when I implement it because no one | cares about it until it's release time. I know that many people simply | HADDOCK_DOCS=NO to save time. 
The actual users only care about Haddock | that works with 7.6.x, 7.8.x, 7.10.x; only GHC cares about in-betweens | and only for the purpose of being able to build and validate. | | * GHC team can't easily contribute features and get the back | immediately. In part it doesn't matter because of the previous point | and in the last year or so there were no features contributed directly | from GHC except those necessary to keep Haddock compiling. This just | means there's no demand for such close relationship. | | * Haddock-affecting changes in GHC parser don't 'take effect' straight | away. This is my loss and considering the infrequency at which such | changes happen, it's a tiny price to pay to have to wait until release. | | * ...that's it, no other disadvantages that I can think of, but that's | why I'm sending it to the list to review! | | What's worth mentioning is that the no-external-dependencies thing | still applies because even though we no longer need to compile against | HEAD, we still need to compile against the tree at release time. | | In summary: | | My life gets easier because I stop wasting it on playing with whole GHC | tree, GHC team's life gets easier because they don't have to deal with | the changes I make. My life gets even easier because I only have to | make big API updates once a release. I can actually start looking for | contributors. | | When a release rolls around, GHC and Haddock 'meet up', we make sure it | all works, release happens, GHC starts tracking from that point and we | part ways until the next release. | | What do you think? If there are no major objections in one week then I | will assume I am good to go with this. | | Transition from current setup: | If I receive some patches I was promised then I will then make a 2.14.4 | bugfix/compat release make sure that master is up to date and then | create something like GHC-tracking branch from master and track that. 
I | will then abandon that branch and not push to it unless it is GHC | release time. The next commit in master will bring Haddock to a state | where it works with 7.8.3: yes, this means removing all new API stuff | until 7.10 or 7.8.4 or whatever. GHC API changes go onto GHC-tracking | while all the stuff I write goes master. When GHC makes a release or is | about to, I make master work with that and make GHC-tracking point to | that instead. | | | Thanks! | -- | Mateusz K. | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | http://www.haskell.org/mailman/listinfo/ghc-devs From johan.tibell at gmail.com Fri Aug 8 08:07:22 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Fri, 8 Aug 2014 10:07:22 +0200 Subject: Moving Haddock *development* out of GHC tree In-Reply-To: <53E45F2D.9000806@fuuzetsu.co.uk> References: <53E45F2D.9000806@fuuzetsu.co.uk> Message-ID: The biggest disadvantage in my mind is that you're setting yourself up for a potentially huge merge just before the GHC release and might block the GHC release until that merge is done (assuming that haddock is still shipped with GHC). -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Fri Aug 8 08:11:03 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 8 Aug 2014 08:11:03 +0000 Subject: Moving Haddock *development* out of GHC tree In-Reply-To: References: <53E45F2D.9000806@fuuzetsu.co.uk> Message-ID: <618BE556AADD624C9C918AA5D5911BEF221A26BD@DB3PRD3001MB020.064d.mgd.msft.net> The biggest disadvantage in my mind is that you're setting yourself up for a potentially huge merge just before the GHC release and might block the GHC release until that merge is done (assuming that haddock is still shipped with GHC). Excellent point. The merge shouldn't block the release, though.
In extremis, I guess we could always release the GHC fork of Haddock if the tip of Haddock wasn't merged to match GHC! But I doubt it'll come to that Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Johan Tibell Sent: 08 August 2014 09:07 To: Mateusz Kowalczyk Cc: ghc-devs at haskell.org; Simon Hengel Subject: Re: Moving Haddock *development* out of GHC tree The biggest disadvantage in my mind is that you're setting yourself up for a potentially huge merge just before the GHC release and might block the GHC release until that merge is done (assuming that haddock is still shipped with GHC). -------------- next part -------------- An HTML attachment was scrubbed... URL: From johan.tibell at gmail.com Fri Aug 8 08:15:41 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Fri, 8 Aug 2014 10:15:41 +0200 Subject: Moving Haddock *development* out of GHC tree In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF221A26BD@DB3PRD3001MB020.064d.mgd.msft.net> References: <53E45F2D.9000806@fuuzetsu.co.uk> <618BE556AADD624C9C918AA5D5911BEF221A26BD@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: On Fri, Aug 8, 2014 at 10:11 AM, Simon Peyton Jones wrote: > The biggest disadvantage in my mind is that you're setting yourself up > for a potentially huge merge just before the GHC release and might block > the GHC release until that merge is done (assuming that haddock is still > shipped with GHC). > > > > Excellent point. > > > > The merge shouldn't block the release, though. In extremis, I guess we > could always release the GHC fork of Haddock if the tip of Haddock wasn't > merged to match GHC! But I doubt it'll come to that > But as you mentioned the GHC fork of Haddock might not work (it might just type check) so at the very least Mateusz is signing up for validating that it indeed works before a GHC release. That's of course fine, I just want people to understand what we're signing up for.
-------------- next part -------------- An HTML attachment was scrubbed... URL: From marlowsd at gmail.com Fri Aug 8 08:18:06 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Fri, 08 Aug 2014 09:18:06 +0100 Subject: Moving Haddock *development* out of GHC tree In-Reply-To: <53E45F2D.9000806@fuuzetsu.co.uk> References: <53E45F2D.9000806@fuuzetsu.co.uk> Message-ID: <53E487BE.5090902@gmail.com> I thought this was what you were already doing :-) Anyway, this is more or less the setup we had in mind when Haddock was added to the GHC tree. The only question is which branches are used for GHC and for regular development, and where they live. As long as that's clear for everyone (both Haddock and GHC developers), then this should be fine. The GHC release engineer will need to give the Haddock maintainers plenty of heads-up time before a release so that the merge can be done - Austin could you add that to the release checklist? Cheers, Simon On 08/08/2014 06:25, Mateusz Kowalczyk wrote: > Hello, > > A slightly long e-mail but I ask that you voice your opinion if you ever > changed GHC API. You can skim over the details, simply know that it > saves me vast amount of time, allows me to try and find contributors and > doesn't impact GHC negatively. It seems like a win-win scenario for GHC > and Haddock. GHC team's workflow does not change and will not require > any new commitment: I do all the work and I think it's a 1 line change > in sync-all when transition is ready. Here it is: > > > It is no secret that many core Haskell projects lack developer hands and > Haddock is no exception: the current maintainers are Simon Hengel and > myself. Simon does not have much time so currently all the issues and > updates are up to me. Ideally I would like if some more people could > come and hack on Haddock but there are a couple of problems with trying > to recruit folk for this: > > 1. Interacting with GHC API is not the easiest thing. 
This is Haddock's > problem but I thought I'd mention it here. > > 2. Haddock resides directly in the GHC tree and it is currently > *required* that it compiles with GHC HEAD. This is a huge barrier of > entry for anyone: today I wanted to make a fairly simple change but it > still took me 3 validate runs to be at least somewhat confident that I > didn't break much in GHC. On top of this I had help from Edward Z. Yang > on IRC and information from him on what the issue exactly was. If I was > to do everything alone it would have taken even more validates. A > validate is not fast on machine by any means, it takes an hour or two. > > Here is what I want to do unless there are major objections: I want to > move the active development away from GHC tree. Below is how it would > work. For simplicity please imagine that we have *just* released 7.8.3. > > * Haddock development would concentrate on supporting the last public > release of GHC: I stop developing against GHC HEAD and currently would > develop against 7.8.3. > > * GHC itself checks out Haddock as a submodule as it does now. The only > difference is that it points at whatever commit worked last. Let us > assume it is the Haddock 2.14.3 release commit. The vital difference > from current state is that GHC will no longer track changes in master > branch. > > * Now when GHC API changes things proceed as they normally do: whoever > is responsible for the changes, pops into the Haddock submodule applies > the patches necessary for Haddock to build with HEAD and everyone is > happy. What does *not* happen is these patches don't go into master: I > ignore them and keep working with 7.8.3. > > * When a GHC release rolls around, I update Haddock to work with the new > API so that people with new release can still use it. Once it works > against new API, GHC can start tracking from that commit onwards and > proceed as usual. > > Here are the advantages: > > * I don't have to work against GHC HEAD. 
This means I don't have to > build GHC HEAD and I don't need to worry about GHC API changes. I don't > waste 2-4 hours building before hacking and validating after hacking to > make any minor changes and to make sure I haven't broken anything. > > * More importantly, anyone who wants to write a patch for Haddock can > now do so easily; all they need is a recent compiler rather than being > forced to build HEAD. Building and validating against HEAD is a **huge** > barrier to entry. > > * I only have to care about GHC API changes once a release and not twice > a week. I think PatternSynonyms have changed 4 times in a month, but the > end result at release time is the same, and that's what people care about. > > * It is less work for anyone changing the GHC API: they only have to deal > with their own changes and not my changes which add features or whatever. > > * If I break something in Haddock HEAD, GHC is not affected. > > * If Haddock's binary interface doesn't change, we may even allow more > versions of GHC to be compatible through CPP and other such trickery. If we > were to do it today, it would be an increased burden on the GHC team to > deal with those. > > * I can release as often as I want against the same compiler version. > Currently doing this requires backporting features (see the v2.14 branch), > which is a massive pain. I no longer have to tell the users "yes, your > bug is fixed, but to get it you need to compile GHC HEAD or wait 6-12 > months until the next GHC release". I have to do this a lot. > > Here are the disadvantages and why I think they don't make a big difference: > > * GHC HEAD doesn't get any new-and-cool features that we might > implement. I say this doesn't matter because no one uses varying GHC > HEAD versions to develop actual software, documentation and all. What I > mean to say is that the only user of the Haddock that's developed in the GHC > tree is GHC itself.
The only case where GHC actually used the in-tree > Haddock was when Herbert generated documentation for base-4.7 early for > me to eye before the release. Even this doesn't matter because so close > to the release I'll already have the existing GHC API integrated anyway. > Again, it does not matter if GHC HEAD itself doesn't get pretty operator > rendering or whatever right when I implement it, because no one cares > about it until it's release time. I know that many people simply set > HADDOCK_DOCS=NO to save time. The actual users only care about Haddock > that works with 7.6.x, 7.8.x, 7.10.x; only GHC cares about in-betweens, > and only for the purpose of being able to build and validate. > > * The GHC team can't easily contribute features and get them back > immediately. In part it doesn't matter because of the previous point, and > in the last year or so there were no features contributed directly from > GHC except those necessary to keep Haddock compiling. This just means > there's no demand for such a close relationship. > > * Haddock-affecting changes in the GHC parser don't "take effect" straight > away. This is my loss, and considering the infrequency at which such > changes happen, it's a tiny price to pay to have to wait until a release. > > * ...that's it, no other disadvantages that I can think of, but that's why > I'm sending it to the list to review! > > What's worth mentioning is that the no-external-dependencies thing still > applies, because even though we no longer need to compile against HEAD, > we still need to compile against the tree at release time. > > In summary: > > My life gets easier because I stop wasting it on playing with the whole GHC > tree; the GHC team's life gets easier because they don't have to deal with > the changes I make. My life gets even easier because I only have to make > big API updates once a release. I can actually start looking for > contributors.
> > When a release rolls around, GHC and Haddock "meet up", we make sure it > all works, the release happens, GHC starts tracking from that point, and we > part ways until the next release. > > What do you think? If there are no major objections in one week then I > will assume I am good to go with this. > > Transition from current setup: > If I receive some patches I was promised, I will make a 2.14.4 > bugfix/compat release, make sure that master is up to date, and then > create something like a GHC-tracking branch from master and track that. I > will then abandon that branch and not push to it unless it is GHC > release time. The next commit in master will bring Haddock to a state > where it works with 7.8.3: yes, this means removing all new API stuff > until 7.10 or 7.8.4 or whatever. GHC API changes go onto GHC-tracking > while all the stuff I write goes to master. When GHC makes a release or is > about to, I make master work with that and make GHC-tracking point to > that instead. > > > Thanks! > From marlowsd at gmail.com Fri Aug 8 08:25:47 2014 From: marlowsd at gmail.com (Simon Marlow) Date: Fri, 08 Aug 2014 09:25:47 +0100 Subject: biographical profiling is broken? In-Reply-To: References: Message-ID: <53E4898B.7030207@gmail.com> On 07/08/2014 16:29, Ömer Sinan Ağacan wrote: > Hi all, > > I'm trying to use the LDV profiling features of GHC but I'm failing. > Here's what I try: > > (I'm using GHC 7.8.2) > > * I'm compiling my app with `-prof` and I'm also using `-fprof-auto` > just to be sure. > * I'm running my app using `+RTS -hbdrag,void` as described in the > docs. (https://www.haskell.org/ghc/docs/latest/html/users_guide/prof-heap.html#biography-prof) The flag "-hbdrag,void" says "I want to restrict the heap profile to objects in the DRAG and VOID classes"; you also need to give a flag to say what kind of profile you want, e.g. -hc, as in the example in the docs. > * I also tried adding more arguments like `-hc`, `-hm`, `-hr` etc. but > I got the same results.
That should work. If not, please file a ticket. There is a ticket open for biographical profiling that I haven't looked at yet, but it seems to be different to your issue: https://ghc.haskell.org/trac/ghc/ticket/8982 Cheers, Simon > > I feel like the feature is broken. I checked the test suite to find > some working LDV profiling programs. But as far as I can see we don't > have any tests for LDV stuff. There's a `bio001.stdout` which I > believe is related to "biographical profiling" (which means LDV) but > again AFAICS it's not used. > > (I'm not seeing any different behaviors or exceptions while running > programs using LDV RTS arguments.) > > Can anyone help me with this? Is anyone using this feature? Am I right > that this feature is not tested? > > Thanks. > > --- > Ömer Sinan Ağacan > http://osa1.net > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > From hvriedel at gmail.com Fri Aug 8 08:35:44 2014 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Fri, 08 Aug 2014 10:35:44 +0200 Subject: Moving Haddock *development* out of GHC tree In-Reply-To: <20140808074214.GD3649@x200> (Simon Hengel's message of "Fri, 8 Aug 2014 15:42:14 +0800") References: <53E45F2D.9000806@fuuzetsu.co.uk> <87oavvjwje.fsf@gmail.com> <20140808074214.GD3649@x200> Message-ID: <87k36jjs4f.fsf@gmail.com> On 2014-08-08 at 09:42:14 +0200, Simon Hengel wrote: > On Fri, Aug 08, 2014 at 09:00:21AM +0200, Herbert Valerio Riedel wrote: >> Just to clarify, as the last sentence contains a double-negation: GHC >> devs continue pushing to github.com/haddock.git's `master` branch to >> keep Haddock building with GHC HEAD? It's just that the Haddock >> development proper happens in a branch other than `master` from now on? > > From my perspective I would prefer to use `master` for Haddock > development and use a branch with some other name for GHC development.
> My main motivation here is that as a contributor to Haddock "I expect > the latest code to be on `master`, and I would use it as a base when > developing new features". Just a minor nitpick (but I agree with having `master` used for hosting active Haddock development): "latest code" might not be a canonical concept, as there will be "latest code that works with GHC HEAD" and "latest code that works with the last released GHC". > Alternatively, maybe use `master` for both Haddock and GHC development, > but push to different remotes (say, use > http://git.haskell.org/haddock.git for GHC development and > https://github.com/haskell/haddock for Haddock development). I think > this is what we already do for e.g. `containers`. I'd rather reduce the number of doubled repositories (not least to simplify the mirroring setup) to avoid confusion about where things live/need to be pushed to. If this is just an alpha-conversion modulo thing, then let's just call the new branch for GHC HEAD simply `ghc-head` (or something like that) and keep hosting it in github.com/haskell/haddock.git, and have GHC HEAD developers push to that instead (fwiw, you can specify the default branch in .gitmodules, which a few Git tools honor). Cheers, hvr From stegeman at gmail.com Fri Aug 8 08:59:51 2014 From: stegeman at gmail.com (Luite Stegeman) Date: Fri, 8 Aug 2014 10:59:51 +0200 Subject: Moving Haddock *development* out of GHC tree In-Reply-To: <53E487BE.5090902@gmail.com> References: <53E45F2D.9000806@fuuzetsu.co.uk> <53E487BE.5090902@gmail.com> Message-ID: I'm also in favour of a more decoupled development/release process. I'd like to change a few things in haddock to make it more suitable for use as a library, so that I can set up a haddock for GHCJS without duplicating the whole package (it needs a custom platform setup and some changes in file name handling).
It'd be great if such a release could be made independently of GHC and changes like this could be made without requiring the user to update their GHC. Also I'd be happy to do some of the work for the out-of-tree change, for example backporting fixes to the 2.14 branch or updating the 2.15 branch to work with the 7.8 api (but I can't promise more, GHCJS is taking enough of my time, and I'm not sure how much time I can afford to keep spending on it, so I'd like to minimize my other maintenance tasks as much as possible). luite On Fri, Aug 8, 2014 at 10:18 AM, Simon Marlow wrote: > [snip] > _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://www.haskell.org/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed...
URL: From sol at typeful.net Fri Aug 8 09:23:29 2014 From: sol at typeful.net (Simon Hengel) Date: Fri, 8 Aug 2014 17:23:29 +0800 Subject: Moving Haddock *development* out of GHC tree In-Reply-To: <87k36jjs4f.fsf@gmail.com> References: <53E45F2D.9000806@fuuzetsu.co.uk> <87oavvjwje.fsf@gmail.com> <20140808074214.GD3649@x200> <87k36jjs4f.fsf@gmail.com> Message-ID: <20140808092329.GE3649@x200> On Fri, Aug 08, 2014 at 10:35:44AM +0200, Herbert Valerio Riedel wrote: > If this is just an alpha-conversion modulo thing, then let's just call > the new branch for GHC HEAD simply `ghc-head` (or something like that) > and keep hosting it in github.com/haskell/haddock.git, and have GHC HEAD > developers push to that instead (fwiw, you can specify the default > branch in .gitmodules, which some few Git tools honor). Ok, cool, that would work for me. Cheers. From johan.tibell at gmail.com Fri Aug 8 11:15:43 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Fri, 8 Aug 2014 13:15:43 +0200 Subject: wORD_SIZE vs platformWordSize of targetPlatform Message-ID: Hi, We seem to have two ways to get the same piece of information. We can get the target (?) word size as wORD_SIZE but there's also the platformWordSize field in the Platform data type, which is held as targetPlatform in DynFlags. Which one should I use? Most code uses wORD_SIZE. -- Johan -------------- next part -------------- An HTML attachment was scrubbed... URL: From ezyang at mit.edu Fri Aug 8 12:00:37 2014 From: ezyang at mit.edu (Edward Z. Yang) Date: Fri, 08 Aug 2014 13:00:37 +0100 Subject: HEADS UP: Running cabal install with the latest GHC Message-ID: <1407498991-sup-1278@sabre> Hey all, SPJ pointed out to me today that if you try to run: cabal install --with-ghc=/path/to/inplace/bin/ghc-stage2 with the latest GHC HEAD, this probably will not actually work, because your system installed version of Cabal is probably too old to deal with the new package key stuff in HEAD. 
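One way to get unstuck, spelled out as a command transcript. It assumes a GHC source checkout (with the in-tree Cabal sources under libraries/Cabal) and an older, working cabal-install on the PATH; treat it as a sketch under those assumptions, not a verified recipe.

```sh
# From the top of the GHC source tree: build the in-tree Cabal library
# with the *old* (system) GHC first...
cd libraries/Cabal/Cabal
cabal install          # or install it into a sandbox, if you prefer

# ...then build cabal-install against that freshly installed Cabal.
cd ../cabal-install
cabal install
```

After this, the newly built cabal-install understands the package key changes in HEAD, and `cabal install --with-ghc=.../ghc-stage2` should behave as expected.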
So, how do you get a version of cabal-install (and Cabal) which is new enough to do what you need it to? The trick is to compile Cabal using your /old/ GHC. Step-by-step, this involves cd'ing into libraries/Cabal/Cabal and running `cabal install` (or install it in a sandbox, if you like) and then cd'ing to libraries/Cabal/cabal-install and cabal install'ing that. Cabal devs, is cutting a new release of Cabal and cabal-install in the near future possible? In that case, users can just cabal update; cabal install cabal-install and get a version of Cabal which will work for them. Cheers, Edward From fuuzetsu at fuuzetsu.co.uk Fri Aug 8 15:04:16 2014 From: fuuzetsu at fuuzetsu.co.uk (Mateusz Kowalczyk) Date: Fri, 08 Aug 2014 17:04:16 +0200 Subject: Moving Haddock *development* out of GHC tree In-Reply-To: <87k36jjs4f.fsf@gmail.com> References: <53E45F2D.9000806@fuuzetsu.co.uk> <87oavvjwje.fsf@gmail.com> <20140808074214.GD3649@x200> <87k36jjs4f.fsf@gmail.com> Message-ID: <53E4E6F0.8040207@fuuzetsu.co.uk> On 08/08/2014 10:35 AM, Herbert Valerio Riedel wrote: > [snip] Hi, Here is what my plan was. The Haddock branches would be: master - Haddock devs push here; the fixes go here. GHC-tracking - the GHC team pushes here. At a GHC release, master would be brought up to a state where it works with the current GHC API. GHC-tracking would then be reset to master. The change in sync-all I was referring to is that ./sync-all get && ./sync-all pull would not end up pointing at a master branch, but after some sleep I realise that's probably not the case anyway. We simply need the GHC team to push to their own branch. >If I get this right, there will be a branch (`master`?) that's kept compatible with GHC HEAD, then there's a branch where new Haddock features are implemented (name?), The other way around: master is for Haddock while the other branch is for GHC. -- Mateusz K.
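Mateusz's two-branch plan can be illustrated with plain git. This is a sketch in a throwaway repository; the branch names (master, GHC-tracking) come from this thread, while the .gitmodules step and the utils/haddock submodule path are assumptions about the GHC tree rather than anything agreed here.

```shell
set -e
# Throwaway repository standing in for haddock.git, just to illustrate
# the proposed branch layout.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.name demo
git config user.email demo@example.com
echo '-- haddock sources' > Haddock.hs
git add Haddock.hs
git commit -qm 'initial'

# master: active Haddock development, targeting the last released GHC.
# GHC-tracking: the branch GHC HEAD would pin its submodule to; it is
# only brought up to date with master around a GHC release.
git branch GHC-tracking

# In a GHC checkout, the tracked branch could then be recorded in
# .gitmodules (the utils/haddock path is an assumption), e.g.:
#   git config -f .gitmodules submodule.utils/haddock.branch GHC-tracking
git rev-parse --verify --quiet GHC-tracking > /dev/null && echo 'GHC-tracking created'
```

Resetting GHC-tracking to master at release time would then be `git branch -f GHC-tracking master` (again a sketch; the thread leaves the exact mechanics open).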
From fuuzetsu at fuuzetsu.co.uk Fri Aug 8 15:07:55 2014 From: fuuzetsu at fuuzetsu.co.uk (Mateusz Kowalczyk) Date: Fri, 08 Aug 2014 17:07:55 +0200 Subject: Moving Haddock *development* out of GHC tree In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF221A14B9@DB3PRD3001MB020.064d.mgd.msft.net> References: <53E45F2D.9000806@fuuzetsu.co.uk> <618BE556AADD624C9C918AA5D5911BEF221A14B9@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <53E4E7CB.4050604@fuuzetsu.co.uk> On 08/08/2014 09:48 AM, Simon Peyton Jones wrote: > Mateusz > > What you say makes sense to me. > > For me, the big thing is that we can make, and push, changes to Haddock in the GHC private branch, without having to negotiate. (Haddock reaches very deep into GHC's internals, so many, many changes to GHC have some knock-on effect in Haddock.) You seem OK with this, so I am too. Nothing changes here except that the GHC team no longer pushes to the branch where actual feature dev goes on. > One concern: if you and Simon pay no attention to the GHC HEAD fork of Haddock, there is no guarantee that it works at all. Presumably it compiles (because GHC's build system will build it, forcing us to fix type errors) but it might not actually work! So it would probably pay for you to watch what is happening, to ensure that the patch-ups that ignorant GHC developers apply to Haddock do indeed have the desired effect. GHC is still a user, although one with special needs. What I mean when I say abandon is that I will not worry about having to port any new features or non-critical fixes to the version that GHC uses. Of course if there is Haddock breakage in the GHC tree then I'll have a look at it and see what I can do, but the difference is that I only have to do it when things break (if ever) rather than at any time I make a change. > Some of these patch-ups might even be panics --- "I don't know how to make Haddock render new construct ". That might be quite reasonable. > > But in general, thumbs up from me Great!
> > Simon > > | -----Original Message----- > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of > | Mateusz Kowalczyk > | Sent: 08 August 2014 06:25 > | To: ghc-devs at haskell.org > | Cc: Simon Hengel > | Subject: Moving Haddock *development* out of GHC tree > | > | Hello, > | > [snip] -- Mateusz K. From fuuzetsu at fuuzetsu.co.uk Fri Aug 8 15:13:35 2014 From: fuuzetsu at fuuzetsu.co.uk (Mateusz Kowalczyk) Date: Fri, 08 Aug 2014 17:13:35 +0200 Subject: Moving Haddock *development* out of GHC tree In-Reply-To: References: <53E45F2D.9000806@fuuzetsu.co.uk> <618BE556AADD624C9C918AA5D5911BEF221A26BD@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <53E4E91F.4090907@fuuzetsu.co.uk> On 08/08/2014 10:15 AM, Johan Tibell wrote: > On Fri, Aug 8, 2014 at 10:11 AM, Simon Peyton Jones > wrote: > >> The biggest disadvantage in my mind is that you're setting yourself up >> for a potentially huge merge just before the GHC release and might block >> the GHC release until that merge is done (assuming that haddock is still >> shipped with GHC). >> >> >> >> Excellent point. >> >> >> >> The merge shouldn't block the release, though. In extremis, I guess we >> could always release the GHC fork of Haddock if the tip of Haddock wasn't >> merged to match GHC! But I doubt it'll come to that >> > > But as you mentioned the GHC fork of Haddock might not work (it might just > type check), so at the very least Mateusz is signing up for validating that > it indeed works before a GHC release. That's of course fine, I just want > people to understand what we're signing up for. > Well, I stick around and am usually aware of a GHC release early. In the usual case Haddock will be fixed up before the actual GHC release. I don't think API changes were ever drastic enough to cause major problems, especially seeing as I'll be able to refer to the GHC-tracked branch to see what patches were applied there.
However, let's consider the case where I can't make it for the release because I'm not available around that time or otherwise. This should still not hold up the GHC release. I would expect the GHC team to release Haddock plus their fixes; it would simply be like an existing release with some patches on top to have it work with the new GHC. I can then come around and, once I apply any API patches, make an actual Haddock release. People can then simply cabal install haddock and use what they get there rather than what came with GHC. Does this make sense? -- Mateusz K. From fuuzetsu at fuuzetsu.co.uk Fri Aug 8 15:16:07 2014 From: fuuzetsu at fuuzetsu.co.uk (Mateusz Kowalczyk) Date: Fri, 08 Aug 2014 17:16:07 +0200 Subject: Moving Haddock *development* out of GHC tree In-Reply-To: <53E487BE.5090902@gmail.com> References: <53E45F2D.9000806@fuuzetsu.co.uk> <53E487BE.5090902@gmail.com> Message-ID: <53E4E9B7.9020609@fuuzetsu.co.uk> On 08/08/2014 10:18 AM, Simon Marlow wrote: > I thought this was what you were already doing :-) Anyway, this is more > or less the setup we had in mind when Haddock was added to the GHC tree. > The only question is which branches are used for GHC and for regular > development, and where they live. As long as that's clear for everyone > (both Haddock and GHC developers), then this should be fine. I think there is no problem if they both live in the existing repository (github.com/haskell/haddock) or whatever the submodule refers to today. > The GHC release engineer will need to give the Haddock maintainers > plenty of heads-up time before a release so that the merge can be done - > Austin could you add that to the release checklist? Right, although I don't exactly plan to abandon any of the GHC information channels I'm on today: I tend to be well aware of a release coming.
From fuuzetsu at fuuzetsu.co.uk Fri Aug 8 15:20:13 2014 From: fuuzetsu at fuuzetsu.co.uk (Mateusz Kowalczyk) Date: Fri, 08 Aug 2014 17:20:13 +0200 Subject: Moving Haddock *development* out of GHC tree In-Reply-To: References: <53E45F2D.9000806@fuuzetsu.co.uk> <53E487BE.5090902@gmail.com> Message-ID: <53E4EAAD.4020100@fuuzetsu.co.uk> On 08/08/2014 10:59 AM, Luite Stegeman wrote: > I'm also in favour of a more decoupled development/release process. I'd > like to change a few things in haddock to make it more suitable for use as > a library, so that I can set up a haddock for GHCJS without duplicating the > whole package (it needs a custom platform setup and some changes in file > name handling). It'd be great if such a release could be made independently > of GHC and changes like this could be made without requiring the user to > update their GHC. Note that while we can release more often, if we have to bump an interface file version and changes are incompatible, that's probably our cut-off point for compatibility. This might be three stable releases or one minor. I don't expect it to change soon anyhow. > Also I'd be happy to do some of the work for the out-of-tree change, for > example backporting fixes to the 2.14 branch or updating the 2.15 branch to > work with the 7.8 api (but I can't promise more, GHCJS is taking enough of > my time, and I'm not sure how much time I can afford to keep spending on > it, so I'd like to minimize my other maintenance tasks as much as possible). Once this discussion goes through, I'll put what I can on 2.14, release, and abandon that branch, continuing from master (2.15). Updating (or rather downgrading) master to work with 7.8.3 should not be a problem; I think the API changes weren't numerous. > luite > -- Mateusz K.
From eir at cis.upenn.edu Fri Aug 8 19:15:05 2014 From: eir at cis.upenn.edu (Richard Eisenberg) Date: Fri, 8 Aug 2014 15:15:05 -0400 Subject: arc diff linter looping / stuck In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF22198DDC@DBXPRD3001MB024.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF22198DDC@DBXPRD3001MB024.064d.mgd.msft.net> Message-ID: <71FF0A59-9458-4A53-8778-805AB9083A3E@cis.upenn.edu> On Aug 7, 2014, at 5:51 PM, Simon Peyton Jones wrote: > I'm off on holiday for a week, but you and I have discussed most of these changes, some at length. If you are happy with your implementation, then go ahead and commit, from my pov. OK. > > I did take a quick look though. For #9200 and TcTyClsDecls, I think you have implemented "Possible new strategy" on https://ghc.haskell.org/trac/ghc/wiki/GhcKinds/KindInference, but not "A possible variation" (same page). Correct? If so, worth a note in the source code. And actually I'd transfer the algorithm itself, including the definition of CUSK, into the code. I've made a new ticket, #9427, for the "variation", which is properly a feature request, not a bug. I can cycle back around to this in a little while. > > kcStrategy seems a very odd name for a predicate on HsDecls that is just a Bool saying whether or not it has a CUSK. Also odd is that every call to kcHsTyVarBndrs has a corresponding call to kcStrategy, and both functions are in TcHsType; why not just combine them into one? There is enough variation in how kcHsTyVarBndrs is called to make this a little inconvenient (it's sometimes called on a FamilyDecl, not a TyClDecl, and with existentials in a data constructor, with no clear declaration at all). Instead, I've decided that CUSKness is a property of the declaration itself, and put the CUSK-checking code in HsDecls. There is a Note there as well. And, I've removed the last vestiges of KindCheckingStrategy. I hope this is OK. If it validates, I'll push, and we can revisit refactoring later if necessary.
Thanks, Richard > > Thanks for doing this > > Simon > > | -----Original Message----- > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Richard > | Eisenberg > | Sent: 07 August 2014 19:34 > | To: ghc-devs > | Subject: arc diff linter looping / stuck > | > | Hi all, > | > | I've prepared a bunch of commits to fix several tickets. After pushing > | these commits to branch wip/rae (to save my place and to get validate > | running on Travis), I then `git checkout`ed back to a point where `git > | diff origin/master` gave me a patch for precisely one bug (instead of the > | several unrelated ones I had fixed). I wanted to post to Differential. > | `arc diff` allowed me to fill out a description message (which mentioned, > | in its comments, the right set of commits), but then hung on the > | "linting..." stage. I suppose I could skip the linter, but it's more > | likely I've done something wrong here... > | > | Any advice? > | > | In the meantime, please consider this to be a request for feedback on > | everything in wip/rae! The bugs fixed are #9200, #9415, #9404, and #9371. > | > | Thanks! 
> | Richard > | _______________________________________________ > | ghc-devs mailing list > | ghc-devs at haskell.org > | http://www.haskell.org/mailman/listinfo/ghc-devs > From johan.tibell at gmail.com Fri Aug 8 21:01:18 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Fri, 8 Aug 2014 23:01:18 +0200 Subject: Moving Haddock *development* out of GHC tree In-Reply-To: <53E4E91F.4090907@fuuzetsu.co.uk> References: <53E45F2D.9000806@fuuzetsu.co.uk> <618BE556AADD624C9C918AA5D5911BEF221A26BD@DB3PRD3001MB020.064d.mgd.msft.net> <53E4E91F.4090907@fuuzetsu.co.uk> Message-ID: On Fri, Aug 8, 2014 at 5:13 PM, Mateusz Kowalczyk wrote: > On 08/08/2014 10:15 AM, Johan Tibell wrote: > > On Fri, Aug 8, 2014 at 10:11 AM, Simon Peyton Jones < > simonpj at microsoft.com> > > wrote: > > > >> The biggest disadvantage in my mind is that you're setting yourself up > >> for a potentially huge merge just before the GHC release and might block > >> the GHC release until that merge is done (assuming that haddock is still > >> shipped with GHC). > >> > >> > >> > >> Excellent point. > >> > >> > >> > >> The merge shouldn?t block the release, though. In extremis, I guess we > >> could always release the GHC fork of Haddock if the tip of Haddock > wasn?t > >> merged to match GHC! But I doubt it?ll come to that > >> > > > > But as you mentioned the GHC fork of Haddock might not work (it might > just > > type check) so at the very least Mateusz is signing up for validating > that > > it indeed works before a GHC release. That's of course fine, I just want > > people to understand what we're signing up for. > > > > Well, I stick around and am usually aware of GHC release early. In the > usual case Haddock will be fixed up before the actual GHC release. I > don't think API changes were ever drastic enough to provide major > problems especially seeing as I'll be able to refer to the GHC-tracked > branch to see what patches were applied there. 
> > However let's consider I can't make it for the release because I'm not > available around that time or otherwise. This should still not hold up > GHC release. I would expect the GHC team to release Haddock + their > fixes, it would simply be like an existing release with some patches on > top to have it work with new GHC. I can then come around and once I > apply any API patches, I make an actual Haddock release. People can then > simply cabal install haddock and use what they get here rather than what > came with GHC. > Be careful here so that 1) patches aren't lost (that almost happened once when GHC HQ made a containers release) and 2) the version numbers used by GHC HQ and your releases make sense (i.e. follow the PVP). P.S. I would recommend naming the main development branch 'master' and the other 'ghc-head'. 'master' is the branch new potential developers will see first on GitHub and it's the one people default to when making pull requests. I used to have the main 'network' development in a branch called 'develop'. This led to lots of confusion and pull requests against the wrong branches. The name 'ghc-head' also makes it much clearer what that branch is for and why it might be special. -------------- next part -------------- An HTML attachment was scrubbed... URL: From johan.tibell at gmail.com Fri Aug 8 21:02:25 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Fri, 8 Aug 2014 23:02:25 +0200 Subject: HEADS UP: Running cabal install with the latest GHC In-Reply-To: <1407498991-sup-1278@sabre> References: <1407498991-sup-1278@sabre> Message-ID: I'm not against putting out another release, but I'd prefer to make it on top of 1.20 if possible. Making a 1.22 release takes much more work (RC time, etc). Which are the patches in question? Can they easily be cherry-picked onto the 1.20 branch? Is there any risk of breakage? On Fri, Aug 8, 2014 at 2:00 PM, Edward Z.
Yang wrote: > Hey all, > > SPJ pointed out to me today that if you try to run: > > cabal install --with-ghc=/path/to/inplace/bin/ghc-stage2 > > with the latest GHC HEAD, this probably will not actually work, because > your system installed version of Cabal is probably too old to deal with > the new package key stuff in HEAD. So, how do you get a version > of cabal-install (and Cabal) which is new enough to do what you need > it to? > > The trick is to compile Cabal using your /old/ GHC. Step-by-step, this > involves cd'ing into libraries/Cabal/Cabal and running `cabal install` > (or install it in a sandbox, if you like) and then cd'ing to > libraries/Cabal/cabal-install and cabal install'ing that. > > Cabal devs, is cutting a new release of Cabal and cabal-install in the > near future possible? In that case, users can just cabal update; cabal > install cabal-install and get a version of Cabal which will work for > them. > > Cheers, > Edward > _______________________________________________ > cabal-devel mailing list > cabal-devel at haskell.org > http://www.haskell.org/mailman/listinfo/cabal-devel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ezyang at mit.edu Fri Aug 8 21:17:45 2014 From: ezyang at mit.edu (Edward Z. Yang) Date: Fri, 08 Aug 2014 22:17:45 +0100 Subject: HEADS UP: Running cabal install with the latest GHC In-Reply-To: References: <1407498991-sup-1278@sabre> Message-ID: <1407532120-sup-5118@sabre> They would be: 2b50d0a Fix regression for V09 test library handling. d3a696a Disable reinstalls with distinct package keys for now. 1d33c8f Add $pkgkey template variable, and use it for install paths. 41610a0 Implement package keys, distinguishing packages built with different deps/flags Unfortunately, these patches fuzz a bit without this next patch: 62450f9 Implement "reexported-modules" field, towards fixing GHC bug #8407. When you include that patch, there is only one piece of fuzz from 41610a0. 
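For reference, the cherry-pick flow being discussed can be sketched on a throwaway repository. Everything below (the branch name, the file, the commit message) is made up for illustration; against the real Cabal tree you would pick the hashes listed above, oldest first, and resolve any fuzz as it comes up.

```shell
# Toy repository standing in for the Cabal tree; branch name, file and
# commit message are hypothetical.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo && cd demo
git config user.email dev@example.com
git config user.name "Cabal Dev"
echo 'base' > pkg.cabal
git add pkg.cabal && git commit -qm 'base release'
git branch release-1.20                  # stable branch forked here
echo 'package keys' >> pkg.cabal         # stands in for e.g. 41610a0
git commit -qam 'Implement package keys'
fix=$(git rev-parse HEAD)
git checkout -q release-1.20
git cherry-pick -x "$fix"                # -x records the source commit
git log --oneline
```

The -x flag appends a "(cherry picked from commit ...)" line to the message, which keeps the backport traceable to the original development commit.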
One important caveat is that these patches do rearrange some of the API, so you wouldn't be able to build GHC 7.8 against these patches. So maybe we don't want to. If we had a way of releasing experimental, non-default picked up versions, that would be nice (i.e. Cabal 1.21). No warranty, but easy enough for GHC devs to say 'cabal install Cabal-1.21 cabal-install-1.21' or something. Edward Excerpts from Johan Tibell's message of 2014-08-08 22:02:25 +0100: > I'm not again putting out another release, but I'd prefer to make it on top > of 1.20 if possible. Making a 1.22 release takes much more work (RC time, > etc). Which are the patches in question. Can they easily be cherry-picked > onto the 1.20 branch? Are there any risk of breakages? > > On Fri, Aug 8, 2014 at 2:00 PM, Edward Z. Yang wrote: > > > Hey all, > > > > SPJ pointed out to me today that if you try to run: > > > > cabal install --with-ghc=/path/to/inplace/bin/ghc-stage2 > > > > with the latest GHC HEAD, this probably will not actually work, because > > your system installed version of Cabal is probably too old to deal with > > the new package key stuff in HEAD. So, how do you get a version > > of cabal-install (and Cabal) which is new enough to do what you need > > it to? > > > > The trick is to compile Cabal using your /old/ GHC. Step-by-step, this > > involves cd'ing into libraries/Cabal/Cabal and running `cabal install` > > (or install it in a sandbox, if you like) and then cd'ing to > > libraries/Cabal/cabal-install and cabal install'ing that. > > > > Cabal devs, is cutting a new release of Cabal and cabal-install in the > > near future possible? In that case, users can just cabal update; cabal > > install cabal-install and get a version of Cabal which will work for > > them. 
> > > > Cheers, > > Edward > > _______________________________________________ > > cabal-devel mailing list > > cabal-devel at haskell.org > > http://www.haskell.org/mailman/listinfo/cabal-devel > > From lukexipd at gmail.com Sat Aug 9 03:27:23 2014 From: lukexipd at gmail.com (Luke Iannini) Date: Fri, 8 Aug 2014 20:27:23 -0700 Subject: ARM64 Task Force In-Reply-To: <53E466F1.90201@centrum.cz> References: <53E466F1.90201@centrum.cz> Message-ID: Hi Karel, Thanks! A question: https://git.haskell.org/ghc.git/commitdiff/454b34cb3b67dec21f023339c4d53d734af7605d adds references to s16, s17, s18, s19, d10 and d11 but I don't see those where I thought to expect them in https://github.com/ghc/ghc/blob/master/includes/CodeGen.Platform.hs Am I down a wrong path? Luke On Thu, Aug 7, 2014 at 10:58 PM, Karel Gardas wrote: > On 08/ 8/14 01:58 AM, Luke Iannini wrote: > >> I'm now studying David's patches to LLVM to learn how to add the >> ARM64/GHC calling convention to LLVM. >> > > Here is also original ARM/GHC calling convention submission. It's always > good to have more examples as reference... > > http://lists.cs.uiuc.edu/pipermail/llvmdev/2011-October/044173.html > > Good luck with the ARM64/GHC porting work! > > Karel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From karel.gardas at centrum.cz Sat Aug 9 11:22:11 2014 From: karel.gardas at centrum.cz (Karel Gardas) Date: Sat, 09 Aug 2014 13:22:11 +0200 Subject: ARM64 Task Force In-Reply-To: References: <53E466F1.90201@centrum.cz> Message-ID: <53E60463.2080608@centrum.cz> On 08/ 9/14 05:27 AM, Luke Iannini wrote: > Hi Karel, > Thanks! > > A question: > https://git.haskell.org/ghc.git/commitdiff/454b34cb3b67dec21f023339c4d53d734af7605d > adds references to s16, s17, s18, s19, d10 and d11 but I don't see those Yes, that adds FPU support for ARM.
> where I though to expect them in > https://github.com/ghc/ghc/blob/master/includes/CodeGen.Platform.hs Hmm, the whole ARM reg set is missing in this file. IIRC Simon Marlow was discussing this with Ben Gamari recently. I've not investigated if this is needed or not since I don't know if this is used only in the NCG or in registerised builds in general. If the former, ARM will not be there as there is no ARM NCG yet; if the latter, then ARM should be there as an ARM/LLVM/registerised build is a reality. Cheers, Karel From johan.tibell at gmail.com Sun Aug 10 10:17:56 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Sun, 10 Aug 2014 12:17:56 +0200 Subject: Can Phabricator be made to understand git better? Message-ID: Hi, I have to fight Phab too much because it doesn't understand git or git working conventions very well. For example, the other day I uploaded https://phabricator.haskell.org/D128, built from a single git commit on a local feature branch (based on my local master). First, doing that was in itself a bit of a pain. 'arc diff' didn't do the right thing and included some random commit(s) instead of my single commit in my feature branch. Perhaps it tried to diff against origin/master or something, which doesn't make any sense if you know how git is meant to be used (doing that would imply that you have to constantly sync with upstream to create a patch.) I eventually got it to use the right commit by doing 'arc diff `. Second, today when I wanted to update my commit to address some review comments I 1) amended my commit in my feature branch and 2) ran 'arc diff', expecting that it would do the right thing. It didn't and now D128 contains a bunch of changes that aren't mine. I don't know where they came from. Is there a way to configure Phab so it works with established git workflows? -- Johan -------------- next part -------------- An HTML attachment was scrubbed...
URL: From hvriedel at gmail.com Sun Aug 10 11:01:05 2014 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Sun, 10 Aug 2014 13:01:05 +0200 Subject: Can Phabricator be made to understand git better? In-Reply-To: (Johan Tibell's message of "Sun, 10 Aug 2014 12:17:56 +0200") References: Message-ID: <87r40o1udq.fsf@gmail.com> On 2014-08-10 at 12:17:56 +0200, Johan Tibell wrote: [...] > First, doing that was in itself a bit of a pain. 'arc diff' didn't do the > right thing and included some random commit(s) instead of my single commit > in my feature branch. Perhaps it tried to diff against origin/master or > something, which doesn't make any sense if you know how git is meant to be > used (doing that would imply that you have to constantly sync with upstream > to create a patch.) I eventually got it to used the right commit by doing > 'arc diff `. While this doesn't answer your question (I'm looking forward to Austin chiming in to answer as I'd like to understand 'arc diff' better myself), I've been using 'arc which' myself before performing 'arc diff's to reduce my surprise/confusion about which commits 'arc diff' will effectively pick. HTH, hvr From slyich at gmail.com Sun Aug 10 18:30:30 2014 From: slyich at gmail.com (Sergei Trofimovich) Date: Sun, 10 Aug 2014 21:30:30 +0300 Subject: Perf regression: ghc --make: add nicer names to RTS threads (threaded IO manager, make workers) (f686682) In-Reply-To: <53E3903F.40804@gmail.com> References: <20140804131313.1D834240EA@ghc.haskell.org> <1407310245.1760.1.camel@joachim-breitner.de> <20140806221534.1a5a922a@sf> <20140806234036.12e6f758@sf> <53E3903F.40804@gmail.com> Message-ID: <20140810213030.225f00e9@sf> On Thu, 07 Aug 2014 15:42:07 +0100 Simon Marlow wrote: > On 06/08/2014 21:40, Sergei Trofimovich wrote: > > I think I know what happens. According to perf the benchmark spends 34%+ > > of time in garbage collection ('perf record -- $args'/'perf report'): > > > > 27,91% test test [.]
evacuate > > 9,29% test test [.] s9Lz_info > > 7,46% test test [.] scavenge_block > > > > And the whole benchmark runs a tiny bit more than 300ms. > > It is exactly in line with major GC timer (0.3s). > > 0.3s is the *idle* GC timer, it has no effect when the program is > running normally. There's no timed GC or anything like that. > > It sometimes happens that a tiny change somewhere tips a program over > into doing one more major GC, though. > > > If we run > > $ time ./test inverter 345 10n 4u 1>/dev/null > > multiple times there is heavy instability in there (with my patch reverted): > > real 0m0.319s > > real 0m0.305s > > real 0m0.307s > > real 0m0.373s > > real 0m0.381s > > which is +/- 80ms drift! > > > > Let's try to kick major GC earlier instead of running right at runtime > > shutdown time: > > $ time ./test inverter 345 10n 4u +RTS -I0.1 1>/dev/null > > > > real 0m0.304s > > real 0m0.308s > > real 0m0.302s > > real 0m0.304s > > real 0m0.308s > > real 0m0.306s > > real 0m0.305s > > real 0m0.312s > > which is way more stable behaviour. > > > > Thus my theory is that my changed stepped from > > "90% of time 1 GC run per run" > > to > > "90% of time 2 GC runs per run" > > Is this program idle? I have no idea why this might be happening! If > the program is busy computing stuff, the idle GC should not be firing. > If it is, that's a bug. The task is a completely CPU-bound thing. Then I was very lucky to get that change with -I option. I'll try to find exact place where profiling times change (or just float), but it will take more time. -- Sergei -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 181 bytes Desc: not available URL: From iavor.diatchki at gmail.com Sun Aug 10 20:12:04 2014 From: iavor.diatchki at gmail.com (Iavor Diatchki) Date: Sun, 10 Aug 2014 13:12:04 -0700 Subject: Overlapping and incoherent and intentionally omitted instances In-Reply-To: <53E77F54.5010007@henning-thielemann.de> References: <618BE556AADD624C9C918AA5D5911BEF2207B3A1@DB3PRD3001MB020.064d.mgd.msft.net> <53D76989.2070808@nh2.me> <87lhrcbijz.fsf@gnu.org> <53E77F54.5010007@henning-thielemann.de> Message-ID: Hello, Such a pragma sounds useful, and is very much like the "fails" instance from the "Instance chains" paper. You may also be interested in ticket #9334 (https://ghc.haskell.org/trac/ghc/ticket/9334), which proposes an alternative to overlapping instances, and I just updated it to point to #7775. -Iavor On Sun, Aug 10, 2014 at 7:19 AM, Henning Thielemann < schlepptop at henning-thielemann.de> wrote: > Am 29.07.2014 um 12:02 schrieb Johan Tibell: > > P.S. For e.g. INLINABLE we require that you mention the function name >> next to the pragma (which means that you can e.g. put the pragma after >> the declaration). What's the rationale to not require >> >> {-# OVERLAPPING Show [Char] #-} >> >> here? Perhaps it's too annoying to have to repeat the types? >> > > Once I proposed a pragma for documenting intentionally unimplemented > instances. In this case there is no instance you can write a pragma in > front of. Your OVERLAPPING syntax would be conform with the one of > NOINSTANCE: > > https://ghc.haskell.org/trac/ghc/ticket/7775 > > Maybe NOINSTANCE can be reconsidered in the course of the introduction of > the OVERLAP pragma? > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From karel.gardas at centrum.cz Sun Aug 10 21:06:25 2014 From: karel.gardas at centrum.cz (Karel Gardas) Date: Sun, 10 Aug 2014 23:06:25 +0200 Subject: CPP usage in GHC. 
Message-ID: <53E7DED1.6080609@centrum.cz> Folks, in my attempt to lower the number of failing tests on Solaris I've found several tests which fail on just a difference in the reported file name. My ghc reports the warning/error in /tmp/ghc/ while the expected output is plain T.hs. See http://haskell.inf.elte.hu/builders/solaris-x86-head/125/21.html and search for T7145b as an example of this behavior. The reason why this happens is that Solaris GNU C 4.x does not emit line markers in the preprocessed file when it's preprocessed with -x assembler-with-cpp. The reason behind this is documented in this thread[1] on the GCC mailing list. Simply speaking, Sun's assembler in the past choked on some of the generated linemarkers. This was apparently the case for as on Solaris versions older than 10, and perhaps this will be fixed in a future major GCC release as Solaris 9 is not supported anymore. Anyway, we still do have a case with the GNU C compilers provided by Solaris 10 and Solaris 11. FYI: Solaris 10's GNU C 3.4.x is OK, Solaris 11's GNU C 4.5.2 is broken, and with this all more modern 4.x releases, so probably also all 4.x releases provided by Solaris 11.1/11.2. So far I've solved the issue of those failing tests by passing --with-hs-cpp=/usr/sfw/bin/gcc -- configured this way, GHC will use the old non-buggy GNU C 3.4.x on my Solaris 11 builder as CPP, and otherwise it'll use /usr/bin/gcc (GNU C 4.5.2) and everything will hopefully pass fine. Anyway, the thread[1] also contains a question which also rings in my head, and that is: why do we use -x assembler-with-cpp at all? Isn't simple -E enough? Or isn't simple usage of the system-provided CPP enough (/usr/lib/cpp on Solaris)? Or what will happen if we for example change -x assembler-with-cpp to -x c or -x c-header or something like that? Please note that the testcase is OK with -x c/c-header even using this "buggy" GNU C 4.5.2, since the compiler/cpp is really buggy just for the case of -x assembler-with-cpp. Thanks!
Karel [1]: https://gcc.gnu.org/ml/gcc/2014-08/msg00114.html From carter.schonwald at gmail.com Sun Aug 10 23:27:28 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Sun, 10 Aug 2014 19:27:28 -0400 Subject: CPP usage in GHC. In-Reply-To: <53E7DED1.6080609@centrum.cz> References: <53E7DED1.6080609@centrum.cz> Message-ID: I could be wrong, but I think assembler-with-cpp came up only as part of certain clang work arounds, it should suffice to use any GCC like traditional mode CPP (like the CPPHS tool). note that the configure script tries to detect what the CPP program you specify using --with-hs-cpp IS, and it only has logic for modern GCC/Clang/CPPHS, so you should specify a suitable set of flags if you're picking something different (theres a flag like --with-hs-cpp-flags you can set explicitly) On Sun, Aug 10, 2014 at 5:06 PM, Karel Gardas wrote: > > Folks, > > in my attempt to lower number of failing tests on Solaris I've found > several tests which fail on just difference in file name report. My ghc > reports warning/error in /tmp/ghc/ while > expected is clear T.hs. > > See http://haskell.inf.elte.hu/builders/solaris-x86-head/125/21.html and > search for T7145b as an example of this behavior. > > The reason why this happen is that Solaris GNU C 4.x does not emit line > markers in preprocessed file when it's preprocessed with -x > assembler-with-cpp. The reason behind this is documented in this thread[1] > on GCC mailing list. Simply speaking Sun's assembler in the past chokes on > some linemarkers generated. This was apparently case of as on older Solaris > then 10 version and perhaps this will be fixed in future major GCC release > as Solaris 9 is not supported anymore. Anyway, we still do have a case with > GNU C compilers provided by Solaris 10 and Solaris 11. FYI: Solaris' 10 GNU > C 3.4.x is OK, Solaris 11's GNU C 4.5.2 is broken and with this all more > modern 4.x releases so probably also all 4.x release provided by Solaris > 11.1/11.2. 
> > So far I've solved the issue of those failing tests by passing > --with-hs-cpp=/usr/sfw/bin/gcc -- so configured this way GHC will use old > not-buggy GNU C 3.4.x on my Solaris 11 builder as CPP and otherwise it'll > use /usr/bin/gcc (GNU C 4.5.2) and everything will pass fine hopefully. > > Anyway, the thread[1] also contains a question which also rings in my head > and that is: why we use -x assembler-with-cpp at all? Isn't simple -E > enough. Or isn't simple usage of system provided CPP enough /usr/lib/cpp on > Solaris)? Or what will happen if we for example change -x > assembler-with-cpp to -x c or -x c-header or something like that? Please > note that the testcase is OK with -x c/c-header even using this "buggy" GNU > C 4.5.2 since the compiler/cpp is really buggy just for the case of -x > assembler-with-cpp. > > Thanks! > Karel > > [1]: https://gcc.gnu.org/ml/gcc/2014-08/msg00114.html > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lukexipd at gmail.com Mon Aug 11 01:44:34 2014 From: lukexipd at gmail.com (Luke Iannini) Date: Sun, 10 Aug 2014 18:44:34 -0700 Subject: ARM64 Task Force In-Reply-To: <53E60463.2080608@centrum.cz> References: <53E466F1.90201@centrum.cz> <53E60463.2080608@centrum.cz> Message-ID: I think I've solved this particular mystery -- the registers were never defined there because that integer-representation of them is only used by the NCG. In LLVM land they were only ever stringified by the REG() macro. Except now globalRegMaybe is being used in CmmSink.hs (as Simon and Ben were discussing), and globalRegMaybe needs an integer value for each register to put into its Maybe RealReg return value. Since CmmSink.hs only checks 'isJust', it doesn't actually matter what the integer value is. 
So I've just gone ahead and defined them sequentially for now which seems to get me past this. Thanks! Luke On Sat, Aug 9, 2014 at 4:22 AM, Karel Gardas wrote: > On 08/ 9/14 05:27 AM, Luke Iannini wrote: > >> Hi Karel, >> Thanks! >> >> A question: >> https://git.haskell.org/ghc.git/commitdiff/454b34cb3b67dec21f023339c4d53d >> 734af7605d >> adds references to s16, s17, s18, s19, d10 and d11 but I don't see those >> > > Yes, that adds FPU support for ARM. > > > where I though to expect them in >> https://github.com/ghc/ghc/blob/master/includes/CodeGen.Platform.hs >> > > Hmm, whole ARM reg set is missing in this file. IIRC Simon Marlow were > discussing this with Ben Gamari recently. I've not investigated if this is > needed or not since I don't know if this is used only in NCG or in > registerised build in general. If the former, ARM will not be there as > there is no ARM NCG yet, if the later, then ARM should be there as > ARM/LLVM/registerised build is a reality. > > Cheers, > Karel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From karel.gardas at centrum.cz Mon Aug 11 12:32:57 2014 From: karel.gardas at centrum.cz (Karel Gardas) Date: Mon, 11 Aug 2014 14:32:57 +0200 Subject: CPP usage in GHC. In-Reply-To: References: <53E7DED1.6080609@centrum.cz> Message-ID: <53E8B7F9.1040607@centrum.cz> On 08/11/14 01:27 AM, Carter Schonwald wrote: > I could be wrong, but I think assembler-with-cpp came up only as part of > certain clang work arounds, > it should suffice to use any GCC like traditional mode CPP (like the > CPPHS tool). This is interesting, but it looks like -x assembler-with-cpp is hard-coded into DriverPipeline.hs in doCpp function and either assembler-with-cpp or assembler is used in runPhase as and this is completely independent from target cpp even when configure with --with-hs-cpp= option. 
> note that the configure script tries to detect what the CPP program you > specify using --with-hs-cpp IS, and it only has logic for modern > GCC/Clang/CPPHS, so you should specify a suitable set of flags if you're > picking something different (theres a flag like --with-hs-cpp-flags you > can set explicitly) Hmm, seeing CPPHS give me an idea about either - prioritizing CPPHS usage, when configure detects CPPHS availability it is then set as with --with-hs-cpp option and used as a preprocessor or - integrate CPPHS directly into GHC as it seems it provides some library API. Sidenote: builders are testing ghc binary dist installed into the "install dir" directory and it looks like this process of installation completely forgot about original --with-hs-cpp option. And seeing Solaris builder test_bindist[1] output it looks like not only --with-hs-cpp option is forgotten but every option of original configure run is forgotten... Karel [1]: http://haskell.inf.elte.hu/builders/solaris-x86-head/135/20.html > > > On Sun, Aug 10, 2014 at 5:06 PM, Karel Gardas > wrote: > > > Folks, > > in my attempt to lower number of failing tests on Solaris I've found > several tests which fail on just difference in file name report. My > ghc reports warning/error in /tmp/ghc/ thing> while expected is clear T.hs. > > See > http://haskell.inf.elte.hu/__builders/solaris-x86-head/125/__21.html > > and search for T7145b as an example of this behavior. > > The reason why this happen is that Solaris GNU C 4.x does not emit > line markers in preprocessed file when it's preprocessed with -x > assembler-with-cpp. The reason behind this is documented in this > thread[1] on GCC mailing list. Simply speaking Sun's assembler in > the past chokes on some linemarkers generated. This was apparently > case of as on older Solaris then 10 version and perhaps this will be > fixed in future major GCC release as Solaris 9 is not supported > anymore. 
Anyway, we still do have a case with GNU C compilers > provided by Solaris 10 and Solaris 11. FYI: Solaris' 10 GNU C 3.4.x > is OK, Solaris 11's GNU C 4.5.2 is broken and with this all more > modern 4.x releases so probably also all 4.x release provided by > Solaris 11.1/11.2. > > So far I've solved the issue of those failing tests by passing > --with-hs-cpp=/usr/sfw/bin/gcc -- so configured this way GHC will > use old not-buggy GNU C 3.4.x on my Solaris 11 builder as CPP and > otherwise it'll use /usr/bin/gcc (GNU C 4.5.2) and everything will > pass fine hopefully. > > Anyway, the thread[1] also contains a question which also rings in > my head and that is: why we use -x assembler-with-cpp at all? Isn't > simple -E enough. Or isn't simple usage of system provided CPP > enough /usr/lib/cpp on Solaris)? Or what will happen if we for > example change -x assembler-with-cpp to -x c or -x c-header or > something like that? Please note that the testcase is OK with -x > c/c-header even using this "buggy" GNU C 4.5.2 since the > compiler/cpp is really buggy just for the case of -x assembler-with-cpp. > > Thanks! > Karel > > [1]: https://gcc.gnu.org/ml/gcc/__2014-08/msg00114.html > > _________________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/__mailman/listinfo/ghc-devs > > > From karel.gardas at centrum.cz Mon Aug 11 12:56:12 2014 From: karel.gardas at centrum.cz (Karel Gardas) Date: Mon, 11 Aug 2014 14:56:12 +0200 Subject: CPP usage in GHC. 
In-Reply-To: <53E8B7F9.1040607@centrum.cz> References: <53E7DED1.6080609@centrum.cz> <53E8B7F9.1040607@centrum.cz> Message-ID: <53E8BD6C.1010109@centrum.cz> On 08/11/14 02:32 PM, Karel Gardas wrote: > Hmm, seeing CPPHS give me an idea about either > > - prioritizing CPPHS usage, when configure detects CPPHS availability it > is then set as with --with-hs-cpp option and used as a preprocessor https://phabricator.haskell.org/D142 -- implements this option Karel From carter.schonwald at gmail.com Mon Aug 11 18:48:00 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Mon, 11 Aug 2014 14:48:00 -0400 Subject: CPP usage in GHC. In-Reply-To: <53E8BD6C.1010109@centrum.cz> References: <53E7DED1.6080609@centrum.cz> <53E8B7F9.1040607@centrum.cz> <53E8BD6C.1010109@centrum.cz> Message-ID: What I'm hearing you say is that we actually need TWO sets of CPP flags, one for normal Haskell, and another for the CPP used on the assembler? Where's this hardcoding? -------------- next part -------------- An HTML attachment was scrubbed... URL: From karel.gardas at centrum.cz Mon Aug 11 20:27:51 2014 From: karel.gardas at centrum.cz (Karel Gardas) Date: Mon, 11 Aug 2014 22:27:51 +0200 Subject: CPP usage in GHC. In-Reply-To: References: <53E7DED1.6080609@centrum.cz> <53E8B7F9.1040607@centrum.cz> <53E8BD6C.1010109@centrum.cz> Message-ID: <53E92747.50000@centrum.cz> On 08/11/14 08:48 PM, Carter Schonwald wrote: > What i'm hearing you say is we actually need TWO sets of CPP flags, one > for normal haskell, and another for the CPP used on the assembler? > wheres this hardcoding? DriverPipeline.hs -- grep for "assembler-with-cpp" and you will find it. IMHO the best option would be to move this "-x assembler-with-cpp" into the hs-cpp-flags managed by configure. This way it may even be possible to use the system-supplied plain cpp instead of the cpp built into GNU C.
Karel From carter.schonwald at gmail.com Mon Aug 11 21:18:03 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Mon, 11 Aug 2014 17:18:03 -0400 Subject: CPP usage in GHC. In-Reply-To: <53E92747.50000@centrum.cz> References: <53E7DED1.6080609@centrum.cz> <53E8B7F9.1040607@centrum.cz> <53E8BD6C.1010109@centrum.cz> <53E92747.50000@centrum.cz> Message-ID: Why should this flag be passed to cpp when invoked on HS files? It'd be easy to expose another field in the settings file for this other invocation... though I should look more closely at the use site before opining :) On Mon, Aug 11, 2014 at 4:27 PM, Karel Gardas wrote: > On 08/11/14 08:48 PM, Carter Schonwald wrote: > >> What I'm hearing you say is we actually need TWO sets of CPP flags, one >> for normal Haskell, and another for the CPP used on the assembler? >> Where's this hardcoding? >> > > DriverPipeline.hs -- grep for "assembler-with-cpp" and you will find it. > > IMHO the best would be to move this "-x assembler-with-cpp" into the > hs-cpp-flags managed by configure. This way it may even be possible to use > the system-supplied plain cpp instead of the cpp built into GNU C. > > Karel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From karel.gardas at centrum.cz Mon Aug 11 21:22:14 2014 From: karel.gardas at centrum.cz (Karel Gardas) Date: Mon, 11 Aug 2014 23:22:14 +0200 Subject: CPP usage in GHC. In-Reply-To: References: <53E7DED1.6080609@centrum.cz> <53E8B7F9.1040607@centrum.cz> <53E8BD6C.1010109@centrum.cz> <53E92747.50000@centrum.cz> Message-ID: <53E93406.6070106@centrum.cz> On 08/11/14 11:18 PM, Carter Schonwald wrote: > Why should this flag be passed to cpp when invoked on HS files? It'd be > easy to expose another field in the settings file for this other > invocation... though I should look more closely at the use site before > opining :) Hmm, isn't the doCpp function what's invoked when cpp is invoked for HS files?
If so, then -x assembler-with-cpp is already used for HS files anyway. Karel > > > On Mon, Aug 11, 2014 at 4:27 PM, Karel Gardas > wrote: > > On 08/11/14 08:48 PM, Carter Schonwald wrote: > > What i'm hearing you say is we actually need TWO sets of CPP > flags, one > for normal haskell, and another for the CPP used on the assembler? > wheres this hardcoding? > > > DriverPipeline.hs -- grep for "assembler-with-cpp" and you will find it. > > IMHO best would be to move this "-x assembler-with-cpp" into the > hs-cpp-flags managed by configure. This way it may be even possible > to use system supplied plain cpp instead of cpp builtinto GNU C. > > Karel > > From carter.schonwald at gmail.com Mon Aug 11 23:35:03 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Mon, 11 Aug 2014 19:35:03 -0400 Subject: CPP usage in GHC. In-Reply-To: <53E93406.6070106@centrum.cz> References: <53E7DED1.6080609@centrum.cz> <53E8B7F9.1040607@centrum.cz> <53E8BD6C.1010109@centrum.cz> <53E92747.50000@centrum.cz> <53E93406.6070106@centrum.cz> Message-ID: Oooooo. Then it's possibly debris leftover from Austin's initial clang compatibility work predating the improvements via the settings file work. I'm Afk right now, but that probably can be safely removed from ghc, especially since the configure script for clang cpp adds that anyways now I think? I might be wrong though. On Monday, August 11, 2014, Karel Gardas wrote: > On 08/11/14 11:18 PM, Carter Schonwald wrote: > >> why should this flag be passed to cpp when invoked on HS files? It'd be >> easy to expose another field in the settings file for this other >> invokecation.. though i should look more closely at the use site before >> opinining :) >> > > Hmm, isn't doCpp function what's invoked when cpp is invoked for HS files? > If so, then -x assembler-with-cpp is already used for HS files anyway. 
> > Karel > > >> >> On Mon, Aug 11, 2014 at 4:27 PM, Karel Gardas > > wrote: >> >> On 08/11/14 08:48 PM, Carter Schonwald wrote: >> >> What i'm hearing you say is we actually need TWO sets of CPP >> flags, one >> for normal haskell, and another for the CPP used on the assembler? >> wheres this hardcoding? >> >> >> DriverPipeline.hs -- grep for "assembler-with-cpp" and you will find >> it. >> >> IMHO best would be to move this "-x assembler-with-cpp" into the >> hs-cpp-flags managed by configure. This way it may be even possible >> to use system supplied plain cpp instead of cpp builtinto GNU C. >> >> Karel >> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eir at cis.upenn.edu Tue Aug 12 01:24:41 2014 From: eir at cis.upenn.edu (Richard Eisenberg) Date: Mon, 11 Aug 2014 21:24:41 -0400 Subject: diff'ing in Diffusion Message-ID: Hi all, I wanted to see a side-by-side diff of the GHC code between 7.8.2 and 7.8.3. So, I went to Phab's Diffusion application. I can access the different commits I wish to compare, but I can't seem to find a "Compare" or "Diff" button. Any hints? Thanks! Richard From lukexipd at gmail.com Tue Aug 12 09:03:19 2014 From: lukexipd at gmail.com (Luke Iannini) Date: Tue, 12 Aug 2014 02:03:19 -0700 Subject: ARM64 Task Force In-Reply-To: References: <53E466F1.90201@centrum.cz> <53E60463.2080608@centrum.cz> Message-ID: I've pushed my WIP patches here: https://github.com/lukexi/llvm/commit/dfe74bb48eb05ce7847fa262f6e563d20d0b1fc5 https://github.com/lukexi/ghc/commit/e99b7a41e64f3ddb9bb420c0d5583f0e302e321e (they also require the latest libffi to be dropped in ftp://sourceware.org/pub/libffi/libffi-3.0.13.tar.gz due to https://ghc.haskell.org/trac/ghc/ticket/8664) These can produce an ARM64 GHC but the produced binaries aren't fully functional yet. They make it through hs_init() but crash rather opaquely when I try to call a simple fib function through the FFI. 
It looks like it's jumping somewhere strange; lldb tells me it's to 0x100e05110: .long 0x00000000 ; unknown opcode 0x100e05114: .long 0x00000000 ; unknown opcode 0x100e05118: .long 0x00000000 ; unknown opcode 0x100e0511c: .long 0x00000000 ; unknown opcode 0x100e05120: .long 0x00000000 ; unknown opcode 0x100e05124: .long 0x00000000 ; unknown opcode 0x100e05128: .long 0x00000000 ; unknown opcode 0x100e0512c: .long 0x00000000 ; unknown opcode If I put a breakpoint on StgRun and step by instruction, I seem to make it to about: https://github.com/lukexi/ghc/blob/e99b7a41e64f3ddb9bb420c0d5583f0e302e321e/rts/StgCRun.c#L770 (give or take a line) before something goes mysteriously wrong and I'm no longer able to interact with the debugger. So I guess I'll try taking out float register support and see if that gets me anywhere. If anyone has some ideas on how to debug this I'd love to hear them! I've mostly assembled the patches by adapting the existing ARM support so it's quite possibly I'm doing something boneheaded. Cheers Luke On Sun, Aug 10, 2014 at 6:44 PM, Luke Iannini wrote: > I think I've solved this particular mystery -- the registers were never > defined there because that integer-representation of them is only used by > the NCG. In LLVM land they were only ever stringified by the REG() macro. > > Except now globalRegMaybe is being used in CmmSink.hs (as Simon and Ben > were discussing), and globalRegMaybe needs an integer value for each > register to put into its Maybe RealReg return value. Since CmmSink.hs only > checks 'isJust', it doesn't actually matter what the integer value is. > > So I've just gone ahead and defined them sequentially for now which seems > to get me past this. > > Thanks! > Luke > > > On Sat, Aug 9, 2014 at 4:22 AM, Karel Gardas > wrote: > >> On 08/ 9/14 05:27 AM, Luke Iannini wrote: >> >>> Hi Karel, >>> Thanks! 
>>> >>> A question: >>> https://git.haskell.org/ghc.git/commitdiff/ >>> 454b34cb3b67dec21f023339c4d53d734af7605d >>> adds references to s16, s17, s18, s19, d10 and d11 but I don't see those >>> >> >> Yes, that adds FPU support for ARM. >> >> >> where I though to expect them in >>> https://github.com/ghc/ghc/blob/master/includes/CodeGen.Platform.hs >>> >> >> Hmm, whole ARM reg set is missing in this file. IIRC Simon Marlow were >> discussing this with Ben Gamari recently. I've not investigated if this is >> needed or not since I don't know if this is used only in NCG or in >> registerised build in general. If the former, ARM will not be there as >> there is no ARM NCG yet, if the later, then ARM should be there as >> ARM/LLVM/registerised build is a reality. >> >> Cheers, >> Karel >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From karel.gardas at centrum.cz Tue Aug 12 18:24:12 2014 From: karel.gardas at centrum.cz (Karel Gardas) Date: Tue, 12 Aug 2014 20:24:12 +0200 Subject: ARM64 Task Force In-Reply-To: References: <53E466F1.90201@centrum.cz> <53E60463.2080608@centrum.cz> Message-ID: <53EA5BCC.3060406@centrum.cz> On 08/12/14 11:03 AM, Luke Iannini wrote: > It looks like it's jumping somewhere strange; lldb tells me it's to > 0x100e05110: .long 0x00000000 ; unknown opcode > 0x100e05114: .long 0x00000000 ; unknown opcode > 0x100e05118: .long 0x00000000 ; unknown opcode > 0x100e0511c: .long 0x00000000 ; unknown opcode > 0x100e05120: .long 0x00000000 ; unknown opcode > 0x100e05124: .long 0x00000000 ; unknown opcode > 0x100e05128: .long 0x00000000 ; unknown opcode > 0x100e0512c: .long 0x00000000 ; unknown opcode > > If I put a breakpoint on StgRun and step by instruction, I seem to make > it to about: > https://github.com/lukexi/ghc/blob/e99b7a41e64f3ddb9bb420c0d5583f0e302e321e/rts/StgCRun.c#L770 > (give or take a line) strange that it's in the middle of the stp isns block. Anyway, this looks like a cpu exception doesn't it? 
You will need to find out the reg which holds the "exception reason" value and then look on it in your debugger to find out what's going wrong there. Karel From slyich at gmail.com Tue Aug 12 20:31:13 2014 From: slyich at gmail.com (Sergei Trofimovich) Date: Tue, 12 Aug 2014 23:31:13 +0300 Subject: making ./validate run tests on all CPUs by default Message-ID: <20140812233113.64c2e20e@sf> Good evening all! Currently when being ran './validate' script (without any parameters): - builds ghc using 2 parallel jobs - runs testsuite using 2 parallel jobs I propose to change the default value to amount of available CPUs: - build ghc using N+1 parallel jobs - run testsuite using N+1 parallel jobs Pros: - first-time users will get faster ./validate - seasoned users will need less tweaking for buildbots Cons: - for imbalanced boxes (32 cores, 8GB RAM) it might be quite painful to drag box out of swap What do you think about it? Actual patch: https://phabricator.haskell.org/D146 Thanks! -- Sergei -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 181 bytes Desc: not available URL: From alan.zimm at gmail.com Tue Aug 12 20:51:47 2014 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Tue, 12 Aug 2014 22:51:47 +0200 Subject: Broken Data.Data instances In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF10438545@DB3PRD3001MB020.064d.mgd.msft.net> <53D14576.4060503@utwente.nl> <618BE556AADD624C9C918AA5D5911BEF104387F2@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF20E414F3@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: Status update I have worked through a proof of concept update to the GHC AST whereby the type is provided as a parameter to each data type. This was basically a mechanical process of changing type signatures, and required very little actual code changes, being only to initialise the placeholder types. 
The enabling types are

type PostTcType = Type -- Used for slots in the abstract syntax
                       -- where we want to keep slot for a type
                       -- to be added by the type checker...but
                       -- [before typechecking it's just bogus]

type PreTcType = ()    -- used before typechecking

class PlaceHolderType a where
  placeHolderType :: a

instance PlaceHolderType PostTcType where
  placeHolderType = panic "Evaluated the place holder for a PostTcType"

instance PlaceHolderType PreTcType where
  placeHolderType = ()

These are used to replace all instances of PostTcType in the hsSyn types. The change was applied against HEAD as of last Friday, and can be found here: https://github.com/alanz/ghc/tree/wip/landmine-param https://github.com/alanz/haddock/tree/wip/landmine-param They pass 'sh validate' with GHC 7.6.3, and compile against GHC 7.8.3. I have not tried to validate that yet, but have no reason to expect failure. Can I please get some feedback as to whether this is a worthwhile change? It is the first step to getting a generic-traversal-safe AST. Regards Alan On Mon, Jul 28, 2014 at 5:45 PM, Alan & Kim Zimmerman wrote: > FYI I edited the paste at http://lpaste.net/108262 to show the problem > > On Mon, Jul 28, 2014 at 5:41 PM, Alan & Kim Zimmerman > wrote: >> I already tried that, the syntax does not seem to allow it. >> >> I suspect some higher form of sorcery will be required, as alluded to >> here >> http://stackoverflow.com/questions/14133121/can-i-constrain-a-type-family >> >> Alan >> >> On Mon, Jul 28, 2014 at 4:55 PM, wrote: >>> Dear Alan, >>> >>> I would think you would want to constrain the result, i.e. >>> >>> type family (Data (PostTcType a)) => PostTcType a where ... >>> >>> The Data-instance of 'a' doesn't give you much if you have a 'PostTcType >>> a'.
>>> >>> >>> >>> Your point about SYB-recognition of WrongPhase is, of course, a good one >>> ;) >>> >>> >>> >>> Regards, >>> >>> Philip >>> >>> >>> >>> >>> >>> >>> >>> *From:* Alan & Kim Zimmerman [mailto:alan.zimm at gmail.com] >>> *Sent:* maandag 28 juli 2014 14:10 >>> *To:* Holzenspies, P.K.F. (EWI) >>> *Cc:* Simon Peyton Jones; Edward Kmett; ghc-devs at haskell.org >>> >>> *Subject:* Re: Broken Data.Data instances >>> >>> >>> >>> Philip >>> >>> I think the main reason for the WrongPhase thing is to have something >>> that explicitly has a Data and Typeable instance, to allow generic (SYB) >>> traversal. If we can get by without this so much the better. >>> >>> On a related note, is there any way to constrain the 'a' in >>> >>> type family PostTcType a where >>> PostTcType Id = TcType >>> PostTcType other = WrongPhaseTyp >>> >>> to have an instance of Data? >>> >>> I am experimenting with traversals over my earlier paste, and got stuck >>> here (which is the reason the Show instances were commentet out in the >>> original). >>> >>> Alan >>> >>> >>> >>> >>> >>> On Mon, Jul 28, 2014 at 12:30 PM, wrote: >>> >>> Sorry about that? I?m having it out with my terminal server and the >>> server seems to be winning. Here?s another go: >>> >>> >>> >>> I always read the () as ?there?s nothing meaningful to stick in here, >>> but I have to stick in something? so I don?t necessarily want the >>> WrongPhase-thing. There is very old commentary stating it would be lovely >>> if someone could expose the PostTcType as a parameter of the AST-types, but >>> that there are so many types and constructors, that it?s a boring chore to >>> do. Actually, I was hoping haRe would come up to speed to be able to do >>> this. That being said, I think Simon?s idea to turn PostTcType into a >>> type-family is a better way altogether; it also documents intent, i.e. () >>> may not say so much, but PostTcType RdrName says quite a lot. 
>>> >>> >>> >>> Simon commented that a lot of the internal structures aren?t trees, but >>> cyclic graphs, e.g. the TyCon for Maybe references the DataCons for Just >>> and Nothing, which again refer to the TyCon for Maybe. I was wondering >>> whether it would be possible to make stateful lenses for this. Of course, >>> for specific cases, we could do this, but I wonder if it is also possible >>> to have lenses remember the things they visited and not visit them twice. >>> Any ideas on this, Edward? >>> >>> >>> >>> Regards, >>> >>> Philip >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> *From:* Alan & Kim Zimmerman [mailto:alan.zimm at gmail.com] >>> >>> *Sent:* maandag 28 juli 2014 11:14 >>> >>> *To:* Simon Peyton Jones >>> *Cc:* Edward Kmett; Holzenspies, P.K.F. (EWI); ghc-devs >>> >>> >>> *Subject:* Re: Broken Data.Data instances >>> >>> >>> >>> I have made a conceptual example of this here http://lpaste.net/108262 >>> >>> Alan >>> >>> >>> >>> On Mon, Jul 28, 2014 at 9:50 AM, Alan & Kim Zimmerman < >>> alan.zimm at gmail.com> wrote: >>> >>> What about creating a specific type with a single constructor for the >>> "not relevant to this phase" type to be used instead of () above? That >>> would also clearly document what was going on. >>> >>> Alan >>> >>> >>> >>> On Mon, Jul 28, 2014 at 9:14 AM, Simon Peyton Jones < >>> simonpj at microsoft.com> wrote: >>> >>> I've had to mangle a bunch of hand-written Data instances and push out >>> patches to a dozen packages that used to be built this way before I >>> convinced the authors to switch to safer versions of Data. Using virtual >>> smart constructors like we do now in containers and Text where needed can >>> be used to preserve internal invariants, etc. >>> >>> >>> >>> If the ?hand grenades? are the PostTcTypes, etc, then I can explain why >>> they are there. >>> >>> >>> >>> There simply is no sensible type you can put before the type checker >>> runs. 
For example one of the constructors in HsExpr is >>> >>> | HsMultiIf PostTcType [LGRHS id (LHsExpr id)] >>> >>> After type checking we know what type the thing has, but before we have >>> no clue. >>> >>> >>> >>> We could get around this by saying >>> >>> type PostTcType = Maybe TcType >>> >>> but that would mean that every post-typechecking consumer would need a >>> redundant pattern-match on a Just that would always succeed. >>> >>> >>> >>> It?s nothing deeper than that. Adding Maybes everywhere would be >>> possible, just clunky. >>> >>> >>> >>> >>> >>> However we now have type functions, and HsExpr is parameterised by an >>> ?id? parameter, which changes from RdrName (after parsing) to Name (after >>> renaming) to Id (after typechecking). So we could do this: >>> >>> | HsMultiIf (PostTcType id) [LGRHS id (LHsExpr id)] >>> >>> and define PostTcType as a closed type family thus >>> >>> >>> >>> type family PostTcType a where >>> >>> PostTcType Id = TcType >>> >>> PostTcType other = () >>> >>> >>> >>> That would be better than filling it with bottoms. But it might not >>> help with generic programming, because there?d be a component whose type >>> wasn?t fixed. I have no idea how generics and type functions interact. >>> >>> >>> >>> Simon >>> >>> >>> >>> *From:* Edward Kmett [mailto:ekmett at gmail.com] >>> *Sent:* 27 July 2014 18:27 >>> *To:* p.k.f.holzenspies at utwente.nl >>> *Cc:* alan.zimm at gmail.com; Simon Peyton Jones; ghc-devs >>> >>> >>> *Subject:* Re: Broken Data.Data instances >>> >>> >>> >>> Philip, Alan, >>> >>> >>> >>> If you need a hand, I'm happy to pitch in guidance. >>> >>> >>> >>> I've had to mangle a bunch of hand-written Data instances and push out >>> patches to a dozen packages that used to be built this way before I >>> convinced the authors to switch to safer versions of Data. Using virtual >>> smart constructors like we do now in containers and Text where needed can >>> be used to preserve internal invariants, etc. 
>>> >>> >>> >>> This works far better for users of the API than just randomly throwing >>> them a live hand grenade. As I recall, these little grenades in generic >>> programming over the GHC API have been a constant source of pain for >>> libraries like haddock. >>> >>> >>> >>> Simon, >>> >>> >>> >>> It seems to me that regarding circular data structures, nothing prevents >>> you from walking a circular data structure with Data.Data. You can generate >>> a new one productively that looks just like the old with the contents >>> swapped out, it is indistinguishable to an observer if the fixed point is >>> lost, and a clever observer can use observable sharing to get it back, >>> supposing that they are allowed to try. >>> >>> >>> >>> Alternately, we could use the 'virtual constructor' trick there to break >>> the cycle and reintroduce it, but I'm less enthusiastic about that idea, >>> even if it is simpler in many ways. >>> >>> >>> >>> -Edward >>> >>> >>> >>> On Sun, Jul 27, 2014 at 10:17 AM, wrote: >>> >>> Alan, >>> >>> In that case, let's have a short feedback-loop between the two of us. It >>> seems many of these files (Name.lhs, for example) are really stable through >>> the repo-history. It would be nice to have one bigger refactoring all in >>> one go (some of the code could use a polish, a lot of code seems removable). >>> >>> Regards, >>> Philip >>> ------------------------------ >>> >>> *Van:* Alan & Kim Zimmerman [alan.zimm at gmail.com] >>> *Verzonden:* vrijdag 25 juli 2014 13:44 >>> *Aan:* Simon Peyton Jones >>> *CC:* Holzenspies, P.K.F. (EWI); ghc-devs at haskell.org >>> *Onderwerp:* Re: Broken Data.Data instances >>> >>> By the way, I would be happy to attempt this task, if the concept is >>> viable. 
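Simon's closed-type-family sketch quoted in this thread can be tried in miniature. The following is a toy reconstruction: RdrName, Id, TcType, Expr and MultiIf here are stand-ins invented for illustration, not GHC's real definitions.

```haskell
{-# LANGUAGE TypeFamilies #-}
-- Toy model of the closed-type-family proposal quoted above.
module Main where

data RdrName   = RdrName String        -- stand-in: what the parser produces
data Id        = Id String             -- stand-in: what the type checker produces
newtype TcType = TcType String deriving Show

type family PostTcType a where
  PostTcType Id    = TcType            -- after type checking: a real type
  PostTcType other = ()                -- before: nothing sensible to store

data Expr id
  = MultiIf (PostTcType id) [Expr id]  -- the slot only "exists" post-tc
  | Lit Int

preTc :: Expr RdrName
preTc = MultiIf () [Lit 1]             -- OK: PostTcType RdrName reduces to ()

postTc :: Expr Id
postTc = MultiIf (TcType "Int") [Lit 1]

main :: IO ()
main = case postTc of
  MultiIf ty _ -> print ty             -- prints the checked type
  Lit _        -> pure ()
```

Note that the Data-constraint question raised above cannot be answered inside the family itself; a constraint such as (Data (PostTcType a)) would have to be demanded at each generic-traversal use site instead.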
>>> >>> >>> >>> On Thu, Jul 24, 2014 at 11:23 PM, Alan & Kim Zimmerman < >>> alan.zimm at gmail.com> wrote: >>> >>> While we are talking about fixing traversals, how about getting rid >>> of the phase specific panic initialisers for placeHolderType, >>> placeHolderKind and friends? >>> >>> In order to safely traverse with SYB, the following needs to be inserted >>> into all the SYB schemes (see >>> >>> https://github.com/alanz/HaRe/blob/master/src/Language/Haskell/Refact/Utils/GhcUtils.hs >>> ) >>> >>> -- Check the Typeable items >>> checkItemStage1 :: (Typeable a) => SYB.Stage -> a -> Bool >>> checkItemStage1 stage x = (const False `SYB.extQ` postTcType `SYB.extQ` >>> fixity `SYB.extQ` nameSet) x >>> where nameSet = const (stage `elem` [SYB.Parser,SYB.TypeChecker]) >>> :: GHC.NameSet -> Bool >>> postTcType = const (stage < SYB.TypeChecker ) >>> :: GHC.PostTcType -> Bool >>> fixity = const (stage < SYB.Renamer ) >>> :: GHC.Fixity -> Bool >>> >>> And in addition HsCmdTop and ParStmtBlock are initialised with explicit >>> 'undefined values. >>> >>> Perhaps use an initialiser that can have its panic turned off when >>> called via the GHC API? >>> >>> Regards >>> >>> Alan >>> >>> >>> >>> >>> >>> On Thu, Jul 24, 2014 at 11:06 PM, Simon Peyton Jones < >>> simonpj at microsoft.com> wrote: >>> >>> So... does anyone object to me changing these "broken" instances >>> with the ones given by DeriveDataTypeable? >>> >>> That?s fine with me provided (a) the default behaviour is not immediate >>> divergence (which it might well be), and (b) the pitfalls are documented. >>> >>> >>> >>> Simon >>> >>> >>> >>> *From:* "Philip K.F. H?lzenspies" [mailto:p.k.f.holzenspies at utwente.nl] >>> *Sent:* 24 July 2014 18:42 >>> *To:* Simon Peyton Jones >>> *Cc:* ghc-devs at haskell.org >>> *Subject:* Re: Broken Data.Data instances >>> >>> >>> >>> Dear Simon, et al, >>> >>> These are very good points to make for people writing such traversals >>> and queries. 
I would be more than happy to write a page on the pitfalls >>> etc. on the wiki, but in my experience so far, exploring the innards of GHC >>> is tremendously helped by trying small things out and showing (bits of) the >>> intermediate structures. For me, personally, this has always been hindered >>> by the absence of good instances of Data and/or Show (not having to bring >>> DynFlags and not just visualising with the pretty printer are very helpful). >>> >>> So... does anyone object to me changing these "broken" instances with >>> the ones given by DeriveDataTypeable? >>> >>> Also, many of these internal data structures could be provided with >>> useful lenses to improve such traversals further. Anyone ever go at that? >>> Would be people be interested? >>> >>> Regards, >>> Philip >>> >>> *Simon Peyton Jones* >>> >>> 24 Jul 2014 18:22 >>> >>> GHC?s data structures are often mutually recursive. e.g. >>> >>> ? The TyCon for Maybe contains the DataCon for Just >>> >>> ? The DataCon For just contains Just?s type >>> >>> ? Just?s type contains the TyCon for Maybe >>> >>> >>> >>> So any attempt to recursively walk over all these structures, as you >>> would a tree, will fail. >>> >>> >>> >>> Also there?s a lot of sharing. For example, every occurrence of ?map? >>> is a Var, and inside that Var is map?s type, its strictness, its rewrite >>> RULE, etc etc. In walking over a term you may not want to walk over all >>> that stuff at every occurrence of map. >>> >>> >>> >>> Maybe that?s it; I?m not certain since I did not write the Data >>> instances for any of GHC?s types >>> >>> >>> >>> Simon >>> >>> >>> >>> *From:* ghc-devs [mailto:ghc-devs-bounces at haskell.org >>> ] *On Behalf Of * >>> p.k.f.holzenspies at utwente.nl >>> *Sent:* 24 July 2014 16:42 >>> *To:* ghc-devs at haskell.org >>> *Subject:* Broken Data.Data instances >>> >>> >>> >>> Dear GHC-ers, >>> >>> >>> >>> Is there a reason for explicitly broken Data.Data instances? 
Case in >>> point: >>> >>> >>> >>> > instance Data Var where >>> >>> > -- don't traverse? >>> >>> > toConstr _ = abstractConstr "Var" >>> >>> > gunfold _ _ = error "gunfold" >>> >>> > dataTypeOf _ = mkNoRepType "Var" >>> >>> >>> >>> I understand (vaguely) arguments about abstract data types, but this >>> also excludes convenient queries that can, e.g. extract all types from a >>> CoreExpr. I had hoped to do stuff like this: >>> >>> >>> >>> > collect :: (Typeable b, Data a, MonadPlus m) => a -> m b >>> >>> > collect = everything mplus $ mkQ mzero return >>> >>> > >>> >>> > allTypes :: CoreExpr -> [Type] >>> >>> > allTypes = collect >>> >>> >>> >>> Especially when still exploring (parts of) the GHC API, being able to >>> extract things in this fashion is very helpful. SYB?s ?everything? being >>> broken by these instances, not so much. >>> >>> >>> >>> Would a patch ?fixing? these instances be acceptable? >>> >>> >>> >>> Regards, >>> >>> Philip >>> >>> >>> >>> >>> >>> >>> >>> _______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://www.haskell.org/mailman/listinfo/ghc-devs >>> >>> >>> >>> >>> >>> >>> _______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://www.haskell.org/mailman/listinfo/ghc-devs >>> >>> >>> >>> >>> >>> >>> >>> >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 1247 bytes Desc: not available URL: From karel.gardas at centrum.cz Tue Aug 12 21:12:01 2014 From: karel.gardas at centrum.cz (Karel Gardas) Date: Tue, 12 Aug 2014 23:12:01 +0200 Subject: CPP usage in GHC. 
In-Reply-To: References: <53E7DED1.6080609@centrum.cz> <53E8B7F9.1040607@centrum.cz> <53E8BD6C.1010109@centrum.cz> <53E92747.50000@centrum.cz> <53E93406.6070106@centrum.cz> Message-ID: <53EA8321.6080709@centrum.cz> Actually, it's probably not leftover debris but needed code. I just removed -x assembler-with-cpp and got this: "gcc: ghc/Main.hs: linker input file unused because linking not done" -- so we definitely need some -x to set the language even for GNU C. Tested also with old 3.4.x. From the languages available, assembler-with-cpp looks like the best option, unfortunately... Karel On 08/12/14 01:35 AM, Carter Schonwald wrote: > Oooooo. Then it's possibly debris leftover from Austin's initial clang > compatibility work predating the improvements via the settings file work. > > I'm AFK right now, but that probably can be safely removed from ghc, > especially since the configure script for clang cpp adds that anyway > now I think? I might be wrong though. > > On Monday, August 11, 2014, Karel Gardas > wrote: > > On 08/11/14 11:18 PM, Carter Schonwald wrote: > > Why should this flag be passed to cpp when invoked on HS files? > It'd be > easy to expose another field in the settings file for this other > invocation... though I should look more closely at the use site > before > opining :) > > > Hmm, isn't the doCpp function what's invoked when cpp is invoked for HS > files? If so, then -x assembler-with-cpp is already used for HS > files anyway. > > Karel > > > > On Mon, Aug 11, 2014 at 4:27 PM, Karel Gardas > > wrote: > > On 08/11/14 08:48 PM, Carter Schonwald wrote: > > What I'm hearing you say is we actually need TWO sets > of CPP > flags, one > for normal Haskell, and another for the CPP used on the > assembler? > Where's this hardcoding? > > > DriverPipeline.hs -- grep for "assembler-with-cpp" and you > will find it. > > IMHO the best would be to move this "-x assembler-with-cpp" > into the > hs-cpp-flags managed by configure. This way it may be even > possible
This way it may be even > possible > to use system supplied plain cpp instead of cpp builtinto > GNU C. > > Karel > > > From lukexipd at gmail.com Tue Aug 12 23:47:40 2014 From: lukexipd at gmail.com (Luke Iannini) Date: Tue, 12 Aug 2014 16:47:40 -0700 Subject: ARM64 Task Force In-Reply-To: <53EA5BCC.3060406@centrum.cz> References: <53E466F1.90201@centrum.cz> <53E60463.2080608@centrum.cz> <53EA5BCC.3060406@centrum.cz> Message-ID: Hi all, Yahoo, happy news -- I think I've got it. Studying enough of the non-handwritten ASM that I was stepping through led me to make this change: https://github.com/lukexi/ghc/commit/1140e11db07817fcfc12446c74cd5a2bcdf92781 (I think disabling the floating point registers was just a red herring; I'll confirm that next) And I can now call this fib code happily via the FFI: fibs :: [Int] fibs = 1:1:zipWith (+) fibs (tail fibs) foreign export ccall fib :: Int -> Int fib :: Int -> Int fib = (fibs !!) For posterity, more detail on the crashing case earlier (this is fixed now with proper storage and updating of the frame pointer): Calling fib(1) or fib(2) worked, but calling fib(3) triggered the crash. This was the backtrace, where you can see the errant 0x0000000100f05110 frame values. 
(lldb) bt * thread #1: tid = 0xac6ed, 0x0000000100f05110, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=2, address=0x100f05110) frame #0: 0x0000000100f05110 frame #1: 0x0000000100f05110 * frame #2: 0x00000001000ffc9c HelloHaskell`-[SPJViewController viewDidLoad](self=0x0000000144e0cf10, _cmd=0x0000000186ae429a) + 76 at SPJViewController.m:22 frame #3: 0x00000001862f8b70 UIKit`-[UIViewController loadViewIfRequired] + 692 frame #4: 0x00000001862f8880 UIKit`-[UIViewController view] + 32 frame #5: 0x00000001862feeb0 UIKit`-[UIWindow addRootViewControllerViewIfPossible] + 72 frame #6: 0x00000001862fc6d4 UIKit`-[UIWindow _setHidden:forced:] + 296 frame #7: 0x000000018636d2bc UIKit`-[UIWindow makeKeyAndVisible] + 56 frame #8: 0x000000018657ff74 UIKit`-[UIApplication _callInitializationDelegatesForMainScene:transitionContext:] + 2804 frame #9: 0x00000001865824ec UIKit`-[UIApplication _runWithMainScene:transitionContext:completion:] + 1480 frame #10: 0x0000000186580b84 UIKit`-[UIApplication workspaceDidEndTransaction:] + 184 frame #11: 0x0000000189d846ac FrontBoardServices`__31-[FBSSerialQueue performAsync:]_block_invoke + 28 frame #12: 0x0000000181c7a360 CoreFoundation`__CFRUNLOOP_IS_CALLING_OUT_TO_A_BLOCK__ + 20 frame #13: 0x0000000181c79468 CoreFoundation`__CFRunLoopDoBlocks + 312 frame #14: 0x0000000181c77a8c CoreFoundation`__CFRunLoopRun + 1756 frame #15: 0x0000000181ba5664 CoreFoundation`CFRunLoopRunSpecific + 396 frame #16: 0x0000000186363140 UIKit`-[UIApplication _run] + 552 frame #17: 0x000000018635e164 UIKit`UIApplicationMain + 1488 frame #18: 0x0000000100100268 HelloHaskell`main(argc=1, argv=0x000000016fd07a58) + 204 at main.m:24 frame #19: 0x00000001921eea08 libdyld.dylib`start + 4 On Tue, Aug 12, 2014 at 11:24 AM, Karel Gardas wrote: > On 08/12/14 11:03 AM, Luke Iannini wrote: > >> It looks like it's jumping somewhere strange; lldb tells me it's to >> 0x100e05110: .long 0x00000000 ; unknown opcode >> 0x100e05114: .long 0x00000000 ; unknown 
opcode >> 0x100e05118: .long 0x00000000 ; unknown opcode >> 0x100e0511c: .long 0x00000000 ; unknown opcode >> 0x100e05120: .long 0x00000000 ; unknown opcode >> 0x100e05124: .long 0x00000000 ; unknown opcode >> 0x100e05128: .long 0x00000000 ; unknown opcode >> 0x100e0512c: .long 0x00000000 ; unknown opcode >> >> If I put a breakpoint on StgRun and step by instruction, I seem to make >> it to about: >> https://github.com/lukexi/ghc/blob/e99b7a41e64f3ddb9bb420c0d5583f >> 0e302e321e/rts/StgCRun.c#L770 >> (give or take a line) >> > > strange that it's in the middle of the stp isns block. Anyway, this looks > like a cpu exception doesn't it? You will need to find out the reg which > holds the "exception reason" value and then look on it in your debugger to > find out what's going wrong there. > > Karel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bgamari.foss at gmail.com Wed Aug 13 02:30:22 2014 From: bgamari.foss at gmail.com (Ben Gamari) Date: Tue, 12 Aug 2014 22:30:22 -0400 Subject: Grumpy harbormaster Message-ID: <878umtruip.fsf@gmail.com> I submitted two unrelated differentials today, D152 and D150. Somehow the Harbormaster builds of both have failed in the same peculiar way, silently dying apparently after right after finishing the initial ghc-cabal build. It's entirely possible I'm missing something silly here, but neither of the patches touch anything near Cabal or the build system so I'm a bit perplexed. What to do from here? Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available
Type: application/pgp-signature
Size: 472 bytes
Desc: not available
URL:

From alan.zimm at gmail.com Wed Aug 13 06:50:25 2014
From: alan.zimm at gmail.com (Alan & Kim Zimmerman)
Date: Wed, 13 Aug 2014 08:50:25 +0200
Subject: Broken Data.Data instances
In-Reply-To:
References: <618BE556AADD624C9C918AA5D5911BEF10438545@DB3PRD3001MB020.064d.mgd.msft.net>
 <53D14576.4060503@utwente.nl>
 <618BE556AADD624C9C918AA5D5911BEF104387F2@DB3PRD3001MB020.064d.mgd.msft.net>
 <618BE556AADD624C9C918AA5D5911BEF20E414F3@DB3PRD3001MB020.064d.mgd.msft.net>
Message-ID:

And I dipped my toes into the phabricator water, and uploaded a diff to
https://phabricator.haskell.org/D153

I left the lines long for now, so that it is clear that I simply added
parameters to existing type signatures.

On Tue, Aug 12, 2014 at 10:51 PM, Alan & Kim Zimmerman wrote:
> Status update
>
> I have worked through a proof-of-concept update to the GHC AST whereby the
> type is provided as a parameter to each data type. This was basically a
> mechanical process of changing type signatures, and required very few
> actual code changes, these being only to initialise the placeholder types.
>
> The enabling types are
>
> type PostTcType = Type  -- Used for slots in the abstract syntax
>                         -- where we want to keep a slot for a type
>                         -- to be added by the type checker...but
>                         -- before typechecking it's just bogus
>
> type PreTcType = ()     -- used before typechecking
>
> class PlaceHolderType a where
>   placeHolderType :: a
>
> instance PlaceHolderType PostTcType where
>   placeHolderType = panic "Evaluated the place holder for a PostTcType"
>
> instance PlaceHolderType PreTcType where
>   placeHolderType = ()
>
> These are used to replace all instances of PostTcType in the hsSyn types.
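Alan's enabling types refer to GHC internals (`Type`, `panic`), but the placeholder pattern itself can be sketched as a small standalone module. This is a hedged sketch, not GHC's real code: `Ty` stands in for GHC's `Type`, `error` stands in for `panic`, and `HsLit`/`HsInt`/`parsedLit` are illustrative names invented here.

```haskell
-- Stand-ins for GHC internals: 'Ty' plays the role of GHC's Type,
-- and 'error' plays the role of GHC's 'panic'.
data Ty = Ty String deriving (Eq, Show)

type PostTcType = Ty  -- slot filled in by the type checker
type PreTcType  = ()  -- before type checking there is nothing to store

class PlaceHolderType a where
  placeHolderType :: a

instance PlaceHolderType Ty where
  -- forcing this placeholder fails loudly, mirroring GHC's 'panic'
  placeHolderType = error "Evaluated the place holder for a PostTcType"

instance PlaceHolderType () where
  placeHolderType = ()

-- One AST node parameterised by its type slot, as in Alan's patch:
data HsLit ty = HsInt ty Integer deriving Show

-- At the parser stage the slot can be filled without inventing a type:
parsedLit :: HsLit PreTcType
parsedLit = HsInt placeHolderType 42
```

The point of the two instances is that a parser-stage node carries a harmless `()`, while accidentally evaluating a pre-typechecking `PostTcType` slot fails with an explicit message instead of a silent bogus value.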
> > The change was applied against HEAD as of last friday, and can be found > here > > https://github.com/alanz/ghc/tree/wip/landmine-param > https://github.com/alanz/haddock/tree/wip/landmine-param > > They pass 'sh validate' with GHC 7.6.3, and compile against GHC 7.8.3. I > have not tried to validate that yet, have no reason to expect failure. > > > Can I please get some feedback as to whether this is a worthwhile change? > > It is the first step to getting a generic traversal safe AST > > Regards > Alan > > > On Mon, Jul 28, 2014 at 5:45 PM, Alan & Kim Zimmerman > wrote: > >> FYI I edited the paste at http://lpaste.net/108262 to show the problem >> >> >> On Mon, Jul 28, 2014 at 5:41 PM, Alan & Kim Zimmerman < >> alan.zimm at gmail.com> wrote: >> >>> I already tried that, the syntax does not seem to allow it. >>> >>> I suspect some higher form of sorcery will be required, as alluded to >>> here >>> http://stackoverflow.com/questions/14133121/can-i-constrain-a-type-family >>> >>> Alan >>> >>> >>> On Mon, Jul 28, 2014 at 4:55 PM, wrote: >>> >>>> Dear Alan, >>>> >>>> >>>> >>>> I would think you would want to constrain the result, i.e. >>>> >>>> >>>> >>>> type family (Data (PostTcType a)) => PostTcType a where ? >>>> >>>> >>>> >>>> The Data-instance of ?a? doesn?t give you much if you have a >>>> ?PostTcType a?. >>>> >>>> >>>> >>>> Your point about SYB-recognition of WrongPhase is, of course, a good >>>> one ;) >>>> >>>> >>>> >>>> Regards, >>>> >>>> Philip >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> *From:* Alan & Kim Zimmerman [mailto:alan.zimm at gmail.com] >>>> *Sent:* maandag 28 juli 2014 14:10 >>>> *To:* Holzenspies, P.K.F. (EWI) >>>> *Cc:* Simon Peyton Jones; Edward Kmett; ghc-devs at haskell.org >>>> >>>> *Subject:* Re: Broken Data.Data instances >>>> >>>> >>>> >>>> Philip >>>> >>>> I think the main reason for the WrongPhase thing is to have something >>>> that explicitly has a Data and Typeable instance, to allow generic (SYB) >>>> traversal. 
If we can get by without this, so much the better.
>>>>
>>>> On a related note, is there any way to constrain the 'a' in
>>>>
>>>> type family PostTcType a where
>>>>   PostTcType Id    = TcType
>>>>   PostTcType other = WrongPhaseTyp
>>>>
>>>> to have an instance of Data?
>>>>
>>>> I am experimenting with traversals over my earlier paste, and got stuck
>>>> here (which is the reason the Show instances were commented out in the
>>>> original).
>>>>
>>>> Alan
>>>>
>>>> On Mon, Jul 28, 2014 at 12:30 PM, wrote:
>>>>
>>>> Sorry about that... I'm having it out with my terminal server and the
>>>> server seems to be winning. Here's another go:
>>>>
>>>> I always read the () as "there's nothing meaningful to stick in here,
>>>> but I have to stick in something", so I don't necessarily want the
>>>> WrongPhase-thing. There is very old commentary stating it would be
>>>> lovely if someone could expose the PostTcType as a parameter of the
>>>> AST-types, but there are so many types and constructors that it's a
>>>> boring chore to do. Actually, I was hoping haRe would come up to speed
>>>> to be able to do this. That being said, I think Simon's idea to turn
>>>> PostTcType into a type family is a better way altogether; it also
>>>> documents intent, i.e. () may not say so much, but PostTcType RdrName
>>>> says quite a lot.
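Alan's question above (constraining the `a` in the type family to have a `Data` instance) cannot be expressed on the family itself, but a constraint synonym can bundle the requirement and attach it to every generic function. A minimal sketch under stated assumptions: `RdrName`, `Id` and `TcType` are stand-ins, and the name `DataId` is chosen here for illustration, not taken from GHC.

```haskell
{-# LANGUAGE ConstraintKinds #-}
{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE DeriveDataTypeable #-}

import Data.Data

-- Stand-ins for GHC's real RdrName / Id / TcType:
data RdrName = RdrName deriving (Data, Typeable)
data Id      = Id      deriving (Data, Typeable)
type TcType  = Int

-- Simon's closed type family: the slot is a real type only after
-- typechecking, and () at every other phase.
type family PostTcType a where
  PostTcType Id    = TcType
  PostTcType other = ()

-- The family's result cannot be constrained at the definition site,
-- but ConstraintKinds lets us state the requirement once...
type DataId a = (Data a, Data (PostTcType a))

-- ...and carry it on each SYB-style generic function:
countChildren :: DataId a => (a, PostTcType a) -> Int
countChildren node = length (gmapQ (const ()) node)
```

With this shape, a traversal never needs to know which phase it is in: the `DataId a` context guarantees that whatever the family resolves to is itself traversable.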
>>>> >>>> >>>> >>>> Regards, >>>> >>>> Philip >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> *From:* Alan & Kim Zimmerman [mailto:alan.zimm at gmail.com] >>>> >>>> *Sent:* maandag 28 juli 2014 11:14 >>>> >>>> *To:* Simon Peyton Jones >>>> *Cc:* Edward Kmett; Holzenspies, P.K.F. (EWI); ghc-devs >>>> >>>> >>>> *Subject:* Re: Broken Data.Data instances >>>> >>>> >>>> >>>> I have made a conceptual example of this here http://lpaste.net/108262 >>>> >>>> Alan >>>> >>>> >>>> >>>> On Mon, Jul 28, 2014 at 9:50 AM, Alan & Kim Zimmerman < >>>> alan.zimm at gmail.com> wrote: >>>> >>>> What about creating a specific type with a single constructor for the >>>> "not relevant to this phase" type to be used instead of () above? That >>>> would also clearly document what was going on. >>>> >>>> Alan >>>> >>>> >>>> >>>> On Mon, Jul 28, 2014 at 9:14 AM, Simon Peyton Jones < >>>> simonpj at microsoft.com> wrote: >>>> >>>> I've had to mangle a bunch of hand-written Data instances and push out >>>> patches to a dozen packages that used to be built this way before I >>>> convinced the authors to switch to safer versions of Data. Using virtual >>>> smart constructors like we do now in containers and Text where needed can >>>> be used to preserve internal invariants, etc. >>>> >>>> >>>> >>>> If the ?hand grenades? are the PostTcTypes, etc, then I can explain why >>>> they are there. >>>> >>>> >>>> >>>> There simply is no sensible type you can put before the type checker >>>> runs. For example one of the constructors in HsExpr is >>>> >>>> | HsMultiIf PostTcType [LGRHS id (LHsExpr id)] >>>> >>>> After type checking we know what type the thing has, but before we have >>>> no clue. >>>> >>>> >>>> >>>> We could get around this by saying >>>> >>>> type PostTcType = Maybe TcType >>>> >>>> but that would mean that every post-typechecking consumer would need a >>>> redundant pattern-match on a Just that would always succeed. >>>> >>>> >>>> >>>> It?s nothing deeper than that. 
Adding Maybes everywhere would be >>>> possible, just clunky. >>>> >>>> >>>> >>>> >>>> >>>> However we now have type functions, and HsExpr is parameterised by an >>>> ?id? parameter, which changes from RdrName (after parsing) to Name (after >>>> renaming) to Id (after typechecking). So we could do this: >>>> >>>> | HsMultiIf (PostTcType id) [LGRHS id (LHsExpr id)] >>>> >>>> and define PostTcType as a closed type family thus >>>> >>>> >>>> >>>> type family PostTcType a where >>>> >>>> PostTcType Id = TcType >>>> >>>> PostTcType other = () >>>> >>>> >>>> >>>> That would be better than filling it with bottoms. But it might not >>>> help with generic programming, because there?d be a component whose type >>>> wasn?t fixed. I have no idea how generics and type functions interact. >>>> >>>> >>>> >>>> Simon >>>> >>>> >>>> >>>> *From:* Edward Kmett [mailto:ekmett at gmail.com] >>>> *Sent:* 27 July 2014 18:27 >>>> *To:* p.k.f.holzenspies at utwente.nl >>>> *Cc:* alan.zimm at gmail.com; Simon Peyton Jones; ghc-devs >>>> >>>> >>>> *Subject:* Re: Broken Data.Data instances >>>> >>>> >>>> >>>> Philip, Alan, >>>> >>>> >>>> >>>> If you need a hand, I'm happy to pitch in guidance. >>>> >>>> >>>> >>>> I've had to mangle a bunch of hand-written Data instances and push out >>>> patches to a dozen packages that used to be built this way before I >>>> convinced the authors to switch to safer versions of Data. Using virtual >>>> smart constructors like we do now in containers and Text where needed can >>>> be used to preserve internal invariants, etc. >>>> >>>> >>>> >>>> This works far better for users of the API than just randomly throwing >>>> them a live hand grenade. As I recall, these little grenades in generic >>>> programming over the GHC API have been a constant source of pain for >>>> libraries like haddock. 
>>>> >>>> >>>> >>>> Simon, >>>> >>>> >>>> >>>> It seems to me that regarding circular data structures, nothing >>>> prevents you from walking a circular data structure with Data.Data. You can >>>> generate a new one productively that looks just like the old with the >>>> contents swapped out, it is indistinguishable to an observer if the fixed >>>> point is lost, and a clever observer can use observable sharing to get it >>>> back, supposing that they are allowed to try. >>>> >>>> >>>> >>>> Alternately, we could use the 'virtual constructor' trick there to >>>> break the cycle and reintroduce it, but I'm less enthusiastic about that >>>> idea, even if it is simpler in many ways. >>>> >>>> >>>> >>>> -Edward >>>> >>>> >>>> >>>> On Sun, Jul 27, 2014 at 10:17 AM, wrote: >>>> >>>> Alan, >>>> >>>> In that case, let's have a short feedback-loop between the two of us. >>>> It seems many of these files (Name.lhs, for example) are really stable >>>> through the repo-history. It would be nice to have one bigger refactoring >>>> all in one go (some of the code could use a polish, a lot of code seems >>>> removable). >>>> >>>> Regards, >>>> Philip >>>> ------------------------------ >>>> >>>> *Van:* Alan & Kim Zimmerman [alan.zimm at gmail.com] >>>> *Verzonden:* vrijdag 25 juli 2014 13:44 >>>> *Aan:* Simon Peyton Jones >>>> *CC:* Holzenspies, P.K.F. (EWI); ghc-devs at haskell.org >>>> *Onderwerp:* Re: Broken Data.Data instances >>>> >>>> By the way, I would be happy to attempt this task, if the concept is >>>> viable. >>>> >>>> >>>> >>>> On Thu, Jul 24, 2014 at 11:23 PM, Alan & Kim Zimmerman < >>>> alan.zimm at gmail.com> wrote: >>>> >>>> While we are talking about fixing traversals, how about getting rid >>>> of the phase specific panic initialisers for placeHolderType, >>>> placeHolderKind and friends? 
>>>> >>>> In order to safely traverse with SYB, the following needs to be >>>> inserted into all the SYB schemes (see >>>> >>>> https://github.com/alanz/HaRe/blob/master/src/Language/Haskell/Refact/Utils/GhcUtils.hs >>>> ) >>>> >>>> -- Check the Typeable items >>>> checkItemStage1 :: (Typeable a) => SYB.Stage -> a -> Bool >>>> checkItemStage1 stage x = (const False `SYB.extQ` postTcType `SYB.extQ` >>>> fixity `SYB.extQ` nameSet) x >>>> where nameSet = const (stage `elem` [SYB.Parser,SYB.TypeChecker]) >>>> :: GHC.NameSet -> Bool >>>> postTcType = const (stage < SYB.TypeChecker ) >>>> :: GHC.PostTcType -> Bool >>>> fixity = const (stage < SYB.Renamer ) >>>> :: GHC.Fixity -> Bool >>>> >>>> And in addition HsCmdTop and ParStmtBlock are initialised with explicit >>>> 'undefined values. >>>> >>>> Perhaps use an initialiser that can have its panic turned off when >>>> called via the GHC API? >>>> >>>> Regards >>>> >>>> Alan >>>> >>>> >>>> >>>> >>>> >>>> On Thu, Jul 24, 2014 at 11:06 PM, Simon Peyton Jones < >>>> simonpj at microsoft.com> wrote: >>>> >>>> So... does anyone object to me changing these "broken" instances >>>> with the ones given by DeriveDataTypeable? >>>> >>>> That?s fine with me provided (a) the default behaviour is not immediate >>>> divergence (which it might well be), and (b) the pitfalls are documented. >>>> >>>> >>>> >>>> Simon >>>> >>>> >>>> >>>> *From:* "Philip K.F. H?lzenspies" [mailto:p.k.f.holzenspies at utwente.nl] >>>> >>>> *Sent:* 24 July 2014 18:42 >>>> *To:* Simon Peyton Jones >>>> *Cc:* ghc-devs at haskell.org >>>> *Subject:* Re: Broken Data.Data instances >>>> >>>> >>>> >>>> Dear Simon, et al, >>>> >>>> These are very good points to make for people writing such traversals >>>> and queries. I would be more than happy to write a page on the pitfalls >>>> etc. on the wiki, but in my experience so far, exploring the innards of GHC >>>> is tremendously helped by trying small things out and showing (bits of) the >>>> intermediate structures. 
For me, personally, this has always been hindered
>>>> by the absence of good instances of Data and/or Show (not having to
>>>> bring DynFlags and not just visualising with the pretty printer are
>>>> very helpful).
>>>>
>>>> So... does anyone object to me changing these "broken" instances with
>>>> the ones given by DeriveDataTypeable?
>>>>
>>>> Also, many of these internal data structures could be provided with
>>>> useful lenses to improve such traversals further. Anyone ever had a go
>>>> at that? Would people be interested?
>>>>
>>>> Regards,
>>>> Philip
>>>>
>>>> *Simon Peyton Jones*
>>>> 24 Jul 2014 18:22
>>>>
>>>> GHC's data structures are often mutually recursive, e.g.
>>>>
>>>> - The TyCon for Maybe contains the DataCon for Just
>>>> - The DataCon for Just contains Just's type
>>>> - Just's type contains the TyCon for Maybe
>>>>
>>>> So any attempt to recursively walk over all these structures, as you
>>>> would a tree, will fail.
>>>>
>>>> Also there's a lot of sharing. For example, every occurrence of 'map'
>>>> is a Var, and inside that Var is map's type, its strictness, its
>>>> rewrite RULE, etc. In walking over a term you may not want to walk over
>>>> all that stuff at every occurrence of map.
>>>>
>>>> Maybe that's it; I'm not certain, since I did not write the Data
>>>> instances for any of GHC's types.
>>>>
>>>> Simon
>>>>
>>>> *From:* ghc-devs [mailto:ghc-devs-bounces at haskell.org] *On Behalf Of*
>>>> p.k.f.holzenspies at utwente.nl
>>>> *Sent:* 24 July 2014 16:42
>>>> *To:* ghc-devs at haskell.org
>>>> *Subject:* Broken Data.Data instances
>>>>
>>>> Dear GHC-ers,
>>>>
>>>> Is there a reason for explicitly broken Data.Data instances? Case in
>>>> point:
>>>>
>>>> > instance Data Var where
>>>> > -- don't traverse?
>>>> >>>> > toConstr _ = abstractConstr "Var" >>>> >>>> > gunfold _ _ = error "gunfold" >>>> >>>> > dataTypeOf _ = mkNoRepType "Var" >>>> >>>> >>>> >>>> I understand (vaguely) arguments about abstract data types, but this >>>> also excludes convenient queries that can, e.g. extract all types from a >>>> CoreExpr. I had hoped to do stuff like this: >>>> >>>> >>>> >>>> > collect :: (Typeable b, Data a, MonadPlus m) => a -> m b >>>> >>>> > collect = everything mplus $ mkQ mzero return >>>> >>>> > >>>> >>>> > allTypes :: CoreExpr -> [Type] >>>> >>>> > allTypes = collect >>>> >>>> >>>> >>>> Especially when still exploring (parts of) the GHC API, being able to >>>> extract things in this fashion is very helpful. SYB?s ?everything? being >>>> broken by these instances, not so much. >>>> >>>> >>>> >>>> Would a patch ?fixing? these instances be acceptable? >>>> >>>> >>>> >>>> Regards, >>>> >>>> Philip >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> _______________________________________________ >>>> ghc-devs mailing list >>>> ghc-devs at haskell.org >>>> http://www.haskell.org/mailman/listinfo/ghc-devs >>>> >>>> >>>> >>>> >>>> >>>> >>>> _______________________________________________ >>>> ghc-devs mailing list >>>> ghc-devs at haskell.org >>>> http://www.haskell.org/mailman/listinfo/ghc-devs >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 1247 bytes Desc: not available URL: From tuncer.ayaz at gmail.com Wed Aug 13 09:39:56 2014 From: tuncer.ayaz at gmail.com (Tuncer Ayaz) Date: Wed, 13 Aug 2014 11:39:56 +0200 Subject: making ./validate run tests on all CPUs by default In-Reply-To: <20140812233113.64c2e20e@sf> References: <20140812233113.64c2e20e@sf> Message-ID: On Tue, Aug 12, 2014 at 10:31 PM, Sergei Trofimovich wrote: > Good evening all! 
> Currently, when the './validate' script is run (without any parameters):
> - it builds ghc using 2 parallel jobs
> - it runs the testsuite using 2 parallel jobs
>
> I propose to change the default to the number of available CPUs:
> - build ghc using N+1 parallel jobs
> - run the testsuite using N+1 parallel jobs
>
> Pros:
> - first-time users will get a faster ./validate
> - seasoned users will need less tweaking for buildbots
>
> Cons:
> - for imbalanced boxes (32 cores, 8GB RAM) it might
>   be quite painful to drag the box out of swap
>
> What do you think about it?

Isn't the memory use also a problem on boxes with a much lower number of
cores (e.g. the 7.8 space leak(s))? On one machine with 1GB per core, I had
to limit cabal install's parallelism when using 7.8.

Assuming the patch goes in, is there a way to limit parallel jobs on the
command line?

From p.k.f.holzenspies at utwente.nl Wed Aug 13 10:58:38 2014
From: p.k.f.holzenspies at utwente.nl (p.k.f.holzenspies at utwente.nl)
Date: Wed, 13 Aug 2014 10:58:38 +0000
Subject: Broken Data.Data instances
In-Reply-To:
References: <618BE556AADD624C9C918AA5D5911BEF10438545@DB3PRD3001MB020.064d.mgd.msft.net>
 <53D14576.4060503@utwente.nl>
 <618BE556AADD624C9C918AA5D5911BEF104387F2@DB3PRD3001MB020.064d.mgd.msft.net>
 <618BE556AADD624C9C918AA5D5911BEF20E414F3@DB3PRD3001MB020.064d.mgd.msft.net>
Message-ID:

Dear Alan,

I've had a look at the diffs on Phabricator. They're looking good. I have a
few comments / questions:

1) As you said, the renamer and typechecker are heavily interwoven, but
when you *know* that you're between renamer and typechecker (i.e. when
things have 'Name's, but not 'Id's), isn't it better to choose the
PreTcType as argument? (Basically, look for any occurrence of 'Name
PostTcType' and replace with Pre.)

2) I saw your point about being able to distinguish PreTcType from () in
SYB-traversals, but you have now defined PreTcType as a synonym for ().
With an eye on the maximum line-width of 80 characters, and these things
being explicit everywhere as a type parameter (as opposed to a type family
over the exposed id-parameter), how much added value is there still in
having the names PreTcType and PostTcType? Would '()' and 'Type' not be as
clear? I ask because, when I started looking at GHC, I was overwhelmed with
all the names for things in there, most of which then turn out to be
different names for the same thing. The main reason to call the thing
PostTcType in the first place was to give some kind of warning that there
would be nothing there before TC.

3) The variable name 'ptt' is a bit misleading to me. I would use 'ty'.

4) In the cases of the types that have recently been parameterized in what
they contain, is there a reason to have the ty-argument *after* the
content-argument? E.g. why is it 'LGRHS RdrName (LHsExpr RdrName PreTcType)
PreTcType' instead of 'LGRHS RdrName PreTcType (LHsExpr RdrName PreTcType)'?
This may very well be a tiny stylistic thing, but it's worth thinking about.

5) I much prefer deleting code over commenting it out. I understand the
urge, but if you don't remove these lines before your final commit, they
will become noise in the long term. Versioning systems preserve the code
for you. (Example: Convert.void)

Regards,
Philip

From: Alan & Kim Zimmerman [mailto:alan.zimm at gmail.com]
Sent: Wednesday 13 August 2014 8:50
To: Holzenspies, P.K.F. (EWI)
Cc: Simon Peyton Jones; Edward Kmett; ghc-devs at haskell.org
Subject: Re: Broken Data.Data instances

And I dipped my toes into the phabricator water, and uploaded a diff to
https://phabricator.haskell.org/D153

I left the lines long for now, so that it is clear that I simply added
parameters to existing type signatures.

On Tue, Aug 12, 2014 at 10:51 PM, Alan & Kim Zimmerman > wrote:

Status update

I have worked through a proof-of-concept update to the GHC AST whereby the
type is provided as a parameter to each data type.
This was basically a mechanical process of changing type signatures, and required very little actual code changes, being only to initialise the placeholder types. The enabling types are type PostTcType = Type -- Used for slots in the abstract syntax -- where we want to keep slot for a type -- to be added by the type checker...but -- [before typechecking it's just bogus] type PreTcType = () -- used before typechecking class PlaceHolderType a where placeHolderType :: a instance PlaceHolderType PostTcType where placeHolderType = panic "Evaluated the place holder for a PostTcType" instance PlaceHolderType PreTcType where placeHolderType = () These are used to replace all instances of PostTcType in the hsSyn types. The change was applied against HEAD as of last friday, and can be found here https://github.com/alanz/ghc/tree/wip/landmine-param https://github.com/alanz/haddock/tree/wip/landmine-param They pass 'sh validate' with GHC 7.6.3, and compile against GHC 7.8.3. I have not tried to validate that yet, have no reason to expect failure. Can I please get some feedback as to whether this is a worthwhile change? It is the first step to getting a generic traversal safe AST Regards Alan On Mon, Jul 28, 2014 at 5:45 PM, Alan & Kim Zimmerman > wrote: FYI I edited the paste at http://lpaste.net/108262 to show the problem On Mon, Jul 28, 2014 at 5:41 PM, Alan & Kim Zimmerman > wrote: I already tried that, the syntax does not seem to allow it. I suspect some higher form of sorcery will be required, as alluded to here http://stackoverflow.com/questions/14133121/can-i-constrain-a-type-family Alan On Mon, Jul 28, 2014 at 4:55 PM, > wrote: Dear Alan, I would think you would want to constrain the result, i.e. type family (Data (PostTcType a)) => PostTcType a where ? The Data-instance of ?a? doesn?t give you much if you have a ?PostTcType a?. 
Your point about SYB-recognition of WrongPhase is, of course, a good one ;) Regards, Philip From: Alan & Kim Zimmerman [mailto:alan.zimm at gmail.com] Sent: maandag 28 juli 2014 14:10 To: Holzenspies, P.K.F. (EWI) Cc: Simon Peyton Jones; Edward Kmett; ghc-devs at haskell.org Subject: Re: Broken Data.Data instances Philip I think the main reason for the WrongPhase thing is to have something that explicitly has a Data and Typeable instance, to allow generic (SYB) traversal. If we can get by without this so much the better. On a related note, is there any way to constrain the 'a' in type family PostTcType a where PostTcType Id = TcType PostTcType other = WrongPhaseTyp to have an instance of Data? I am experimenting with traversals over my earlier paste, and got stuck here (which is the reason the Show instances were commentet out in the original). Alan On Mon, Jul 28, 2014 at 12:30 PM, > wrote: Sorry about that? I?m having it out with my terminal server and the server seems to be winning. Here?s another go: I always read the () as ?there?s nothing meaningful to stick in here, but I have to stick in something? so I don?t necessarily want the WrongPhase-thing. There is very old commentary stating it would be lovely if someone could expose the PostTcType as a parameter of the AST-types, but that there are so many types and constructors, that it?s a boring chore to do. Actually, I was hoping haRe would come up to speed to be able to do this. That being said, I think Simon?s idea to turn PostTcType into a type-family is a better way altogether; it also documents intent, i.e. () may not say so much, but PostTcType RdrName says quite a lot. Simon commented that a lot of the internal structures aren?t trees, but cyclic graphs, e.g. the TyCon for Maybe references the DataCons for Just and Nothing, which again refer to the TyCon for Maybe. I was wondering whether it would be possible to make stateful lenses for this. 
Of course, for specific cases, we could do this, but I wonder if it is also possible to have lenses remember the things they visited and not visit them twice. Any ideas on this, Edward? Regards, Philip From: Alan & Kim Zimmerman [mailto:alan.zimm at gmail.com] Sent: maandag 28 juli 2014 11:14 To: Simon Peyton Jones Cc: Edward Kmett; Holzenspies, P.K.F. (EWI); ghc-devs Subject: Re: Broken Data.Data instances I have made a conceptual example of this here http://lpaste.net/108262 Alan On Mon, Jul 28, 2014 at 9:50 AM, Alan & Kim Zimmerman > wrote: What about creating a specific type with a single constructor for the "not relevant to this phase" type to be used instead of () above? That would also clearly document what was going on. Alan On Mon, Jul 28, 2014 at 9:14 AM, Simon Peyton Jones > wrote: I've had to mangle a bunch of hand-written Data instances and push out patches to a dozen packages that used to be built this way before I convinced the authors to switch to safer versions of Data. Using virtual smart constructors like we do now in containers and Text where needed can be used to preserve internal invariants, etc. If the ?hand grenades? are the PostTcTypes, etc, then I can explain why they are there. There simply is no sensible type you can put before the type checker runs. For example one of the constructors in HsExpr is | HsMultiIf PostTcType [LGRHS id (LHsExpr id)] After type checking we know what type the thing has, but before we have no clue. We could get around this by saying type PostTcType = Maybe TcType but that would mean that every post-typechecking consumer would need a redundant pattern-match on a Just that would always succeed. It?s nothing deeper than that. Adding Maybes everywhere would be possible, just clunky. However we now have type functions, and HsExpr is parameterised by an ?id? parameter, which changes from RdrName (after parsing) to Name (after renaming) to Id (after typechecking). 
So we could do this: | HsMultiIf (PostTcType id) [LGRHS id (LHsExpr id)] and define PostTcType as a closed type family thus type family PostTcType a where PostTcType Id = TcType PostTcType other = () That would be better than filling it with bottoms. But it might not help with generic programming, because there?d be a component whose type wasn?t fixed. I have no idea how generics and type functions interact. Simon From: Edward Kmett [mailto:ekmett at gmail.com] Sent: 27 July 2014 18:27 To: p.k.f.holzenspies at utwente.nl Cc: alan.zimm at gmail.com; Simon Peyton Jones; ghc-devs Subject: Re: Broken Data.Data instances Philip, Alan, If you need a hand, I'm happy to pitch in guidance. I've had to mangle a bunch of hand-written Data instances and push out patches to a dozen packages that used to be built this way before I convinced the authors to switch to safer versions of Data. Using virtual smart constructors like we do now in containers and Text where needed can be used to preserve internal invariants, etc. This works far better for users of the API than just randomly throwing them a live hand grenade. As I recall, these little grenades in generic programming over the GHC API have been a constant source of pain for libraries like haddock. Simon, It seems to me that regarding circular data structures, nothing prevents you from walking a circular data structure with Data.Data. You can generate a new one productively that looks just like the old with the contents swapped out, it is indistinguishable to an observer if the fixed point is lost, and a clever observer can use observable sharing to get it back, supposing that they are allowed to try. Alternately, we could use the 'virtual constructor' trick there to break the cycle and reintroduce it, but I'm less enthusiastic about that idea, even if it is simpler in many ways. -Edward On Sun, Jul 27, 2014 at 10:17 AM, > wrote: Alan, In that case, let's have a short feedback-loop between the two of us. 
It seems many of these files (Name.lhs, for example) are really stable through the repo-history. It would be nice to have one bigger refactoring all in one go (some of the code could use a polish, a lot of code seems removable). Regards, Philip ________________________________ From: Alan & Kim Zimmerman [alan.zimm at gmail.com] Sent: Friday, 25 July 2014 13:44 To: Simon Peyton Jones CC: Holzenspies, P.K.F. (EWI); ghc-devs at haskell.org Subject: Re: Broken Data.Data instances By the way, I would be happy to attempt this task, if the concept is viable. On Thu, Jul 24, 2014 at 11:23 PM, Alan & Kim Zimmerman > wrote: While we are talking about fixing traversals, how about getting rid of the phase-specific panic initialisers for placeHolderType, placeHolderKind and friends? In order to safely traverse with SYB, the following needs to be inserted into all the SYB schemes (see https://github.com/alanz/HaRe/blob/master/src/Language/Haskell/Refact/Utils/GhcUtils.hs)

-- Check the Typeable items
checkItemStage1 :: (Typeable a) => SYB.Stage -> a -> Bool
checkItemStage1 stage x = (const False `SYB.extQ` postTcType `SYB.extQ` fixity `SYB.extQ` nameSet) x
  where nameSet    = const (stage `elem` [SYB.Parser,SYB.TypeChecker]) :: GHC.NameSet    -> Bool
        postTcType = const (stage < SYB.TypeChecker)                   :: GHC.PostTcType -> Bool
        fixity     = const (stage < SYB.Renamer)                       :: GHC.Fixity     -> Bool

And in addition HsCmdTop and ParStmtBlock are initialised with explicit 'undefined' values. Perhaps use an initialiser that can have its panic turned off when called via the GHC API? Regards Alan On Thu, Jul 24, 2014 at 11:06 PM, Simon Peyton Jones > wrote: So... does anyone object to me changing these "broken" instances with the ones given by DeriveDataTypeable? That's fine with me provided (a) the default behaviour is not immediate divergence (which it might well be), and (b) the pitfalls are documented. Simon From: "Philip K.F. Hölzenspies" [mailto:p.k.f.holzenspies at utwente.nl] Sent: 24 July 2014 18:42 To: Simon Peyton Jones Cc: ghc-devs at haskell.org Subject: Re: Broken Data.Data instances Dear Simon, et al, These are very good points to make for people writing such traversals and queries. I would be more than happy to write a page on the pitfalls etc. on the wiki, but in my experience so far, exploring the innards of GHC is tremendously helped by trying small things out and showing (bits of) the intermediate structures. For me, personally, this has always been hindered by the absence of good instances of Data and/or Show (not having to bring DynFlags and not just visualising with the pretty printer are very helpful). So... does anyone object to me changing these "broken" instances with the ones given by DeriveDataTypeable? Also, many of these internal data structures could be provided with useful lenses to improve such traversals further. Anyone ever go at that? Would people be interested? Regards, Philip Simon Peyton Jones 24 Jul 2014 18:22 GHC's data structures are often mutually recursive. e.g. - The TyCon for Maybe contains the DataCon for Just - The DataCon for Just contains Just's type - Just's type contains the TyCon for Maybe So any attempt to recursively walk over all these structures, as you would a tree, will fail. Also there's a lot of sharing. For example, every occurrence of 'map' is a Var, and inside that Var is map's type, its strictness, its rewrite RULE, etc etc. In walking over a term you may not want to walk over all that stuff at every occurrence of map. Maybe that's it; I'm not certain since I did not write the Data instances for any of GHC's types Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of p.k.f.holzenspies at utwente.nl Sent: 24 July 2014 16:42 To: ghc-devs at haskell.org Subject: Broken Data.Data instances Dear GHC-ers, Is there a reason for explicitly broken Data.Data instances? Case in point:

> instance Data Var where
>   -- don't traverse?
>   toConstr _   = abstractConstr "Var"
>   gunfold _ _  = error "gunfold"
>   dataTypeOf _ = mkNoRepType "Var"

I understand (vaguely) arguments about abstract data types, but this also excludes convenient queries that can, e.g. extract all types from a CoreExpr. I had hoped to do stuff like this:

> collect :: (Typeable b, Data a, MonadPlus m) => a -> m b
> collect = everything mplus $ mkQ mzero return
>
> allTypes :: CoreExpr -> [Type]
> allTypes = collect

Especially when still exploring (parts of) the GHC API, being able to extract things in this fashion is very helpful. SYB's 'everything' being broken by these instances, not so much. Would a patch 'fixing' these instances be acceptable? Regards, Philip _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://www.haskell.org/mailman/listinfo/ghc-devs _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://www.haskell.org/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 1247 bytes Desc: image001.jpg URL: From alan.zimm at gmail.com Wed Aug 13 13:21:24 2014 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Wed, 13 Aug 2014 15:21:24 +0200 Subject: Broken Data.Data instances In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF10438545@DB3PRD3001MB020.064d.mgd.msft.net> <53D14576.4060503@utwente.nl> <618BE556AADD624C9C918AA5D5911BEF104387F2@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF20E414F3@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: Hi Philip Thanks for the feedback. Firstly, I see this as a draft change, as a proof of concept, and as such I deliberately tried to keep things "obvious" until it had been fully worked through.
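[Editor's note: the kind of query Philip describes can be tried end-to-end on a toy AST once Data instances are derived rather than stubbed out. A minimal, self-contained sketch — the `Expr`/`Ty` types are hypothetical stand-ins for `CoreExpr`/`Type`, and `collect` is implemented with base's `Data.Data` (`gmapQ`/`cast`) instead of the syb package's `everything`/`mkQ`:]

```haskell
{-# LANGUAGE DeriveDataTypeable #-}
module Main where

import Data.Data (Data, Typeable, cast, gmapQ)

-- Hypothetical toy stand-ins for GHC's CoreExpr and Type.
data Ty   = TyInt | TyBool              deriving (Show, Eq, Data)
data Expr = Lit Int Ty | App Expr Expr Ty deriving (Show, Data)

-- Collect every subterm of type b, depth-first; this is only possible
-- because the Data instances actually traverse (no abstractConstr stubs).
collect :: (Data a, Typeable b) => a -> [b]
collect x = maybe rest (: rest) (cast x)
  where rest = concat (gmapQ collect x)

allTypes :: Expr -> [Ty]
allTypes = collect

main :: IO ()
main = print (allTypes (App (Lit 1 TyInt) (Lit 2 TyInt) TyBool))
-- prints [TyInt,TyInt,TyBool]
```

With a hand-broken instance like the `Data Var` one quoted above, `gmapQ` never reaches the subterms, which is exactly why `everything`-style queries silently return nothing.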
It helped in managing my own confusion to limit the changes to be things that either HAD to change (PostTcType), or the introduction of new things that did not previously exist (ptt, PreTcType). Naming them the way I did, I was able to make sure that I did not end up making cascading changes to currently good code when I was in a sticky point. This definitely helped in the renamer code. It also makes it clearer to current reviewers that this is in fact a straightforward change. If there is a consensus that this is something worth doing, then I agree on your proposed changes and will work them through. On the void thing I only realised afterwards what was happening; I am now not sure whether it is better to keep the new placeHolderType values or restore void as a synonym for it. It must definitely go if it is not used, though. Alan On Wed, Aug 13, 2014 at 12:58 PM, wrote: > Dear Alan, > > > > I've had a look at the diffs on Phabricator. They're looking good. I have > a few comments / questions: > > > > 1) As you said, the renamer and typechecker are heavily interwoven, but > when you **know** that you're between renamer and typechecker (i.e. when > things have 'Name's, but not 'Id's), isn't it better to choose the > PreTcType as argument? (Basically, look for any occurrence of 'Name > PostTcType' and replace with Pre.) > > > > 2) I saw your point about being able to distinguish PreTcType from () in > SYB-traversals, but you have now defined PreTcType as a synonym for (). > With an eye on the maximum line-width of 80 characters and these things > being explicit everywhere as a type parameter (as opposed to a type family > over the exposed id-parameter), how much added value is there still in > having the names PreTcType and PostTcType? Would '()' and 'Type' not be as > clear? I ask, because when I started looking at GHC, I was overwhelmed with > all the names for things in there, most of which then turn out to be > different names for the same thing. The main reason to call the thing > PostTcType in the first place was to give some kind of warning that there > would be nothing there before TC. > > > > 3) The variable name 'ptt' is a bit misleading to me. I would use 'ty'. > > > > 4) In the cases of the types that have recently been parameterized in what > they contain, is there a reason to have the ty-argument **after** the > content-argument? E.g. why is it 'LGRHS RdrName (LHsExpr RdrName PreTcType) > PreTcType' instead of 'LGRHS RdrName PreTcType (LHsExpr RdrName > PreTcType)'? This may very well be a tiny stylistic thing, but it's worth > thinking about. > > > > 5) I much prefer deleting code over commenting it out. I understand the > urge, but if you don't remove these lines before your final commit, they > will become noise in the long term. Versioning systems preserve the code > for you. (Example: Convert.void) > > > > Regards, > > Philip > > > > > > > > > > > > > > *From:* Alan & Kim Zimmerman [mailto:alan.zimm at gmail.com] > *Sent:* Wednesday, 13 August 2014 8:50 > > *To:* Holzenspies, P.K.F. (EWI) > *Cc:* Simon Peyton Jones; Edward Kmett; ghc-devs at haskell.org > *Subject:* Re: Broken Data.Data instances > > > > And I dipped my toes into the phabricator water, and uploaded a diff to > https://phabricator.haskell.org/D153 > > I left the lines long for now, so that it is clear that I simply added > parameters to existing type signatures. > > > > On Tue, Aug 12, 2014 at 10:51 PM, Alan & Kim Zimmerman < > alan.zimm at gmail.com> wrote: > > Status update > > I have worked through a proof of concept update to the GHC AST whereby the > type is provided as a parameter to each data type. This was basically a > mechanical process of changing type signatures, and required very little > actual code change, being only to initialise the placeholder types.
> The enabling types are
>
> type PostTcType = Type   -- Used for slots in the abstract syntax
>                          -- where we want to keep a slot for a type
>                          -- to be added by the type checker...but
>                          -- [before typechecking it's just bogus]
>
> type PreTcType = ()      -- used before typechecking
>
> class PlaceHolderType a where
>   placeHolderType :: a
>
> instance PlaceHolderType PostTcType where
>   placeHolderType = panic "Evaluated the place holder for a PostTcType"
>
> instance PlaceHolderType PreTcType where
>   placeHolderType = ()
>
> These are used to replace all instances of PostTcType in the hsSyn types. The change was applied against HEAD as of last Friday, and can be found here
>
> https://github.com/alanz/ghc/tree/wip/landmine-param
> https://github.com/alanz/haddock/tree/wip/landmine-param
>
> They pass 'sh validate' with GHC 7.6.3, and compile against GHC 7.8.3. I have not tried to validate that yet, but have no reason to expect failure.
>
> Can I please get some feedback as to whether this is a worthwhile change? It is the first step to getting a generic-traversal-safe AST.
>
> Regards
> Alan
>
> On Mon, Jul 28, 2014 at 5:45 PM, Alan & Kim Zimmerman > wrote: > > FYI I edited the paste at http://lpaste.net/108262 to show the problem > > On Mon, Jul 28, 2014 at 5:41 PM, Alan & Kim Zimmerman > wrote: > > I already tried that, the syntax does not seem to allow it. I suspect some higher form of sorcery will be required, as alluded to here > http://stackoverflow.com/questions/14133121/can-i-constrain-a-type-family > > Alan > > On Mon, Jul 28, 2014 at 4:55 PM, wrote: > > Dear Alan, > > I would think you would want to constrain the result, i.e.
>
> type family (Data (PostTcType a)) => PostTcType a where ...
>
> The Data-instance of 'a' doesn't give you much if you have a 'PostTcType a'.
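[Editor's note: the constraint-on-a-family syntax Philip sketches is not valid Haskell — a type family cannot carry a constraint on its result. One standard workaround is to name the constraint with ConstraintKinds and demand it at each use site. A self-contained toy sketch, where `Id`/`TcType` are hypothetical stand-ins for GHC's types and a `Proxy` pins down the (non-injective) family argument:]

```haskell
{-# LANGUAGE TypeFamilies, ConstraintKinds, FlexibleContexts #-}
module Main where

import Data.Data (Data, gmapQ)
import Data.Proxy (Proxy (..))

-- Hypothetical stand-ins for GHC's Id and TcType.
data Id = Id String
type TcType = String

type family PostTcType a where
  PostTcType Id    = TcType
  PostTcType other = ()

-- The constraint cannot live on the family itself, but a constraint
-- synonym lets every consumer demand it in one word:
type DataPostTc a = Data (PostTcType a)

-- Count the immediate subterms of whatever sits in the slot. The Proxy
-- is needed because PostTcType is not injective, so 'a' cannot be
-- inferred from 'PostTcType a' alone.
slotWidth :: DataPostTc a => Proxy a -> PostTcType a -> Int
slotWidth _ = length . gmapQ (const ())

main :: IO ()
main = do
  print (slotWidth (Proxy :: Proxy Id) "Bool")  -- a String has subterms
  print (slotWidth (Proxy :: Proxy Bool) ())    -- () has none
```

This does not make the constraint automatic — every generic function over the AST still has to state `DataPostTc a` — which is the limitation Alan ran into.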
> > Your point about SYB-recognition of WrongPhase is, of course, a good one ;) > > Regards, > Philip > > *From:* Alan & Kim Zimmerman [mailto:alan.zimm at gmail.com] > *Sent:* Monday, 28 July 2014 14:10 > *To:* Holzenspies, P.K.F. (EWI) > *Cc:* Simon Peyton Jones; Edward Kmett; ghc-devs at haskell.org > *Subject:* Re: Broken Data.Data instances > > Philip > > I think the main reason for the WrongPhase thing is to have something that > explicitly has a Data and Typeable instance, to allow generic (SYB) > traversal. If we can get by without this, so much the better. > > On a related note, is there any way to constrain the 'a' in
>
> type family PostTcType a where
>   PostTcType Id = TcType
>   PostTcType other = WrongPhaseTyp
>
> to have an instance of Data? > > I am experimenting with traversals over my earlier paste, and got stuck > here (which is the reason the Show instances were commented out in the > original). > > Alan > > On Mon, Jul 28, 2014 at 12:30 PM, wrote: > > Sorry about that... I'm having it out with my terminal server and the server > seems to be winning. Here's another go: > > I always read the () as 'there's nothing meaningful to stick in here, but > I have to stick in something', so I don't necessarily want the > WrongPhase-thing. There is very old commentary stating it would be lovely > if someone could expose the PostTcType as a parameter of the AST-types, but > that there are so many types and constructors, that it's a boring chore to > do. Actually, I was hoping haRe would come up to speed to be able to do > this. That being said, I think Simon's idea to turn PostTcType into a > type-family is a better way altogether; it also documents intent, i.e. () > may not say so much, but PostTcType RdrName says quite a lot. > > Simon commented that a lot of the internal structures aren't trees, but > cyclic graphs, e.g. the TyCon for Maybe references the DataCons for Just > and Nothing, which again refer to the TyCon for Maybe. I was wondering > whether it would be possible to make stateful lenses for this. Of course, > for specific cases, we could do this, but I wonder if it is also possible > to have lenses remember the things they visited and not visit them twice. > Any ideas on this, Edward? > > Regards, > Philip > > *From:* Alan & Kim Zimmerman [mailto:alan.zimm at gmail.com] > *Sent:* Monday, 28 July 2014 11:14 > *To:* Simon Peyton Jones > *Cc:* Edward Kmett; Holzenspies, P.K.F. (EWI); ghc-devs > *Subject:* Re: Broken Data.Data instances > > I have made a conceptual example of this here http://lpaste.net/108262 > > Alan > > On Mon, Jul 28, 2014 at 9:50 AM, Alan & Kim Zimmerman > wrote: > > What about creating a specific type with a single constructor for the "not > relevant to this phase" type to be used instead of () above? That would > also clearly document what was going on. > > Alan > > On Mon, Jul 28, 2014 at 9:14 AM, Simon Peyton Jones > wrote: > > I've had to mangle a bunch of hand-written Data instances and push out > patches to a dozen packages that used to be built this way before I > convinced the authors to switch to safer versions of Data. Using virtual > smart constructors like we do now in containers and Text where needed can > be used to preserve internal invariants, etc. > > If the 'hand grenades' are the PostTcTypes, etc, then I can explain why > they are there. > > There simply is no sensible type you can put before the type checker > runs. For example one of the constructors in HsExpr is
>
> | HsMultiIf PostTcType [LGRHS id (LHsExpr id)]
>
> After type checking we know what type the thing has, but before we have no > clue. > > We could get around this by saying
>
> type PostTcType = Maybe TcType
>
> but that would mean that every post-typechecking consumer would need a > redundant pattern-match on a Just that would always succeed. > > It's nothing deeper than that. Adding Maybes everywhere would be > possible, just clunky. > > However we now have type functions, and HsExpr is parameterised by an 'id' > parameter, which changes from RdrName (after parsing) to Name (after > renaming) to Id (after typechecking). So we could do this:
>
> | HsMultiIf (PostTcType id) [LGRHS id (LHsExpr id)]
>
> and define PostTcType as a closed type family thus
>
> type family PostTcType a where
>   PostTcType Id = TcType
>   PostTcType other = ()
>
> That would be better than filling it with bottoms. But it might not help > with generic programming, because there'd be a component whose type wasn't > fixed. I have no idea how generics and type functions interact. > > Simon > > *From:* Edward Kmett [mailto:ekmett at gmail.com] > *Sent:* 27 July 2014 18:27 > *To:* p.k.f.holzenspies at utwente.nl > *Cc:* alan.zimm at gmail.com; Simon Peyton Jones; ghc-devs > *Subject:* Re: Broken Data.Data instances > > Philip, Alan, > > If you need a hand, I'm happy to pitch in guidance. > > I've had to mangle a bunch of hand-written Data instances and push out > patches to a dozen packages that used to be built this way before I > convinced the authors to switch to safer versions of Data. Using virtual > smart constructors like we do now in containers and Text where needed can > be used to preserve internal invariants, etc. > > This works far better for users of the API than just randomly throwing > them a live hand grenade. As I recall, these little grenades in generic > programming over the GHC API have been a constant source of pain for > libraries like haddock.
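[Editor's note: Simon's closed-type-family sketch can be tried out standalone. A minimal toy version — `RdrName`/`Id`/`TcType` are hypothetical stand-ins for GHC's real types, and the guards are plain strings rather than real `LGRHS`s:]

```haskell
{-# LANGUAGE TypeFamilies #-}
module Main where

-- Hypothetical stand-ins for GHC's RdrName / Id / TcType.
data RdrName = RdrName String
data Id      = Id String
type TcType  = String

-- The slot is a real type only after typechecking; () otherwise.
type family PostTcType a where
  PostTcType Id    = TcType
  PostTcType other = ()

-- Guards simplified to strings for the sketch.
data HsMultiIf id = HsMultiIf (PostTcType id) [String]

parsed :: HsMultiIf RdrName
parsed = HsMultiIf () ["x > 0"]          -- no type info yet, and no Maybe

typechecked :: HsMultiIf Id
typechecked = HsMultiIf "Bool" ["x > 0"] -- slot carries the checked type

main :: IO ()
main = case typechecked of
  HsMultiIf ty _ -> putStrLn ty          -- prints Bool
```

Note how the pre-typechecking value needs no `Just`-wrapping and the post-typechecking consumer needs no redundant pattern match, which is exactly the clunkiness the `Maybe TcType` alternative would introduce.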
> > Simon, > > It seems to me that regarding circular data structures, nothing prevents > you from walking a circular data structure with Data.Data. You can generate > a new one productively that looks just like the old with the contents > swapped out, it is indistinguishable to an observer if the fixed point is > lost, and a clever observer can use observable sharing to get it back, > supposing that they are allowed to try. > > Alternately, we could use the 'virtual constructor' trick there to break > the cycle and reintroduce it, but I'm less enthusiastic about that idea, > even if it is simpler in many ways. > > -Edward > > On Sun, Jul 27, 2014 at 10:17 AM, wrote: > > Alan, > > In that case, let's have a short feedback-loop between the two of us. It > seems many of these files (Name.lhs, for example) are really stable through > the repo-history. It would be nice to have one bigger refactoring all in > one go (some of the code could use a polish, a lot of code seems removable). > > Regards, > Philip > ------------------------------ > > *From:* Alan & Kim Zimmerman [alan.zimm at gmail.com] > *Sent:* Friday, 25 July 2014 13:44 > *To:* Simon Peyton Jones > *CC:* Holzenspies, P.K.F. (EWI); ghc-devs at haskell.org > *Subject:* Re: Broken Data.Data instances > > By the way, I would be happy to attempt this task, if the concept is > viable. > > On Thu, Jul 24, 2014 at 11:23 PM, Alan & Kim Zimmerman < > alan.zimm at gmail.com> wrote: > > While we are talking about fixing traversals, how about getting rid of > the phase-specific panic initialisers for placeHolderType, placeHolderKind > and friends? > > In order to safely traverse with SYB, the following needs to be inserted > into all the SYB schemes (see > https://github.com/alanz/HaRe/blob/master/src/Language/Haskell/Refact/Utils/GhcUtils.hs > )
>
> -- Check the Typeable items
> checkItemStage1 :: (Typeable a) => SYB.Stage -> a -> Bool
> checkItemStage1 stage x = (const False `SYB.extQ` postTcType `SYB.extQ` fixity `SYB.extQ` nameSet) x
>   where nameSet    = const (stage `elem` [SYB.Parser,SYB.TypeChecker]) :: GHC.NameSet    -> Bool
>         postTcType = const (stage < SYB.TypeChecker)                   :: GHC.PostTcType -> Bool
>         fixity     = const (stage < SYB.Renamer)                       :: GHC.Fixity     -> Bool
>
> And in addition HsCmdTop and ParStmtBlock are initialised with explicit > 'undefined' values. > > Perhaps use an initialiser that can have its panic turned off when called > via the GHC API? > > Regards > > Alan > > On Thu, Jul 24, 2014 at 11:06 PM, Simon Peyton Jones < > simonpj at microsoft.com> wrote: > > So... does anyone object to me changing these "broken" instances with > the ones given by DeriveDataTypeable? > > That's fine with me provided (a) the default behaviour is not immediate > divergence (which it might well be), and (b) the pitfalls are documented. > > Simon > > *From:* "Philip K.F. Hölzenspies" [mailto:p.k.f.holzenspies at utwente.nl] > *Sent:* 24 July 2014 18:42 > *To:* Simon Peyton Jones > *Cc:* ghc-devs at haskell.org > *Subject:* Re: Broken Data.Data instances > > Dear Simon, et al, > > These are very good points to make for people writing such traversals and > queries. I would be more than happy to write a page on the pitfalls etc. on > the wiki, but in my experience so far, exploring the innards of GHC is > tremendously helped by trying small things out and showing (bits of) the > intermediate structures.
For me, personally, this has always been hindered > by the absence of good instances of Data and/or Show (not having to bring > DynFlags and not just visualising with the pretty printer are very helpful). > > So... does anyone object to me changing these "broken" instances with the > ones given by DeriveDataTypeable? > > Also, many of these internal data structures could be provided with useful > lenses to improve such traversals further. Anyone ever go at that? Would > people be interested? > > Regards, > Philip > > *Simon Peyton Jones* > > 24 Jul 2014 18:22 > > GHC's data structures are often mutually recursive. e.g. > > - The TyCon for Maybe contains the DataCon for Just > > - The DataCon for Just contains Just's type > > - Just's type contains the TyCon for Maybe > > So any attempt to recursively walk over all these structures, as you would > a tree, will fail. > > Also there's a lot of sharing. For example, every occurrence of 'map' is > a Var, and inside that Var is map's type, its strictness, its rewrite RULE, > etc etc. In walking over a term you may not want to walk over all that > stuff at every occurrence of map. > > Maybe that's it; I'm not certain since I did not write the Data instances > for any of GHC's types > > Simon > > *From:* ghc-devs [mailto:ghc-devs-bounces at haskell.org > ] *On Behalf Of * > p.k.f.holzenspies at utwente.nl > *Sent:* 24 July 2014 16:42 > *To:* ghc-devs at haskell.org > *Subject:* Broken Data.Data instances > > Dear GHC-ers, > > Is there a reason for explicitly broken Data.Data instances? Case in point: > > > instance Data Var where > > > -- don't traverse? > > > toConstr _ = abstractConstr "Var" > > > gunfold _ _ = error "gunfold" > > > dataTypeOf _ = mkNoRepType "Var" > > I understand (vaguely) arguments about abstract data types, but this also > excludes convenient queries that can, e.g. extract all types from a > CoreExpr. I had hoped to do stuff like this: > > > collect :: (Typeable b, Data a, MonadPlus m) => a -> m b > > > collect = everything mplus $ mkQ mzero return > > > > > > allTypes :: CoreExpr -> [Type] > > > allTypes = collect > > Especially when still exploring (parts of) the GHC API, being able to > extract things in this fashion is very helpful. SYB's 'everything' being > broken by these instances, not so much. > > Would a patch 'fixing' these instances be acceptable? > > Regards, > > Philip > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 1247 bytes Desc: not available URL: From johan.tibell at gmail.com Wed Aug 13 14:12:13 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Wed, 13 Aug 2014 16:12:13 +0200 Subject: HEADS UP: Running cabal install with the latest GHC In-Reply-To: <1407532120-sup-5118@sabre> References: <1407498991-sup-1278@sabre> <1407532120-sup-5118@sabre> Message-ID: Edward made some changes so that GHC 7.10 is backwards compatible with older cabals (older cabals just can't use the new goodies, that's all), which means that we won't need an earlier release. I'm still aiming for another major release before 7.10. When's 7.10 scheduled for? On Fri, Aug 8, 2014 at 11:17 PM, Edward Z. Yang wrote: > They would be: > > 2b50d0a Fix regression for V09 test library handling. > d3a696a Disable reinstalls with distinct package keys for now. > 1d33c8f Add $pkgkey template variable, and use it for install paths.
> 41610a0 Implement package keys, distinguishing packages built with > different deps/flags > > Unfortunately, these patches fuzz a bit without this next patch: > > 62450f9 Implement "reexported-modules" field, towards fixing GHC bug > #8407. > > When you include that patch, there is only one piece of fuzz from > 41610a0. > > One important caveat is that these patches do rearrange some of the API, > so you wouldn't be able to build GHC 7.8 against these patches. So > maybe we don't want to. > > If we had a way of releasing experimental, non-default picked up > versions, that would be nice (i.e. Cabal 1.21). No warranty, but > easy enough for GHC devs to say 'cabal install Cabal-1.21 > cabal-install-1.21' or something. > > Edward > > Excerpts from Johan Tibell's message of 2014-08-08 22:02:25 +0100: > > I'm not against putting out another release, but I'd prefer to make it on > top > > of 1.20 if possible. Making a 1.22 release takes much more work (RC time, > > etc). Which are the patches in question? Can they easily be cherry-picked > > onto the 1.20 branch? Is there any risk of breakages? > > > > On Fri, Aug 8, 2014 at 2:00 PM, Edward Z. Yang wrote: > > > > > Hey all, > > > > > > SPJ pointed out to me today that if you try to run: > > > > > > cabal install --with-ghc=/path/to/inplace/bin/ghc-stage2 > > > > > > with the latest GHC HEAD, this probably will not actually work, because > > > your system installed version of Cabal is probably too old to deal with > > > the new package key stuff in HEAD. So, how do you get a version > > > of cabal-install (and Cabal) which is new enough to do what you need > > > it to? > > > > > > The trick is to compile Cabal using your /old/ GHC. Step-by-step, this > > > involves cd'ing into libraries/Cabal/Cabal and running `cabal install` > > > (or install it in a sandbox, if you like) and then cd'ing to > > > libraries/Cabal/cabal-install and cabal install'ing that. > > > > > > Cabal devs, is cutting a new release of Cabal and cabal-install in the > > > near future possible? In that case, users can just cabal update; cabal > > > install cabal-install and get a version of Cabal which will work for > > > them. > > > > > > Cheers, > > > Edward > > > _______________________________________________ > > > cabal-devel mailing list > > > cabal-devel at haskell.org > > > http://www.haskell.org/mailman/listinfo/cabal-devel > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From the.dead.shall.rise at gmail.com Wed Aug 13 14:22:14 2014 From: the.dead.shall.rise at gmail.com (Mikhail Glushenkov) Date: Wed, 13 Aug 2014 16:22:14 +0200 Subject: HEADS UP: Running cabal install with the latest GHC In-Reply-To: References: <1407498991-sup-1278@sabre> <1407532120-sup-5118@sabre> Message-ID: Hi, On 13 August 2014 16:12, Johan Tibell wrote: > I'm still aiming for another > major release before 7.10. When's 7.10 scheduled for? End of the year, I think. From the.dead.shall.rise at gmail.com Wed Aug 13 14:50:42 2014 From: the.dead.shall.rise at gmail.com (Mikhail Glushenkov) Date: Wed, 13 Aug 2014 16:50:42 +0200 Subject: HEADS UP: Running cabal install with the latest GHC In-Reply-To: References: <1407498991-sup-1278@sabre> <1407532120-sup-5118@sabre> Message-ID: Hi, On 13 August 2014 16:22, Mikhail Glushenkov wrote: > End of the year, I think. Correction: https://ghc.haskell.org/trac/ghc/wiki/Status/GHC-7.10.1 says "February 2015". From johan.tibell at gmail.com Wed Aug 13 15:02:51 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Wed, 13 Aug 2014 17:02:51 +0200 Subject: How's the integration of DWARF support coming along? Message-ID: Hi, How's the integration of DWARF support coming along? It's probably one of the most important improvements to the runtime in quite some time since it unlocks *two* important features, namely * trustworthy profiling (using e.g.
Linux perf events and other low-overhead, code preserving, sampling profilers), and * stack traces. The former is really important to move our core libraries performance up a notch. Right now -prof is too invasive for it to be useful when evaluating the hotspots in these libraries (which are already often heavily tuned). The latter one is really important for real life Haskell on the server, where you can sometimes get some crash that only happens once a day under very specific conditions. Knowing where the crash happens is then *very* useful. -- Johan -------------- next part -------------- An HTML attachment was scrubbed... URL: From tuncer.ayaz at gmail.com Wed Aug 13 15:07:52 2014 From: tuncer.ayaz at gmail.com (Tuncer Ayaz) Date: Wed, 13 Aug 2014 17:07:52 +0200 Subject: How's the integration of DWARF support coming along? In-Reply-To: References: Message-ID: On Wed, Aug 13, 2014 at 5:02 PM, Johan Tibell wrote: > Hi, > > How's the integration of DWARF support coming along? It's probably > one of the most important improvements to the runtime in quite some > time since it unlocks *two* important features, namely > > * trustworthy profiling (using e.g. Linux perf events and other > low-overhead, code preserving, sampling profilers), and > * stack traces. > > The former is really important to move our core libraries > performance up a notch. Right now -prof is too invasive for it to be > useful when evaluating the hotspots in these libraries (which are > already often heavily tuned). > > The latter one is really important for real life Haskell on the > server, where you can sometimes get some crash that only happens > once a day under very specific conditions. Knowing where the crash > happens is then *very* useful. Doesn't it also enable using gdb and lldb, or is there another missing piece? From johan.tibell at gmail.com Wed Aug 13 15:13:17 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Wed, 13 Aug 2014 17:13:17 +0200 Subject: How's the integration of DWARF support coming along? In-Reply-To: References: Message-ID: On Wed, Aug 13, 2014 at 5:07 PM, Tuncer Ayaz wrote: > On Wed, Aug 13, 2014 at 5:02 PM, Johan Tibell wrote: > > Hi, > > > > How's the integration of DWARF support coming along? It's probably > > one of the most important improvements to the runtime in quite some > > time since it unlocks *two* important features, namely > > > > * trustworthy profiling (using e.g. Linux perf events and other > > low-overhead, code preserving, sampling profilers), and > > * stack traces. > > > > The former is really important to move our core libraries > > performance up a notch. Right now -prof is too invasive for it to be > > useful when evaluating the hotspots in these libraries (which are > > already often heavily tuned). > > > > The latter one is really important for real life Haskell on the > > server, where you can sometimes get some crash that only happens > > once a day under very specific conditions. Knowing where the crash > > happens is then *very* useful. > > Doesn't it also enable using gdb and lldb, or is there another missing > piece? > No, those should also work. It enables *a lot* of generic infrastructure that programmers have written over the years. -------------- next part -------------- An HTML attachment was scrubbed... URL: From omeragacan at gmail.com Wed Aug 13 16:45:54 2014 From: omeragacan at gmail.com (Ömer Sinan Ağacan) Date: Wed, 13 Aug 2014 19:45:54 +0300 Subject: How's the integration of DWARF support coming along? In-Reply-To: References: Message-ID: Is this stack trace support different than what we have currently? (e.g.
the one implemented with GHC.Stack and cost centers) --- Ömer Sinan Ağacan http://osa1.net 2014-08-13 18:02 GMT+03:00 Johan Tibell : > Hi, > > How's the integration of DWARF support coming along? It's probably one of > the most important improvements to the runtime in quite some time since it > unlocks *two* important features, namely > > * trustworthy profiling (using e.g. Linux perf events and other > low-overhead, code preserving, sampling profilers), and > * stack traces. > > The former is really important to move our core libraries performance up a > notch. Right now -prof is too invasive for it to be useful when evaluating > the hotspots in these libraries (which are already often heavily tuned). > > The latter one is really important for real life Haskell on the server, > where you can sometimes get some crash that only happens once a day > under very specific conditions. Knowing where the crash happens is then > *very* useful. > > -- Johan > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > From johan.tibell at gmail.com Wed Aug 13 16:56:59 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Wed, 13 Aug 2014 18:56:59 +0200 Subject: How's the integration of DWARF support coming along? In-Reply-To: References: Message-ID: Yes, it doesn't use any code modification so it doesn't have runtime overhead (except when generating the actual trace) or interfere with compiler optimizations. In other words you can actually have it enabled at all times. It only requires that you compile with -g, just like with a C compiler. On Wed, Aug 13, 2014 at 6:45 PM, Ömer Sinan Ağacan wrote: > Is this stack trace support different than what we have currently? > (e.g. the one implemented with GHC.Stack and cost centers) > > --- > Ömer Sinan Ağacan > http://osa1.net > > > 2014-08-13 18:02 GMT+03:00 Johan Tibell : > > Hi, > > > > How's the integration of DWARF support coming along? It's probably one of > > the most important improvements to the runtime in quite some time since it > > unlocks *two* important features, namely > > > > * trustworthy profiling (using e.g. Linux perf events and other > > low-overhead, code preserving, sampling profilers), and > > * stack traces. > > > > The former is really important to move our core libraries performance up > a > > notch. Right now -prof is too invasive for it to be useful when > evaluating > > the hotspots in these libraries (which are already often heavily tuned). > > > > The latter one is really important for real life Haskell on the server, > > where you can sometimes get some crash that only happens once a day > > under very specific conditions. Knowing where the crash happens is then > > *very* useful. > > > > -- Johan > > > > > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://www.haskell.org/mailman/listinfo/ghc-devs > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From omeragacan at gmail.com Wed Aug 13 17:08:07 2014 From: omeragacan at gmail.com (Ömer Sinan Ağacan) Date: Wed, 13 Aug 2014 20:08:07 +0300 Subject: How's the integration of DWARF support coming along? In-Reply-To: References: Message-ID: Will generated stack traces be different that --- Ömer Sinan Ağacan http://osa1.net 2014-08-13 19:56 GMT+03:00 Johan Tibell : > Yes, it doesn't use any code modification so it doesn't have runtime > overhead (except when generating the actual trace) or interfere with > compiler optimizations. In other words you can actually have it enabled at > all times. It only requires that you compile with -g, just like with a C > compiler.
> > > On Wed, Aug 13, 2014 at 6:45 PM, ?mer Sinan A?acan > wrote: >> >> Is this stack trace support different than what we have currently? >> (e.g. the one implemented with GHC.Stack and cost centers) >> >> --- >> ?mer Sinan A?acan >> http://osa1.net >> >> >> 2014-08-13 18:02 GMT+03:00 Johan Tibell : >> > Hi, >> > >> > How's the integration of DWARF support coming along? It's probably one >> > of >> > the most important improvements to the runtime in quite some time since >> > unlocks *two* important features, namely >> > >> > * trustworthy profiling (using e.g. Linux perf events and other >> > low-overhead, code preserving, sampling profilers), and >> > * stack traces. >> > >> > The former is really important to move our core libraries performance up >> > a >> > notch. Right now -prof is too invasive for it to be useful when >> > evaluating >> > the hotspots in these libraries (which are already often heavily tuned). >> > >> > The latter one is really important for real life Haskell on the server, >> > where you can sometimes can get some crash that only happens once a day >> > under very specific conditions. Knowing where the crash happens is then >> > *very* useful. >> > >> > -- Johan >> > >> > >> > _______________________________________________ >> > ghc-devs mailing list >> > ghc-devs at haskell.org >> > http://www.haskell.org/mailman/listinfo/ghc-devs >> > > > From omeragacan at gmail.com Wed Aug 13 17:13:12 2014 From: omeragacan at gmail.com (=?UTF-8?Q?=C3=96mer_Sinan_A=C4=9Facan?=) Date: Wed, 13 Aug 2014 20:13:12 +0300 Subject: How's the integration of DWARF support coming along? In-Reply-To: References: Message-ID: Sorry for my previous email. (used a gmail shortcut by mistake) We won't have stacks as we have in imperative(without TCO) and strict languages. So we still need some kind of emulation and I think this means some extra run-time operations. I'm wondering about two things: 1) Do we still get same traces as we get using GHC.Stack right now? 
2) If yes, then how can we have that without any runtime costs? Thanks and sorry again for my previous email. --- ?mer Sinan A?acan http://osa1.net 2014-08-13 20:08 GMT+03:00 ?mer Sinan A?acan : > Will generated stack traces be different that > > --- > ?mer Sinan A?acan > http://osa1.net > > > 2014-08-13 19:56 GMT+03:00 Johan Tibell : >> Yes, it doesn't use any code modification so it doesn't have runtime >> overhead (except when generating the actual trace) or interfere with >> compiler optimizations. In other words you can actually have it enabled at >> all time. It only requires that you compile with -g, just like with a C >> compiler. >> >> >> On Wed, Aug 13, 2014 at 6:45 PM, ?mer Sinan A?acan >> wrote: >>> >>> Is this stack trace support different than what we have currently? >>> (e.g. the one implemented with GHC.Stack and cost centers) >>> >>> --- >>> ?mer Sinan A?acan >>> http://osa1.net >>> >>> >>> 2014-08-13 18:02 GMT+03:00 Johan Tibell : >>> > Hi, >>> > >>> > How's the integration of DWARF support coming along? It's probably one >>> > of >>> > the most important improvements to the runtime in quite some time since >>> > unlocks *two* important features, namely >>> > >>> > * trustworthy profiling (using e.g. Linux perf events and other >>> > low-overhead, code preserving, sampling profilers), and >>> > * stack traces. >>> > >>> > The former is really important to move our core libraries performance up >>> > a >>> > notch. Right now -prof is too invasive for it to be useful when >>> > evaluating >>> > the hotspots in these libraries (which are already often heavily tuned). >>> > >>> > The latter one is really important for real life Haskell on the server, >>> > where you can sometimes can get some crash that only happens once a day >>> > under very specific conditions. Knowing where the crash happens is then >>> > *very* useful. 
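For contrast with the DWARF approach under discussion, the "emulated" traces Ömer refers to come from cost-centre stacks queried through GHC.Stack. A minimal sketch of that existing mechanism (behaviour depends on build flags; this is an illustration, not part of the DWARF work):

```haskell
-- Cost-centre stacks via GHC.Stack: this only yields entries when the
-- program is built with profiling (e.g. -prof -fprof-auto); in a normal
-- build the list is empty. The DWARF-based approach discussed here
-- needs no such instrumented rebuild.
import GHC.Stack (currentCallStack, renderStack)

report :: IO ()
report = do
  stack <- currentCallStack          -- cost-centre labels as strings
  if null stack
    then putStrLn "no stack available (not built with -prof)"
    else putStrLn (renderStack stack)

main :: IO ()
main = report
```

Built without `-prof` this prints the fallback line; built with `-prof -fprof-auto` it prints the cost-centre stack leading to `report` — which is exactly the run-time bookkeeping that DWARF unwinding avoids.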
>>> > >>> > -- Johan >>> > >>> > >>> > _______________________________________________ >>> > ghc-devs mailing list >>> > ghc-devs at haskell.org >>> > http://www.haskell.org/mailman/listinfo/ghc-devs >>> > >> >> From johan.tibell at gmail.com Wed Aug 13 17:15:26 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Wed, 13 Aug 2014 19:15:26 +0200 Subject: How's the integration of DWARF support coming along? In-Reply-To: References: Message-ID: Without any overhead we'll get the runtime stack trace, which isn't exactly the same as what we can get with emulation, but has the benefit that we can leave it on in all of our shipped code if we like. This latter is a really crucial property for stack traces to be widely useful. On Wed, Aug 13, 2014 at 7:13 PM, ?mer Sinan A?acan wrote: > Sorry for my previous email. (used a gmail shortcut by mistake) > > We won't have stacks as we have in imperative(without TCO) and strict > languages. So we still need some kind of emulation and I think this > means some extra run-time operations. I'm wondering about two things: > > 1) Do we still get same traces as we get using GHC.Stack right now? > 2) If yes, then how can we have that without any runtime costs? > > Thanks and sorry again for my previous email. > > --- > ?mer Sinan A?acan > http://osa1.net > > > 2014-08-13 20:08 GMT+03:00 ?mer Sinan A?acan : > > Will generated stack traces be different that > > > > --- > > ?mer Sinan A?acan > > http://osa1.net > > > > > > 2014-08-13 19:56 GMT+03:00 Johan Tibell : > >> Yes, it doesn't use any code modification so it doesn't have runtime > >> overhead (except when generating the actual trace) or interfere with > >> compiler optimizations. In other words you can actually have it enabled > at > >> all time. It only requires that you compile with -g, just like with a C > >> compiler. 
> >> > >> > >> On Wed, Aug 13, 2014 at 6:45 PM, ?mer Sinan A?acan < > omeragacan at gmail.com> > >> wrote: > >>> > >>> Is this stack trace support different than what we have currently? > >>> (e.g. the one implemented with GHC.Stack and cost centers) > >>> > >>> --- > >>> ?mer Sinan A?acan > >>> http://osa1.net > >>> > >>> > >>> 2014-08-13 18:02 GMT+03:00 Johan Tibell : > >>> > Hi, > >>> > > >>> > How's the integration of DWARF support coming along? It's probably > one > >>> > of > >>> > the most important improvements to the runtime in quite some time > since > >>> > unlocks *two* important features, namely > >>> > > >>> > * trustworthy profiling (using e.g. Linux perf events and other > >>> > low-overhead, code preserving, sampling profilers), and > >>> > * stack traces. > >>> > > >>> > The former is really important to move our core libraries > performance up > >>> > a > >>> > notch. Right now -prof is too invasive for it to be useful when > >>> > evaluating > >>> > the hotspots in these libraries (which are already often heavily > tuned). > >>> > > >>> > The latter one is really important for real life Haskell on the > server, > >>> > where you can sometimes can get some crash that only happens once a > day > >>> > under very specific conditions. Knowing where the crash happens is > then > >>> > *very* useful. > >>> > > >>> > -- Johan > >>> > > >>> > > >>> > _______________________________________________ > >>> > ghc-devs mailing list > >>> > ghc-devs at haskell.org > >>> > http://www.haskell.org/mailman/listinfo/ghc-devs > >>> > > >> > >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rarash at student.chalmers.se Wed Aug 13 17:31:58 2014 From: rarash at student.chalmers.se (Arash Rouhani) Date: Wed, 13 Aug 2014 19:31:58 +0200 Subject: How's the integration of DWARF support coming along? In-Reply-To: References: Message-ID: <53EBA10E.8060909@student.chalmers.se> Hi Johan! 
I haven't done much (just been lazy) lately; I've tried to benchmark my results but I don't get any sensible results at all yet. Last time Peter said he's working on a more portable way to read dwarf information that doesn't require Linux. But I'm sure he'll give a more accurate update than me soon in this mail thread. As for stack traces, I don't think there are any big tasks left, but I'll summarize what I have in mind: * The Haskell interface is done and I've iterated on it a bit, so it's in a decent shape at least. Some parts still need testing. * I wish I could implement the `forceCaseContinuation` that I've described in my thesis. If someone is good with code generation (I just suck at it, it's probably simple) and is willing to assist me a bit, please say so. :) * I tried benchmarking, I gave up after not getting any useful results. * I'm unfortunately totally incapable of helping out with dwarf debug data generation; only Peter knows that part, particularly I never grasped his theoretical framework of causality in Haskell. * Peter and I have finally agreed on a simple and sensible way to implement `catchWithStack` that has almost all the good properties you would like. I just need to implement it and test it. I can definitely man up and implement this. :) Here's my master thesis btw [1], it should answer Ömer's question of how we retrieve a stack from a language you think won't have a stack. :) Cheers, Arash [1]: http://arashrouhani.com/papers/master-thesis.pdf On 2014-08-13 17:02, Johan Tibell wrote: > Hi, > > How's the integration of DWARF support coming along? It's probably one > of the most important improvements to the runtime in quite some time > since it unlocks *two* important features, namely > > * trustworthy profiling (using e.g. Linux perf events and other > low-overhead, code preserving, sampling profilers), and > * stack traces. > > The former is really important to move our core libraries performance > up a notch.
Right now -prof is too invasive for it to be useful when > evaluating the hotspots in these libraries (which are already often > heavily tuned). > > The latter one is really important for real life Haskell on the > server, where you can sometimes can get some crash that only happens > once a day under very specific conditions. Knowing where the crash > happens is then *very* useful. > > -- Johan > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From johan.tibell at gmail.com Wed Aug 13 18:01:04 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Wed, 13 Aug 2014 20:01:04 +0200 Subject: How's the integration of DWARF support coming along? In-Reply-To: <53EBA10E.8060909@student.chalmers.se> References: <53EBA10E.8060909@student.chalmers.se> Message-ID: What's the minimal amount of work we need to do to just get the dwarf data in the codegen by 7.10 (RC late december) so we can start using e.g. linux perf events to profile Haskell programs? On Wed, Aug 13, 2014 at 7:31 PM, Arash Rouhani wrote: > Hi Johan! > > I haven't done much (just been lazy) lately, I've tried to benchmark my > results but I don't get any sensible results at all yet. > > Last time Peter said he's working on a more portable way to read dwarf > information that doesn't require Linux. But I'm sure he'll give a more > acurate update than me soon in this mail thread. > > As for stack traces, I don't think there's any big tasks left, but I > summarize what I have in mind: > > - The haskell interface is done and I've iterated on it a bit, so it's > in a decent shape at least. Some parts still need testing. > - I wish I could implement the `forceCaseContinuation` that I've > described in my thesis. 
If someone is good with code generation (I just > suck at it, it's probably simple) and is willing to assist me a bit, please > say so. :) > - I tried benchmarking, I gave up after not getting any useful results. > - I'm unfortunately totally incapable to help out with dwarf debug > data generation, only Peter knows that part, particularly I never grasped > his theoretical framework of causality in Haskell. > - Peter and I have finally agreed on a simple and sensible way to > implement `catchWithStack` that have all most good properties you would > like. I just need to implement it and test it. I can definitely man up and > implement this. :) > > Here's my master thesis btw [1], it should answer ?mer's question of how > we retrieve a stack from a language you think won't have a stack. :) > > Cheers, > Arash > > [1]: http://arashrouhani.com/papers/master-thesis.pdf > > > > > > On 2014-08-13 17:02, Johan Tibell wrote: > > Hi, > > How's the integration of DWARF support coming along? It's probably one > of the most important improvements to the runtime in quite some time since > unlocks *two* important features, namely > > * trustworthy profiling (using e.g. Linux perf events and other > low-overhead, code preserving, sampling profilers), and > * stack traces. > > The former is really important to move our core libraries performance up > a notch. Right now -prof is too invasive for it to be useful when > evaluating the hotspots in these libraries (which are already often heavily > tuned). > > The latter one is really important for real life Haskell on the server, > where you can sometimes can get some crash that only happens once a day > under very specific conditions. Knowing where the crash happens is then > *very* useful. 
> > -- Johan > > > > _______________________________________________ > ghc-devs mailing listghc-devs at haskell.orghttp://www.haskell.org/mailman/listinfo/ghc-devs > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From slyich at gmail.com Wed Aug 13 18:25:17 2014 From: slyich at gmail.com (Sergei Trofimovich) Date: Wed, 13 Aug 2014 21:25:17 +0300 Subject: making ./validate run tests on all CPUs by default In-Reply-To: References: <20140812233113.64c2e20e@sf> Message-ID: <20140813212517.65ecf6de@sf> On Wed, 13 Aug 2014 11:39:56 +0200 Tuncer Ayaz wrote: > On Tue, Aug 12, 2014 at 10:31 PM, Sergei Trofimovich wrote: > > Good evening all! > > > > Currently when being ran './validate' script (without any parameters): > > - builds ghc using 2 parallel jobs > > - runs testsuite using 2 parallel jobs > > > > I propose to change the default value to amount of available CPUs: > > - build ghc using N+1 parallel jobs > > - run testsuite using N+1 parallel jobs > > > > Pros: > > - first-time users will get faster ./validate > > - seasoned users will need less tweaking for buildbots > > > > Cons: > > - for imbalanced boxes (32 cores, 8GB RAM) it might > > be quite painful to drag box out of swap > > > > What do you think about it? > > Isn't the memory use also a problem on boxes with a much lower > number of cores (e.g. 7.8 space leak(s))? > > On one machine with 1GB per core, I had to limit cabal install's > parallelism when using 7.8. It's true in general, but I would not expect such a massive growth on ghc source. Current -Rghc-timing shows ~300MBs per ghc process on amd64. The fallout examples are HsSyn and cabal's PackageDescription modules. ghc's build system is a bit different from Cabal's: - Cabal runs one 'ghc --make' instance for a whole package. 
It leads to massive RAM usage in case of a multitude of modules (highlighting-kate and qthaskell come to mind). - ghc's build system uses one 'ghc -c' execution for a single .hs file (roughly) > Assuming the patch goes in, is there a way to limit parallel jobs > on the command line? The mechanism to set the limit manually is the same as before: CPUS=8 ./validate It's the default that is proposed to be changed. -- Sergei -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 181 bytes Desc: not available URL: From rarash at student.chalmers.se Wed Aug 13 18:46:29 2014 From: rarash at student.chalmers.se (Arash Rouhani) Date: Wed, 13 Aug 2014 20:46:29 +0200 Subject: How's the integration of DWARF support coming along? In-Reply-To: References: <53EBA10E.8060909@student.chalmers.se> Message-ID: <53EBB285.1050207@student.chalmers.se>
> > As for stack traces, I don't think there's any big tasks left, but > I summarize what I have in mind: > > * The haskell interface is done and I've iterated on it a bit, > so it's in a decent shape at least. Some parts still need testing. > * I wish I could implement the `forceCaseContinuation` that I've > described in my thesis. If someone is good with code > generation (I just suck at it, it's probably simple) and is > willing to assist me a bit, please say so. :) > * I tried benchmarking, I gave up after not getting any useful > results. > * I'm unfortunately totally incapable to help out with dwarf > debug data generation, only Peter knows that part, > particularly I never grasped his theoretical framework of > causality in Haskell. > * Peter and I have finally agreed on a simple and sensible way > to implement `catchWithStack` that have all most good > properties you would like. I just need to implement it and > test it. I can definitely man up and implement this. :) > > Here's my master thesis btw [1], it should answer ?mer's question > of how we retrieve a stack from a language you think won't have a > stack. :) > > Cheers, > Arash > > [1]: http://arashrouhani.com/papers/master-thesis.pdf > > > > > > On 2014-08-13 17:02, Johan Tibell wrote: >> Hi, >> >> How's the integration of DWARF support coming along? It's >> probably one of the most important improvements to the runtime in >> quite some time since unlocks *two* important features, namely >> >> * trustworthy profiling (using e.g. Linux perf events and other >> low-overhead, code preserving, sampling profilers), and >> * stack traces. >> >> The former is really important to move our core libraries >> performance up a notch. Right now -prof is too invasive for it to >> be useful when evaluating the hotspots in these libraries (which >> are already often heavily tuned). 
>> >> The latter one is really important for real life Haskell on the >> server, where you can sometimes can get some crash that only >> happens once a day under very specific conditions. Knowing where >> the crash happens is then *very* useful. >> >> -- Johan >> >> >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From scpmw at leeds.ac.uk Wed Aug 13 18:49:45 2014 From: scpmw at leeds.ac.uk (Peter Wortmann) Date: Wed, 13 Aug 2014 19:49:45 +0100 Subject: How's the integration of DWARF support coming along? In-Reply-To: References: <53EBA10E.8060909@student.chalmers.se> Message-ID: At this point I have a bit more time on my hands again (modulo post-thesis vacations), but we are basically still in ?review hell?. I think ?just? for perf_events support we?d need the following patches[1]: 1. Source notes (Core support) 2. Source notes (CorePrep & Stg support) 3. Source notes (Cmm support) 4. Tick scopes 5. Debug data extraction (NCG support) 6. Generate .loc/.file directives We have a basic ?okay? from the Simons up to number 2 (conditional on better documentation). Number 4 sticks out because Simon Marlow wanted to have a closer look at it - this is basically about how to maintain source ticks in a robust fashion on the Cmm level (see also section 5.5 of my thesis[2]). Meanwhile I have ported NCG DWARF generation over to Mac Os, and am working on reviving LLVM support. My plan was to check that I didn?t accidentally break Linux support, then push for review again in a week or so (Phab?). 
Greetings, Peter [1] https://github.com/scpmw/ghc/commits/profiling-import [2] http://www.personal.leeds.ac.uk/~scpmw/static/thesis.pdf On 13 Aug 2014, at 20:01, Johan Tibell > wrote: What's the minimal amount of work we need to do to just get the dwarf data in the codegen by 7.10 (RC late december) so we can start using e.g. linux perf events to profile Haskell programs? On Wed, Aug 13, 2014 at 7:31 PM, Arash Rouhani > wrote: Hi Johan! I haven't done much (just been lazy) lately, I've tried to benchmark my results but I don't get any sensible results at all yet. Last time Peter said he's working on a more portable way to read dwarf information that doesn't require Linux. But I'm sure he'll give a more acurate update than me soon in this mail thread. As for stack traces, I don't think there's any big tasks left, but I summarize what I have in mind: * The haskell interface is done and I've iterated on it a bit, so it's in a decent shape at least. Some parts still need testing. * I wish I could implement the `forceCaseContinuation` that I've described in my thesis. If someone is good with code generation (I just suck at it, it's probably simple) and is willing to assist me a bit, please say so. :) * I tried benchmarking, I gave up after not getting any useful results. * I'm unfortunately totally incapable to help out with dwarf debug data generation, only Peter knows that part, particularly I never grasped his theoretical framework of causality in Haskell. * Peter and I have finally agreed on a simple and sensible way to implement `catchWithStack` that have all most good properties you would like. I just need to implement it and test it. I can definitely man up and implement this. :) Here's my master thesis btw [1], it should answer ?mer's question of how we retrieve a stack from a language you think won't have a stack. 
:) Cheers, Arash [1]: http://arashrouhani.com/papers/master-thesis.pdf On 2014-08-13 17:02, Johan Tibell wrote: Hi, How's the integration of DWARF support coming along? It's probably one of the most important improvements to the runtime in quite some time since unlocks *two* important features, namely * trustworthy profiling (using e.g. Linux perf events and other low-overhead, code preserving, sampling profilers), and * stack traces. The former is really important to move our core libraries performance up a notch. Right now -prof is too invasive for it to be useful when evaluating the hotspots in these libraries (which are already often heavily tuned). The latter one is really important for real life Haskell on the server, where you can sometimes can get some crash that only happens once a day under very specific conditions. Knowing where the crash happens is then *very* useful. -- Johan _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://www.haskell.org/mailman/listinfo/ghc-devs _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://www.haskell.org/mailman/listinfo/ghc-devs From johan.tibell at gmail.com Wed Aug 13 19:29:05 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Wed, 13 Aug 2014 21:29:05 +0200 Subject: How's the integration of DWARF support coming along? In-Reply-To: References: <53EBA10E.8060909@student.chalmers.se> Message-ID: Seeing the code on Phab it two weeks sounds great. Do you mind expanding on what tick scopes are. It sounds scarily like something that happens at runtime. :) On Wed, Aug 13, 2014 at 8:49 PM, Peter Wortmann wrote: > > > At this point I have a bit more time on my hands again (modulo post-thesis > vacations), but we are basically still in ?review hell?. > > I think ?just? for perf_events support we?d need the following patches[1]: > 1. Source notes (Core support) > 2. Source notes (CorePrep & Stg support) > 3. 
Source notes (Cmm support) > 4. Tick scopes > 5. Debug data extraction (NCG support) > 6. Generate .loc/.file directives > > We have a basic ?okay? from the Simons up to number 2 (conditional on > better documentation). Number 4 sticks out because Simon Marlow wanted to > have a closer look at it - this is basically about how to maintain source > ticks in a robust fashion on the Cmm level (see also section 5.5 of my > thesis[2]). > > Meanwhile I have ported NCG DWARF generation over to Mac Os, and am > working on reviving LLVM support. My plan was to check that I didn?t > accidentally break Linux support, then push for review again in a week or > so (Phab?). > > Greetings, > Peter > > [1] https://github.com/scpmw/ghc/commits/profiling-import > [2] http://www.personal.leeds.ac.uk/~scpmw/static/thesis.pdf > > On 13 Aug 2014, at 20:01, Johan Tibell johan.tibell at gmail.com>> wrote: > > What's the minimal amount of work we need to do to just get the dwarf data > in the codegen by 7.10 (RC late december) so we can start using e.g. linux > perf events to profile Haskell programs? > > > On Wed, Aug 13, 2014 at 7:31 PM, Arash Rouhani > wrote: > Hi Johan! > > I haven't done much (just been lazy) lately, I've tried to benchmark my > results but I don't get any sensible results at all yet. > > Last time Peter said he's working on a more portable way to read dwarf > information that doesn't require Linux. But I'm sure he'll give a more > acurate update than me soon in this mail thread. > > As for stack traces, I don't think there's any big tasks left, but I > summarize what I have in mind: > > * The haskell interface is done and I've iterated on it a bit, so it's > in a decent shape at least. Some parts still need testing. > * I wish I could implement the `forceCaseContinuation` that I've > described in my thesis. If someone is good with code generation (I just > suck at it, it's probably simple) and is willing to assist me a bit, please > say so. 
:) > * I tried benchmarking, I gave up after not getting any useful results. > * I'm unfortunately totally incapable to help out with dwarf debug data > generation, only Peter knows that part, particularly I never grasped his > theoretical framework of causality in Haskell. > * Peter and I have finally agreed on a simple and sensible way to > implement `catchWithStack` that have all most good properties you would > like. I just need to implement it and test it. I can definitely man up and > implement this. :) > > Here's my master thesis btw [1], it should answer ?mer's question of how > we retrieve a stack from a language you think won't have a stack. :) > > Cheers, > Arash > > [1]: http://arashrouhani.com/papers/master-thesis.pdf > > > > > > On 2014-08-13 17:02, Johan Tibell wrote: > Hi, > > How's the integration of DWARF support coming along? It's probably one of > the most important improvements to the runtime in quite some time since > unlocks *two* important features, namely > > * trustworthy profiling (using e.g. Linux perf events and other > low-overhead, code preserving, sampling profilers), and > * stack traces. > > The former is really important to move our core libraries performance up a > notch. Right now -prof is too invasive for it to be useful when evaluating > the hotspots in these libraries (which are already often heavily tuned). > > The latter one is really important for real life Haskell on the server, > where you can sometimes can get some crash that only happens once a day > under very specific conditions. Knowing where the crash happens is then > *very* useful. 
> > -- Johan > > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From scpmw at leeds.ac.uk Wed Aug 13 20:29:24 2014 From: scpmw at leeds.ac.uk (Peter Wortmann) Date: Wed, 13 Aug 2014 21:29:24 +0100 Subject: How's the integration of DWARF support coming along? In-Reply-To: References: <53EBA10E.8060909@student.chalmers.se> Message-ID: <9EA1C32B-2412-41D5-9EE0-4D4F17524FA5@leeds.ac.uk> Johan Tibell wrote: Do you mind expanding on what tick scopes are. It sounds scarily like something that happens at runtime. :) It?s a pretty basic problem - for Core we can always walk the tree upwards to find some source ticks that might be useful. Cmm on the other hand is flat: Given one block without any annotations on its own, there is no robust way we could "look around" for debugging information. This is especially tricky because Cmm stages want to be able to liberally add or remove blocks. So let?s say we have an extra GC block added: Which source location should we see as associated with it? And if two blocks are combined using common block elimination: What is now the best source location? And how do we express all this in a way that won?t make code generation more complicated? The latter is an important consideration, because code generation is very irregular in how it treats code - often alternating between accumulating it in a monad and passing it around by hand. I have found it quite tricky to find a good solution in this design space - the current idea is that we associate every piece of generated Cmm with a ?tick scope?, which decides how far a tick will ?apply?. 
So for example a GC block would be generated using the same tick scope as the function?s entry block, and therefore will get all ticks associated with the function?s top level, which is probably the best choice. On the other hand, for merging blocks we can ?combine? the scopes in a way that guarantees that we find (at least) the same ticks as before, therefore losing no information. And yes, this design could be simplified somewhat for pure DWARF generation. After all, for that particular purpose every tick scope will just boil down to a single source location anyway. So we could simply replace scopes with the source link right away. But I think it would come down to about the same code complexity, plus having a robust structure around makes it easier to carry along extra information such as unwind information, extra source ticks or the generating Core. Greetings, Peter On Wed, Aug 13, 2014 at 8:49 PM, Peter Wortmann > wrote: At this point I have a bit more time on my hands again (modulo post-thesis vacations), but we are basically still in ?review hell?. I think ?just? for perf_events support we?d need the following patches[1]: 1. Source notes (Core support) 2. Source notes (CorePrep & Stg support) 3. Source notes (Cmm support) 4. Tick scopes 5. Debug data extraction (NCG support) 6. Generate .loc/.file directives We have a basic ?okay? from the Simons up to number 2 (conditional on better documentation). Number 4 sticks out because Simon Marlow wanted to have a closer look at it - this is basically about how to maintain source ticks in a robust fashion on the Cmm level (see also section 5.5 of my thesis[2]). Meanwhile I have ported NCG DWARF generation over to Mac Os, and am working on reviving LLVM support. My plan was to check that I didn?t accidentally break Linux support, then push for review again in a week or so (Phab?). 
Greetings, Peter [1] https://github.com/scpmw/ghc/commits/profiling-import [2] http://www.personal.leeds.ac.uk/~scpmw/static/thesis.pdf On 13 Aug 2014, at 20:01, Johan Tibell >> wrote: What's the minimal amount of work we need to do to just get the dwarf data in the codegen by 7.10 (RC late december) so we can start using e.g. linux perf events to profile Haskell programs? On Wed, Aug 13, 2014 at 7:31 PM, Arash Rouhani >> wrote: Hi Johan! I haven't done much (just been lazy) lately, I've tried to benchmark my results but I don't get any sensible results at all yet. Last time Peter said he's working on a more portable way to read dwarf information that doesn't require Linux. But I'm sure he'll give a more acurate update than me soon in this mail thread. As for stack traces, I don't think there's any big tasks left, but I summarize what I have in mind: * The haskell interface is done and I've iterated on it a bit, so it's in a decent shape at least. Some parts still need testing. * I wish I could implement the `forceCaseContinuation` that I've described in my thesis. If someone is good with code generation (I just suck at it, it's probably simple) and is willing to assist me a bit, please say so. :) * I tried benchmarking, I gave up after not getting any useful results. * I'm unfortunately totally incapable to help out with dwarf debug data generation, only Peter knows that part, particularly I never grasped his theoretical framework of causality in Haskell. * Peter and I have finally agreed on a simple and sensible way to implement `catchWithStack` that have all most good properties you would like. I just need to implement it and test it. I can definitely man up and implement this. :) Here's my master thesis btw [1], it should answer ?mer's question of how we retrieve a stack from a language you think won't have a stack. 
:) Cheers, Arash [1]: http://arashrouhani.com/papers/master-thesis.pdf On 2014-08-13 17:02, Johan Tibell wrote: Hi, How's the integration of DWARF support coming along? It's probably one of the most important improvements to the runtime in quite some time since unlocks *two* important features, namely * trustworthy profiling (using e.g. Linux perf events and other low-overhead, code preserving, sampling profilers), and * stack traces. The former is really important to move our core libraries performance up a notch. Right now -prof is too invasive for it to be useful when evaluating the hotspots in these libraries (which are already often heavily tuned). The latter one is really important for real life Haskell on the server, where you can sometimes can get some crash that only happens once a day under very specific conditions. Knowing where the crash happens is then *very* useful. -- Johan _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org> http://www.haskell.org/mailman/listinfo/ghc-devs _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org> http://www.haskell.org/mailman/listinfo/ghc-devs From nikita at karetnikov.org Wed Aug 13 21:30:00 2014 From: nikita at karetnikov.org (Nikita Karetnikov) Date: Thu, 14 Aug 2014 01:30:00 +0400 Subject: Building HEAD (e83e873d) on mips64el: unknown package: old-locale-1.0.0.6 Message-ID: <87d2c4krhj.fsf@karetnikov.org> $ git clone git://github.com/ghc/ghc.git ghc-github $ cd ghc-github $ ./sync-all get $ perl boot $ ./configure $ make [?] 
"inplace/bin/ghc-stage1" -this-package-key rts -shared -dynamic -dynload deploy -no-auto-link-packages -Lrts/dist/build -lffi -optl-Wl,-rpath -optl-Wl,'$ORIGIN' -optl-Wl,-zorigin `cat rts/dist/libs.depend` rts/dist/build/Adjustor.dyn_o rts/dist/build/Arena.dyn_o rts/dist/build/Capability.dyn_o rts/dist/build/CheckUnload.dyn_o rts/dist/build/ClosureFlags.dyn_o rts/dist/build/Disassembler.dyn_o rts/dist/build/FileLock.dyn_o rts/dist/build/Globals.dyn_o rts/dist/build/Hash.dyn_o rts/dist/build/Hpc.dyn_o rts/dist/build/HsFFI.dyn_o rts/dist/build/Inlines.dyn_o rts/dist/build/Interpreter.dyn_o rts/dist/build/LdvProfile.dyn_o rts/dist/build/Linker.dyn_o rts/dist/build/Messages.dyn_o rts/dist/build/OldARMAtomic.dyn_o rts/dist/build/Papi.dyn_o rts/dist/build/Printer.dyn_o rts/dist/build/ProfHeap.dyn_o rts/dist/build/Profiling.dyn_o rts/dist/build/Proftimer.dyn_o rts/dist/build/RaiseAsync.dyn_o rts/dist/build/RetainerProfile.dyn_o rts/dist/build/RetainerSet.dyn_o rts/dist/build/RtsAPI.dyn_o rts/dist/build/RtsDllMain.dyn_o rts/dist/build/RtsFlags.dyn_o rts/dist/build/RtsMain.dyn_o rts/dist/build/RtsMessages.dyn_o rts/dist/build/RtsStartup.dyn_o rts/dist/build/RtsUtils.dyn_o rts/dist/build/STM.dyn_o rts/dist/build/Schedule.dyn_o rts/dist/build/Sparks.dyn_o rts/dist/build/Stable.dyn_o rts/dist/build/Stats.dyn_o rts/dist/build/StgCRun.dyn_o rts/dist/build/StgPrimFloat.dyn_o rts/dist/build/Task.dyn_o rts/dist/build/ThreadLabels.dyn_o rts/dist/build/ThreadPaused.dyn_o rts/dist/build/Threads.dyn_o rts/dist/build/Ticky.dyn_o rts/dist/build/Timer.dyn_o rts/dist/build/Trace.dyn_o rts/dist/build/WSDeque.dyn_o rts/dist/build/Weak.dyn_o rts/dist/build/hooks/FlagDefaults.dyn_o rts/dist/build/hooks/MallocFail.dyn_o rts/dist/build/hooks/OnExit.dyn_o rts/dist/build/hooks/OutOfHeap.dyn_o rts/dist/build/hooks/StackOverflow.dyn_o rts/dist/build/sm/BlockAlloc.dyn_o rts/dist/build/sm/Compact.dyn_o rts/dist/build/sm/Evac.dyn_o rts/dist/build/sm/GC.dyn_o rts/dist/build/sm/GCAux.dyn_o 
rts/dist/build/sm/GCUtils.dyn_o rts/dist/build/sm/MBlock.dyn_o rts/dist/build/sm/MarkWeak.dyn_o rts/dist/build/sm/Sanity.dyn_o rts/dist/build/sm/Scav.dyn_o rts/dist/build/sm/Storage.dyn_o rts/dist/build/sm/Sweep.dyn_o rts/dist/build/eventlog/EventLog.dyn_o rts/dist/build/posix/GetEnv.dyn_o rts/dist/build/posix/GetTime.dyn_o rts/dist/build/posix/Itimer.dyn_o rts/dist/build/posix/OSMem.dyn_o rts/dist/build/posix/OSThreads.dyn_o rts/dist/build/posix/Select.dyn_o rts/dist/build/posix/Signals.dyn_o rts/dist/build/posix/TTY.dyn_o rts/dist/build/Apply.dyn_o rts/dist/build/Exception.dyn_o rts/dist/build/HeapStackCheck.dyn_o rts/dist/build/PrimOps.dyn_o rts/dist/build/StgMiscClosures.dyn_o rts/dist/build/StgStartup.dyn_o rts/dist/build/StgStdThunks.dyn_o rts/dist/build/Updates.dyn_o rts/dist/build/AutoApply.dyn_o -fPIC -dynamic -H32m -O -Iincludes -Iincludes/dist -Iincludes/dist-derivedconstants/header -Iincludes/dist-ghcconstants/header -Irts -Irts/dist/build -DCOMPILING_RTS -this-package-key rts -optc-DNOSMP -dcmm-lint -i -irts -irts/dist/build -irts/dist/build/autogen -Irts/dist/build -Irts/dist/build/autogen -O2 -fno-use-rpaths -optl-Wl,-zorigin -o rts/dist/build/libHSrts-ghc7.9.20140809.so /usr/bin/ld: rts/dist/build/Adjustor.dyn_o: relocation R_MIPS_HI16 against `__gnu_local_gp' can not be used when making a shared object; recompile with -fPIC rts/dist/build/Adjustor.dyn_o: could not read symbols: Bad value collect2: ld returned 1 exit status make[1]: *** [rts/dist/build/libHSrts-ghc7.9.20140809.so] Error 1 make[1]: *** Waiting for unfinished jobs.... 
make: *** [all] Error 2 After making this change (see #8857) $ diff -Nru config.mk.in-orig config.mk.in --- config.mk.in-orig 2014-08-11 04:39:24.257232224 +0000 +++ config.mk.in 2014-08-11 04:41:50.666057938 +0000 @@ -99,7 +99,8 @@ x86_64-unknown-mingw32 \ i386-unknown-mingw32 \ sparc-sun-solaris2 \ - sparc-unknown-linux + sparc-unknown-linux \ + mipsel-unknown-linux ifeq "$(SOLARIS_BROKEN_SHLD)" "YES" NoSharedLibsPlatformList += i386-unknown-solaris2 and running $ make distclean $ ./configure $ make it failed with a different error: "inplace/bin/ghc-stage1" -hisuf hi -osuf o -hcsuf hc -static -H32m -O -this-package-key time_KUji6QoLFw0LtcZkg4b7t4 -hide-all-packages -i -ilibraries/time/. -ilibraries/time/dist-install/build -ilibraries/time/dist-install/build/autogen -Ilibraries/time/dist-install/build -Ilibraries/time/dist-install/build/autogen -Ilibraries/time/include -optP-DLANGUAGE_Rank2Types -optP-DLANGUAGE_DeriveDataTypeable -optP-DLANGUAGE_StandaloneDeriving -optP-include -optPlibraries/time/dist-install/build/autogen/cabal_macros.h -package-key base_DiPQ1siqG3SBjHauL3L03p -package-key deeps_L0rJEVU1Zgn8x0Qs5aTOsU -package-key oldlo_EJWcQwUgW2gEwNtIuJl2P8 -Wall -XHaskell2010 -XCPP -XRank2Types -XDeriveDataTypeable -XStandaloneDeriving -O2 -no-user-package-db -rtsopts -odir libraries/time/dist-install/build -hidir libraries/time/dist-install/build -stubdir libraries/time/dist-install/build -c libraries/time/./Data/Time/Clock/CTimeval.hs -o libraries/time/dist-install/build/Data/Time/Clock/CTimeval.o : unknown package: old-locale-1.0.0.6 make[1]: *** [libraries/time/dist-install/build/Data/Time/Clock/CTimeval.o] Error 1 make[1]: *** Waiting for unfinished jobs.... make: *** [all] Error 2 'libraries/old-locale' is present in the tree and is not empty. Is it necessary to run anything else after distclean? According to 'MAKEHELP', it is not needed, so I'm puzzled. -------------- next part -------------- A non-text attachment was scrubbed...
Name: not available Type: application/pgp-signature Size: 835 bytes Desc: not available URL: From fuuzetsu at fuuzetsu.co.uk Wed Aug 13 22:09:40 2014 From: fuuzetsu at fuuzetsu.co.uk (Mateusz Kowalczyk) Date: Wed, 13 Aug 2014 23:09:40 +0100 Subject: Moving Haddock *development* out of GHC tree In-Reply-To: <53E45F2D.9000806@fuuzetsu.co.uk> References: <53E45F2D.9000806@fuuzetsu.co.uk> Message-ID: <53EBE224.1060103@fuuzetsu.co.uk> On 08/08/2014 06:25 AM, Mateusz Kowalczyk wrote: > Hello, > > [snip] > > Transition from current setup: > If I receive some patches I was promised then I will then make a 2.14.4 > bugfix/compat release make sure that master is up to date and then > create something like GHC-tracking branch from master and track that. I > will then abandon that branch and not push to it unless it is GHC > release time. The next commit in master will bring Haddock to a state > where it works with 7.8.3: yes, this means removing all new API stuff > until 7.10 or 7.8.4 or whatever. GHC API changes go onto GHC-tracking > while all the stuff I write goes master. When GHC makes a release or is > about to, I make master work with that and make GHC-tracking point to > that instead. > > > Thanks! > So it is now close to a week gone and I have received many positive replies and no negative ones. I will probably execute what I stated initially at about this time tomorrow. To reiterate in short: 1. I make sure what we have now compiles with GHC HEAD and I stick it in separate branch which GHC folk will now track and apply any API patches to. Unless something changes by tomorrow, this will most likely be what master is at right now, perhaps with a single change to the version in cabal file. 2. I make the master branch work with 7.8.3 (and possibly 7.8.x) and do development without worrying about any API changes in HEAD, releasing as often as I need to. 3. 
At GHC release time, I update master with API changes so that up-to-date Haddock is ready to be used to generate the docs and ship with the compiler. I don't know what the GHC branch name will be yet. 'ghc-head' makes most sense but IIRC Herbert had some objections as it had been used in the past for something else, but maybe he can pitch in. The only thing I require from GHC folk is to simply use that branch and not push/pull to/from master unless contributing feature patches or trying to port some fixes into HEAD version for whatever reason. Thanks! -- Mateusz K. From hvriedel at gmail.com Wed Aug 13 22:30:39 2014 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Thu, 14 Aug 2014 00:30:39 +0200 Subject: Moving Haddock *development* out of GHC tree In-Reply-To: <53EBE224.1060103@fuuzetsu.co.uk> (Mateusz Kowalczyk's message of "Wed, 13 Aug 2014 23:09:40 +0100") References: <53E45F2D.9000806@fuuzetsu.co.uk> <53EBE224.1060103@fuuzetsu.co.uk> Message-ID: <87egwk6n00.fsf@gmail.com> On 2014-08-14 at 00:09:40 +0200, Mateusz Kowalczyk wrote: [...] > I don't know what the GHC branch name will be yet. 'ghc-head' makes most > sense but IIRC Herbert had some objections as it had been used in the > past for something else, but maybe he can pitch in. I had no objections at all to that name, 'ghc-head' is fine with me :-) From lukexipd at gmail.com Thu Aug 14 00:26:00 2014 From: lukexipd at gmail.com (Luke Iannini) Date: Wed, 13 Aug 2014 17:26:00 -0700 Subject: ARM64 Task Force In-Reply-To: References: <53E466F1.90201@centrum.cz> <53E60463.2080608@centrum.cz> <53EA5BCC.3060406@centrum.cz> Message-ID: Indeed, the float register stuff was a red herring -- restoring it caused no problems and all my tests are working great. So yahoo!! We've got ARM64 support. I'll tidy up the patches and create a ticket for review and merge. Luke On Tue, Aug 12, 2014 at 4:47 PM, Luke Iannini wrote: > Hi all, > Yahoo, happy news -- I think I've got it.
Studying enough of the > non-handwritten ASM that I was stepping through led me to make this change: > > https://github.com/lukexi/ghc/commit/1140e11db07817fcfc12446c74cd5a2bcdf92781 > (I think disabling the floating point registers was just a red herring; > I'll confirm that next) > > And I can now call this fib code happily via the FFI: > fibs :: [Int] > fibs = 1:1:zipWith (+) fibs (tail fibs) > > foreign export ccall fib :: Int -> Int > fib :: Int -> Int > fib = (fibs !!) > > For posterity, more detail on the crashing case earlier (this is fixed now > with proper storage and updating of the frame pointer): > Calling fib(1) or fib(2) worked, but calling fib(3) triggered the crash. > This was the backtrace, where you can see the errant 0x0000000100f05110 > frame values. > (lldb) bt > * thread #1: tid = 0xac6ed, 0x0000000100f05110, queue = > 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=2, > address=0x100f05110) > frame #0: 0x0000000100f05110 > frame #1: 0x0000000100f05110 > * frame #2: 0x00000001000ffc9c HelloHaskell`-[SPJViewController > viewDidLoad](self=0x0000000144e0cf10, _cmd=0x0000000186ae429a) + 76 at > SPJViewController.m:22 > frame #3: 0x00000001862f8b70 UIKit`-[UIViewController > loadViewIfRequired] + 692 > frame #4: 0x00000001862f8880 UIKit`-[UIViewController view] + 32 > frame #5: 0x00000001862feeb0 UIKit`-[UIWindow > addRootViewControllerViewIfPossible] + 72 > frame #6: 0x00000001862fc6d4 UIKit`-[UIWindow _setHidden:forced:] + 296 > frame #7: 0x000000018636d2bc UIKit`-[UIWindow makeKeyAndVisible] + 56 > frame #8: 0x000000018657ff74 UIKit`-[UIApplication > _callInitializationDelegatesForMainScene:transitionContext:] + 2804 > frame #9: 0x00000001865824ec UIKit`-[UIApplication > _runWithMainScene:transitionContext:completion:] + 1480 > frame #10: 0x0000000186580b84 UIKit`-[UIApplication > workspaceDidEndTransaction:] + 184 > frame #11: 0x0000000189d846ac FrontBoardServices`__31-[FBSSerialQueue > performAsync:]_block_invoke + 28 > frame #12: 
0x0000000181c7a360 > CoreFoundation`__CFRUNLOOP_IS_CALLING_OUT_TO_A_BLOCK__ + 20 > frame #13: 0x0000000181c79468 CoreFoundation`__CFRunLoopDoBlocks + 312 > frame #14: 0x0000000181c77a8c CoreFoundation`__CFRunLoopRun + 1756 > frame #15: 0x0000000181ba5664 CoreFoundation`CFRunLoopRunSpecific + 396 > frame #16: 0x0000000186363140 UIKit`-[UIApplication _run] + 552 > frame #17: 0x000000018635e164 UIKit`UIApplicationMain + 1488 > frame #18: 0x0000000100100268 HelloHaskell`main(argc=1, > argv=0x000000016fd07a58) + 204 at main.m:24 > frame #19: 0x00000001921eea08 libdyld.dylib`start + 4 > > > > On Tue, Aug 12, 2014 at 11:24 AM, Karel Gardas > wrote: > >> On 08/12/14 11:03 AM, Luke Iannini wrote: >> >>> It looks like it's jumping somewhere strange; lldb tells me it's to >>> 0x100e05110: .long 0x00000000 ; unknown opcode >>> 0x100e05114: .long 0x00000000 ; unknown opcode >>> 0x100e05118: .long 0x00000000 ; unknown opcode >>> 0x100e0511c: .long 0x00000000 ; unknown opcode >>> 0x100e05120: .long 0x00000000 ; unknown opcode >>> 0x100e05124: .long 0x00000000 ; unknown opcode >>> 0x100e05128: .long 0x00000000 ; unknown opcode >>> 0x100e0512c: .long 0x00000000 ; unknown opcode >>> >>> If I put a breakpoint on StgRun and step by instruction, I seem to make >>> it to about: >>> https://github.com/lukexi/ghc/blob/e99b7a41e64f3ddb9bb420c0d5583f >>> 0e302e321e/rts/StgCRun.c#L770 >>> (give or take a line) >>> >> >> strange that it's in the middle of the stp isns block. Anyway, this looks >> like a cpu exception doesn't it? You will need to find out the reg which >> holds the "exception reason" value and then look on it in your debugger to >> find out what's going wrong there. >> >> Karel >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From carter.schonwald at gmail.com Thu Aug 14 00:43:50 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Wed, 13 Aug 2014 20:43:50 -0400 Subject: Moving Haddock *development* out of GHC tree In-Reply-To: <87egwk6n00.fsf@gmail.com> References: <53E45F2D.9000806@fuuzetsu.co.uk> <53EBE224.1060103@fuuzetsu.co.uk> <87egwk6n00.fsf@gmail.com> Message-ID: one thing I wonder about is how should we approach noting "there's a new language constructor, we should figure out a good way to present it in haddock" in this work flow? because the initial haddocks presentation might just be a strawman till someone thinks about it carefully right? On Wed, Aug 13, 2014 at 6:30 PM, Herbert Valerio Riedel wrote: > On 2014-08-14 at 00:09:40 +0200, Mateusz Kowalczyk wrote: > > [...] > > > I don't know what the GHC branch name will be yet. 'ghc-head' makes most > > sense but IIRC Herbert had some objections as it had been used in the > > past for something else, but maybe he can pitch in. > > I had no objections at all to that name, 'ghc-head' is fine with me :-) > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ezyang at mit.edu Thu Aug 14 00:52:14 2014 From: ezyang at mit.edu (Edward Z. Yang) Date: Thu, 14 Aug 2014 01:52:14 +0100 Subject: Moving Haddock *development* out of GHC tree In-Reply-To: References: <53E45F2D.9000806@fuuzetsu.co.uk> <53EBE224.1060103@fuuzetsu.co.uk> <87egwk6n00.fsf@gmail.com> Message-ID: <1407977390-sup-9313@sabre> In an ideal world, all GHC developers would also think about how to add Haddock support for the wonderful features they are adding, and code them up themselves.
In practice, Haddock support has never stopped a feature from getting into GHC, but I think people who do add features should also be willing to roll up their sleeves and help the Haddock folks support them, though maybe at a later point in time... Edward Excerpts from Carter Schonwald's message of 2014-08-14 01:43:50 +0100: > one thing I wonder about is how should we approach noting > "theres a new language constructor, we should figure out a good way to > present it in haddock" in this work flow? > because the initial haddocks presentation might just be a strawman till > someone thinks about it carefully right? > > > On Wed, Aug 13, 2014 at 6:30 PM, Herbert Valerio Riedel > wrote: > > > On 2014-08-14 at 00:09:40 +0200, Mateusz Kowalczyk wrote: > > > > [...] > > > > > I don't know what the GHC branch name will be yet. ?ghc-head? makes most > > > sense but IIRC Herbert had some objections as it had been used in the > > > past for something else, but maybe he can pitch in. > > > > I had no objections at all to that name, 'ghc-head' is fine with me :-) > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://www.haskell.org/mailman/listinfo/ghc-devs > > From bgamari.foss at gmail.com Thu Aug 14 03:20:10 2014 From: bgamari.foss at gmail.com (Ben Gamari) Date: Wed, 13 Aug 2014 23:20:10 -0400 Subject: ARM64 Task Force In-Reply-To: References: <53E466F1.90201@centrum.cz> <53E60463.2080608@centrum.cz> <53EA5BCC.3060406@centrum.cz> Message-ID: <87egwjdafp.fsf@gmail.com> Luke Iannini writes: > Indeed, the float register stuff was a red herring -- restoring it caused no > problems and all my tests are working great. So yahoo!! We've got ARM64 > support. > Yay! Awesome work! Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From chak at cse.unsw.edu.au Thu Aug 14 03:22:03 2014 From: chak at cse.unsw.edu.au (Manuel M T Chakravarty) Date: Thu, 14 Aug 2014 13:22:03 +1000 Subject: ARM64 Task Force In-Reply-To: References: <53E466F1.90201@centrum.cz> <53E60463.2080608@centrum.cz> <53EA5BCC.3060406@centrum.cz> Message-ID: That's awesome -- great work! Manuel Luke Iannini : > Indeed, the float register stuff was a red herring -- restoring it caused no problems and all my tests are working great. So yahoo!! We've got ARM64 support. > > I'll tidy up the patches and create a ticket for review and merge. > > Luke > > > On Tue, Aug 12, 2014 at 4:47 PM, Luke Iannini wrote: > Hi all, > Yahoo, happy news -- I think I've got it. Studying enough of the non-handwritten ASM that I was stepping through led me to make this change: > https://github.com/lukexi/ghc/commit/1140e11db07817fcfc12446c74cd5a2bcdf92781 > (I think disabling the floating point registers was just a red herring; I'll confirm that next) > > And I can now call this fib code happily via the FFI: > fibs :: [Int] > fibs = 1:1:zipWith (+) fibs (tail fibs) > > foreign export ccall fib :: Int -> Int > fib :: Int -> Int > fib = (fibs !!) > > For posterity, more detail on the crashing case earlier (this is fixed now with proper storage and updating of the frame pointer): > Calling fib(1) or fib(2) worked, but calling fib(3) triggered the crash. > This was the backtrace, where you can see the errant 0x0000000100f05110 frame values.
> (lldb) bt > * thread #1: tid = 0xac6ed, 0x0000000100f05110, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=2, address=0x100f05110) > frame #0: 0x0000000100f05110 > frame #1: 0x0000000100f05110 > * frame #2: 0x00000001000ffc9c HelloHaskell`-[SPJViewController viewDidLoad](self=0x0000000144e0cf10, _cmd=0x0000000186ae429a) + 76 at SPJViewController.m:22 > frame #3: 0x00000001862f8b70 UIKit`-[UIViewController loadViewIfRequired] + 692 > frame #4: 0x00000001862f8880 UIKit`-[UIViewController view] + 32 > frame #5: 0x00000001862feeb0 UIKit`-[UIWindow addRootViewControllerViewIfPossible] + 72 > frame #6: 0x00000001862fc6d4 UIKit`-[UIWindow _setHidden:forced:] + 296 > frame #7: 0x000000018636d2bc UIKit`-[UIWindow makeKeyAndVisible] + 56 > frame #8: 0x000000018657ff74 UIKit`-[UIApplication _callInitializationDelegatesForMainScene:transitionContext:] + 2804 > frame #9: 0x00000001865824ec UIKit`-[UIApplication _runWithMainScene:transitionContext:completion:] + 1480 > frame #10: 0x0000000186580b84 UIKit`-[UIApplication workspaceDidEndTransaction:] + 184 > frame #11: 0x0000000189d846ac FrontBoardServices`__31-[FBSSerialQueue performAsync:]_block_invoke + 28 > frame #12: 0x0000000181c7a360 CoreFoundation`__CFRUNLOOP_IS_CALLING_OUT_TO_A_BLOCK__ + 20 > frame #13: 0x0000000181c79468 CoreFoundation`__CFRunLoopDoBlocks + 312 > frame #14: 0x0000000181c77a8c CoreFoundation`__CFRunLoopRun + 1756 > frame #15: 0x0000000181ba5664 CoreFoundation`CFRunLoopRunSpecific + 396 > frame #16: 0x0000000186363140 UIKit`-[UIApplication _run] + 552 > frame #17: 0x000000018635e164 UIKit`UIApplicationMain + 1488 > frame #18: 0x0000000100100268 HelloHaskell`main(argc=1, argv=0x000000016fd07a58) + 204 at main.m:24 > frame #19: 0x00000001921eea08 libdyld.dylib`start + 4 > > > > On Tue, Aug 12, 2014 at 11:24 AM, Karel Gardas wrote: > On 08/12/14 11:03 AM, Luke Iannini wrote: > It looks like it's jumping somewhere strange; lldb tells me it's to > 0x100e05110: .long 0x00000000 ; unknown 
opcode > 0x100e05114: .long 0x00000000 ; unknown opcode > 0x100e05118: .long 0x00000000 ; unknown opcode > 0x100e0511c: .long 0x00000000 ; unknown opcode > 0x100e05120: .long 0x00000000 ; unknown opcode > 0x100e05124: .long 0x00000000 ; unknown opcode > 0x100e05128: .long 0x00000000 ; unknown opcode > 0x100e0512c: .long 0x00000000 ; unknown opcode > > If I put a breakpoint on StgRun and step by instruction, I seem to make > it to about: > https://github.com/lukexi/ghc/blob/e99b7a41e64f3ddb9bb420c0d5583f0e302e321e/rts/StgCRun.c#L770 > (give or take a line) > > strange that it's in the middle of the stp isns block. Anyway, this looks like a cpu exception doesn't it? You will need to find out the reg which holds the "exception reason" value and then look on it in your debugger to find out what's going wrong there. > > Karel > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From fuuzetsu at fuuzetsu.co.uk Thu Aug 14 16:48:39 2014 From: fuuzetsu at fuuzetsu.co.uk (Mateusz Kowalczyk) Date: Thu, 14 Aug 2014 17:48:39 +0100 Subject: Moving Haddock *development* out of GHC tree In-Reply-To: References: <53E45F2D.9000806@fuuzetsu.co.uk> <53EBE224.1060103@fuuzetsu.co.uk> <87egwk6n00.fsf@gmail.com> Message-ID: <53ECE867.2040005@fuuzetsu.co.uk> On 08/14/2014 01:43 AM, Carter Schonwald wrote: > one thing I wonder about is how should we approach noting > "theres a new language constructor, we should figure out a good way to > present it in haddock" in this work flow? > because the initial haddocks presentation might just be a strawman till > someone thinks about it carefully right? > > > On Wed, Aug 13, 2014 at 6:30 PM, Herbert Valerio Riedel > wrote: > >> On 2014-08-14 at 00:09:40 +0200, Mateusz Kowalczyk wrote: >> >> [...] >> >>> I don't know what the GHC branch name will be yet. ?ghc-head? 
makes most >>> sense but IIRC Herbert had some objections as it had been used in the >>> past for something else, but maybe he can pitch in. >> >> I had no objections at all to that name, 'ghc-head' is fine with me :-) >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> > If there's more than one reasonable way then there's going to be strawman along the way somewhere anyway but we can at least delegate that until later. As I mention in the OP, there's at least no need for me to worry about it until it's finished on the GHC side although I'll no doubt be aware of it sooner than that. The PatternSynonyms stuff is an example where the implementor also stepped up to putting in support into Haddock for rendering. At the same time, the implementation has changed multiple times along the way creating hassle for both parties so perhaps in the future it's better to simply make sure Haddock still compiles and works but perhaps delegate everything else to closer to the release. In the end, it does not matter if Haddock can't display a bleeding edge feature until it's going out as a release. -- Mateusz K. From carter.schonwald at gmail.com Thu Aug 14 16:51:43 2014 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Thu, 14 Aug 2014 12:51:43 -0400 Subject: Moving Haddock *development* out of GHC tree In-Reply-To: <53ECE867.2040005@fuuzetsu.co.uk> References: <53E45F2D.9000806@fuuzetsu.co.uk> <53EBE224.1060103@fuuzetsu.co.uk> <87egwk6n00.fsf@gmail.com> <53ECE867.2040005@fuuzetsu.co.uk> Message-ID: good points by all :) On Thu, Aug 14, 2014 at 12:48 PM, Mateusz Kowalczyk wrote: > On 08/14/2014 01:43 AM, Carter Schonwald wrote: > > one thing I wonder about is how should we approach noting > > "theres a new language constructor, we should figure out a good way to > > present it in haddock" in this work flow? 
> > because the initial haddocks presentation might just be a strawman till > > someone thinks about it carefully right? > > > > > > On Wed, Aug 13, 2014 at 6:30 PM, Herbert Valerio Riedel < > hvriedel at gmail.com> > > wrote: > > > >> On 2014-08-14 at 00:09:40 +0200, Mateusz Kowalczyk wrote: > >> > >> [...] > >> > >>> I don't know what the GHC branch name will be yet. ?ghc-head? makes > most > >>> sense but IIRC Herbert had some objections as it had been used in the > >>> past for something else, but maybe he can pitch in. > >> > >> I had no objections at all to that name, 'ghc-head' is fine with me :-) > >> _______________________________________________ > >> ghc-devs mailing list > >> ghc-devs at haskell.org > >> http://www.haskell.org/mailman/listinfo/ghc-devs > >> > > > > If there's more than one reasonable way then there's going to be > strawman along the way somewhere anyway but we can at least delegate > that until later. As I mention in the OP, there's at least no need for > me to worry about it until it's finished on the GHC side although I'll > no doubt be aware of it sooner than that. > > The PatternSynonyms stuff is an example where the implementor also > stepped up to putting in support into Haddock for rendering. At the same > time, the implementation has changed multiple times along the way > creating hassle for both parties so perhaps in the future it's better to > simply make sure Haddock still compiles and works but perhaps delegate > everything else to closer to the release. > > In the end, it does not matter if Haddock can't display a bleeding edge > feature until it's going out as a release. > > -- > Mateusz K. > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From david.feuer at gmail.com Thu Aug 14 17:46:39 2014 From: david.feuer at gmail.com (David Feuer) Date: Thu, 14 Aug 2014 13:46:39 -0400 Subject: Help with fusion rules and such Message-ID: I've worked out the basics of how to make more functions from GHC.Base, GHC.List, and Data.List participate in foldr/build fusion, but I could really use some help figuring out how to write the RULES to accompany them. I have too little experience with GHC's simplification process to manage this on my own, it seems. One complication I recognize already: REVERSE reverse xs = build $ \c n -> foldl (\a x -> x `c` a) n xs works as well as we can probably expect for fusion (it fuses nicely with map and a modified unfoldr), but when it doesn't fuse, it ends up duplicating its "worker" at the top level, potentially multiple times. I tried to write a rule to rewrite that to a simpler version, but that's complicated by the fact that foldl is INLINEd unconditionally. I'm thinking maybe the "right" thing is to allow all this duplication to happen, and then clean it up at the end ( see https://ghc.haskell.org/trac/ghc/ticket/9441 ), but that will not happen soon if it does at all. From fuuzetsu at fuuzetsu.co.uk Thu Aug 14 21:30:43 2014 From: fuuzetsu at fuuzetsu.co.uk (Mateusz Kowalczyk) Date: Thu, 14 Aug 2014 22:30:43 +0100 Subject: Moving Haddock *development* out of GHC tree In-Reply-To: <53EBE224.1060103@fuuzetsu.co.uk> References: <53E45F2D.9000806@fuuzetsu.co.uk> <53EBE224.1060103@fuuzetsu.co.uk> Message-ID: <53ED2A83.1020607@fuuzetsu.co.uk> On 08/13/2014 11:09 PM, Mateusz Kowalczyk wrote: > On 08/08/2014 06:25 AM, Mateusz Kowalczyk wrote: >> Hello, >> >> [snip] >> >> Transition from current setup: >> If I receive some patches I was promised then I will then make a 2.14.4 >> bugfix/compat release make sure that master is up to date and then >> create something like GHC-tracking branch from master and track that. 
I >> will then abandon that branch and not push to it unless it is GHC >> release time. The next commit in master will bring Haddock to a state >> where it works with 7.8.3: yes, this means removing all new API stuff >> until 7.10 or 7.8.4 or whatever. GHC API changes go onto GHC-tracking >> while all the stuff I write goes master. When GHC makes a release or is >> about to, I make master work with that and make GHC-tracking point to >> that instead. >> >> >> Thanks! >> > > So it is now close to a week gone and I have received many positive > replies and no negative ones. I will probably execute what I stated > initially at about this time tomorrow. > > To reiterate in short: > > 1. I make sure what we have now compiles with GHC HEAD and I stick it in > separate branch which GHC folk will now track and apply any API patches > to. Unless something changes by tomorrow, this will most likely be what > master is at right now, perhaps with a single change to the version in > cabal file. > > 2. I make the master branch work with 7.8.3 (and possibly 7.8.x) and do > development without worrying about any API changes in HEAD, releasing as > often as I need to. > > 3. At GHC release time, I update master with API changes so that > up-to-date Haddock is ready to be used to generate the docs and ship > with the compiler. > > I don't know what the GHC branch name will be yet. ?ghc-head? makes most > sense but IIRC Herbert had some objections as it had been used in the > past for something else, but maybe he can pitch in. > > The only thing I require from GHC folk is to simply use that branch and > not push/pull to/from master unless contributing feature patches or > trying to port some fixes into HEAD version for whatever reason. > > Thanks! > The deed is done, the branch to pull/push to/from if you're doing GHC API work is ?ghc-head?. ?master? is now a development branch against 7.8.3. When the time comes for 7.10, I can simply re-apply the fixes + anything from ?ghc-head? 
at that time. You only need to concern yourself with this if you ever push to Haddock. -- Mateusz K. From p.k.f.holzenspies at utwente.nl Fri Aug 15 10:52:47 2014 From: p.k.f.holzenspies at utwente.nl (p.k.f.holzenspies at utwente.nl) Date: Fri, 15 Aug 2014 10:52:47 +0000 Subject: Unique as special boxing type & hidden constructors Message-ID: <13aaa2dd98944a3e95cc03c5139fbbb7@EXMBX31.ad.utwente.nl> Dear all, I'm working with Alan to instantiate everything for Data.Data, so that we can do better SYB-traversals (which should also help newcomers significantly to get into the GHC code base). Alan's looking at the AST types, I'm looking at the basic types in the compiler. Right now, I'm looking at Unique and two questions come up: > data Unique = MkUnique FastInt 1) As someone already commented: Is there a specific reason (other than history) that this isn't simply a newtype around an Int? If we're boxing anyway, we may as well use the default Int boxing and newtype-coerce to the specific purpose of Unique, no? 2) As a general question for GHC hacking style; what is the reason for hiding the constructors in the first place? I understand about abstraction and there are reasons for hiding, but there's a "public GHC API" and then there are all these modules that people can import at their own peril. Nothing is guaranteed about their consistency from version to version of GHC. I don't really see the point about hiding constructors (getting in the way of automatically deriving things) and then giving extra functions like (in the case of Unique): > getKeyFastInt (MkUnique x) = x > mkUniqueGrimily x = MkUnique (iUnbox x) I would propose to just make Unique a newtype for an Int and making the constructor visible. Regards, Philip -------------- next part -------------- An HTML attachment was scrubbed... URL: From ezyang at mit.edu Fri Aug 15 12:01:07 2014 From: ezyang at mit.edu (Edward Z. 
Yang) Date: Fri, 15 Aug 2014 13:01:07 +0100 Subject: Unique as special boxing type & hidden constructors In-Reply-To: <13aaa2dd98944a3e95cc03c5139fbbb7@EXMBX31.ad.utwente.nl> References: <13aaa2dd98944a3e95cc03c5139fbbb7@EXMBX31.ad.utwente.nl> Message-ID: <1408104042-sup-2896@sabre> The definition dates back to 1996, so it seems plausible that newtype is the way to go now. Edward Excerpts from p.k.f.holzenspies's message of 2014-08-15 11:52:47 +0100: > Dear all, > > > I'm working with Alan to instantiate everything for Data.Data, so that we can do better SYB-traversals (which should also help newcomers significantly to get into the GHC code base). Alan's looking at the AST types, I'm looking at the basic types in the compiler. > > Right now, I'm looking at Unique and two questions come up: > > > data Unique = MkUnique FastInt > > > 1) As someone already commented: Is there a specific reason (other than history) that this isn't simply a newtype around an Int? If we're boxing anyway, we may as well use the default Int boxing and newtype-coerce to the specific purpose of Unique, no? > > > 2) As a general question for GHC hacking style; what is the reason for hiding the constructors in the first place? > > I understand about abstraction and there are reasons for hiding, but there's a "public GHC API" and then there are all these modules that people can import at their own peril. Nothing is guaranteed about their consistency from version to version of GHC. I don't really see the point about hiding constructors (getting in the way of automatically deriving things) and then giving extra functions like (in the case of Unique): > > > getKeyFastInt (MkUnique x) = x > > > mkUniqueGrimily x = MkUnique (iUnbox x) > > > I would propose to just make Unique a newtype for an Int and making the constructor visible. 
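[Editorial note: Philip's newtype proposal above could look like the
following sketch. The helper names mirror the existing `getKeyFastInt` /
`mkUniqueGrimily` but are illustrative only, not GHC's actual API, and
whether the constructor stays exported is exactly the open question in
the thread.]

```haskell
-- Sketch of Unique as a newtype around a plain Int, per the proposal.
-- As a newtype, the wrapper compiles away, so this should cost nothing
-- at runtime compared with the FastInt version.
newtype Unique = MkUnique Int
  deriving (Eq, Ord)

getKey :: Unique -> Int
getKey (MkUnique x) = x

mkUnique :: Int -> Unique
mkUnique = MkUnique
```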
> > > Regards, > > Philip From simonpj at microsoft.com Fri Aug 15 15:17:31 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 15 Aug 2014 15:17:31 +0000 Subject: Broken Data.Data instances In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF10438545@DB3PRD3001MB020.064d.mgd.msft.net> <53D14576.4060503@utwente.nl> <618BE556AADD624C9C918AA5D5911BEF104387F2@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF20E414F3@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <618BE556AADD624C9C918AA5D5911BEF221AE08D@DB3PRD3001MB020.064d.mgd.msft.net> Eek. Glancing at this I see that every single data type has an extra type parameter. To me this feels like a sledgehammer to crack a nut. What is wrong with the type-function approach? Simon From: Alan & Kim Zimmerman [mailto:alan.zimm at gmail.com] Sent: 13 August 2014 07:50 To: Philip K.F. H?lzenspies Cc: Simon Peyton Jones; Edward Kmett; ghc-devs at haskell.org Subject: Re: Broken Data.Data instances And I dipped my toes into the phabricator water, and uploaded a diff to https://phabricator.haskell.org/D153 I left the lines long for now, so that it is clear that I simply added parameters to existing type signatures. On Tue, Aug 12, 2014 at 10:51 PM, Alan & Kim Zimmerman > wrote: Status update I have worked through a proof of concept update to the GHC AST whereby the type is provided as a parameter to each data type. This was basically a mechanical process of changing type signatures, and required very little actual code changes, being only to initialise the placeholder types. 
The enabling types are:

type PostTcType = Type  -- Used for slots in the abstract syntax
                        -- where we want to keep a slot for a type
                        -- to be added by the type checker...but
                        -- [before typechecking it's just bogus]

type PreTcType = ()     -- used before typechecking

class PlaceHolderType a where
  placeHolderType :: a

instance PlaceHolderType PostTcType where
  placeHolderType = panic "Evaluated the place holder for a PostTcType"

instance PlaceHolderType PreTcType where
  placeHolderType = ()

These are used to replace all instances of PostTcType in the hsSyn types.

The change was applied against HEAD as of last Friday, and can be found
here:

https://github.com/alanz/ghc/tree/wip/landmine-param
https://github.com/alanz/haddock/tree/wip/landmine-param

They pass 'sh validate' with GHC 7.6.3, and compile against GHC 7.8.3. I
have not tried to validate that yet, but have no reason to expect failure.

Can I please get some feedback as to whether this is a worthwhile change?

It is the first step to getting a generic-traversal-safe AST.

Regards
Alan

On Mon, Jul 28, 2014 at 5:45 PM, Alan & Kim Zimmerman > wrote:

FYI I edited the paste at http://lpaste.net/108262 to show the problem

On Mon, Jul 28, 2014 at 5:41 PM, Alan & Kim Zimmerman > wrote:

I already tried that; the syntax does not seem to allow it.

I suspect some higher form of sorcery will be required, as alluded to
here: http://stackoverflow.com/questions/14133121/can-i-constrain-a-type-family

Alan

On Mon, Jul 28, 2014 at 4:55 PM, > wrote:

Dear Alan,

I would think you would want to constrain the result, i.e.

type family (Data (PostTcType a)) => PostTcType a where ...

The Data-instance of 'a' doesn't give you much if you have a
'PostTcType a'.

Your point about SYB-recognition of WrongPhase is, of course, a good one ;)

Regards,
Philip

From: Alan & Kim Zimmerman [mailto:alan.zimm at gmail.com]
Sent: Monday 28 July 2014 14:10
To: Holzenspies, P.K.F.
(EWI) Cc: Simon Peyton Jones; Edward Kmett; ghc-devs at haskell.org
Subject: Re: Broken Data.Data instances

Philip

I think the main reason for the WrongPhase thing is to have something
that explicitly has a Data and Typeable instance, to allow generic (SYB)
traversal. If we can get by without this, so much the better.

On a related note, is there any way to constrain the 'a' in

type family PostTcType a where
  PostTcType Id    = TcType
  PostTcType other = WrongPhaseTyp

to have an instance of Data?

I am experimenting with traversals over my earlier paste, and got stuck
here (which is the reason the Show instances were commented out in the
original).

Alan

On Mon, Jul 28, 2014 at 12:30 PM, > wrote:

Sorry about that... I'm having it out with my terminal server and the
server seems to be winning. Here's another go:

I always read the () as "there's nothing meaningful to stick in here,
but I have to stick in something", so I don't necessarily want the
WrongPhase-thing. There is very old commentary stating it would be
lovely if someone could expose the PostTcType as a parameter of the
AST-types, but that there are so many types and constructors that it's a
boring chore to do. Actually, I was hoping haRe would come up to speed
to be able to do this. That being said, I think Simon's idea to turn
PostTcType into a type family is a better way altogether; it also
documents intent, i.e. () may not say so much, but PostTcType RdrName
says quite a lot.

Simon commented that a lot of the internal structures aren't trees, but
cyclic graphs, e.g. the TyCon for Maybe references the DataCons for Just
and Nothing, which again refer to the TyCon for Maybe. I was wondering
whether it would be possible to make stateful lenses for this. Of
course, for specific cases, we could do this, but I wonder if it is also
possible to have lenses remember the things they visited and not visit
them twice. Any ideas on this, Edward?
Regards,
Philip

From: Alan & Kim Zimmerman [mailto:alan.zimm at gmail.com]
Sent: Monday 28 July 2014 11:14
To: Simon Peyton Jones
Cc: Edward Kmett; Holzenspies, P.K.F. (EWI); ghc-devs
Subject: Re: Broken Data.Data instances

I have made a conceptual example of this here: http://lpaste.net/108262

Alan

On Mon, Jul 28, 2014 at 9:50 AM, Alan & Kim Zimmerman > wrote:

What about creating a specific type with a single constructor for the
"not relevant to this phase" type to be used instead of () above? That
would also clearly document what was going on.

Alan

On Mon, Jul 28, 2014 at 9:14 AM, Simon Peyton Jones > wrote:

I've had to mangle a bunch of hand-written Data instances and push out
patches to a dozen packages that used to be built this way before I
convinced the authors to switch to safer versions of Data. Using virtual
smart constructors like we do now in containers and Text where needed
can be used to preserve internal invariants, etc.

If the 'hand grenades' are the PostTcTypes, etc, then I can explain why
they are there.

There simply is no sensible type you can put before the type checker
runs. For example, one of the constructors in HsExpr is

| HsMultiIf PostTcType [LGRHS id (LHsExpr id)]

After type checking we know what type the thing has, but before we have
no clue.

We could get around this by saying

type PostTcType = Maybe TcType

but that would mean that every post-typechecking consumer would need a
redundant pattern-match on a Just that would always succeed.

It's nothing deeper than that. Adding Maybes everywhere would be
possible, just clunky.

However, we now have type functions, and HsExpr is parameterised by an
'id' parameter, which changes from RdrName (after parsing) to Name
(after renaming) to Id (after typechecking).
So we could do this:

| HsMultiIf (PostTcType id) [LGRHS id (LHsExpr id)]

and define PostTcType as a closed type family, thus:

type family PostTcType a where
  PostTcType Id    = TcType
  PostTcType other = ()

That would be better than filling it with bottoms. But it might not help
with generic programming, because there'd be a component whose type
wasn't fixed. I have no idea how generics and type functions interact.

Simon

From: Edward Kmett [mailto:ekmett at gmail.com]
Sent: 27 July 2014 18:27
To: p.k.f.holzenspies at utwente.nl
Cc: alan.zimm at gmail.com; Simon Peyton Jones; ghc-devs
Subject: Re: Broken Data.Data instances

Philip, Alan,

If you need a hand, I'm happy to pitch in guidance.

I've had to mangle a bunch of hand-written Data instances and push out
patches to a dozen packages that used to be built this way before I
convinced the authors to switch to safer versions of Data. Using virtual
smart constructors like we do now in containers and Text where needed
can be used to preserve internal invariants, etc.

This works far better for users of the API than just randomly throwing
them a live hand grenade. As I recall, these little grenades in generic
programming over the GHC API have been a constant source of pain for
libraries like haddock.

Simon,

It seems to me that regarding circular data structures, nothing prevents
you from walking a circular data structure with Data.Data. You can
generate a new one productively that looks just like the old with the
contents swapped out; it is indistinguishable to an observer if the
fixed point is lost, and a clever observer can use observable sharing to
get it back, supposing that they are allowed to try.

Alternately, we could use the 'virtual constructor' trick there to break
the cycle and reintroduce it, but I'm less enthusiastic about that idea,
even if it is simpler in many ways.

-Edward

On Sun, Jul 27, 2014 at 10:17 AM, > wrote:

Alan,

In that case, let's have a short feedback-loop between the two of us.
It seems many of these files (Name.lhs, for example) are really stable
through the repo history. It would be nice to have one bigger
refactoring all in one go (some of the code could use a polish, and a
lot of code seems removable).

Regards,
Philip

________________________________
From: Alan & Kim Zimmerman [alan.zimm at gmail.com]
Sent: Friday 25 July 2014 13:44
To: Simon Peyton Jones
CC: Holzenspies, P.K.F. (EWI); ghc-devs at haskell.org
Subject: Re: Broken Data.Data instances

By the way, I would be happy to attempt this task, if the concept is
viable.

On Thu, Jul 24, 2014 at 11:23 PM, Alan & Kim Zimmerman > wrote:

While we are talking about fixing traversals, how about getting rid of
the phase-specific panic initialisers for placeHolderType,
placeHolderKind and friends?

In order to safely traverse with SYB, the following needs to be inserted
into all the SYB schemes (see
https://github.com/alanz/HaRe/blob/master/src/Language/Haskell/Refact/Utils/GhcUtils.hs):

-- Check the Typeable items
checkItemStage1 :: (Typeable a) => SYB.Stage -> a -> Bool
checkItemStage1 stage x = (const False `SYB.extQ` postTcType `SYB.extQ` fixity `SYB.extQ` nameSet) x
  where nameSet    = const (stage `elem` [SYB.Parser,SYB.TypeChecker]) :: GHC.NameSet -> Bool
        postTcType = const (stage < SYB.TypeChecker) :: GHC.PostTcType -> Bool
        fixity     = const (stage < SYB.Renamer)     :: GHC.Fixity -> Bool

And in addition, HsCmdTop and ParStmtBlock are initialised with explicit
'undefined' values.

Perhaps use an initialiser that can have its panic turned off when
called via the GHC API?

Regards
Alan

On Thu, Jul 24, 2014 at 11:06 PM, Simon Peyton Jones > wrote:

So... does anyone object to me changing these "broken" instances with
the ones given by DeriveDataTypeable?

That's fine with me provided (a) the default behaviour is not immediate
divergence (which it might well be), and (b) the pitfalls are documented.

Simon

From: "Philip K.F.
Hölzenspies" [mailto:p.k.f.holzenspies at utwente.nl]
Sent: 24 July 2014 18:42
To: Simon Peyton Jones
Cc: ghc-devs at haskell.org
Subject: Re: Broken Data.Data instances

Dear Simon, et al,

These are very good points to make for people writing such traversals
and queries. I would be more than happy to write a page on the pitfalls
etc. on the wiki, but in my experience so far, exploring the innards of
GHC is tremendously helped by trying small things out and showing (bits
of) the intermediate structures. For me, personally, this has always
been hindered by the absence of good instances of Data and/or Show (not
having to bring DynFlags and not just visualising with the pretty
printer are very helpful).

So... does anyone object to me changing these "broken" instances with
the ones given by DeriveDataTypeable?

Also, many of these internal data structures could be provided with
useful lenses to improve such traversals further. Anyone ever go at
that? Would people be interested?

Regards,
Philip

Simon Peyton Jones
24 Jul 2014 18:22

GHC's data structures are often mutually recursive. e.g.

- The TyCon for Maybe contains the DataCon for Just
- The DataCon for Just contains Just's type
- Just's type contains the TyCon for Maybe

So any attempt to recursively walk over all these structures, as you
would a tree, will fail.

Also there's a lot of sharing. For example, every occurrence of 'map' is
a Var, and inside that Var is map's type, its strictness, its rewrite
RULE, etc etc. In walking over a term you may not want to walk over all
that stuff at every occurrence of map.

Maybe that's it; I'm not certain, since I did not write the Data
instances for any of GHC's types.

Simon

From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of p.k.f.holzenspies at utwente.nl
Sent: 24 July 2014 16:42
To: ghc-devs at haskell.org
Subject: Broken Data.Data instances

Dear GHC-ers,

Is there a reason for explicitly broken Data.Data instances?
Case in point:

> instance Data Var where
>   -- don't traverse?
>   toConstr _   = abstractConstr "Var"
>   gunfold _ _  = error "gunfold"
>   dataTypeOf _ = mkNoRepType "Var"

I understand (vaguely) arguments about abstract data types, but this
also excludes convenient queries that can, e.g., extract all types from
a CoreExpr. I had hoped to do stuff like this:

> collect :: (Typeable b, Data a, MonadPlus m) => a -> m b
> collect = everything mplus $ mkQ mzero return
>
> allTypes :: CoreExpr -> [Type]
> allTypes = collect

Especially when still exploring (parts of) the GHC API, being able to
extract things in this fashion is very helpful. SYB's 'everything' being
broken by these instances, not so much.

Would a patch 'fixing' these instances be acceptable?

Regards,
Philip

_______________________________________________
ghc-devs mailing list
ghc-devs at haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

-------------- next part --------------
A non-text attachment was scrubbed...
Name: image001.jpg Type: image/jpeg Size: 1247 bytes Desc: image001.jpg URL: From eir at cis.upenn.edu Fri Aug 15 15:27:20 2014 From: eir at cis.upenn.edu (Richard Eisenberg) Date: Fri, 15 Aug 2014 11:27:20 -0400 Subject: Broken Data.Data instances In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF221AE08D@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF10438545@DB3PRD3001MB020.064d.mgd.msft.net> <53D14576.4060503@utwente.nl> <618BE556AADD624C9C918AA5D5911BEF104387F2@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF20E414F3@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF221AE08D@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <5CB0F7CB-5774-4A2C-9BBA-9BF386874D2E@cis.upenn.edu> Simon, I've been encouraging the type family approach. See https://phabricator.haskell.org/D157 Thanks, Richard On Aug 15, 2014, at 11:17 AM, Simon Peyton Jones wrote: > Eek. Glancing at this I see that every single data type has an extra type parameter. To me this feels like a sledgehammer to crack a nut. What is wrong with the type-function approach? > > Simon > > From: Alan & Kim Zimmerman [mailto:alan.zimm at gmail.com] > Sent: 13 August 2014 07:50 > To: Philip K.F. H?lzenspies > Cc: Simon Peyton Jones; Edward Kmett; ghc-devs at haskell.org > Subject: Re: Broken Data.Data instances > > And I dipped my toes into the phabricator water, and uploaded a diff to https://phabricator.haskell.org/D153 > > I left the lines long for now, so that it is clear that I simply added parameters to existing type signatures. > > > > On Tue, Aug 12, 2014 at 10:51 PM, Alan & Kim Zimmerman wrote: > > Status update > > I have worked through a proof of concept update to the GHC AST whereby the type is provided as a parameter to each data type. This was basically a mechanical process of changing type signatures, and required very little actual code changes, being only to initialise the placeholder types. 
> > The enabling types are > > > type PostTcType = Type -- Used for slots in the abstract syntax > -- where we want to keep slot for a type > -- to be added by the type checker...but > -- [before typechecking it's just bogus] > > type PreTcType = () -- used before typechecking > > > class PlaceHolderType a where > placeHolderType :: a > > instance PlaceHolderType PostTcType where > > > placeHolderType = panic "Evaluated the place holder for a PostTcType" > > instance PlaceHolderType PreTcType where > placeHolderType = () > > These are used to replace all instances of PostTcType in the hsSyn types. > > The change was applied against HEAD as of last friday, and can be found here > > https://github.com/alanz/ghc/tree/wip/landmine-param > https://github.com/alanz/haddock/tree/wip/landmine-param > > They pass 'sh validate' with GHC 7.6.3, and compile against GHC 7.8.3. I have not tried to validate that yet, have no reason to expect failure. > > > Can I please get some feedback as to whether this is a worthwhile change? > > > It is the first step to getting a generic traversal safe AST > > Regards > > Alan > > > > On Mon, Jul 28, 2014 at 5:45 PM, Alan & Kim Zimmerman wrote: > > FYI I edited the paste at http://lpaste.net/108262 to show the problem > > > > On Mon, Jul 28, 2014 at 5:41 PM, Alan & Kim Zimmerman wrote: > > I already tried that, the syntax does not seem to allow it. > > I suspect some higher form of sorcery will be required, as alluded to herehttp://stackoverflow.com/questions/14133121/can-i-constrain-a-type-family > > Alan > > > > On Mon, Jul 28, 2014 at 4:55 PM, wrote: > > Dear Alan, > > I would think you would want to constrain the result, i.e. > > type family (Data (PostTcType a)) => PostTcType a where ? > > The Data-instance of ?a? doesn?t give you much if you have a ?PostTcType a?. 
> > Your point about SYB-recognition of WrongPhase is, of course, a good one ;) > > Regards, > Philip > > > > From: Alan & Kim Zimmerman [mailto:alan.zimm at gmail.com] > Sent: maandag 28 juli 2014 14:10 > To: Holzenspies, P.K.F. (EWI) > Cc: Simon Peyton Jones; Edward Kmett; ghc-devs at haskell.org > > Subject: Re: Broken Data.Data instances > > Philip > > I think the main reason for the WrongPhase thing is to have something that explicitly has a Data and Typeable instance, to allow generic (SYB) traversal. If we can get by without this so much the better. > > On a related note, is there any way to constrain the 'a' in > > type family PostTcType a where > PostTcType Id = TcType > PostTcType other = WrongPhaseTyp > > to have an instance of Data? > > I am experimenting with traversals over my earlier paste, and got stuck here (which is the reason the Show instances were commentet out in the original). > > Alan > > > > On Mon, Jul 28, 2014 at 12:30 PM, wrote: > Sorry about that? I?m having it out with my terminal server and the server seems to be winning. Here?s another go: > > I always read the () as ?there?s nothing meaningful to stick in here, but I have to stick in something? so I don?t necessarily want the WrongPhase-thing. There is very old commentary stating it would be lovely if someone could expose the PostTcType as a parameter of the AST-types, but that there are so many types and constructors, that it?s a boring chore to do. Actually, I was hoping haRe would come up to speed to be able to do this. That being said, I think Simon?s idea to turn PostTcType into a type-family is a better way altogether; it also documents intent, i.e. () may not say so much, but PostTcType RdrName says quite a lot. > > Simon commented that a lot of the internal structures aren?t trees, but cyclic graphs, e.g. the TyCon for Maybe references the DataCons for Just and Nothing, which again refer to the TyCon for Maybe. 
I was wondering whether it would be possible to make stateful lenses for this. Of course, for specific cases, we could do this, but I wonder if it is also possible to have lenses remember the things they visited and not visit them twice. Any ideas on this, Edward? > > Regards, > Philip > > > > > > From: Alan & Kim Zimmerman [mailto:alan.zimm at gmail.com] > Sent: maandag 28 juli 2014 11:14 > To: Simon Peyton Jones > Cc: Edward Kmett; Holzenspies, P.K.F. (EWI); ghc-devs > > Subject: Re: Broken Data.Data instances > > I have made a conceptual example of this here http://lpaste.net/108262 > > Alan > > > On Mon, Jul 28, 2014 at 9:50 AM, Alan & Kim Zimmerman wrote: > What about creating a specific type with a single constructor for the "not relevant to this phase" type to be used instead of () above? That would also clearly document what was going on. > > Alan > > > On Mon, Jul 28, 2014 at 9:14 AM, Simon Peyton Jones wrote: > I've had to mangle a bunch of hand-written Data instances and push out patches to a dozen packages that used to be built this way before I convinced the authors to switch to safer versions of Data. Using virtual smart constructors like we do now in containers and Text where needed can be used to preserve internal invariants, etc. > > > If the ?hand grenades? are the PostTcTypes, etc, then I can explain why they are there. > > There simply is no sensible type you can put before the type checker runs. For example one of the constructors in HsExpr is > | HsMultiIf PostTcType [LGRHS id (LHsExpr id)] > > After type checking we know what type the thing has, but before we have no clue. > > We could get around this by saying > type PostTcType = Maybe TcType > but that would mean that every post-typechecking consumer would need a redundant pattern-match on a Just that would always succeed. > > It?s nothing deeper than that. Adding Maybes everywhere would be possible, just clunky. 
> > > However we now have type functions, and HsExpr is parameterised by an ?id? parameter, which changes from RdrName (after parsing) to Name (after renaming) to Id (after typechecking). So we could do this: > | HsMultiIf (PostTcType id) [LGRHS id (LHsExpr id)] > > and define PostTcType as a closed type family thus > > type family PostTcType a where > > PostTcType Id = TcType > > PostTcType other = () > > > That would be better than filling it with bottoms. But it might not help with generic programming, because there?d be a component whose type wasn?t fixed. I have no idea how generics and type functions interact. > > Simon > > From: Edward Kmett [mailto:ekmett at gmail.com] > Sent: 27 July 2014 18:27 > To: p.k.f.holzenspies at utwente.nl > Cc: alan.zimm at gmail.com; Simon Peyton Jones; ghc-devs > > Subject: Re: Broken Data.Data instances > > Philip, Alan, > > > > If you need a hand, I'm happy to pitch in guidance. > > > > I've had to mangle a bunch of hand-written Data instances and push out patches to a dozen packages that used to be built this way before I convinced the authors to switch to safer versions of Data. Using virtual smart constructors like we do now in containers and Text where needed can be used to preserve internal invariants, etc. > > > > This works far better for users of the API than just randomly throwing them a live hand grenade. As I recall, these little grenades in generic programming over the GHC API have been a constant source of pain for libraries like haddock. > > > > Simon, > > > > It seems to me that regarding circular data structures, nothing prevents you from walking a circular data structure with Data.Data. You can generate a new one productively that looks just like the old with the contents swapped out, it is indistinguishable to an observer if the fixed point is lost, and a clever observer can use observable sharing to get it back, supposing that they are allowed to try. 
> > > > Alternately, we could use the 'virtual constructor' trick there to break the cycle and reintroduce it, but I'm less enthusiastic about that idea, even if it is simpler in many ways. > > > > -Edward > > > > On Sun, Jul 27, 2014 at 10:17 AM, wrote: > > Alan, > > In that case, let's have a short feedback-loop between the two of us. It seems many of these files (Name.lhs, for example) are really stable through the repo-history. It would be nice to have one bigger refactoring all in one go (some of the code could use a polish, a lot of code seems removable). > > Regards, > Philip > > Van: Alan & Kim Zimmerman [alan.zimm at gmail.com] > Verzonden: vrijdag 25 juli 2014 13:44 > Aan: Simon Peyton Jones > CC: Holzenspies, P.K.F. (EWI); ghc-devs at haskell.org > Onderwerp: Re: Broken Data.Data instances > > By the way, I would be happy to attempt this task, if the concept is viable. > > > > On Thu, Jul 24, 2014 at 11:23 PM, Alan & Kim Zimmerman wrote: > > While we are talking about fixing traversals, how about getting rid of the phase specific panic initialisers for placeHolderType, placeHolderKind and friends? > > In order to safely traverse with SYB, the following needs to be inserted into all the SYB schemes (see > https://github.com/alanz/HaRe/blob/master/src/Language/Haskell/Refact/Utils/GhcUtils.hs) > > -- Check the Typeable items > checkItemStage1 :: (Typeable a) => SYB.Stage -> a -> Bool > checkItemStage1 stage x = (const False `SYB.extQ` postTcType `SYB.extQ` fixity `SYB.extQ` nameSet) x > where nameSet = const (stage `elem` [SYB.Parser,SYB.TypeChecker]) :: GHC.NameSet -> Bool > postTcType = const (stage < SYB.TypeChecker ) :: GHC.PostTcType -> Bool > fixity = const (stage < SYB.Renamer ) :: GHC.Fixity -> Bool > > And in addition HsCmdTop and ParStmtBlock are initialised with explicit 'undefined values. > > Perhaps use an initialiser that can have its panic turned off when called via the GHC API? 
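[Editorial note: the stage-check idea in the `checkItemStage1` code
quoted above can be seen in a tiny self-contained form. The sketch below
uses only base's Data.Data (not the syb package), and `Ty`/`Expr` are
toy stand-ins for GHC's types: a query collects every Int but refuses to
descend into the phase-specific `Ty` slot, just as the stopper refuses
to enter a PostTcType before the type checker has run.]

```haskell
{-# LANGUAGE DeriveDataTypeable #-}
import Data.Data

-- Toy stand-ins (NOT GHC's types). Ty plays the role of a
-- phase-specific slot such as PostTcType.
data Ty   = TyInt | TySize Int     deriving (Data, Typeable, Show)
data Expr = Lit Int
          | Add Expr Expr
          | Annot Ty Expr          -- an Expr carrying a Ty annotation
          deriving (Data, Typeable, Show)

-- SYB-style query with only base's Data.Data: collect every Int,
-- but never look inside a Ty (the "wrong phase" stopper).
collectInts :: Data a => a -> [Int]
collectInts x
  | typeOf x == typeOf TyInt = []   -- stop: do not traverse Ty at all
  | otherwise = maybe [] (:[]) (cast x)
                ++ concat (gmapQ collectInts x)
```

With the stopper in place, `collectInts (Add (Lit 1) (Annot (TySize 8)
(Lit 2)))` yields `[1,2]`: the `8` hiding inside the `Ty` annotation is
never visited, exactly the behaviour the stage check buys for
PostTcType.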
> > Regards > > Alan > > > > > > On Thu, Jul 24, 2014 at 11:06 PM, Simon Peyton Jones wrote: > > So... does anyone object to me changing these "broken" instances with the ones given by DeriveDataTypeable? > > That?s fine with me provided (a) the default behaviour is not immediate divergence (which it might well be), and (b) the pitfalls are documented. > > Simon > > From: "Philip K.F. H?lzenspies" [mailto:p.k.f.holzenspies at utwente.nl] > Sent: 24 July 2014 18:42 > To: Simon Peyton Jones > Cc: ghc-devs at haskell.org > Subject: Re: Broken Data.Data instances > > Dear Simon, et al, > > These are very good points to make for people writing such traversals and queries. I would be more than happy to write a page on the pitfalls etc. on the wiki, but in my experience so far, exploring the innards of GHC is tremendously helped by trying small things out and showing (bits of) the intermediate structures. For me, personally, this has always been hindered by the absence of good instances of Data and/or Show (not having to bring DynFlags and not just visualising with the pretty printer are very helpful). > > So... does anyone object to me changing these "broken" instances with the ones given by DeriveDataTypeable? > > Also, many of these internal data structures could be provided with useful lenses to improve such traversals further. Anyone ever go at that? Would be people be interested? > > Regards, > Philip > > > Simon Peyton Jones > 24 Jul 2014 18:22 > GHC?s data structures are often mutually recursive. e.g. > ? The TyCon for Maybe contains the DataCon for Just > > ? The DataCon For just contains Just?s type > > ? Just?s type contains the TyCon for Maybe > > > So any attempt to recursively walk over all these structures, as you would a tree, will fail. > > Also there?s a lot of sharing. For example, every occurrence of ?map? is a Var, and inside that Var is map?s type, its strictness, its rewrite RULE, etc etc. 
In walking over a term you may not want to walk over all that stuff at every occurrence of map. > > Maybe that?s it; I?m not certain since I did not write the Data instances for any of GHC?s types > > Simon > > From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Ofp.k.f.holzenspies at utwente.nl > Sent: 24 July 2014 16:42 > To: ghc-devs at haskell.org > Subject: Broken Data.Data instances > > Dear GHC-ers, > > Is there a reason for explicitly broken Data.Data instances? Case in point: > > > instance Data Var where > > -- don't traverse? > > toConstr _ = abstractConstr "Var" > > gunfold _ _ = error "gunfold" > > dataTypeOf _ = mkNoRepType "Var" > > I understand (vaguely) arguments about abstract data types, but this also excludes convenient queries that can, e.g. extract all types from a CoreExpr. I had hoped to do stuff like this: > > > collect :: (Typeable b, Data a, MonadPlus m) => a -> m b > > collect = everything mplus $ mkQ mzero return > > > > allTypes :: CoreExpr -> [Type] > > allTypes = collect > > Especially when still exploring (parts of) the GHC API, being able to extract things in this fashion is very helpful. SYB?s ?everything? being broken by these instances, not so much. > > Would a patch ?fixing? these instances be acceptable? > > Regards, > Philip > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > > > > > > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From simonpj at microsoft.com Fri Aug 15 15:32:35 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 15 Aug 2014 15:32:35 +0000 Subject: Moving Haddock *development* out of GHC tree In-Reply-To: <53EBE224.1060103@fuuzetsu.co.uk> References: <53E45F2D.9000806@fuuzetsu.co.uk> <53EBE224.1060103@fuuzetsu.co.uk> Message-ID: <618BE556AADD624C9C918AA5D5911BEF221AE385@DB3PRD3001MB020.064d.mgd.msft.net> Great. Please can what you do be documented clearly somewhere, with a link to that documentation from here https://ghc.haskell.org/trac/ghc/wiki/Repositories, and/or somewhere else suitable? Thanks Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of | Mateusz Kowalczyk | Sent: 13 August 2014 23:10 | To: ghc-devs at haskell.org | Subject: Re: Moving Haddock *development* out of GHC tree | | On 08/08/2014 06:25 AM, Mateusz Kowalczyk wrote: | > Hello, | > | > [snip] | > | > Transition from current setup: | > If I receive some patches I was promised then I will then make a | > 2.14.4 bugfix/compat release make sure that master is up to date and | > then create something like GHC-tracking branch from master and track | > that. I will then abandon that branch and not push to it unless it is | > GHC release time. The next commit in master will bring Haddock to a | > state where it works with 7.8.3: yes, this means removing all new API | > stuff until 7.10 or 7.8.4 or whatever. GHC API changes go onto | > GHC-tracking while all the stuff I write goes master. When GHC makes | a | > release or is about to, I make master work with that and make | > GHC-tracking point to that instead. | > | > | > Thanks! | > | | So it is now close to a week gone and I have received many positive | replies and no negative ones. I will probably execute what I stated | initially at about this time tomorrow. | | To reiterate in short: | | 1. 
I make sure what we have now compiles with GHC HEAD and I stick it | in separate branch which GHC folk will now track and apply any API | patches to. Unless something changes by tomorrow, this will most likely | be what master is at right now, perhaps with a single change to the | version in cabal file. | | 2. I make the master branch work with 7.8.3 (and possibly 7.8.x) and do | development without worrying about any API changes in HEAD, releasing | as often as I need to. | | 3. At GHC release time, I update master with API changes so that up-to- | date Haddock is ready to be used to generate the docs and ship with the | compiler. | | I don't know what the GHC branch name will be yet. 'ghc-head' makes | most sense but IIRC Herbert had some objections as it had been used in | the past for something else, but maybe he can pitch in. | | The only thing I require from GHC folk is to simply use that branch and | not push/pull to/from master unless contributing feature patches or | trying to port some fixes into HEAD version for whatever reason. | | Thanks! | | -- | Mateusz K. | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | http://www.haskell.org/mailman/listinfo/ghc-devs From alan.zimm at gmail.com Fri Aug 15 15:36:15 2014 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Fri, 15 Aug 2014 17:36:15 +0200 Subject: Broken Data.Data instances In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF221AE08D@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF10438545@DB3PRD3001MB020.064d.mgd.msft.net> <53D14576.4060503@utwente.nl> <618BE556AADD624C9C918AA5D5911BEF104387F2@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF20E414F3@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF221AE08D@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: Did you look at https://phabricator.haskell.org/D157? 
It superseded https://phabricator.haskell.org/D153 On Fri, Aug 15, 2014 at 5:17 PM, Simon Peyton Jones wrote: > Eek. Glancing at this I see that every single data type has an extra > type parameter. To me this feels like a sledgehammer to crack a nut. What > is wrong with the type-function approach? > > > > Simon > > > > *From:* Alan & Kim Zimmerman [mailto:alan.zimm at gmail.com] > *Sent:* 13 August 2014 07:50 > *To:* Philip K.F. H?lzenspies > > *Cc:* Simon Peyton Jones; Edward Kmett; ghc-devs at haskell.org > *Subject:* Re: Broken Data.Data instances > > > > And I dipped my toes into the phabricator water, and uploaded a diff to > https://phabricator.haskell.org/D153 > > I left the lines long for now, so that it is clear that I simply added > parameters to existing type signatures. > > > > On Tue, Aug 12, 2014 at 10:51 PM, Alan & Kim Zimmerman < > alan.zimm at gmail.com> wrote: > > Status update > > I have worked through a proof of concept update to the GHC AST whereby the > type is provided as a parameter to each data type. This was basically a > mechanical process of changing type signatures, and required very little > actual code changes, being only to initialise the placeholder types. > > The enabling types are > > type PostTcType = Type -- Used for slots in the abstract > syntax > -- where we want to keep slot for a type > -- to be added by the type checker...but > -- [before typechecking it's just bogus] > > type PreTcType = () -- used before typechecking > > > class PlaceHolderType a where > placeHolderType :: a > > instance PlaceHolderType PostTcType where > > > placeHolderType = panic "Evaluated the place holder for a > PostTcType" > > instance PlaceHolderType PreTcType where > placeHolderType = () > > These are used to replace all instances of PostTcType in the hsSyn types. 
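Alan's parameterised-AST scheme above can be shown in miniature. The HsExpr, PostTc and GRHS-string types here are illustrative stand-ins, not GHC's actual hsSyn, which has hundreds of constructors:

```haskell
-- Toy version of the "type slot as a parameter" scheme from the status
-- update; all names here are stand-ins for GHC's real types.
class PlaceHolderType a where
  placeHolderType :: a

newtype PostTc = PostTc String             -- stands in for Type
instance PlaceHolderType PostTc where
  placeHolderType = error "Evaluated the place holder for a PostTc"

instance PlaceHolderType () where          -- the pre-typechecking placeholder
  placeHolderType = ()

-- Each node carries the slot type as a parameter, so the phase is visible
-- in the type of the tree itself:
data HsExpr ty = HsMultiIf ty [String]

parsedIf :: HsExpr ()                      -- before type checking
parsedIf = HsMultiIf placeHolderType ["grhs1", "grhs2"]

typedIf :: HsExpr PostTc                   -- after type checking
typedIf = HsMultiIf (PostTc "Bool") ["grhs1", "grhs2"]
```

The point of the design is that `parsedIf`'s placeholder is a harmless (), while only the post-typechecking instance carries the panic, so a generic traversal over a parsed tree never touches a bottom.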
> > The change was applied against HEAD as of last friday, and can be found > here > > https://github.com/alanz/ghc/tree/wip/landmine-param > https://github.com/alanz/haddock/tree/wip/landmine-param > > They pass 'sh validate' with GHC 7.6.3, and compile against GHC 7.8.3. I > have not tried to validate that yet, have no reason to expect failure. > > Can I please get some feedback as to whether this is a worthwhile > change? > > > It is the first step to getting a generic traversal safe AST > > Regards > > Alan > > > > On Mon, Jul 28, 2014 at 5:45 PM, Alan & Kim Zimmerman > wrote: > > FYI I edited the paste at http://lpaste.net/108262 to show the problem > > > > On Mon, Jul 28, 2014 at 5:41 PM, Alan & Kim Zimmerman > wrote: > > I already tried that, the syntax does not seem to allow it. > > I suspect some higher form of sorcery will be required, as alluded to here > http://stackoverflow.com/questions/14133121/can-i-constrain-a-type-family > > Alan > > > > On Mon, Jul 28, 2014 at 4:55 PM, wrote: > > Dear Alan, > > > > I would think you would want to constrain the result, i.e. > > > > type family (Data (PostTcType a)) => PostTcType a where ? > > > > The Data-instance of ?a? doesn?t give you much if you have a ?PostTcType > a?. > > > > Your point about SYB-recognition of WrongPhase is, of course, a good one ;) > > > > Regards, > > Philip > > > > > > > > *From:* Alan & Kim Zimmerman [mailto:alan.zimm at gmail.com] > *Sent:* maandag 28 juli 2014 14:10 > *To:* Holzenspies, P.K.F. (EWI) > *Cc:* Simon Peyton Jones; Edward Kmett; ghc-devs at haskell.org > > > *Subject:* Re: Broken Data.Data instances > > > > Philip > > I think the main reason for the WrongPhase thing is to have something that > explicitly has a Data and Typeable instance, to allow generic (SYB) > traversal. If we can get by without this so much the better. 
> > On a related note, is there any way to constrain the 'a' in
> >
> > type family PostTcType a where
> >   PostTcType Id    = TcType
> >   PostTcType other = WrongPhaseTyp
> >
> > to have an instance of Data?
> >
> > I am experimenting with traversals over my earlier paste, and got stuck here (which is the reason the Show instances were commented out in the original).
> >
> > Alan
> >
> > On Mon, Jul 28, 2014 at 12:30 PM, wrote:
> >
> > Sorry about that... I'm having it out with my terminal server and the server seems to be winning. Here's another go:
> >
> > I always read the () as "there's nothing meaningful to stick in here, but I have to stick in something" so I don't necessarily want the WrongPhase-thing. There is very old commentary stating it would be lovely if someone could expose the PostTcType as a parameter of the AST-types, but that there are so many types and constructors, that it's a boring chore to do. Actually, I was hoping haRe would come up to speed to be able to do this. That being said, I think Simon's idea to turn PostTcType into a type-family is a better way altogether; it also documents intent, i.e. () may not say so much, but PostTcType RdrName says quite a lot.
> >
> > Simon commented that a lot of the internal structures aren't trees, but cyclic graphs, e.g. the TyCon for Maybe references the DataCons for Just and Nothing, which again refer to the TyCon for Maybe. I was wondering whether it would be possible to make stateful lenses for this. Of course, for specific cases, we could do this, but I wonder if it is also possible to have lenses remember the things they visited and not visit them twice. Any ideas on this, Edward?
> >
> > Regards,
> > Philip
> >
> > *From:* Alan & Kim Zimmerman [mailto:alan.zimm at gmail.com]
> > *Sent:* maandag 28 juli 2014 11:14
> > *To:* Simon Peyton Jones
> > *Cc:* Edward Kmett; Holzenspies, P.K.F.
(EWI); ghc-devs > > > *Subject:* Re: Broken Data.Data instances > > > > I have made a conceptual example of this here http://lpaste.net/108262 > > Alan > > > > On Mon, Jul 28, 2014 at 9:50 AM, Alan & Kim Zimmerman > wrote: > > What about creating a specific type with a single constructor for the "not > relevant to this phase" type to be used instead of () above? That would > also clearly document what was going on. > > Alan > > > > On Mon, Jul 28, 2014 at 9:14 AM, Simon Peyton Jones > wrote: > > I've had to mangle a bunch of hand-written Data instances and push out > patches to a dozen packages that used to be built this way before I > convinced the authors to switch to safer versions of Data. Using virtual > smart constructors like we do now in containers and Text where needed can > be used to preserve internal invariants, etc. > > > > If the ?hand grenades? are the PostTcTypes, etc, then I can explain why > they are there. > > > > There simply is no sensible type you can put before the type checker > runs. For example one of the constructors in HsExpr is > > | HsMultiIf PostTcType [LGRHS id (LHsExpr id)] > > After type checking we know what type the thing has, but before we have no > clue. > > > > We could get around this by saying > > type PostTcType = Maybe TcType > > but that would mean that every post-typechecking consumer would need a > redundant pattern-match on a Just that would always succeed. > > > > It?s nothing deeper than that. Adding Maybes everywhere would be > possible, just clunky. > > > > > > However we now have type functions, and HsExpr is parameterised by an ?id? > parameter, which changes from RdrName (after parsing) to Name (after > renaming) to Id (after typechecking). 
So we could do this: > > | HsMultiIf (PostTcType id) [LGRHS id (LHsExpr id)] > > and define PostTcType as a closed type family thus > > > > type family PostTcType a where > > PostTcType Id = TcType > > PostTcType other = () > > > > That would be better than filling it with bottoms. But it might not help > with generic programming, because there?d be a component whose type wasn?t > fixed. I have no idea how generics and type functions interact. > > > > Simon > > > > *From:* Edward Kmett [mailto:ekmett at gmail.com] > *Sent:* 27 July 2014 18:27 > *To:* p.k.f.holzenspies at utwente.nl > *Cc:* alan.zimm at gmail.com; Simon Peyton Jones; ghc-devs > > > *Subject:* Re: Broken Data.Data instances > > > > Philip, Alan, > > > > If you need a hand, I'm happy to pitch in guidance. > > > > I've had to mangle a bunch of hand-written Data instances and push out > patches to a dozen packages that used to be built this way before I > convinced the authors to switch to safer versions of Data. Using virtual > smart constructors like we do now in containers and Text where needed can > be used to preserve internal invariants, etc. > > > > This works far better for users of the API than just randomly throwing > them a live hand grenade. As I recall, these little grenades in generic > programming over the GHC API have been a constant source of pain for > libraries like haddock. > > > > Simon, > > > > It seems to me that regarding circular data structures, nothing prevents > you from walking a circular data structure with Data.Data. You can generate > a new one productively that looks just like the old with the contents > swapped out, it is indistinguishable to an observer if the fixed point is > lost, and a clever observer can use observable sharing to get it back, > supposing that they are allowed to try. 
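Simon's closed-family sketch above can be tried standalone. The Id, RdrName and TcType definitions below are stand-ins so the example compiles on its own (the real ones live inside GHC):

```haskell
{-# LANGUAGE TypeFamilies #-}
-- The closed type family from the sketch, with stand-in definitions of
-- Id, RdrName and TcType so this is runnable outside GHC's source tree.
newtype Id      = Id String
newtype RdrName = RdrName String
newtype TcType  = TcType String

type family PostTcType a where
  PostTcType Id    = TcType
  PostTcType other = ()

-- After typechecking the slot genuinely holds a type...
typedSlot :: PostTcType Id
typedSlot = TcType "Int -> Bool"

-- ...while before it, the family collapses the slot to (), so there is
-- no bottom to fill in and nothing for a traversal to trip over:
renamedSlot :: PostTcType RdrName
renamedSlot = ()
```

This shows why it is "better than filling it with bottoms": the pre-typechecking value is an honest (), not a panic. Simon's caveat stands, though: because `PostTcType id` is not a fixed type, a generic (SYB-style) traversal over the family application needs extra constraints to dispatch on it.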
> > > > Alternately, we could use the 'virtual constructor' trick there to break > the cycle and reintroduce it, but I'm less enthusiastic about that idea, > even if it is simpler in many ways. > > > > -Edward > > > > On Sun, Jul 27, 2014 at 10:17 AM, wrote: > > Alan, > > In that case, let's have a short feedback-loop between the two of us. It > seems many of these files (Name.lhs, for example) are really stable through > the repo-history. It would be nice to have one bigger refactoring all in > one go (some of the code could use a polish, a lot of code seems removable). > > Regards, > Philip > ------------------------------ > > *Van:* Alan & Kim Zimmerman [alan.zimm at gmail.com] > *Verzonden:* vrijdag 25 juli 2014 13:44 > *Aan:* Simon Peyton Jones > *CC:* Holzenspies, P.K.F. (EWI); ghc-devs at haskell.org > *Onderwerp:* Re: Broken Data.Data instances > > By the way, I would be happy to attempt this task, if the concept is > viable. > > > > On Thu, Jul 24, 2014 at 11:23 PM, Alan & Kim Zimmerman < > alan.zimm at gmail.com> wrote: > > While we are talking about fixing traversals, how about getting rid of > the phase specific panic initialisers for placeHolderType, placeHolderKind > and friends? > > In order to safely traverse with SYB, the following needs to be inserted > into all the SYB schemes (see > > https://github.com/alanz/HaRe/blob/master/src/Language/Haskell/Refact/Utils/GhcUtils.hs > ) > > -- Check the Typeable items > checkItemStage1 :: (Typeable a) => SYB.Stage -> a -> Bool > checkItemStage1 stage x = (const False `SYB.extQ` postTcType `SYB.extQ` > fixity `SYB.extQ` nameSet) x > where nameSet = const (stage `elem` [SYB.Parser,SYB.TypeChecker]) :: > GHC.NameSet -> Bool > postTcType = const (stage < SYB.TypeChecker ) :: > GHC.PostTcType -> Bool > fixity = const (stage < SYB.Renamer ) :: > GHC.Fixity -> Bool > > And in addition HsCmdTop and ParStmtBlock are initialised with explicit > 'undefined values. 
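The stage check quoted above leans on SYB's `extQ` from the syb package. Its behaviour is easy to replicate with just base's Typeable, which also makes clear why the guard works; the Fixity stand-in and the `checkItem` name are illustrative, and only the fixity case of the real three-way check is shown:

```haskell
import Data.Typeable (Typeable, cast)

-- base-only equivalent of SYB's extQ: run the type-specific query when the
-- runtime-type cast succeeds, otherwise fall back to the generic default.
extQ :: (Typeable a, Typeable b) => (a -> r) -> (b -> r) -> a -> r
extQ generic specific x = maybe (generic x) specific (cast x)

data Stage = Parser | Renamer | TypeChecker deriving (Eq, Ord, Show)

newtype Fixity = Fixity Int     -- stand-in for GHC's Fixity

-- True means "this slot is a landmine at this stage, skip it":
checkItem :: Typeable a => Stage -> a -> Bool
checkItem stage = const False `extQ` (\(Fixity _) -> stage < Renamer)
```

Before the renamer has run, `checkItem Parser (Fixity 9)` is True and the traversal skips the node; everything that is not a Fixity falls through to `const False` and gets visited normally. This is exactly the book-keeping Alan is arguing the API should not force on its users.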
> > Perhaps use an initialiser that can have its panic turned off when called > via the GHC API? > > Regards > > Alan > > > > > > On Thu, Jul 24, 2014 at 11:06 PM, Simon Peyton Jones < > simonpj at microsoft.com> wrote: > > So... does anyone object to me changing these "broken" instances with > the ones given by DeriveDataTypeable? > > That?s fine with me provided (a) the default behaviour is not immediate > divergence (which it might well be), and (b) the pitfalls are documented. > > > > Simon > > > > *From:* "Philip K.F. H?lzenspies" [mailto:p.k.f.holzenspies at utwente.nl] > *Sent:* 24 July 2014 18:42 > *To:* Simon Peyton Jones > *Cc:* ghc-devs at haskell.org > *Subject:* Re: Broken Data.Data instances > > > > Dear Simon, et al, > > These are very good points to make for people writing such traversals and > queries. I would be more than happy to write a page on the pitfalls etc. on > the wiki, but in my experience so far, exploring the innards of GHC is > tremendously helped by trying small things out and showing (bits of) the > intermediate structures. For me, personally, this has always been hindered > by the absence of good instances of Data and/or Show (not having to bring > DynFlags and not just visualising with the pretty printer are very helpful). > > So... does anyone object to me changing these "broken" instances with the > ones given by DeriveDataTypeable? > > Also, many of these internal data structures could be provided with useful > lenses to improve such traversals further. Anyone ever go at that? Would be > people be interested? > > Regards, > Philip > > *Simon Peyton Jones* > > 24 Jul 2014 18:22 > > GHC?s data structures are often mutually recursive. e.g. > > ? The TyCon for Maybe contains the DataCon for Just > > ? The DataCon For just contains Just?s type > > ? Just?s type contains the TyCon for Maybe > > > > So any attempt to recursively walk over all these structures, as you would > a tree, will fail. > > > > Also there?s a lot of sharing. 
For example, every occurrence of ?map? is > a Var, and inside that Var is map?s type, its strictness, its rewrite RULE, > etc etc. In walking over a term you may not want to walk over all that > stuff at every occurrence of map. > > > > Maybe that?s it; I?m not certain since I did not write the Data instances > for any of GHC?s types > > > > Simon > > > > *From:* ghc-devs [mailto:ghc-devs-bounces at haskell.org > ] *On Behalf Of * > p.k.f.holzenspies at utwente.nl > *Sent:* 24 July 2014 16:42 > *To:* ghc-devs at haskell.org > *Subject:* Broken Data.Data instances > > > > Dear GHC-ers, > > > > Is there a reason for explicitly broken Data.Data instances? Case in point: > > > > > instance Data Var where > > > -- don't traverse? > > > toConstr _ = abstractConstr "Var" > > > gunfold _ _ = error "gunfold" > > > dataTypeOf _ = mkNoRepType "Var" > > > > I understand (vaguely) arguments about abstract data types, but this also > excludes convenient queries that can, e.g. extract all types from a > CoreExpr. I had hoped to do stuff like this: > > > > > collect :: (Typeable b, Data a, MonadPlus m) => a -> m b > > > collect = everything mplus $ mkQ mzero return > > > > > > allTypes :: CoreExpr -> [Type] > > > allTypes = collect > > > > Especially when still exploring (parts of) the GHC API, being able to > extract things in this fashion is very helpful. SYB?s ?everything? being > broken by these instances, not so much. > > > > Would a patch ?fixing? these instances be acceptable? > > > > Regards, > > Philip > > > > > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > > > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image001.jpg
Type: image/jpeg
Size: 1247 bytes
Desc: not available
URL: 

From david.feuer at gmail.com Fri Aug 15 15:41:29 2014
From: david.feuer at gmail.com (David Feuer)
Date: Fri, 15 Aug 2014 11:41:29 -0400
Subject: [GHC] #9434: GHC.List.reverse does not fuse
In-Reply-To: <060.32e78b021f87c91f01e47a23bccbf564@haskell.org>
References: <045.307e264233b9ba83858f4f16f33fa96f@haskell.org> <060.32e78b021f87c91f01e47a23bccbf564@haskell.org>
Message-ID: 

I'm having trouble when it doesn't fuse: it ends up with duplicate bindings at the top level, because build gets inlined n times, and the result lifted out. Nothing's *wrong* with the code, except that there are multiple copies of it.

On Aug 15, 2014 10:58 AM, "GHC" wrote:
> #9434: GHC.List.reverse does not fuse
> -------------------------------------+-------------------------------------
>   Reporter: dfeuer                   | Owner:
>   Type: bug                          | Status: new
>   Priority: normal                   | Milestone:
>   Component: libraries/base          | Version: 7.9
>                                      | Keywords:
>   Resolution:                        | Architecture: Unknown/Multiple
>   Operating System:                  | Difficulty: Easy (less than 1
>     Unknown/Multiple                 |   hour)
>   Type of failure: Runtime           | Blocked By:
>     performance bug                  | Related Tickets:
>   Test Case:                         |
>   Blocking:                          |
>   Differential Revisions:            |
> -------------------------------------+-------------------------------------
>
> Comment (by simonpj):
>
> Great. Just check that when fusion ''doesn't'' take place, the result is good. And do a `nofib` comparison for good luck. Then submit a patch.
>
> Thanks for doing all this work on fusion, David.
>
> Simon
>
> --
> Ticket URL: 
> GHC 
> The Glasgow Haskell Compiler
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From simonpj at microsoft.com Fri Aug 15 16:11:56 2014
From: simonpj at microsoft.com (Simon Peyton Jones)
Date: Fri, 15 Aug 2014 16:11:56 +0000
Subject: Broken Data.Data instances
In-Reply-To: 
References: <618BE556AADD624C9C918AA5D5911BEF10438545@DB3PRD3001MB020.064d.mgd.msft.net> <53D14576.4060503@utwente.nl> <618BE556AADD624C9C918AA5D5911BEF104387F2@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF20E414F3@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF221AE08D@DB3PRD3001MB020.064d.mgd.msft.net>
Message-ID: <618BE556AADD624C9C918AA5D5911BEF221AEAEF@DB3PRD3001MB020.064d.mgd.msft.net>

Ah, I see. Is there some way for D153 to be retired, then, to avoid inattentive people looking at it? (I'm wading through a week's worth of email backlog.) I'll look at D157

S

From: Alan & Kim Zimmerman [mailto:alan.zimm at gmail.com]
Sent: 15 August 2014 16:36
To: Simon Peyton Jones
Cc: Philip K.F. Hölzenspies; Edward Kmett; ghc-devs at haskell.org
Subject: Re: Broken Data.Data instances

Did you look at https://phabricator.haskell.org/D157? It superseded https://phabricator.haskell.org/D153

On Fri, Aug 15, 2014 at 5:17 PM, Simon Peyton Jones wrote:
> Eek. Glancing at this I see that every single data type has an extra type parameter. To me this feels like a sledgehammer to crack a nut. What is wrong with the type-function approach?
>
> Simon
>
> *From:* Alan & Kim Zimmerman [mailto:alan.zimm at gmail.com]
> *Sent:* 13 August 2014 07:50
> *To:* Philip K.F. Hölzenspies
> *Cc:* Simon Peyton Jones; Edward Kmett; ghc-devs at haskell.org
> *Subject:* Re: Broken Data.Data instances
>
> And I dipped my toes into the phabricator water, and uploaded a diff to https://phabricator.haskell.org/D153
>
> I left the lines long for now, so that it is clear that I simply added parameters to existing type signatures.
On Tue, Aug 12, 2014 at 10:51 PM, Alan & Kim Zimmerman > wrote: Status update I have worked through a proof of concept update to the GHC AST whereby the type is provided as a parameter to each data type. This was basically a mechanical process of changing type signatures, and required very little actual code changes, being only to initialise the placeholder types. The enabling types are type PostTcType = Type -- Used for slots in the abstract syntax -- where we want to keep slot for a type -- to be added by the type checker...but -- [before typechecking it's just bogus] type PreTcType = () -- used before typechecking class PlaceHolderType a where placeHolderType :: a instance PlaceHolderType PostTcType where placeHolderType = panic "Evaluated the place holder for a PostTcType" instance PlaceHolderType PreTcType where placeHolderType = () These are used to replace all instances of PostTcType in the hsSyn types. The change was applied against HEAD as of last friday, and can be found here https://github.com/alanz/ghc/tree/wip/landmine-param https://github.com/alanz/haddock/tree/wip/landmine-param They pass 'sh validate' with GHC 7.6.3, and compile against GHC 7.8.3. I have not tried to validate that yet, have no reason to expect failure. Can I please get some feedback as to whether this is a worthwhile change? It is the first step to getting a generic traversal safe AST Regards Alan On Mon, Jul 28, 2014 at 5:45 PM, Alan & Kim Zimmerman > wrote: FYI I edited the paste at http://lpaste.net/108262 to show the problem On Mon, Jul 28, 2014 at 5:41 PM, Alan & Kim Zimmerman > wrote: I already tried that, the syntax does not seem to allow it. I suspect some higher form of sorcery will be required, as alluded to here http://stackoverflow.com/questions/14133121/can-i-constrain-a-type-family Alan On Mon, Jul 28, 2014 at 4:55 PM, > wrote: Dear Alan, I would think you would want to constrain the result, i.e. type family (Data (PostTcType a)) => PostTcType a where ? 
The Data-instance of ?a? doesn?t give you much if you have a ?PostTcType a?. Your point about SYB-recognition of WrongPhase is, of course, a good one ;) Regards, Philip From: Alan & Kim Zimmerman [mailto:alan.zimm at gmail.com] Sent: maandag 28 juli 2014 14:10 To: Holzenspies, P.K.F. (EWI) Cc: Simon Peyton Jones; Edward Kmett; ghc-devs at haskell.org Subject: Re: Broken Data.Data instances Philip I think the main reason for the WrongPhase thing is to have something that explicitly has a Data and Typeable instance, to allow generic (SYB) traversal. If we can get by without this so much the better. On a related note, is there any way to constrain the 'a' in type family PostTcType a where PostTcType Id = TcType PostTcType other = WrongPhaseTyp to have an instance of Data? I am experimenting with traversals over my earlier paste, and got stuck here (which is the reason the Show instances were commentet out in the original). Alan On Mon, Jul 28, 2014 at 12:30 PM, > wrote: Sorry about that? I?m having it out with my terminal server and the server seems to be winning. Here?s another go: I always read the () as ?there?s nothing meaningful to stick in here, but I have to stick in something? so I don?t necessarily want the WrongPhase-thing. There is very old commentary stating it would be lovely if someone could expose the PostTcType as a parameter of the AST-types, but that there are so many types and constructors, that it?s a boring chore to do. Actually, I was hoping haRe would come up to speed to be able to do this. That being said, I think Simon?s idea to turn PostTcType into a type-family is a better way altogether; it also documents intent, i.e. () may not say so much, but PostTcType RdrName says quite a lot. Simon commented that a lot of the internal structures aren?t trees, but cyclic graphs, e.g. the TyCon for Maybe references the DataCons for Just and Nothing, which again refer to the TyCon for Maybe. 
I was wondering whether it would be possible to make stateful lenses for this. Of course, for specific cases, we could do this, but I wonder if it is also possible to have lenses remember the things they visited and not visit them twice. Any ideas on this, Edward? Regards, Philip From: Alan & Kim Zimmerman [mailto:alan.zimm at gmail.com] Sent: maandag 28 juli 2014 11:14 To: Simon Peyton Jones Cc: Edward Kmett; Holzenspies, P.K.F. (EWI); ghc-devs Subject: Re: Broken Data.Data instances I have made a conceptual example of this here http://lpaste.net/108262 Alan On Mon, Jul 28, 2014 at 9:50 AM, Alan & Kim Zimmerman > wrote: What about creating a specific type with a single constructor for the "not relevant to this phase" type to be used instead of () above? That would also clearly document what was going on. Alan On Mon, Jul 28, 2014 at 9:14 AM, Simon Peyton Jones > wrote: I've had to mangle a bunch of hand-written Data instances and push out patches to a dozen packages that used to be built this way before I convinced the authors to switch to safer versions of Data. Using virtual smart constructors like we do now in containers and Text where needed can be used to preserve internal invariants, etc. If the ?hand grenades? are the PostTcTypes, etc, then I can explain why they are there. There simply is no sensible type you can put before the type checker runs. For example one of the constructors in HsExpr is | HsMultiIf PostTcType [LGRHS id (LHsExpr id)] After type checking we know what type the thing has, but before we have no clue. We could get around this by saying type PostTcType = Maybe TcType but that would mean that every post-typechecking consumer would need a redundant pattern-match on a Just that would always succeed. It?s nothing deeper than that. Adding Maybes everywhere would be possible, just clunky. However we now have type functions, and HsExpr is parameterised by an ?id? 
parameter, which changes from RdrName (after parsing) to Name (after renaming) to Id (after typechecking). So we could do this: | HsMultiIf (PostTcType id) [LGRHS id (LHsExpr id)] and define PostTcType as a closed type family thus type family PostTcType a where PostTcType Id = TcType PostTcType other = () That would be better than filling it with bottoms. But it might not help with generic programming, because there?d be a component whose type wasn?t fixed. I have no idea how generics and type functions interact. Simon From: Edward Kmett [mailto:ekmett at gmail.com] Sent: 27 July 2014 18:27 To: p.k.f.holzenspies at utwente.nl Cc: alan.zimm at gmail.com; Simon Peyton Jones; ghc-devs Subject: Re: Broken Data.Data instances Philip, Alan, If you need a hand, I'm happy to pitch in guidance. I've had to mangle a bunch of hand-written Data instances and push out patches to a dozen packages that used to be built this way before I convinced the authors to switch to safer versions of Data. Using virtual smart constructors like we do now in containers and Text where needed can be used to preserve internal invariants, etc. This works far better for users of the API than just randomly throwing them a live hand grenade. As I recall, these little grenades in generic programming over the GHC API have been a constant source of pain for libraries like haddock. Simon, It seems to me that regarding circular data structures, nothing prevents you from walking a circular data structure with Data.Data. You can generate a new one productively that looks just like the old with the contents swapped out, it is indistinguishable to an observer if the fixed point is lost, and a clever observer can use observable sharing to get it back, supposing that they are allowed to try. Alternately, we could use the 'virtual constructor' trick there to break the cycle and reintroduce it, but I'm less enthusiastic about that idea, even if it is simpler in many ways. 
-Edward On Sun, Jul 27, 2014 at 10:17 AM, > wrote: Alan, In that case, let's have a short feedback-loop between the two of us. It seems many of these files (Name.lhs, for example) are really stable through the repo-history. It would be nice to have one bigger refactoring all in one go (some of the code could use a polish, a lot of code seems removable). Regards, Philip ________________________________ Van: Alan & Kim Zimmerman [alan.zimm at gmail.com] Verzonden: vrijdag 25 juli 2014 13:44 Aan: Simon Peyton Jones CC: Holzenspies, P.K.F. (EWI); ghc-devs at haskell.org Onderwerp: Re: Broken Data.Data instances By the way, I would be happy to attempt this task, if the concept is viable. On Thu, Jul 24, 2014 at 11:23 PM, Alan & Kim Zimmerman > wrote: While we are talking about fixing traversals, how about getting rid of the phase specific panic initialisers for placeHolderType, placeHolderKind and friends? In order to safely traverse with SYB, the following needs to be inserted into all the SYB schemes (see https://github.com/alanz/HaRe/blob/master/src/Language/Haskell/Refact/Utils/GhcUtils.hs) -- Check the Typeable items checkItemStage1 :: (Typeable a) => SYB.Stage -> a -> Bool checkItemStage1 stage x = (const False `SYB.extQ` postTcType `SYB.extQ` fixity `SYB.extQ` nameSet) x where nameSet = const (stage `elem` [SYB.Parser,SYB.TypeChecker]) :: GHC.NameSet -> Bool postTcType = const (stage < SYB.TypeChecker ) :: GHC.PostTcType -> Bool fixity = const (stage < SYB.Renamer ) :: GHC.Fixity -> Bool And in addition HsCmdTop and ParStmtBlock are initialised with explicit 'undefined values. Perhaps use an initialiser that can have its panic turned off when called via the GHC API? Regards Alan On Thu, Jul 24, 2014 at 11:06 PM, Simon Peyton Jones > wrote: So... does anyone object to me changing these "broken" instances with the ones given by DeriveDataTypeable? 
That?s fine with me provided (a) the default behaviour is not immediate divergence (which it might well be), and (b) the pitfalls are documented. Simon From: "Philip K.F. H?lzenspies" [mailto:p.k.f.holzenspies at utwente.nl] Sent: 24 July 2014 18:42 To: Simon Peyton Jones Cc: ghc-devs at haskell.org Subject: Re: Broken Data.Data instances Dear Simon, et al, These are very good points to make for people writing such traversals and queries. I would be more than happy to write a page on the pitfalls etc. on the wiki, but in my experience so far, exploring the innards of GHC is tremendously helped by trying small things out and showing (bits of) the intermediate structures. For me, personally, this has always been hindered by the absence of good instances of Data and/or Show (not having to bring DynFlags and not just visualising with the pretty printer are very helpful). So... does anyone object to me changing these "broken" instances with the ones given by DeriveDataTypeable? Also, many of these internal data structures could be provided with useful lenses to improve such traversals further. Anyone ever go at that? Would be people be interested? Regards, Philip [cid:image001.jpg at 01CFB8AC.03F07650] Simon Peyton Jones 24 Jul 2014 18:22 GHC?s data structures are often mutually recursive. e.g. ? The TyCon for Maybe contains the DataCon for Just ? The DataCon For just contains Just?s type ? Just?s type contains the TyCon for Maybe So any attempt to recursively walk over all these structures, as you would a tree, will fail. Also there?s a lot of sharing. For example, every occurrence of ?map? is a Var, and inside that Var is map?s type, its strictness, its rewrite RULE, etc etc. In walking over a term you may not want to walk over all that stuff at every occurrence of map. 
Maybe that's it; I'm not certain, since I did not write the Data instances for any of GHC's types. Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of p.k.f.holzenspies at utwente.nl Sent: 24 July 2014 16:42 To: ghc-devs at haskell.org Subject: Broken Data.Data instances Dear GHC-ers, Is there a reason for explicitly broken Data.Data instances? Case in point: > instance Data Var where > -- don't traverse? > toConstr _ = abstractConstr "Var" > gunfold _ _ = error "gunfold" > dataTypeOf _ = mkNoRepType "Var" I understand (vaguely) the arguments about abstract data types, but this also excludes convenient queries that can, e.g., extract all types from a CoreExpr. I had hoped to do stuff like this: > collect :: (Typeable b, Data a, MonadPlus m) => a -> m b > collect = everything mplus $ mkQ mzero return > > allTypes :: CoreExpr -> [Type] > allTypes = collect Especially when still exploring (parts of) the GHC API, being able to extract things in this fashion is very helpful. SYB's 'everything' being broken by these instances, not so much. Would a patch "fixing" these instances be acceptable? Regards, Philip _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://www.haskell.org/mailman/listinfo/ghc-devs
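To make the query style Philip sketches concrete: once the Data instances are honest (derived rather than stubbed out), 'collect'/'allTypes' works directly with SYB. A minimal, self-contained sketch, using a tiny invented AST (Ty/Expr here are hypothetical stand-ins for GHC's Type/CoreExpr, whose broken Var instance is exactly what blocks this):

```haskell
{-# LANGUAGE DeriveDataTypeable #-}
-- Sketch: the 'collect'/'allTypes' query from the thread, run against a
-- small stand-in AST whose Data instances are derived, so SYB's
-- 'everything' can actually traverse it.
import Data.Data (Data, Typeable)
import Data.Generics (everything, mkQ)   -- from the syb package
import Control.Monad (MonadPlus, mplus, mzero)

data Ty = TyInt | TyFun Ty Ty
  deriving (Data, Typeable, Show, Eq)

data Expr = Var String Ty | App Expr Expr | Lam String Ty Expr
  deriving (Data, Typeable, Show)

-- Collect every subterm of type b anywhere inside a value of type a.
collect :: (Typeable b, Data a, MonadPlus m) => a -> m b
collect = everything mplus (mkQ mzero return)

allTypes :: Expr -> [Ty]
allTypes = collect

main :: IO ()
main = print (allTypes (Lam "x" TyInt (Var "x" TyInt)))
```

On GHC's real AST this would only terminate if the instances avoid the mutual-recursion and sharing pitfalls Simon describes, which is the crux of the thread.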
From dan.doel at gmail.com Fri Aug 15 16:45:12 2014 From: dan.doel at gmail.com (Dan Doel) Date: Fri, 15 Aug 2014 12:45:12 -0400 Subject: [GHC] #9434: GHC.List.reverse does not fuse In-Reply-To: References: <045.307e264233b9ba83858f4f16f33fa96f@haskell.org> <060.32e78b021f87c91f01e47a23bccbf564@haskell.org> Message-ID: Isn't this kind of thing fixed for other functions by rewriting back into the direct recursive definition if no fusion happens? On Fri, Aug 15, 2014 at 11:41 AM, David Feuer wrote: > I'm having trouble when it doesn't fuse: it ends up with duplicate bindings > at the top level, because build gets inlined n times, and the result lifted > out. Nothing's *wrong* with the code, except that there are multiple copies > of it. > On Aug 15, 2014 10:58 AM, "GHC" wrote: > >> #9434: GHC.List.reverse does not fuse >> >> -------------------------------------+------------------------------------- >> Reporter: dfeuer | Owner: >> Type: bug | Status: new >> Priority: normal | Milestone: >> Component: | Version: 7.9 >> libraries/base | Keywords: >> Resolution: | Architecture: Unknown/Multiple >> Operating System: | Difficulty: Easy (less than >> 1 >> Unknown/Multiple | hour) >> Type of failure: Runtime | Blocked By: >> performance bug | Related Tickets: >> Test Case: | >> Blocking: | >> Differential Revisions: | >> >> -------------------------------------+------------------------------------- >> >> Comment (by simonpj): >> >> Great. Just check that when fusion ''doesn't'' take place, the result is >> good. And do a `nofib` comparison for good luck. Then submit a patch. >> >> Thanks for doing all this work on fusion, David.
>> >> Simon >> >> -- >> Ticket URL: >> GHC >> The Glasgow Haskell Compiler >> > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.feuer at gmail.com Fri Aug 15 16:46:56 2014 From: david.feuer at gmail.com (David Feuer) Date: Fri, 15 Aug 2014 12:46:56 -0400 Subject: [GHC] #9434: GHC.List.reverse does not fuse In-Reply-To: References: <045.307e264233b9ba83858f4f16f33fa96f@haskell.org> <060.32e78b021f87c91f01e47a23bccbf564@haskell.org> Message-ID: Yes, but I'm not sure how to do that, especially because foldl doesn't have the phased NOINLINE that foldr does. On Aug 15, 2014 12:45 PM, "Dan Doel" wrote: > Isn't this kind of thing fixed for other functions by rewriting back into > the direct recursive definition if no fusion happens? > > > On Fri, Aug 15, 2014 at 11:41 AM, David Feuer > wrote: > >> I'm having trouble when it doesn't fuse?it ends up with duplicate >> bindings at the top level, because build gets inlined n times, and the >> result lifted out. Nothing's *wrong* with the code, except that there are >> multiple copies of it. >> On Aug 15, 2014 10:58 AM, "GHC" wrote: >> >>> #9434: GHC.List.reverse does not fuse >>> >>> -------------------------------------+------------------------------------- >>> Reporter: dfeuer | Owner: >>> Type: bug | Status: new >>> Priority: normal | Milestone: >>> Component: | Version: 7.9 >>> libraries/base | Keywords: >>> Resolution: | Architecture: >>> Unknown/Multiple >>> Operating System: | Difficulty: Easy (less >>> than 1 >>> Unknown/Multiple | hour) >>> Type of failure: Runtime | Blocked By: >>> performance bug | Related Tickets: >>> Test Case: | >>> Blocking: | >>> Differential Revisions: | >>> >>> -------------------------------------+------------------------------------- >>> >>> Comment (by simonpj): >>> >>> Great. 
Just check that when fusion ''doesn't'' take place, the result >>> is >>> good. And do a `nofib` comparison for good luck. Then submit a patch. >>> >>> Thanks for doing all this work on fusion, David. >>> >>> Simon >>> >>> -- >>> Ticket URL: >>> GHC >>> The Glasgow Haskell Compiler >>> >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dan.doel at gmail.com Fri Aug 15 16:57:02 2014 From: dan.doel at gmail.com (Dan Doel) Date: Fri, 15 Aug 2014 12:57:02 -0400 Subject: [GHC] #9434: GHC.List.reverse does not fuse In-Reply-To: References: <045.307e264233b9ba83858f4f16f33fa96f@haskell.org> <060.32e78b021f87c91f01e47a23bccbf564@haskell.org> Message-ID: Make foldl's inline phased, and see what happens? Presumably the reason it doesn't have a phase limit yet is that it never participated in any fusion before, so there was never a reason to not just inline. Other than that it seems like: reverse xs => rewrite build (\c n -> foldl (noinlineFlip c) n xs) => inline foldl (noinlineFlip (:)) [] xs => rewrite reverse xs where I assume you need a special flip which may or may not exist in these modules already. On Fri, Aug 15, 2014 at 12:46 PM, David Feuer wrote: > Yes, but I'm not sure how to do that, especially because foldl doesn't > have the phased NOINLINE that foldr does. > On Aug 15, 2014 12:45 PM, "Dan Doel" wrote: > >> Isn't this kind of thing fixed for other functions by rewriting back into >> the direct recursive definition if no fusion happens? >> >> >> On Fri, Aug 15, 2014 at 11:41 AM, David Feuer >> wrote: >> >>> I'm having trouble when it doesn't fuse?it ends up with duplicate >>> bindings at the top level, because build gets inlined n times, and the >>> result lifted out. Nothing's *wrong* with the code, except that there are >>> multiple copies of it. 
>>> On Aug 15, 2014 10:58 AM, "GHC" wrote: >>> >>>> #9434: GHC.List.reverse does not fuse >>>> >>>> -------------------------------------+------------------------------------- >>>> Reporter: dfeuer | Owner: >>>> Type: bug | Status: new >>>> Priority: normal | Milestone: >>>> Component: | Version: 7.9 >>>> libraries/base | Keywords: >>>> Resolution: | Architecture: >>>> Unknown/Multiple >>>> Operating System: | Difficulty: Easy (less >>>> than 1 >>>> Unknown/Multiple | hour) >>>> Type of failure: Runtime | Blocked By: >>>> performance bug | Related Tickets: >>>> Test Case: | >>>> Blocking: | >>>> Differential Revisions: | >>>> >>>> -------------------------------------+------------------------------------- >>>> >>>> Comment (by simonpj): >>>> >>>> Great. Just check that when fusion ''doesn't'' take place, the result >>>> is >>>> good. And do a `nofib` comparison for good luck. Then submit a patch. >>>> >>>> Thanks for doing all this work on fusion, David. >>>> >>>> Simon >>>> >>>> -- >>>> Ticket URL: >>>> GHC >>>> The Glasgow Haskell Compiler >>>> >>> >>> _______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://www.haskell.org/mailman/listinfo/ghc-devs >>> >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From gergo at erdi.hu Sat Aug 16 08:29:14 2014 From: gergo at erdi.hu (Dr. ERDI Gergo) Date: Sat, 16 Aug 2014 16:29:14 +0800 (SGT) Subject: Unifying inferred and declared *existential* type variables Message-ID: Hi, Background: Type signatures for pattern synonyms are / can be explicit about the existentially-bound type variables of the pattern. For example, given the following definitions: data T where C :: (Eq a) => [a] -> (a, Bool) -> T pattern P x y = C x y the inferred type of P (with explicit foralls printed) is pattern type forall a. 
Eq a => P [a] (a, Bool) :: T My problem: Ticket #8968 is a good example of a situation where we need this pattern type signature to be entered by the user. So continuing with the previous example, the user should be able to write, e.g. pattern type forall b. Eq b => P [b] (b, Bool) : T So in this case, I have to unify the argument types [b] ~ [a] and (b, Bool) ~ (a, Bool), and then use the resulting coercions of the existentially-bound variables before calling the success continuation. So I generate a pattern synonym matcher as such (going with the previous example) (I've pushed my code to wip/T8584): $mP{v r0} :: forall t [sk]. T -> (forall b [sk]. Eq b [sk] => [b [sk]] -> (b [sk], Bool) -> t [sk]) -> t [sk] -> t [sk] $mP{v r0} = /\(@ t [sk]). \ ((scrut [lid] :: T)) ((cont [lid] :: forall b [sk]. Eq b [sk] => [b [sk]] -> (b [sk], Bool) -> t [sk])) ((fail [lid] :: t [sk])) -> case scrut of { C {@ a [ssk] ($dEq_aCt [lid] :: Eq a [ssk]) EvBindsVar} (x [lid] :: [a [ssk]]) (y [lid] :: (a [ssk], Bool)) -> cont b $dEq_aCr x y |> (cobox{v} [lid], _N)_N |> [cobox{v} [lid]]_N } <>} The two 'cobox'es are the results of unifyType'ing [a] with [b] and (a, Bool) with (b, Bool). So basically what I hoped to do was to pattern-match on 'C{@ a $dEqA} x y' and pass that to cont as 'b' and '$dEqB' by rewriting them with the coercions. (It's unfortunate that even with full -dppr-debug output, I can't see what's inside the 'cobox'es). However, when I try doing this, I end up with the error message SigGADT2.hs:10:9: Couldn't match type ?a [ssk]? with ?b [sk]? because type variable ?b [sk]? 
would escape its scope This (rigid, skolem) type variable is bound by the type signature for P :: [b [sk]] -> (b [sk], Bool) -> T at SigGADT2.hs:10:9 Expected type: [b [sk]] Actual type: [a [ssk]] Also, while the result of unifying '[b]' ~ '[a]' and '(b, Bool)' ~ '(a, Bool)' should take care of turning the 'a' bound by the constructor into the 'b' expected by the continuation function, it seems to me I'll need to do some extra magic to also turn the bound 'Eq a' evidence variable into the 'Eq b'. Obviously, I am missing a ton of stuff here. Can someone help me out? Thanks, Gergo -- .--= ULLA! =-----------------. \ http://gergo.erdi.hu \ `---= gergo at erdi.hu =-------' I love vegetarians - some of my favorite foods are vegetarians. From fuuzetsu at fuuzetsu.co.uk Sat Aug 16 14:59:51 2014 From: fuuzetsu at fuuzetsu.co.uk (Mateusz Kowalczyk) Date: Sat, 16 Aug 2014 15:59:51 +0100 Subject: Moving Haddock *development* out of GHC tree In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF221AE385@DB3PRD3001MB020.064d.mgd.msft.net> References: <53E45F2D.9000806@fuuzetsu.co.uk> <53EBE224.1060103@fuuzetsu.co.uk> <618BE556AADD624C9C918AA5D5911BEF221AE385@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <53EF71E7.5090804@fuuzetsu.co.uk> On 08/15/2014 04:32 PM, Simon Peyton Jones wrote: > Great. Please can what you do be documented clearly somewhere, with a link to that documentation from here https://ghc.haskell.org/trac/ghc/wiki/Repositories, and/or somewhere else suitable? > > Thanks > > Simon > Nothing on that page needs to change. The only thing that needs documenting is that any GHC dev pushing to Haddock needs to do so on the 'ghc-head' branch. I have made a change to the table at [1] and added a note, but perhaps there's another place that I need to make a change at that's not immediately obvious. Herbert kindly updated the sync-all script that defaults to the new branch so I think we're covered. Please don't hesitate to ask if you (plural) need help with something here.
[1]: https://ghc.haskell.org/trac/ghc/wiki/WorkingConventions/Git/Submodules -- Mateusz K. From hvriedel at gmail.com Sat Aug 16 15:34:46 2014 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Sat, 16 Aug 2014 17:34:46 +0200 Subject: Moving Haddock *development* out of GHC tree In-Reply-To: <53EF71E7.5090804@fuuzetsu.co.uk> (Mateusz Kowalczyk's message of "Sat, 16 Aug 2014 15:59:51 +0100") References: <53E45F2D.9000806@fuuzetsu.co.uk> <53EBE224.1060103@fuuzetsu.co.uk> <618BE556AADD624C9C918AA5D5911BEF221AE385@DB3PRD3001MB020.064d.mgd.msft.net> <53EF71E7.5090804@fuuzetsu.co.uk> Message-ID: <87ha1ca1nt.fsf@gmail.com> On 2014-08-16 at 16:59:51 +0200, Mateusz Kowalczyk wrote: [...] > Herbert kindly updated the sync-all script that > defaults to the new branch so I think we're covered. Minor correction: I did not touch the sync-all script at all. I merely declared a default branch in the .gitmodules file: http://git.haskell.org/ghc.git/commitdiff/03a8003e5d3aec97b3a14b2d3c774aad43e0456e From hvriedel at gmail.com Sun Aug 17 13:16:03 2014 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Sun, 17 Aug 2014 15:16:03 +0200 Subject: Wired-in data-constructors with UNPACKed fields Message-ID: <87a973z27g.fsf@gnu.org> Hello *, I'm a bit stuck with the wired-in type aspect of integer-gmp2 and was hoping someone with more experience in this area could provide direction on how to properly register the data definition data Integer = SI# Int# | Jp# {-# UNPACK #-} !BigNat | Jn# {-# UNPACK #-} !BigNat data BigNat = BN# ByteArray# with compiler/prelude/TysWiredIn.lhs Right now I'm getting the Lint-failure Unfolding of sqrInteger : Warning: In the expression: $wsqrBigNat dt Argument value doesn't match argument type: Fun type: ByteArray# -> BigNat Arg type: BigNat Arg: dt where sqrBigNat :: BigNat -> BigNat which seems to be caused by the UNPACK property not being handled correctly. 
The full error message can be found at http://git.haskell.org/ghc.git/commitdiff/13cb42bc8b6b26d3893d4ddcc22eeab36d39a0c7 and the other half of the integer-gmp2 patch can be found at http://git.haskell.org/ghc.git/commitdiff/b5ed2f277e551dcaade5837568e4cbb7dd811c04 or alternatively https://phabricator.haskell.org/D82 Thanks in advance, hvr From simonpj at microsoft.com Sun Aug 17 21:56:32 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Sun, 17 Aug 2014 21:56:32 +0000 Subject: Wired-in data-constructors with UNPACKed fields In-Reply-To: <87a973z27g.fsf@gnu.org> References: <87a973z27g.fsf@gnu.org> Message-ID: <618BE556AADD624C9C918AA5D5911BEF221BCDCC@DB3PRD3001MB020.064d.mgd.msft.net> Herbert You'll see that 'pcDataCon' in TysWiredIn ultimately calls pcDataConWithFixity'. And that builds a data constructor with a NoDataConRep field, comment "Wired-in types are too simple to need wrappers". But your wired-in type is NOT too simple to need a wrapper! You'll need to build a suitable DCR record (see DataCon.lhs), which will be something of a nuisance for you, although you can doubtless re-use utility functions that are currently used to build a DCR record. Alternatively, just put a ByteArray# as the argument of Jp# and Jn#. After all, you have Int# as the argument of SI#!
Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Herbert | Valerio Riedel | Sent: 17 August 2014 14:16 | To: ghc-devs | Subject: Wired-in data-constructors with UNPACKed fields | | Hello *, | | I'm a bit stuck with the wired-in type aspect of integer-gmp2 and was | hoping someone with more experience in this area could provide direction | on how to properly register the data definition | | data Integer = SI# Int# | | Jp# {-# UNPACK #-} !BigNat | | Jn# {-# UNPACK #-} !BigNat | | data BigNat = BN# ByteArray# | | with compiler/prelude/TysWiredIn.lhs | | Right now I'm getting the Lint-failure | | Unfolding of sqrInteger | : Warning: | In the expression: $wsqrBigNat dt | Argument value doesn't match argument type: | Fun type: ByteArray# -> BigNat | Arg type: BigNat | Arg: dt | | where | | sqrBigNat :: BigNat -> BigNat | | which seems to be caused by the UNPACK property not being handled | correctly. | | | | The full error message can be found at | | | http://git.haskell.org/ghc.git/commitdiff/13cb42bc8b6b26d3893d4ddcc22eeab | 36d39a0c7 | | and the other half of the integer-gmp2 patch can be found at | | | http://git.haskell.org/ghc.git/commitdiff/b5ed2f277e551dcaade5837568e4cbb | 7dd811c04 | | or alternatively | | https://phabricator.haskell.org/D82 | | | Thanks in advance, | hvr | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | http://www.haskell.org/mailman/listinfo/ghc-devs From simonpj at microsoft.com Sun Aug 17 22:11:48 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Sun, 17 Aug 2014 22:11:48 +0000 Subject: Unique as special boxing type & hidden constructors In-Reply-To: <13aaa2dd98944a3e95cc03c5139fbbb7@EXMBX31.ad.utwente.nl> References: <13aaa2dd98944a3e95cc03c5139fbbb7@EXMBX31.ad.utwente.nl> Message-ID: <618BE556AADD624C9C918AA5D5911BEF221BCE08@DB3PRD3001MB020.064d.mgd.msft.net> Re (1) I think this is a historical. 
A newtype wrapping an Int should be fine. I'd be ok with that change. Re (2), I think your question is: why does module Unique export the data type Unique abstractly, rather than exporting both the data type and its constructor. No deep reason here, but it guarantees that you can only *make* a unique from an Int by calling 'mkUniqueGrimily', which signals clearly that something fishy is going on. And rightly so! Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of p.k.f.holzenspies at utwente.nl Sent: 15 August 2014 11:53 To: ghc-devs at haskell.org Subject: Unique as special boxing type & hidden constructors Dear all, I'm working with Alan to instantiate everything for Data.Data, so that we can do better SYB-traversals (which should also help newcomers significantly to get into the GHC code base). Alan's looking at the AST types, I'm looking at the basic types in the compiler. Right now, I'm looking at Unique and two questions come up: > data Unique = MkUnique FastInt 1) As someone already commented: Is there a specific reason (other than history) that this isn't simply a newtype around an Int? If we're boxing anyway, we may as well use the default Int boxing and newtype-coerce to the specific purpose of Unique, no? 2) As a general question for GHC hacking style; what is the reason for hiding the constructors in the first place? I understand about abstraction and there are reasons for hiding, but there's a "public GHC API" and then there are all these modules that people can import at their own peril. Nothing is guaranteed about their consistency from version to version of GHC. I don't really see the point about hiding constructors (getting in the way of automatically deriving things) and then giving extra functions like (in the case of Unique): > getKeyFastInt (MkUnique x) = x > mkUniqueGrimily x = MkUnique (iUnbox x) I would propose to just make Unique a newtype for an Int and making the constructor visible. 
Regards, Philip From simonpj at microsoft.com Sun Aug 17 22:13:02 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Sun, 17 Aug 2014 22:13:02 +0000 Subject: [GHC] #9434: GHC.List.reverse does not fuse In-Reply-To: References: <045.307e264233b9ba83858f4f16f33fa96f@haskell.org> <060.32e78b021f87c91f01e47a23bccbf564@haskell.org> Message-ID: <618BE556AADD624C9C918AA5D5911BEF221BCE2F@DB3PRD3001MB020.064d.mgd.msft.net> Well, I'd much rather avoid creating the duplication in the first place, than to create it and try to CSE it away. Others have suggested ways of doing so, following the pattern of existing RULES. Simon From: David Feuer [mailto:david.feuer at gmail.com] Sent: 15 August 2014 16:41 To: ghc-devs; Simon Peyton Jones Subject: Re: [GHC] #9434: GHC.List.reverse does not fuse I'm having trouble when it doesn't fuse: it ends up with duplicate bindings at the top level, because build gets inlined n times, and the result lifted out. Nothing's *wrong* with the code, except that there are multiple copies of it. On Aug 15, 2014 10:58 AM, "GHC" > wrote: #9434: GHC.List.reverse does not fuse -------------------------------------+------------------------------------- Reporter: dfeuer | Owner: Type: bug | Status: new Priority: normal | Milestone: Component: | Version: 7.9 libraries/base | Keywords: Resolution: | Architecture: Unknown/Multiple Operating System: | Difficulty: Easy (less than 1 Unknown/Multiple | hour) Type of failure: Runtime | Blocked By: performance bug | Related Tickets: Test Case: | Blocking: | Differential Revisions: | -------------------------------------+------------------------------------- Comment (by simonpj): Great. Just check that when fusion ''doesn't'' take place, the result is good. And do a `nofib` comparison for good luck. Then submit a patch. Thanks for doing all this work on fusion, David.
Simon -- Ticket URL: GHC The Glasgow Haskell Compiler From david.feuer at gmail.com Sun Aug 17 23:10:46 2014 From: david.feuer at gmail.com (David Feuer) Date: Sun, 17 Aug 2014 19:10:46 -0400 Subject: [GHC] #9434: GHC.List.reverse does not fuse In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF221BCE2F@DB3PRD3001MB020.064d.mgd.msft.net> References: <045.307e264233b9ba83858f4f16f33fa96f@haskell.org> <060.32e78b021f87c91f01e47a23bccbf564@haskell.org> <618BE556AADD624C9C918AA5D5911BEF221BCE2F@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: I'm working on it, based on a discussion with Dan Doel. That said, Haskell's supposed to be anti-pattern, and rewrite/transform/write-back is definitely a pattern, and a somewhat painful one. Aside from having to use forms that can be matched on (intentionally blinding the inliner), there's the unfortunate fact that the written-back forms have to be hand-written recursive definitions. That's what made me think about a CSE-like cleanup pass, despite not knowing nearly enough to be able to write it myself just yet. I wouldn't want full CSE, but rather just to merge identical top-level lambda forms, which I *think* should avoid potential performance issues caused by full CSE. Some challenges relating to the idea: 1. It would be very nice if named forms were given preference. So if there are two copies of \foo -> bar, and the programmer has named one of them baz, then they should ideally be merged to baz, rather than to quux17. No, I have no idea what might be involved. 2. In a sufficiently large module, compilation speed could theoretically be a problem. I don't think this is likely, however, especially since distinct expressions usually diverge fairly high in their syntax trees. 3. If there are a *lot* of copies of some functions, the copies could make the Core harder to read. I would conjecture that this will not happen often.
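For reference, the fuse/rewrite-back idiom being discussed in this thread, applied to reverse, looks roughly like the following sketch. The names myReverse and flipCons are invented for illustration (base's actual definitions and rules differ), and note that the write-back rule only fires if foldl is still matchable at that point, which is precisely the phasing problem David raises:

```haskell
module RevFuse (myReverse) where

import GHC.Exts (build)

-- Hand-written recursive form: this is what should survive when fusion fails.
myReverse :: [a] -> [a]
myReverse l = rev l []
  where
    rev []     a = a
    rev (x:xs) a = rev xs (x:a)
{-# NOINLINE [1] myReverse #-}

-- Matchable stand-in for (\a x -> x : a), kept uninlined until phase 0
-- so the write-back rule below can still see it.
flipCons :: (a -> b -> b) -> b -> a -> b
flipCons c a x = x `c` a
{-# NOINLINE [0] flipCons #-}

{-# RULES
"myReverse/build" [~1] forall xs.
    myReverse xs = build (\c n -> foldl (flipCons c) n xs)
"myReverse/back"  [1]  forall xs.
    foldl (flipCons (:)) [] xs = myReverse xs
  #-}
```

The first rule exposes reverse to foldr/build fusion early; if nothing consumes the build, it inlines to foldl (flipCons (:)) [] xs and the second rule rewrites that back to the recursive myReverse, avoiding the duplicated top-level bindings David describes.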
On Aug 17, 2014 6:13 PM, "Simon Peyton Jones" wrote: > Well, I?d *much* rather avoid creating the duplication in the first > place, than to create and try to CSE it away. Others have suggested ways > of doing so, following the pattern of existing RULES. > > > > Simon > > > > *From:* David Feuer [mailto:david.feuer at gmail.com] > *Sent:* 15 August 2014 16:41 > *To:* ghc-devs; Simon Peyton Jones > *Subject:* Re: [GHC] #9434: GHC.List.reverse does not fuse > > > > I'm having trouble when it doesn't fuse?it ends up with duplicate bindings > at the top level, because build gets inlined n times, and the result lifted > out. Nothing's *wrong* with the code, except that there are multiple copies > of it. > > On Aug 15, 2014 10:58 AM, "GHC" wrote: > > #9434: GHC.List.reverse does not fuse > -------------------------------------+------------------------------------- > Reporter: dfeuer | Owner: > Type: bug | Status: new > Priority: normal | Milestone: > Component: | Version: 7.9 > libraries/base | Keywords: > Resolution: | Architecture: Unknown/Multiple > Operating System: | Difficulty: Easy (less than 1 > Unknown/Multiple | hour) > Type of failure: Runtime | Blocked By: > performance bug | Related Tickets: > Test Case: | > Blocking: | > Differential Revisions: | > -------------------------------------+------------------------------------- > > Comment (by simonpj): > > Great. Just check that when fusion ''doesn't'' take place, the result is > good. And do a `nofib` comparison for good luck. Then submit a patch. > > Thanks for doing all this work on fusion, David. > > Simon > > -- > Ticket URL: > GHC > The Glasgow Haskell Compiler > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From andreas.voellmy at gmail.com Mon Aug 18 03:41:10 2014 From: andreas.voellmy at gmail.com (Andreas Voellmy) Date: Sun, 17 Aug 2014 23:41:10 -0400 Subject: Trouble committing Message-ID: Hi GHCers, I just fixed a bug (#9423) and went through the Phab workflow. 
Then I did a fresh checkout from git and ran: $ git checkout master $ arc patch --nobranch D129 $ git push origin master as explained on https://ghc.haskell.org/trac/ghc/wiki/Phabricator, but on the last command I get this error: fatal: remote error: access denied or repository not exported: /ghc.git Maybe I just no longer have commit access to ghc? If so, could someone restore my access? Or I'd be happy if someone else can push the patch. Thanks, Andi From simonpj at microsoft.com Mon Aug 18 08:20:06 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 18 Aug 2014 08:20:06 +0000 Subject: [Haskell-cafe] Wish list for GHC API tooling support In-Reply-To: References: Message-ID: <618BE556AADD624C9C918AA5D5911BEF221BD634@DB3PRD3001MB020.064d.mgd.msft.net> Based on my experiences with HaRe, I have started putting together a wish list of features I would like to see in it, which is here https://github.com/fpco/haskell-ide/wiki/GHC-API Just to say: I really support having this discussion. The GHC API is not driven enough by the needs of its clients. I think that a client can actually access pretty much any function inside GHC, so no wonder the API feels unstable! Steps forward might be: * Be very clear which functions are part of the official API. I think it's the ones exported by GHC.lhs * Review them to check they all make sense * Add good Haddock documentation for each of them * Make sure that each is marked, at its definition site, as part of the GHC API. At the moment it is far from clear when one is modifying a function that is part of the GHC API, since most of these functions aren't in GHC.lhs I'm happy to help, but not as a driving force. Thanks!
Simon From: Haskell-Cafe [mailto:haskell-cafe-bounces at haskell.org] On Behalf Of Alan & Kim Zimmerman Sent: 18 August 2014 09:06 To: haskell Subject: [Haskell-cafe] Wish list for GHC API tooling support At the moment the GHC API is a sort of poor relation in the Haskell world, where it could be a significantly useful resource for the growing list of Haskell tool providers. Based on my experiences with HaRe, I have started putting together a wish list of features I would like to see in it, which is here https://github.com/fpco/haskell-ide/wiki/GHC-API I welcome feedback / discussion on this. Regards Alan From omeragacan at gmail.com Mon Aug 18 09:34:10 2014 From: omeragacan at gmail.com (Ömer Sinan Ağacan) Date: Mon, 18 Aug 2014 12:34:10 +0300 Subject: is this a bug: when <<loop>> happens stack trace is reported twice Message-ID: Hi all, I just realized that when `+RTS -xc` is used and a <<loop>> error happens, the stack trace is reported twice. This is not the case with `error` calls; in that case stack traces are reported only once. Here's a demonstration: $ cat loop.hs myFun :: Int myFun = let g = g + 1 in g + 10 myFun2 :: Int myFun2 = error "unexpected happened" main = print myFun $ ./loop +RTS -xc *** Exception (reporting due to +RTS -xc): (THUNK_STATIC), stack trace: Main.myFun.g, called from Main.myFun, called from Main.CAF *** Exception (reporting due to +RTS -xc): (THUNK_STATIC), stack trace: Main.myFun.g, called from Main.myFun, called from Main.CAF loop: <<loop>> Here the stack trace is reported twice. If I use `myFun2` instead of `myFun`: $ ./loop +RTS -xc *** Exception (reporting due to +RTS -xc): (THUNK_1_0), stack trace: Main.myFun2, called from Main.CAF --> evaluated by: Main.main, called from Main.CAF loop: unexpected happened Why is this happening in the case of a <<loop>> error? Is this expected or is this a bug?
I'm willing to trace the code through the RTS and fix it but I just want to make sure that this really is a bug. Thanks. --- Ömer Sinan Ağacan http://osa1.net From p.k.f.holzenspies at utwente.nl Mon Aug 18 13:49:43 2014 From: p.k.f.holzenspies at utwente.nl (p.k.f.holzenspies at utwente.nl) Date: Mon, 18 Aug 2014 13:49:43 +0000 Subject: Unique as special boxing type & hidden constructors In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF221BCE08@DB3PRD3001MB020.064d.mgd.msft.net> References: <13aaa2dd98944a3e95cc03c5139fbbb7@EXMBX31.ad.utwente.nl>, <618BE556AADD624C9C918AA5D5911BEF221BCE08@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <276cee2de3a842faa3696d20646e23a2@EXMBX31.ad.utwente.nl> Dear Simon, et al, Looking at Unique, there are a few more design choices that may be outdated, and since I'm polishing things now, anyway, I figured I could update it on more fronts. 1) There is a #ifdef define(__GLASGOW_HASKELL__), which confused me somewhat. Similar things occur elsewhere in the code. Isn't the assumption that GHC is being used? Is this old portability stuff that may be removed? 2) Uniques are produced from a Char and an Int. The function to build Uniques (mkUnique) is not exported, according to the comments, so as to see all characters used. Access to these different "classes" of Uniques is given through specialised mkXXXUnique functions. Does anyone have a problem with something like: > data UniqueClass > = UniqDesugarer > | UniqAbsCFlattener > | UniqSimplStg > | UniqNativeCodeGen > ... and a public (i.e. exported) function: > mkUnique :: UniqueClass -> Int -> Unique ? The benefit of this would be to have more (to my taste) self-documenting code and a greater chance that documentation is updated (the list of "unique supply characters" in the comments is currently outdated). 3) Is there a reason for having functions implementing class-methods to be exported?
In the case of Unique, there is pprUnique and: > instance Outputable Unique where > ppr = pprUnique Here pprUnique is exported and it is used in quite a few places where its argument is unambiguously a Unique (so it's not to force the type) *and* "ppr" is used for all kinds of other types. I'm assuming this is an old choice making things marginally faster, but I would say cleaning up the API / namespace would now outweigh this margin. I will also be adding Haddock-comments, so when this is done, a review would be most welcome (I'll also be doing some similar transformations to other long-since-untouched code). Regards, Philip ________________________________ From: Simon Peyton Jones Sent: Monday 18 August 2014 00:11 To: Holzenspies, P.K.F. (EWI); ghc-devs at haskell.org Subject: RE: Unique as special boxing type & hidden constructors Re (1) I think this is historical. A newtype wrapping an Int should be fine. I'd be ok with that change. Re (2), I think your question is: why does module Unique export the data type Unique abstractly, rather than exporting both the data type and its constructor. No deep reason here, but it guarantees that you can only *make* a unique from an Int by calling 'mkUniqueGrimily', which signals clearly that something fishy is going on. And rightly so! Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of p.k.f.holzenspies at utwente.nl Sent: 15 August 2014 11:53 To: ghc-devs at haskell.org Subject: Unique as special boxing type & hidden constructors Dear all, I'm working with Alan to instantiate everything for Data.Data, so that we can do better SYB-traversals (which should also help newcomers significantly to get into the GHC code base). Alan's looking at the AST types, I'm looking at the basic types in the compiler.
Right now, I'm looking at Unique and two questions come up: > data Unique = MkUnique FastInt 1) As someone already commented: Is there a specific reason (other than history) that this isn't simply a newtype around an Int? If we're boxing anyway, we may as well use the default Int boxing and newtype-coerce to the specific purpose of Unique, no? 2) As a general question for GHC hacking style; what is the reason for hiding the constructors in the first place? I understand about abstraction and there are reasons for hiding, but there's a "public GHC API" and then there are all these modules that people can import at their own peril. Nothing is guaranteed about their consistency from version to version of GHC. I don't really see the point about hiding constructors (getting in the way of automatically deriving things) and then giving extra functions like (in the case of Unique): > getKeyFastInt (MkUnique x) = x > mkUniqueGrimily x = MkUnique (iUnbox x) I would propose to just make Unique a newtype for an Int and make the constructor visible. Regards, Philip -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.k.f.holzenspies at utwente.nl Mon Aug 18 13:52:21 2014 From: p.k.f.holzenspies at utwente.nl (p.k.f.holzenspies at utwente.nl) Date: Mon, 18 Aug 2014 13:52:21 +0000 Subject: Unique as special boxing type & hidden constructors In-Reply-To: <276cee2de3a842faa3696d20646e23a2@EXMBX31.ad.utwente.nl> References: <13aaa2dd98944a3e95cc03c5139fbbb7@EXMBX31.ad.utwente.nl>, <618BE556AADD624C9C918AA5D5911BEF221BCE08@DB3PRD3001MB020.064d.mgd.msft.net>, <276cee2de3a842faa3696d20646e23a2@EXMBX31.ad.utwente.nl> Message-ID: <3ea4010057b04d41a6954ae11d36a3d8@EXMBX31.ad.utwente.nl> PS. Unique also looks like a case where Ints are used and (>= 0) is asserted. Can these cases be converted to Word as per earlier discussions?
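[Editor's sketch: to make the proposal in this thread concrete, here is a minimal, self-contained rendering of what it might look like. The constructor names under UniqueClass follow the proposal quoted above; the bit-packing in mkUnique and the helper getKey are only an illustration of how a class tag and a counter could be combined, not GHC's actual encoding.]

```haskell
import Data.Bits (shiftL, (.|.))

-- The proposed change: a plain newtype over Int instead of a datatype
-- boxing a FastInt, with the constructor left visible so that
-- deriving (e.g. for Data.Data) works without extra wrapper functions.
newtype Unique = MkUnique Int
  deriving (Eq, Ord, Show)

-- A self-documenting replacement for the "unique supply character"
-- (constructor names follow the proposal; the real list would be longer).
data UniqueClass
  = UniqDesugarer
  | UniqSimplStg
  | UniqNativeCodeGen
  deriving (Enum, Bounded, Show)

-- Pack the class tag into the high bits and the counter into the low
-- bits, mirroring how GHC today combines a Char and an Int.
mkUnique :: UniqueClass -> Int -> Unique
mkUnique cls n = MkUnique ((fromEnum cls `shiftL` 24) .|. n)

getKey :: Unique -> Int
getKey (MkUnique k) = k
```

Compared with the current getKeyFastInt/mkUniqueGrimily pair, the visible constructor also makes the "fishy" raw construction explicit at the use site rather than behind an exported grimy function.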
-------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Mon Aug 18 21:29:59 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 18 Aug 2014 21:29:59 +0000 Subject: Unique as special boxing type & hidden constructors In-Reply-To: <276cee2de3a842faa3696d20646e23a2@EXMBX31.ad.utwente.nl> References: <13aaa2dd98944a3e95cc03c5139fbbb7@EXMBX31.ad.utwente.nl>, <618BE556AADD624C9C918AA5D5911BEF221BCE08@DB3PRD3001MB020.064d.mgd.msft.net> <276cee2de3a842faa3696d20646e23a2@EXMBX31.ad.utwente.nl> Message-ID: <618BE556AADD624C9C918AA5D5911BEF221C4977@DBXPRD3001MB024.064d.mgd.msft.net> 1) There is a #ifdef define(__GLASGOW_HASKELL__), which confused me somewhat. Similar things occur elsewhere in the code. Isn't the assumption that GHC is being used? Is this old portability stuff that may be removed? I think so, unless others yell to the contrary. 2) Uniques are produced from a Char and an Int. The function to build Uniques (mkUnique) is not exported, according to the comments, so as to see all characters used. Access to these different "classes" of Uniques is given through specialised mkXXXUnique functions. Does anyone have a problem with something like: > data UniqueClass > = UniqDesugarer > | UniqAbsCFlattener > | UniqSimplStg > | UniqNativeCodeGen > ...
OK by me 3) Is there a reason for having functions implementing class-methods to be exported? In the case of Unique, there is pprUnique and: > instance Outputable Unique where > ppr = pprUnique Please don't change this. If you want to change how pretty-printing of uniques works, and want to find all the call sites of pprUnique, it's FAR easier to grep for pprUnique than to search for all calls of ppr, and work out which are at type Unique! (In my view) it's usually much better not to use type classes unless you actually need overloading. Simon From: p.k.f.holzenspies at utwente.nl [mailto:p.k.f.holzenspies at utwente.nl] Sent: 18 August 2014 14:50 To: Simon Peyton Jones; ghc-devs at haskell.org Subject: RE: Unique as special boxing type & hidden constructors [...] -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.feuer at gmail.com Mon Aug 18 21:42:14 2014 From: david.feuer at gmail.com (David Feuer) Date: Mon, 18 Aug 2014 17:42:14 -0400 Subject: The definition of cseProgram In-Reply-To: References: Message-ID: Currently, it's defined like this: cseProgram :: CoreProgram -> CoreProgram cseProgram binds = cseBinds emptyCSEnv binds cseBinds :: CSEnv -> [CoreBind] -> [CoreBind] cseBinds _ [] = [] cseBinds env (b:bs) = (b':bs') where (env1, b') = cseBind env b bs' = cseBinds env1 bs Couldn't we replace all that with the following? (Thanks to Cale for suggesting mapAccumL; I was using scanl because I knew it, but it was not a great fit.) cseProgram = snd . mapAccumL cseBind emptyCSEnv David Feuer -------------- next part -------------- An HTML attachment was scrubbed...
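[Editor's sketch: the equivalence David points out can be exercised on a toy analogue of the pass. The Env and "bind" types below are stand-ins invented for illustration, not GHC's real CSEnv or CoreBind.]

```haskell
import Data.List (mapAccumL)

-- Stand-in environment: a counter for how many binds we have seen so far.
type Env = Int

-- Stand-in for cseBind: rewrites one bind using the environment, and
-- returns an extended environment for the binds that follow it.
cseBind :: Env -> Int -> (Env, Int)
cseBind env b = (env + 1, b * 10 + env)

-- The current hand-rolled recursion, shaped like cseBinds above.
cseBinds :: Env -> [Int] -> [Int]
cseBinds _ [] = []
cseBinds env (b:bs) = b' : bs'
  where
    (env1, b') = cseBind env b
    bs' = cseBinds env1 bs

-- The proposed replacement: mapAccumL threads the accumulator for us.
cseProgram :: [Int] -> [Int]
cseProgram = snd . mapAccumL cseBind 0
```

For any input, cseProgram and cseBinds 0 agree, because mapAccumL threads its accumulator left to right in exactly the way the manual recursion does.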
URL: From simonpj at microsoft.com Mon Aug 18 22:01:17 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 18 Aug 2014 22:01:17 +0000 Subject: Wired-in data-constructors with UNPACKed fields In-Reply-To: <87y4um1b9w.fsf@gmail.com> References: <87a973z27g.fsf@gnu.org> <618BE556AADD624C9C918AA5D5911BEF221BCDCC@DB3PRD3001MB020.064d.mgd.msft.net> <87y4um1b9w.fsf@gmail.com> Message-ID: <618BE556AADD624C9C918AA5D5911BEF221C5A6A@DBXPRD3001MB024.064d.mgd.msft.net> I see three alternatives. 1. Flatten out the BigNat thing. You give good reasons why this would be bad. 2. Take care to build a DCR that really does match the one you get when you compile the source module that declares the data type. In principle, the representation does indeed depend on dynflags, so you need to know the flags with which the source module will be compiled. And that's reasonable: if we generate code for an unpacked constructor, GHC's wired-in knowledge must reflect that, and vice versa. But you can probably write the code in such a way as to be mostly independent (eg explicit UNPACK rather than rely on -funbox-strict-fields), or assume that some things won't happen (e.g. source module will not be compiled with -fomit-interface-pragmas). See MkId.mkDataConRep. 3. Stop having Integer as a wired-in type. For the most part it doesn't need to be; you won't see any mentions of 'integerTy' or 'integerTyCon' scattered about the compiler. I believe that the sole use is in CorePrep.cvtLitInteger, which wants to use the data constructor for small integers. What is odd here is that for non-small integers we are careful to look up mkInteger in the environment (precisely so that it is not wired in). Then we stash it in the CorePrepEnv, and pass it to cvtLitInteger. What I don't understand is why we don't do exactly the same thing for S#, the data constructor for small integers. (Add a new field to CorePrepEnv for the S# data constructor.)
If we did that, then the Integer type and the data constructor would become "known-key" things, rather than "wired-in" things; and the former are MUCH easier to handle. My recommendation would be to try (3) first. Ian Lynagh (cc'd) may be able to comment about why the inconsistency above arose in the first place, and why we can't simply fix it. Simon | -----Original Message----- | From: Herbert Valerio Riedel [mailto:hvriedel at gmail.com] | Sent: 18 August 2014 08:56 | To: Simon Peyton Jones | Subject: Re: Wired-in data-constructors with UNPACKed fields | | Hello Simon, | | On 2014-08-17 at 23:56:32 +0200, Simon Peyton Jones wrote: | > You'll see that 'pcDataCon' in TysWiredIn ultimately calls | > pcDataConWithFixity'. And that builds a data constructor with a | > NoDataConRep field, comment "Wired-in types are too simple to need | > wrappers". | > | > But your wired-in type is NOT too simple to need a wrapper! You'll | > need to build a suitable DCR record (see DataCon.lhs), which will be | > something of a nuisance for you, although you can doubtless re-use | > utility functions that are currently used to build a DCR record. | | Wouldn't I need access to the current dynamic flags in order to be | able to construct the effective DCR record? If so, I'm not sure I can | access the dynflags while constructing a CAF (which I seem to need doing) | | > Alternatively, just put a ByteArray# as the argument of JP# and JN#. | > After all, you have Int# as the argument of SI#! | | Well, there's a big difference between the Int# use and the BigNat use: | | Int and Int# are isomorphic to each other. However, BigNat is a subset | of what can be represented in a ByteArray#. | | Also, BigNat is meant to be available as an abstract data type in its | own right for users to use as a building-block for other data-types | (Like e.g.
a more efficient multi-constructor rational type in the style | of 'Integer' or also an optimized 'Either Word BigNat'-isomorphic | 'Natural' type I've got queued up for when integer-gmp2 is done). For | instance, there's a function for creating a BigNat out of a ByteArray# | which makes sure all internal invariants are satisfied. | | | However, should the task to wire-in BigNat turn out to be more pain than | bearable: Since we now have explicitly bidirectional pattern synonyms, I | have been considering to express the user-facing low-level interface to | the 'Integer' type via such pattern synonyms (and hide the "real" | 'data Integer = SI# Int# | Jp# ByteArray# | ..' type deeper, or maybe | not even export it at all). | | From a practical point, I'd like to get to a situation where code | requiring to access the "medium-level" Integer representation (like some | of Edward Kmett's packages, or some of the crypto-packages using | 'Integer's to perform RSA calculations) doesn't need to know it's using | integer-simple, integer-gmp2, or integer-xyz, as they'd all provide the | same abstracted API. | | | [...] 
| | > | data Integer = SI# Int# | > | | Jp# {-# UNPACK #-} !BigNat | > | | Jn# {-# UNPACK #-} !BigNat | > | | > | data BigNat = BN# ByteArray# | | Cheers, | hvr From simonpj at microsoft.com Mon Aug 18 22:02:08 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 18 Aug 2014 22:02:08 +0000 Subject: The definition of cseProgram In-Reply-To: References: Message-ID: <618BE556AADD624C9C918AA5D5911BEF221C5A7C@DBXPRD3001MB024.064d.mgd.msft.net> Yes, we could. From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of David Feuer Sent: 18 August 2014 22:42 To: ghc-devs Subject: The definition of cseProgram Currently, it's defined like this: cseProgram :: CoreProgram -> CoreProgram cseProgram binds = cseBinds emptyCSEnv binds cseBinds :: CSEnv -> [CoreBind] -> [CoreBind] cseBinds _ [] = [] cseBinds env (b:bs) = (b':bs') where (env1, b') = cseBind env b bs' = cseBinds env1 bs Couldn't we replace all that with the following? (Thanks to Cale for suggesting mapAccumL; I was using scanl because I knew it, but it was not a great fit.) cseProgram = snd . mapAccumL cseBind emptyCSEnv David Feuer -------------- next part -------------- An HTML attachment was scrubbed...
Yang) Date: Tue, 19 Aug 2014 09:55:15 +0100 Subject: Partial recompilation of libraries In-Reply-To: References: Message-ID: <1408438418-sup-1073@sabre> Probably: build an optimized stage1, skip building stage2, and get nofib to compile with the stage1 compiler. I'm not sure off the top of my head how to do the last step. Edward Excerpts from David Feuer's message of 2014-08-19 03:21:49 +0100: > I'd like to try out a bunch of little changes to the list stuff in base and > get some nofib results for each change. Is there a way to do this without > recompiling all of GHC each time? From hvriedel at gmail.com Tue Aug 19 09:23:46 2014 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Tue, 19 Aug 2014 11:23:46 +0200 Subject: Wired-in data-constructors with UNPACKed fields In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF221C5A6A@DBXPRD3001MB024.064d.mgd.msft.net> (Simon Peyton Jones's message of "Mon, 18 Aug 2014 22:01:17 +0000") References: <87a973z27g.fsf@gnu.org> <618BE556AADD624C9C918AA5D5911BEF221BCDCC@DB3PRD3001MB020.064d.mgd.msft.net> <87y4um1b9w.fsf@gmail.com> <618BE556AADD624C9C918AA5D5911BEF221C5A6A@DBXPRD3001MB024.064d.mgd.msft.net> Message-ID: <87wqa4est9.fsf@gmail.com> Hello Simon, On 2014-08-19 at 00:01:17 +0200, Simon Peyton Jones wrote: [...] > But you can probably write the code in such a way as to be mostly > independent (eg explicit UNPACK rather than rely on > -funbox-strict-fields), or assume that some things won't happen > (e.g. source module will not be compiled with > -fomit-interface-pragmas). See MkId.mkDataConRep.
I was under the impression that even -O0 vs -O1+ makes a huge difference: As given the following program, {-# LANGUAGE MagicHash #-} module M where import GHC.Exts data T0 = C0 ByteArray# data T1 = C1 {-# UNPACK #-} !T0 | C2 {-# UNPACK #-} !Int | C3 !Int | C4 Int compilation with $ ../inplace/bin/ghc-stage2 -fforce-recomp -ddump-types -O1 -c M.hs TYPE SIGNATURES TYPE CONSTRUCTORS data T0 = C0 ByteArray# data T1 = C1 {-# UNPACK #-}T0 | C2 {-# UNPACK #-}Int | C3 {-# UNPACK #-}Int | C4 Int COERCION AXIOMS Dependent modules: [] Dependent packages: [base, ghc-prim, integer-gmp2] has everything but C4 unpacked as expected, but when using -O0, nothing is UNPACKed at all: $ ../inplace/bin/ghc-stage2 -fforce-recomp -ddump-types -O0 -c M.hs TYPE SIGNATURES TYPE CONSTRUCTORS data T0 = C0 ByteArray# data T1 = C1 !T0 | C2 !Int | C3 !Int | C4 Int COERCION AXIOMS Dependent modules: [] Dependent packages: [base, ghc-prim, integer-gmp2] ...am I interpreting the output `-ddump-types` incorrectly? PS: adding a '!' in front of the 'ByteArray#' field in `T0` is not supposed to have any effect on primitive types, is it? If so, should GHC warn about the redundant '!'? Cheers, hvr From austin at well-typed.com Tue Aug 19 13:39:55 2014 From: austin at well-typed.com (Austin Seipp) Date: Tue, 19 Aug 2014 08:39:55 -0500 Subject: Status updates Message-ID: Hello *, Sorry for the scatter-brained-ness of my update. But without further ado, here are some things that have been going on: - I was going to land AMP, but I've gotten stuck again! It seems that now, Haddock infinite loops, but this one I really can't figure out. See my comment on the ticket[1] if you're adventurous - a small patch to Haddock and you should be able to build things, but it'll fail while Haddocking `ghc-prim`. Any help here would be much appreciated! - Phabricator now has significantly better build integration, as I'm sure many of you have seen. 
It is less noisy (and doesn't email you as much), has better logging support (that actually works), and it now builds commits AND patches! It's been in-production since last week and a lot more reliable than the crap I wrote before. I am thinking of changing the Phabricator GHC commit builder to validate all commits in `--slow` mode, which we can't do on Travis, and will catch more failures. I assume most people would approve of this. :) - The wiki has been updated in several spots: - The Phabricator page is now more detailed about Audits and more up to date about builds.[2] Yes, I know it looks long, but it's really just due to a lot of pictures. It's actually quite short still. - The Git workflow pages (starting from WorkingConventions/Git) have seen some minor updates[3], but nothing substantial. Some old pages still need to be deleted possibly, and some things (e.g. GitHub) might need further tweaks. - Yes, we're still seeing spam, but unfortunately nobody has had time to fix the CAPTCHAs yet. If you can write some python and want to, I'm sure Herbert would like to know. :) - There is a new status in Trac, primarily useful for patches, called 'upstream', which is very similar to 'patch', but says that the change goes to an upstream library that GHC must synchronize with. This difference is now much more important since many of our packages are tracked through submodules. See the working conventions page for some details and a pretty graph[4]. - I've gone ahead and split my old optimized-memcpy patch into two patches - D165 and D166 on Phabricator. Hopefully these will go in soon after some discussion with Johan about how we want to handle the flags is talked out. - All of the Phabricator patches from outstanding contributors have been merged, I think. There are some outstanding reviews still going on, and some accepted commits still waiting for other reviewers I believe. 
- Almost all of the current outstanding patches have been either merged, taken out of patch state (in case they were in limbo), or moved to 'upstream' status. Take a look at this wiki page: https://ghc.haskell.org/trac/ghc/query?status=patch&differential=&or&status=upstream&group=status&col=id&col=summary&col=owner&col=type&col=priority&col=milestone&col=component&order=priority These are all the patches and upstream tickets, specifically those which do NOT have ongoing Phabricator code reviews for them. I chose this so we don't see tickets that may already be going under review elsewhere. Notably I'm testing the fusion tickets before merging them. OK, I think that's it. Do let me know if you have any questions (or would like to help with D13 :) [1] https://phabricator.haskell.org/D13#26 [2] https://ghc.haskell.org/trac/ghc/wiki/Phabricator [3] https://ghc.haskell.org/trac/ghc/wiki/WorkingConventions/Git [4] https://ghc.haskell.org/trac/ghc/wiki/WorkingConventions/BugTracker -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From simonpj at microsoft.com Tue Aug 19 14:31:56 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 19 Aug 2014 14:31:56 +0000 Subject: Core libraries bug tracker In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF2208A6B5@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF2208A6B5@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <618BE556AADD624C9C918AA5D5911BEF221D5E42@DB3PRD3001MB020.064d.mgd.msft.net> Edward, and core library colleagues, Any views on this? It would be good to make progress. Thanks Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Simon Peyton Jones Sent: 04 August 2014 16:01 To: core-libraries-committee at haskell.org Cc: ghc-devs at haskell.org Subject: Core libraries bug tracker Edward, and core library colleagues, This came up in our weekly GHC discussion * Does the Core Libraries Committee have a Trac? 
Surely, surely you should, else you'll lose track of issues. * Would you like to use GHC's Trac for the purpose? Advantages: o People often report core library issues on GHC's Trac anyway, so telling them to move it somewhere else just creates busy-work --- and maybe they won't bother, which leaves it in our pile. o Several of these libraries are closely coupled to GHC, and you might want to milestone some library tickets with an upcoming GHC release * If so we'd need a canonical way to identify tickets as CLC issues. Perhaps by making "core-libraries" the owner? Or perhaps the "Component" field? * Some core libraries (e.g. random) have a maintainer that isn't the committee. So that maintainer should be the owner of the ticket. Or the CLC might like a particular member to own a ticket. Either way, that suggests using the "Component" field to identify CLC tickets * Or maybe you want a Trac of your own? The underlying issue from our end is that we'd like a way to * filter out tickets that you are dealing with * and be sure you are dealing with them * without losing track of milestones... i.e. when building a release we want to be sure that important tickets are indeed fixed before releasing Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From ekmett at gmail.com Tue Aug 19 15:23:09 2014 From: ekmett at gmail.com (Edward Kmett) Date: Tue, 19 Aug 2014 11:23:09 -0400 Subject: [core libraries] RE: Core libraries bug tracker In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF221D5E42@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF2208A6B5@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF221D5E42@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: Hi Simon, If you don't mind the extra traffic in the ghc trac, I'm open to the plan to work there.
I was talking to Eric Mertens a few days ago about this and he agreed to take the lead on getting us set up to actually build tickets for items that go into the libraries@ proposal process, so we have something helping to force us to come to a definitive conclusion rather than letting things trail off. -Edward On Tue, Aug 19, 2014 at 10:31 AM, Simon Peyton Jones wrote: > Edward, and core library colleagues, > > Any views on this? It would be good to make progress. > > Thanks > > Simon [...] -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Tue Aug 19 16:13:24 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 19 Aug 2014 16:13:24 +0000 Subject: Wired-in data-constructors with UNPACKed fields In-Reply-To: <87wqa4est9.fsf@gmail.com> References: <87a973z27g.fsf@gnu.org> <618BE556AADD624C9C918AA5D5911BEF221BCDCC@DB3PRD3001MB020.064d.mgd.msft.net> <87y4um1b9w.fsf@gmail.com> <618BE556AADD624C9C918AA5D5911BEF221C5A6A@DBXPRD3001MB024.064d.mgd.msft.net> <87wqa4est9.fsf@gmail.com> Message-ID: <618BE556AADD624C9C918AA5D5911BEF221D605E@DB3PRD3001MB020.064d.mgd.msft.net> Yes, -O0 implies -fomit-interface-pragmas. I still think that option 3 would be a better avenue. Simon | -----Original Message----- | From: Herbert Valerio Riedel [mailto:hvriedel at gmail.com] | Sent: 19 August 2014 10:24 | To: Simon Peyton Jones | Cc: ghc-devs at haskell.org | Subject: Re: Wired-in data-constructors with UNPACKed fields | | Hello Simon, | | On 2014-08-19 at 00:01:17 +0200, Simon Peyton Jones wrote: | | [...] | | > But you can probably write the code in such a way as to be mostly | > independent (eg explicit UNPACK rather than rely on | > -funbox-strict-fields), or assume that some things won't happen (e.g. | > source module will not be compiled with -fomit-interface-pragmas). | See | > MkId.mkDataConRep.
| | I was under the impression that even -O0 vs -O1+ makes a huge | difference: | | Given the following program, | | {-# LANGUAGE MagicHash #-} | module M where | import GHC.Exts | data T0 = C0 ByteArray# | data T1 = C1 {-# UNPACK #-} !T0 | | C2 {-# UNPACK #-} !Int | | C3 !Int | | C4 Int | | compilation with | | $ ../inplace/bin/ghc-stage2 -fforce-recomp -ddump-types -O1 -c M.hs | TYPE SIGNATURES | TYPE CONSTRUCTORS | data T0 = C0 ByteArray# | data T1 | = C1 {-# UNPACK #-}T0 | | C2 {-# UNPACK #-}Int | | C3 {-# UNPACK #-}Int | | C4 Int | COERCION AXIOMS | Dependent modules: [] | Dependent packages: [base, ghc-prim, integer-gmp2] | | has everything but C4 unpacked as expected, but when using -O0, nothing | is UNPACKed at all: | | $ ../inplace/bin/ghc-stage2 -fforce-recomp -ddump-types -O0 -c M.hs | TYPE SIGNATURES | TYPE CONSTRUCTORS | data T0 = C0 ByteArray# | data T1 = C1 !T0 | C2 !Int | C3 !Int | C4 Int | COERCION AXIOMS | Dependent modules: [] | Dependent packages: [base, ghc-prim, integer-gmp2] | | ...am I interpreting the `-ddump-types` output incorrectly? | | | PS: adding a '!' in front of the 'ByteArray#' field in `T0` is not | supposed to have any effect on primitive types, is it? If so, | should | GHC warn about the redundant '!'? | | Cheers, | hvr From simonpj at microsoft.com Tue Aug 19 21:55:23 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 19 Aug 2014 21:55:23 +0000 Subject: [core libraries] RE: Core libraries bug tracker In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF2208A6B5@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF221D5E42@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <618BE556AADD624C9C918AA5D5911BEF221DC5B1@DBXPRD3001MB024.064d.mgd.msft.net> If you don't mind the extra traffic in the ghc trac, I'm open to the plan to work there. OK great. Let's agree that: - The 'owner' of a Core Libraries ticket is the person responsible for progressing it, or 'Core Libraries Committee'
as one possibility. - The 'component' should identify the ticket as belonging to the core libraries committee, not GHC. We have a bunch of components like 'libraries/base', 'libraries/directory', etc, but I'm sure that doesn't cover all the core libraries, and even if it did, it's probably too fine-grained. I suggest having just 'Core Libraries'. Actions: - Edward: update the Core Libraries home page (where is that?) to point people to the Trac, tell them how to correctly submit a ticket, etc. - Edward: send email to tell everyone about the new plan. - Austin: add the same guidance to the GHC bug tracker. - Austin: add 'core libraries committee' as something that can be an owner. - Austin: change the 'components' list to replace all the 'libraries/*' stuff with 'Core Libraries'. Thanks Simon From: haskell-core-libraries at googlegroups.com [mailto:haskell-core-libraries at googlegroups.com] On Behalf Of Edward Kmett Sent: 19 August 2014 16:23 To: Simon Peyton Jones Cc: core-libraries-committee at haskell.org; ghc-devs at haskell.org Subject: Re: [core libraries] RE: Core libraries bug tracker Hi Simon, If you don't mind the extra traffic in the ghc trac, I'm open to the plan to work there. I was talking to Eric Mertens a few days ago about this and he agreed to take lead on getting us set up to actually build tickets for items that go into the libraries@ proposal process, so we have something helping to force us to come to a definitive conclusion rather than letting things trail off. -Edward On Tue, Aug 19, 2014 at 10:31 AM, Simon Peyton Jones > wrote: Edward, and core library colleagues, Any views on this? It would be good to make progress. Thanks Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Simon Peyton Jones Sent: 04 August 2014 16:01 To: core-libraries-committee at haskell.org Cc: ghc-devs at haskell.org Subject: Core libraries bug tracker Edward, and core library colleagues, This came up in our weekly GHC discussion -
Does the Core Libraries Committee have a Trac? Surely, surely you should, else you'll lose track of issues. - Would you like to use GHC's Trac for the purpose? Advantages: o People often report core library issues on GHC's Trac anyway, so telling them to move it somewhere else just creates busy-work --- and maybe they won't bother, which leaves it in our pile. o Several of these libraries are closely coupled to GHC, and you might want to milestone some library tickets with an upcoming GHC release - If so we'd need a canonical way to identify tickets as CLC issues. Perhaps by making 'core-libraries' the owner? Or perhaps the 'Component' field? - Some core libraries (e.g. random) have a maintainer that isn't the committee. So that maintainer should be the owner of the ticket. Or the CLC might like a particular member to own a ticket. Either way, that suggests using the 'Component' field to identify CLC tickets - Or maybe you want a Trac of your own? The underlying issue from our end is that we'd like a way to - filter out tickets that you are dealing with - and be sure you are dealing with them - without losing track of milestones; i.e. when building a release we want to be sure that important tickets are indeed fixed before releasing Simon -- You received this message because you are subscribed to the Google Groups "haskell-core-libraries" group. To unsubscribe from this group and stop receiving emails from it, send an email to haskell-core-libraries+unsubscribe at googlegroups.com. For more options, visit https://groups.google.com/d/optout. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From simonpj at microsoft.com Tue Aug 19 22:16:52 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 19 Aug 2014 22:16:52 +0000 Subject: Windows build fails -- again! Message-ID: <618BE556AADD624C9C918AA5D5911BEF221DC620@DBXPRD3001MB024.064d.mgd.msft.net> Aaargh! My windows build is broken, again. It's very painful that this keeps happening. Can anyone help? Simon "inplace/bin/ghc-stage1.exe" -optc-U__i686 -optc-march=i686 -optc-fno-stack-protector -optc-Werror -optc-Wall -optc-Wall -optc-Wextra -optc-Wstrict-prototypes -optc-Wmissing-prototypes -optc-Wmissing-declarations -optc-Winline -optc-Waggregate-return -optc-Wpointer-arith -optc-Wmissing-noreturn -optc-Wnested-externs -optc-Wredundant-decls -optc-Iincludes -optc-Iincludes/dist -optc-Iincludes/dist-derivedconstants/header -optc-Iincludes/dist-ghcconstants/header -optc-Irts -optc-Irts/dist/build -optc-DCOMPILING_RTS -optc-fno-strict-aliasing -optc-fno-common -optc-O2 -optc-fomit-frame-pointer -optc-DRtsWay=\"rts_v\" -static -H32m -O -Werror -Wall -H64m -O0 -Iincludes -Iincludes/dist -Iincludes/dist-derivedconstants/header -Iincludes/dist-ghcconstants/header -Irts -Irts/dist/build -DCOMPILING_RTS -this-package-key rts -dcmm-lint -i -irts -irts/dist/build -irts/dist/build/autogen -Irts/dist/build -Irts/dist/build/autogen -O2 -c rts/Task.c -o rts/dist/build/Task.o cc1.exe: warnings being treated as errors rts\Capability.c:1080:6: error: no previous prototype for 'setIOManagerControlFd' rts/ghc.mk:236: recipe for target 'rts/dist/build/Capability.o' failed make[1]: *** [rts/dist/build/Capability.o] Error 1 make[1]: *** Waiting for unfinished jobs.... Makefile:71: recipe for target 'all' failed make: *** [all] Error 2 HEAD (master)$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From johan.tibell at gmail.com Wed Aug 20 05:42:54 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Wed, 20 Aug 2014 07:42:54 +0200 Subject: Windows build fails -- again! 
In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF221DC620@DBXPRD3001MB024.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF221DC620@DBXPRD3001MB024.064d.mgd.msft.net> Message-ID: f9f89b7884ccc8ee5047cf4fffdf2b36df6832df is probably to blame. Found by running `git log -SsetIOManagerControlFd`. The -S flag is a good way to find when a symbol is added/removed. On Wed, Aug 20, 2014 at 12:16 AM, Simon Peyton Jones wrote: > Aaargh! My windows build is broken, again. > > It?s very painful that this keeps happening. > > Can anyone help? > > Simon > > "inplace/bin/ghc-stage1.exe" -optc-U__i686 -optc-march=i686 > -optc-fno-stack-protector -optc-Werror -optc-Wall -optc-Wall -optc-Wextra > -optc-Wstrict-prototypes -optc-Wmissing-prototypes > -optc-Wmissing-declarations -optc-Winline -optc-Waggregate-return > -optc-Wpointer-arith -optc-Wmissing-noreturn -optc-Wnested-externs > -optc-Wredundant-decls -optc-Iincludes -optc-Iincludes/dist > -optc-Iincludes/dist-derivedconstants/header > -optc-Iincludes/dist-ghcconstants/header -optc-Irts -optc-Irts/dist/build > -optc-DCOMPILING_RTS -optc-fno-strict-aliasing -optc-fno-common -optc-O2 > -optc-fomit-frame-pointer -optc-DRtsWay=\"rts_v\" -static -H32m -O -Werror > -Wall -H64m -O0 -Iincludes -Iincludes/dist > -Iincludes/dist-derivedconstants/header -Iincludes/dist-ghcconstants/header > -Irts -Irts/dist/build -DCOMPILING_RTS -this-package-key rts > -dcmm-lint -i -irts -irts/dist/build -irts/dist/build/autogen > -Irts/dist/build -Irts/dist/build/autogen -O2 -c rts/Task.c -o > rts/dist/build/Task.o > > cc1.exe: warnings being treated as errors > > > > rts\Capability.c:1080:6: > > error: no previous prototype for 'setIOManagerControlFd' > > rts/ghc.mk:236: recipe for target 'rts/dist/build/Capability.o' failed > > make[1]: *** [rts/dist/build/Capability.o] Error 1 > > make[1]: *** Waiting for unfinished jobs.... 
> > Makefile:71: recipe for target 'all' failed > > make: *** [all] Error 2 > > HEAD (master)$ > > > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From metaniklas at gmail.com Wed Aug 20 06:35:07 2014 From: metaniklas at gmail.com (Niklas Larsson) Date: Wed, 20 Aug 2014 08:35:07 +0200 Subject: Windows build fails -- again! In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF221DC620@DBXPRD3001MB024.064d.mgd.msft.net> Message-ID: Hi! I think this isn't broken on just Windows. The error comes from the warning about no prototype (and -Werror), and it doesn't have a prototype on other OSes either. Niklas 2014-08-20 7:42 GMT+02:00 Johan Tibell : > f9f89b7884ccc8ee5047cf4fffdf2b36df6832df is probably to blame. > > Found by running `git log -SsetIOManagerControlFd`. The -S flag is a good > way to find when a symbol is added/removed. > > > On Wed, Aug 20, 2014 at 12:16 AM, Simon Peyton Jones < > simonpj at microsoft.com> wrote: > >> Aaargh! My windows build is broken, again. >> >> It?s very painful that this keeps happening. >> >> Can anyone help? 
>> >> Simon >> >> "inplace/bin/ghc-stage1.exe" -optc-U__i686 -optc-march=i686 >> -optc-fno-stack-protector -optc-Werror -optc-Wall -optc-Wall -optc-Wextra >> -optc-Wstrict-prototypes -optc-Wmissing-prototypes >> -optc-Wmissing-declarations -optc-Winline -optc-Waggregate-return >> -optc-Wpointer-arith -optc-Wmissing-noreturn -optc-Wnested-externs >> -optc-Wredundant-decls -optc-Iincludes -optc-Iincludes/dist >> -optc-Iincludes/dist-derivedconstants/header >> -optc-Iincludes/dist-ghcconstants/header -optc-Irts -optc-Irts/dist/build >> -optc-DCOMPILING_RTS -optc-fno-strict-aliasing -optc-fno-common -optc-O2 >> -optc-fomit-frame-pointer -optc-DRtsWay=\"rts_v\" -static -H32m -O -Werror >> -Wall -H64m -O0 -Iincludes -Iincludes/dist >> -Iincludes/dist-derivedconstants/header -Iincludes/dist-ghcconstants/header >> -Irts -Irts/dist/build -DCOMPILING_RTS -this-package-key rts >> -dcmm-lint -i -irts -irts/dist/build -irts/dist/build/autogen >> -Irts/dist/build -Irts/dist/build/autogen -O2 -c rts/Task.c -o >> rts/dist/build/Task.o >> >> cc1.exe: warnings being treated as errors >> >> >> >> rts\Capability.c:1080:6: >> >> error: no previous prototype for 'setIOManagerControlFd' >> >> rts/ghc.mk:236: recipe for target 'rts/dist/build/Capability.o' failed >> >> make[1]: *** [rts/dist/build/Capability.o] Error 1 >> >> make[1]: *** Waiting for unfinished jobs.... >> >> Makefile:71: recipe for target 'all' failed >> >> make: *** [all] Error 2 >> >> HEAD (master)$ >> >> >> >> >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> >> > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From simonpj at microsoft.com Wed Aug 20 08:25:31 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 20 Aug 2014 08:25:31 +0000 Subject: Windows build fails -- again! In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF221DC620@DBXPRD3001MB024.064d.mgd.msft.net> Message-ID: <618BE556AADD624C9C918AA5D5911BEF221E3E02@DB3PRD3001MB020.064d.mgd.msft.net> Thanks Gabor. But it makes no difference. Your change is inside an #ifdef that checks for Windows, and your change is in the no-Windows branch only. Also there are two IOManager.h files, includes/rts/IOManager.h and rts/win32/IOManager.h. Should there be? It seems terribly confusing, and I have no idea which will win when it is #included. Thanks Simon | -----Original Message----- | From: Gabor Greif [mailto:ggreif at gmail.com] | Sent: 19 August 2014 23:38 | To: Simon Peyton Jones | Subject: Re: Windows build fails -- again! | | Simon, | | try this (attached) patch: | | $ git am 0001-Make-sure-that-a-prototype-is-included-for- | setIOMana.patch | | Cheers, | | Gabor | | PS: on MacOS all is good, so I could not test it at all | | On 8/20/14, Simon Peyton Jones wrote: | > Aaargh! My windows build is broken, again. | > It's very painful that this keeps happening. | > Can anyone help?
| > Simon | > | > "inplace/bin/ghc-stage1.exe" -optc-U__i686 -optc-march=i686 | > -optc-fno-stack-protector -optc-Werror -optc-Wall -optc-Wall | > -optc-Wextra -optc-Wstrict-prototypes -optc-Wmissing-prototypes | > -optc-Wmissing-declarations -optc-Winline -optc-Waggregate-return | > -optc-Wpointer-arith -optc-Wmissing-noreturn -optc-Wnested-externs | > -optc-Wredundant-decls -optc-Iincludes -optc-Iincludes/dist | > -optc-Iincludes/dist-derivedconstants/header | > -optc-Iincludes/dist-ghcconstants/header -optc-Irts | > -optc-Irts/dist/build -optc-DCOMPILING_RTS -optc-fno-strict-aliasing | > -optc-fno-common -optc-O2 -optc-fomit-frame-pointer | > -optc-DRtsWay=\"rts_v\" -static -H32m -O -Werror -Wall -H64m -O0 | > -Iincludes -Iincludes/dist -Iincludes/dist-derivedconstants/header | > -Iincludes/dist-ghcconstants/header | > -Irts -Irts/dist/build -DCOMPILING_RTS -this-package-key rts | > -dcmm-lint -i -irts -irts/dist/build -irts/dist/build/autogen - | Irts/dist/build | > -Irts/dist/build/autogen -O2 -c rts/Task.c -o | > rts/dist/build/Task.o | > | > cc1.exe: warnings being treated as errors | > | > | > | > rts\Capability.c:1080:6: | > | > error: no previous prototype for 'setIOManagerControlFd' | > | > rts/ghc.mk:236: recipe for target 'rts/dist/build/Capability.o' | failed | > | > make[1]: *** [rts/dist/build/Capability.o] Error 1 | > | > make[1]: *** Waiting for unfinished jobs.... 
| > | > Makefile:71: recipe for target 'all' failed | > | > make: *** [all] Error 2 | > | > HEAD (master)$ | > | > | > From p.k.f.holzenspies at utwente.nl Wed Aug 20 10:30:53 2014 From: p.k.f.holzenspies at utwente.nl (p.k.f.holzenspies at utwente.nl) Date: Wed, 20 Aug 2014 10:30:53 +0000 Subject: Unique as special boxing type & hidden constructors In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF221C4977@DBXPRD3001MB024.064d.mgd.msft.net> References: <13aaa2dd98944a3e95cc03c5139fbbb7@EXMBX31.ad.utwente.nl>, <618BE556AADD624C9C918AA5D5911BEF221BCE08@DB3PRD3001MB020.064d.mgd.msft.net> <276cee2de3a842faa3696d20646e23a2@EXMBX31.ad.utwente.nl>, <618BE556AADD624C9C918AA5D5911BEF221C4977@DBXPRD3001MB024.064d.mgd.msft.net> Message-ID: Dear Simon, et al, I seem to recall that the Unique(Supply) was an issue in parallelising GHC itself. There's a comment in the code (signed JSM) that there aren't any 64-bit bugs, if we have at least 32 bits for Ints and Chars fit in 8 bits. Then, there are bitmasks like 0x00FFFFFF to separate the "Int-part" from the "Char-part". I was wondering: if we move Uniques to 64 bits, but use the top 16 (instead of the current 8) for *both* the tag (currently a Char, soon a sum-type) and the threadId of the supplying thread of a Unique, would that help? Regards, Philip ________________________________ From: Simon Peyton Jones Sent: 18 August 2014 23:29 To: Holzenspies, P.K.F. (EWI); ghc-devs at haskell.org Subject: RE: Unique as special boxing type & hidden constructors 1) There is a #ifdef define(__GLASGOW_HASKELL__), which confused me somewhat. Similar things occur elsewhere in the code. Isn't the assumption that GHC is being used? Is this old portability stuff that may be removed? I think so, unless others yell to the contrary. 2) Uniques are produced from a Char and an Int. The function to build Uniques (mkUnique) is not exported, according to the comments, so as to see all characters used.
Access to these different "classes" of Uniques is given through specialised mkXXXUnique functions. Does anyone have a problem with something like: > data UniqueClass > = UniqDesugarer > | UniqAbsCFlattener > | UniqSimplStg > | UniqNativeCodeGen > ... OK by me 3) Is there a reason for having functions implementing class-methods to be exported? In the case of Unique, there is pprUnique and: > instance Outputable Unique where > ppr = pprUnique Please don't change this. If you want to change how pretty-printing of uniques works, and want to find all the call sites of pprUnique, it's FAR easier to grep for pprUnique than to search for all calls of ppr, and work out which are at type Unique! (In my view) it's usually much better not to use type classes unless you actually need overloading. Simon From: p.k.f.holzenspies at utwente.nl [mailto:p.k.f.holzenspies at utwente.nl] Sent: 18 August 2014 14:50 To: Simon Peyton Jones; ghc-devs at haskell.org Subject: RE: Unique as special boxing type & hidden constructors Dear Simon, et al, Looking at Unique, there are a few more design choices that may be outdated, and since I'm polishing things now, anyway, I figured I could update it on more fronts. 1) There is a #ifdef define(__GLASGOW_HASKELL__), which confused me somewhat. Similar things occur elsewhere in the code. Isn't the assumption that GHC is being used? Is this old portability stuff that may be removed? 2) Uniques are produced from a Char and an Int. The function to build Uniques (mkUnique) is not exported, according to the comments, so as to see all characters used. Access to these different "classes" of Uniques is given through specialised mkXXXUnique functions. Does anyone have a problem with something like: > data UniqueClass > = UniqDesugarer > | UniqAbsCFlattener > | UniqSimplStg > | UniqNativeCodeGen > ... and a public (i.e. exported) function: > mkUnique :: UniqueClass -> Int -> Unique ?
The benefit of this would be to have more (to my taste) self-documenting code and a greater chance that documentation is updated (the list of "unique supply characters" in the comments is currently outdated). 3) Is there a reason for having functions implementing class-methods to be exported? In the case of Unique, there is pprUnique and: > instance Outputable Unique where > ppr = pprUnique Here pprUnique is exported and it is used in quite a few places where its argument is unambiguously a Unique (so it's not to force the type) *and* "ppr" is used for all kinds of other types. I'm assuming this is an old choice making things marginally faster, but I would say cleaning up the API / namespace would now outweigh this margin. I will also be adding Haddock-comments, so when this is done, a review would be most welcome (I'll also be doing some similar transformations to other long-since-untouched-code). Regards, Philip ________________________________ From: Simon Peyton Jones > Sent: Monday, 18 August 2014 00:11 To: Holzenspies, P.K.F. (EWI); ghc-devs at haskell.org Subject: RE: Unique as special boxing type & hidden constructors Re (1) I think this is historical. A newtype wrapping an Int should be fine. I'd be ok with that change. Re (2), I think your question is: why does module Unique export the data type Unique abstractly, rather than exporting both the data type and its constructor. No deep reason here, but it guarantees that you can only *make* a unique from an Int by calling 'mkUniqueGrimily', which signals clearly that something fishy is going on. And rightly so!
Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of p.k.f.holzenspies at utwente.nl Sent: 15 August 2014 11:53 To: ghc-devs at haskell.org Subject: Unique as special boxing type & hidden constructors Dear all, I'm working with Alan to instantiate everything for Data.Data, so that we can do better SYB-traversals (which should also help newcomers significantly to get into the GHC code base). Alan's looking at the AST types, I'm looking at the basic types in the compiler. Right now, I'm looking at Unique and two questions come up: > data Unique = MkUnique FastInt 1) As someone already commented: Is there a specific reason (other than history) that this isn't simply a newtype around an Int? If we're boxing anyway, we may as well use the default Int boxing and newtype-coerce to the specific purpose of Unique, no? 2) As a general question for GHC hacking style; what is the reason for hiding the constructors in the first place? I understand about abstraction and there are reasons for hiding, but there's a "public GHC API" and then there are all these modules that people can import at their own peril. Nothing is guaranteed about their consistency from version to version of GHC. I don't really see the point about hiding constructors (getting in the way of automatically deriving things) and then giving extra functions like (in the case of Unique): > getKeyFastInt (MkUnique x) = x > mkUniqueGrimily x = MkUnique (iUnbox x) I would propose to just make Unique a newtype for an Int and making the constructor visible. Regards, Philip -------------- next part -------------- An HTML attachment was scrubbed... 
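Philip's proposal above, making Unique a plain newtype over Int with a visible constructor, could be sketched roughly as follows. This is an illustrative, self-contained sketch, not GHC's actual source; the names mirror the ones used in the thread.

```haskell
-- Hypothetical sketch of the proposal in the thread above: Unique as a
-- newtype over Int with its constructor exported.
newtype Unique = MkUnique Int
  deriving (Eq, Ord, Show)

-- Keep a loudly named conversion from a raw Int, so call sites that
-- conjure a Unique out of thin air stay easy to spot (Simon's point
-- about mkUniqueGrimily signalling "something fishy").
mkUniqueGrimily :: Int -> Unique
mkUniqueGrimily = MkUnique

-- Extract the underlying Int key.
getKey :: Unique -> Int
getKey (MkUnique x) = x
```

With the constructor visible, instances such as Data.Data could be derived mechanically, which is what the SYB-traversal work mentioned at the start of this thread needs.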
URL: From simonpj at microsoft.com Wed Aug 20 11:01:02 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 20 Aug 2014 11:01:02 +0000 Subject: Unique as special boxing type & hidden constructors In-Reply-To: References: <13aaa2dd98944a3e95cc03c5139fbbb7@EXMBX31.ad.utwente.nl>, <618BE556AADD624C9C918AA5D5911BEF221BCE08@DB3PRD3001MB020.064d.mgd.msft.net> <276cee2de3a842faa3696d20646e23a2@EXMBX31.ad.utwente.nl>, <618BE556AADD624C9C918AA5D5911BEF221C4977@DBXPRD3001MB024.064d.mgd.msft.net> Message-ID: <618BE556AADD624C9C918AA5D5911BEF221E71FC@DB3PRD3001MB020.064d.mgd.msft.net> Sounds like a good idea to me. Would need to think about making sure that it all still worked, somehow, on 32 bit. S From: p.k.f.holzenspies at utwente.nl [mailto:p.k.f.holzenspies at utwente.nl] Sent: 20 August 2014 11:31 To: Simon Peyton Jones; ghc-devs at haskell.org Subject: RE: Unique as special boxing type & hidden constructors Dear Simon, et al, I seem to recall that the Unique(Supply) was an issue in parallelising GHC itself. There's a comment in the code (signed JSM) that there aren't any 64-bit bugs, if we have at least 32-bits for Ints and Chars fit in 8 characters. Then, there's bitmasks like 0x00FFFFFF to separate the "Int-part" from the "Char-part". I was wondering; if we move Uniques to 64 bits, but use the top 16 (instead of the current 8) for *both* the tag (currently a Char, soon an sum-type) and the threadId of the supplying thread of a Unique, would that help? Regards, Philip ________________________________ From: Simon Peyton Jones > Sent: 18 August 2014 23:29 To: Holzenspies, P.K.F. (EWI); ghc-devs at haskell.org Subject: RE: Unique as special boxing type & hidden constructors 1) There is a #ifdef define(__GLASGOW_HASKELL__), which confused me somewhat. Similar things occur elsewhere in the code. Isn't the assumption that GHC is being used? Is this old portability stuff that may be removed? I think so, unless others yell to the contrary. 
2) Uniques are produced from a Char and an Int. The function to build Uniques (mkUnique) is not exported, according to the comments, so as to see all characters used. Access to these different "classes" of Uniques is given through specialised mkXXXUnique functions. Does anyone have a problem with something like: > data UniqueClass > = UniqDesugarer > | UniqAbsCFlattener > | UniqSimplStg > | UniqNativeCodeGen > ... OK by me 3) Is there a reason for having functions implementing class-methods to be exported? In the case of Unique, there is pprUnique and: > instance Outputable Unique where > ppr = pprUnique Please don?t change this. If you want to change how pretty-printing of uniques works, and want to find all the call sites of pprUnique, it?s FAR easier to grep for pprUnique than to search for all calls of ppr, and work out which are at type Unique! (In my view) it?s usually much better not to use type classes unless you actually need overloading. Simon From: p.k.f.holzenspies at utwente.nl [mailto:p.k.f.holzenspies at utwente.nl] Sent: 18 August 2014 14:50 To: Simon Peyton Jones; ghc-devs at haskell.org Subject: RE: Unique as special boxing type & hidden constructors Dear Simon, et al, Looking at Unique, there are a few more design choices that may be outdated, and since I'm polishing things now, anyway, I figured I could update it on more fronts. 1) There is a #ifdef define(__GLASGOW_HASKELL__), which confused me somewhat. Similar things occur elsewhere in the code. Isn't the assumption that GHC is being used? Is this old portability stuff that may be removed? 2) Uniques are produced from a Char and an Int. The function to build Uniques (mkUnique) is not exported, according to the comments, so as to see all characters used. Access to these different "classes" of Uniques is given through specialised mkXXXUnique functions. 
Does anyone have a problem with something like: > data UniqueClass > = UniqDesugarer > | UniqAbsCFlattener > | UniqSimplStg > | UniqNativeCodeGen > ... and a public (i.e. exported) function: > mkUnique :: UniqueClass -> Int -> Unique ? The benefit of this would be to have more (to my taste) self-documenting code and a greater chance that documentation is updated (the list of "unique supply characters" in the comments is currently outdated). 3) Is there a reason for having functions implementing class-methods to be exported? In the case of Unique, there is pprUnique and: > instance Outputable Unique where > ppr = pprUnique Here pprUnique is exported and it is used in quite a few places where it's argument is unambiguously a Unique (so it's not to force the type) *and* "ppr" is used for all kinds of other types. I'm assuming this is an old choice making things marginally faster, but I would say cleaning up the API / namespace would now outweigh this margin. ? I will also be adding Haddock-comments, so when this is done, a review would be most welcome (I'll also be doing some similar transformations to other long-since-untouched-code). Regards, Philip ________________________________ Van: Simon Peyton Jones > Verzonden: maandag 18 augustus 2014 00:11 Aan: Holzenspies, P.K.F. (EWI); ghc-devs at haskell.org Onderwerp: RE: Unique as special boxing type & hidden constructors Re (1) I think this is a historical. A newtype wrapping an Int should be fine. I?d be ok with that change. Re (2), I think your question is: why does module Unique export the data type Unique abstractly, rather than exporting both the data type and its constructor. No deep reason here, but it guarantees that you can only *make* a unique from an Int by calling ?mkUniqueGrimily?, which signals clearly that something fishy is going on. And rightly so! 
Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of p.k.f.holzenspies at utwente.nl Sent: 15 August 2014 11:53 To: ghc-devs at haskell.org Subject: Unique as special boxing type & hidden constructors Dear all, I'm working with Alan to instantiate everything for Data.Data, so that we can do better SYB-traversals (which should also help newcomers significantly to get into the GHC code base). Alan's looking at the AST types, I'm looking at the basic types in the compiler. Right now, I'm looking at Unique and two questions come up: > data Unique = MkUnique FastInt 1) As someone already commented: Is there a specific reason (other than history) that this isn't simply a newtype around an Int? If we're boxing anyway, we may as well use the default Int boxing and newtype-coerce to the specific purpose of Unique, no? 2) As a general question for GHC hacking style; what is the reason for hiding the constructors in the first place? I understand about abstraction and there are reasons for hiding, but there's a "public GHC API" and then there are all these modules that people can import at their own peril. Nothing is guaranteed about their consistency from version to version of GHC. I don't really see the point about hiding constructors (getting in the way of automatically deriving things) and then giving extra functions like (in the case of Unique): > getKeyFastInt (MkUnique x) = x > mkUniqueGrimily x = MkUnique (iUnbox x) I would propose to just make Unique a newtype for an Int and making the constructor visible. Regards, Philip -------------- next part -------------- An HTML attachment was scrubbed... 
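The bit-packing discussed in this thread (a class tag plus an Int key sharing one machine word, currently split with masks like 0x00FFFFFF) could be sketched as below. This is a hypothetical illustration, not GHC's implementation: the tag sits in the low bits as Philip suggests, and tagBits = 6 is an arbitrary assumption (enough for 64 tag classes).

```haskell
import Data.Bits (shiftL, shiftR, (.&.), (.|.))

-- Number of low bits reserved for the Unique "class" tag; an assumption
-- for illustration only.
tagBits :: Int
tagBits = 6

-- Pack a class tag and an integer key into one machine word,
-- with the tag in the least-significant bits.
packUnique :: Int -> Int -> Int
packUnique tag key = (key `shiftL` tagBits) .|. (tag .&. mask)
  where mask = (1 `shiftL` tagBits) - 1

-- Recover the two halves of a packed Unique.
uniqueTag, uniqueKey :: Int -> Int
uniqueTag u = u .&. ((1 `shiftL` tagBits) - 1)
uniqueKey u = u `shiftR` tagBits
```

The same code runs unchanged on a 32-bit word, just with a smaller usable key range; choosing layouts per word size would be the WORD_SIZE_IN_BITS CPP step Philip mentions.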
URL: From p.k.f.holzenspies at utwente.nl Wed Aug 20 11:47:55 2014 From: p.k.f.holzenspies at utwente.nl (p.k.f.holzenspies at utwente.nl) Date: Wed, 20 Aug 2014 11:47:55 +0000 Subject: Unique as special boxing type & hidden constructors In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF221E71FC@DB3PRD3001MB020.064d.mgd.msft.net> References: <13aaa2dd98944a3e95cc03c5139fbbb7@EXMBX31.ad.utwente.nl>, <618BE556AADD624C9C918AA5D5911BEF221BCE08@DB3PRD3001MB020.064d.mgd.msft.net> <276cee2de3a842faa3696d20646e23a2@EXMBX31.ad.utwente.nl>, <618BE556AADD624C9C918AA5D5911BEF221C4977@DBXPRD3001MB024.064d.mgd.msft.net> , <618BE556AADD624C9C918AA5D5911BEF221E71FC@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <92f3259cee3940febdd157aede4b423f@EXMBX31.ad.utwente.nl> Methinks a lot of the former performance considerations in Unique are outdated (as per earlier discussion; direct use of unboxed ints etc.). An upside of using an ADT for the types of uniques is that we don't actually need to reserve 8 bits for a Char (which is committing to neither the actual number of classes, nor the "nature" of real Chars in Haskell). Instead, we can make a bitmask dependent on the number of classes that we actually use and stick the tag on the least-significant side of the Unique, as opposed to the most-significant (as we do now). We want to keep things working on 32 bits, but maybe a future of parallel builds is only for 64 bits. In this case, I would suggest that the 64-bit case looks like this: [bit-layout image scrubbed] whereas the 32-bit case simply has [bit-layout image scrubbed] where X is dependent on the size of the UniqueClass sum-type (to be introduced). This would be CPP-magic'd using WORD_SIZE_IN_BITS. Ph. ________________________________ From: Simon Peyton Jones Sent: 20 August 2014 13:01 To: Holzenspies, P.K.F. (EWI); ghc-devs at haskell.org Subject: RE: Unique as special boxing type & hidden constructors Sounds like a good idea to me. Would need to think about making sure that it all still worked, somehow, on 32 bit.
S From: p.k.f.holzenspies at utwente.nl [mailto:p.k.f.holzenspies at utwente.nl] Sent: 20 August 2014 11:31 To: Simon Peyton Jones; ghc-devs at haskell.org Subject: RE: Unique as special boxing type & hidden constructors Dear Simon, et al, I seem to recall that the Unique(Supply) was an issue in parallelising GHC itself. There's a comment in the code (signed JSM) that there aren't any 64-bit bugs, if we have at least 32 bits for Ints and Chars fit in 8 bits. Then, there are bitmasks like 0x00FFFFFF to separate the "Int-part" from the "Char-part". I was wondering: if we move Uniques to 64 bits, but use the top 16 (instead of the current 8) for *both* the tag (currently a Char, soon a sum-type) and the threadId of the supplying thread of a Unique, would that help? Regards, Philip ________________________________ From: Simon Peyton Jones > Sent: 18 August 2014 23:29 To: Holzenspies, P.K.F. (EWI); ghc-devs at haskell.org Subject: RE: Unique as special boxing type & hidden constructors 1) There is a #ifdef define(__GLASGOW_HASKELL__), which confused me somewhat. Similar things occur elsewhere in the code. Isn't the assumption that GHC is being used? Is this old portability stuff that may be removed? I think so, unless others yell to the contrary. 2) Uniques are produced from a Char and an Int. The function to build Uniques (mkUnique) is not exported, according to the comments, so as to see all characters used. Access to these different "classes" of Uniques is given through specialised mkXXXUnique functions. Does anyone have a problem with something like: > data UniqueClass > = UniqDesugarer > | UniqAbsCFlattener > | UniqSimplStg > | UniqNativeCodeGen > ... OK by me 3) Is there a reason for having functions implementing class-methods to be exported? In the case of Unique, there is pprUnique and: > instance Outputable Unique where > ppr = pprUnique Please don't change this. 
If you want to change how pretty-printing of uniques works, and want to find all the call sites of pprUnique, it's FAR easier to grep for pprUnique than to search for all calls of ppr, and work out which are at type Unique! (In my view) it's usually much better not to use type classes unless you actually need overloading. Simon From: p.k.f.holzenspies at utwente.nl [mailto:p.k.f.holzenspies at utwente.nl] Sent: 18 August 2014 14:50 To: Simon Peyton Jones; ghc-devs at haskell.org Subject: RE: Unique as special boxing type & hidden constructors Dear Simon, et al, Looking at Unique, there are a few more design choices that may be outdated, and since I'm polishing things now, anyway, I figured I could update it on more fronts. 1) There is a #ifdef define(__GLASGOW_HASKELL__), which confused me somewhat. Similar things occur elsewhere in the code. Isn't the assumption that GHC is being used? Is this old portability stuff that may be removed? 2) Uniques are produced from a Char and an Int. The function to build Uniques (mkUnique) is not exported, according to the comments, so as to see all characters used. Access to these different "classes" of Uniques is given through specialised mkXXXUnique functions. Does anyone have a problem with something like: > data UniqueClass > = UniqDesugarer > | UniqAbsCFlattener > | UniqSimplStg > | UniqNativeCodeGen > ... and a public (i.e. exported) function: > mkUnique :: UniqueClass -> Int -> Unique ? The benefit of this would be to have more (to my taste) self-documenting code and a greater chance that documentation is updated (the list of "unique supply characters" in the comments is currently outdated). 3) Is there a reason for having functions implementing class-methods to be exported? 
In the case of Unique, there is pprUnique and: > instance Outputable Unique where > ppr = pprUnique Here pprUnique is exported and it is used in quite a few places where its argument is unambiguously a Unique (so it's not to force the type) *and* "ppr" is used for all kinds of other types. I'm assuming this is an old choice making things marginally faster, but I would say cleaning up the API / namespace would now outweigh this margin. I will also be adding Haddock-comments, so when this is done, a review would be most welcome (I'll also be doing some similar transformations to other long-since-untouched code). Regards, Philip ________________________________ From: Simon Peyton Jones > Sent: Monday, 18 August 2014 00:11 To: Holzenspies, P.K.F. (EWI); ghc-devs at haskell.org Subject: RE: Unique as special boxing type & hidden constructors Re (1): I think this is historical. A newtype wrapping an Int should be fine. I'd be ok with that change. Re (2), I think your question is: why does module Unique export the data type Unique abstractly, rather than exporting both the data type and its constructor. No deep reason here, but it guarantees that you can only *make* a unique from an Int by calling 'mkUniqueGrimily', which signals clearly that something fishy is going on. And rightly so! Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of p.k.f.holzenspies at utwente.nl Sent: 15 August 2014 11:53 To: ghc-devs at haskell.org Subject: Unique as special boxing type & hidden constructors Dear all, I'm working with Alan to instantiate everything for Data.Data, so that we can do better SYB-traversals (which should also help newcomers significantly to get into the GHC code base). Alan's looking at the AST types, I'm looking at the basic types in the compiler. 
Right now, I'm looking at Unique and two questions come up: > data Unique = MkUnique FastInt 1) As someone already commented: Is there a specific reason (other than history) that this isn't simply a newtype around an Int? If we're boxing anyway, we may as well use the default Int boxing and newtype-coerce to the specific purpose of Unique, no? 2) As a general question for GHC hacking style; what is the reason for hiding the constructors in the first place? I understand about abstraction and there are reasons for hiding, but there's a "public GHC API" and then there are all these modules that people can import at their own peril. Nothing is guaranteed about their consistency from version to version of GHC. I don't really see the point about hiding constructors (getting in the way of automatically deriving things) and then giving extra functions like (in the case of Unique): > getKeyFastInt (MkUnique x) = x > mkUniqueGrimily x = MkUnique (iUnbox x) I would propose to just make Unique a newtype for an Int and making the constructor visible. Regards, Philip -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From alexander.kjeldaas at gmail.com Wed Aug 20 12:07:57 2014 From: alexander.kjeldaas at gmail.com (Alexander Kjeldaas) Date: Wed, 20 Aug 2014 14:07:57 +0200 Subject: Unique as special boxing type & hidden constructors In-Reply-To: <92f3259cee3940febdd157aede4b423f@EXMBX31.ad.utwente.nl> References: <13aaa2dd98944a3e95cc03c5139fbbb7@EXMBX31.ad.utwente.nl> <618BE556AADD624C9C918AA5D5911BEF221BCE08@DB3PRD3001MB020.064d.mgd.msft.net> <276cee2de3a842faa3696d20646e23a2@EXMBX31.ad.utwente.nl> <618BE556AADD624C9C918AA5D5911BEF221C4977@DBXPRD3001MB024.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF221E71FC@DB3PRD3001MB020.064d.mgd.msft.net> <92f3259cee3940febdd157aede4b423f@EXMBX31.ad.utwente.nl> Message-ID: On Wed, Aug 20, 2014 at 1:47 PM, wrote: > Methinks a lot of the former performance considerations in Unique are > out-dated (as per earlier discussion; direct use of unboxed ints etc.). > > > An upside of using an ADT for the types of uniques is that we don't > actually need to reserve 8 bits for a Char (which is committing to neither > the actual number of classes, nor the "nature" of real Chars in Haskell). > Instead, we can make a bitmask dependent on the number of classes that we > actually use and stick the tag on the least-significant side of the Unique, > as opposed to the most-significant (as we do now). > > > We want to keep things working on 32-bits, but maybe a future of > parallel builds is only for 64-bits. In this case, I would suggest that the > 64-bit-case looks like this: > > > > > > Is the thread id deterministic between runs? If not, please do not use this layout. I remember vaguely Unique being relevant to ghc not having deterministic builds, my most wanted ghc feature: https://ghc.haskell.org/trac/ghc/ticket/4012 Alexander > whereas the 32-bit case simply has > > > > > > Where X is dependent on the size of the UniqueClass-sum-type (to be > introduced). This would be CPP-magic'd using ?WORD_SIZE_IN_BITS. > > > Ph. 
> > > > > > > ------------------------------ > *From:* Simon Peyton Jones > *Sent:* 20 August 2014 13:01 > > *To:* Holzenspies, P.K.F. (EWI); ghc-devs at haskell.org > *Subject:* RE: Unique as special boxing type & hidden constructors > > > Sounds like a good idea to me. Would need to think about making sure > that it all still worked, somehow, on 32 bit. > > > > S > > > > *From:* p.k.f.holzenspies at utwente.nl [mailto:p.k.f.holzenspies at utwente.nl] > > *Sent:* 20 August 2014 11:31 > *To:* Simon Peyton Jones; ghc-devs at haskell.org > *Subject:* RE: Unique as special boxing type & hidden constructors > > > > Dear Simon, et al, > > > > I seem to recall that the Unique(Supply) was an issue in parallelising GHC > itself. There's a comment in the code (signed JSM) that there aren't any > 64-bit bugs, if we have at least 32-bits for Ints and Chars fit in 8 > characters. Then, there's bitmasks like 0x00FFFFFF to separate the > "Int-part" from the "Char-part". > > > > I was wondering; if we move Uniques to 64 bits, but use the top 16 > (instead of the current 8) for *both* the tag (currently a Char, soon an > sum-type) and the threadId of the supplying thread of a Unique, would that > help? > > > > Regards, > > Philip > > > > > > > > > ------------------------------ > > *From:* Simon Peyton Jones > *Sent:* 18 August 2014 23:29 > *To:* Holzenspies, P.K.F. (EWI); ghc-devs at haskell.org > *Subject:* RE: Unique as special boxing type & hidden constructors > > > > 1) There is a #ifdef define(__GLASGOW_HASKELL__), which confused me > somewhat. Similar things occur elsewhere in the code. Isn't the assumption > that GHC is being used? Is this old portability stuff that may be removed? > > > > I think so, unless others yell to the contrary. > > > > 2) Uniques are produced from a Char and an Int. The function to build > Uniques (mkUnique) is not exported, according to the comments, so as to see > all characters used. 
Access to these different "classes" of Uniques is > given through specialised mkXXXUnique functions. Does anyone have a problem > with something like: > > > > > data UniqueClass > > > = UniqDesugarer > > > | UniqAbsCFlattener > > > | UniqSimplStg > > > | UniqNativeCodeGen > > > ... > > > > OK by me > > > > 3) Is there a reason for having functions implementing class-methods to be > exported? In the case of Unique, there is pprUnique and: > > > instance Outputable Unique where > > > ppr = pprUnique > > > > Please don?t change this. If you want to change how pretty-printing of > uniques works, and want to find all the call sites of pprUnique, it?s FAR > easier to grep for pprUnique than to search for all calls of ppr, and work > out which are at type Unique! > > > > (In my view) it?s usually much better not to use type classes unless you > actually need overloading. > > > > Simon > > > > *From:* p.k.f.holzenspies at utwente.nl [mailto:p.k.f.holzenspies at utwente.nl > ] > *Sent:* 18 August 2014 14:50 > *To:* Simon Peyton Jones; ghc-devs at haskell.org > *Subject:* RE: Unique as special boxing type & hidden constructors > > > > Dear Simon, et al, > > > > Looking at Unique, there are a few more design choices that may be > outdated, and since I'm polishing things now, anyway, I figured I could > update it on more fronts. > > > > 1) There is a #ifdef define(__GLASGOW_HASKELL__), which confused me > somewhat. Similar things occur elsewhere in the code. Isn't the assumption > that GHC is being used? Is this old portability stuff that may be removed? > > > > 2) Uniques are produced from a Char and an Int. The function to build > Uniques (mkUnique) is not exported, according to the comments, so as to see > all characters used. Access to these different "classes" of Uniques is > given through specialised mkXXXUnique functions. 
Does anyone have a problem > with something like: > > > > > data UniqueClass > > > = UniqDesugarer > > > | UniqAbsCFlattener > > > | UniqSimplStg > > > | UniqNativeCodeGen > > > ... > > > > and a public (i.e. exported) function: > > > > > mkUnique :: UniqueClass -> Int -> Unique > > > > ? The benefit of this would be to have more (to my taste) self-documenting > code and a greater chance that documentation is updated (the list of > "unique supply characters" in the comments is currently outdated). > > > > 3) Is there a reason for having functions implementing class-methods to be > exported? In the case of Unique, there is pprUnique and: > > > > > instance Outputable Unique where > > > ppr = pprUnique > > > > Here pprUnique is exported and it is used in quite a few places where it's > argument is unambiguously a Unique (so it's not to force the type) *and* > "ppr" is used for all kinds of other types. I'm assuming this is an old > choice making things marginally faster, but I would say cleaning up the API > / namespace would now outweigh this margin. > > ? > > I will also be adding Haddock-comments, so when this is done, a review > would be most welcome (I'll also be doing some similar transformations to > other long-since-untouched-code). > > > > Regards, > > Philip > > > > > > > > > > > > > > > > > ------------------------------ > > *Van:* Simon Peyton Jones > *Verzonden:* maandag 18 augustus 2014 00:11 > *Aan:* Holzenspies, P.K.F. (EWI); ghc-devs at haskell.org > *Onderwerp:* RE: Unique as special boxing type & hidden constructors > > > > Re (1) I think this is a historical. A newtype wrapping an Int should be > fine. I?d be ok with that change. > > > > Re (2), I think your question is: why does module Unique export the data > type Unique abstractly, rather than exporting both the data type and its > constructor. 
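The UniqueClass sum type under discussion might be sketched like this; the constructor names are the ones quoted above, while the use of fromEnum and an 8-bit tag field is an illustrative assumption, not GHC's actual encoding:

```haskell
-- The sum type proposed to replace the magic Char tags.
data UniqueClass
  = UniqDesugarer
  | UniqAbsCFlattener
  | UniqSimplStg
  | UniqNativeCodeGen
  deriving (Enum, Bounded, Show)

newtype Unique = MkUnique Int

-- The proposed public, self-documenting constructor: the class tag is
-- stored in the low 8 bits (an assumed encoding for this sketch).
mkUnique :: UniqueClass -> Int -> Unique
mkUnique cls n = MkUnique (n * 256 + fromEnum cls)
```

Compared with the current mkXXXUnique family, the class of a unique is then visible at every call site, which is the self-documentation benefit Philip argues for.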
No deep reason here, but it guarantees that you can only * > *make** a unique from an Int by calling ?mkUniqueGrimily?, which signals > clearly that something fishy is going on. And rightly so! > > > > Simon > > > > *From:* ghc-devs [mailto:ghc-devs-bounces at haskell.org > ] *On Behalf Of * > p.k.f.holzenspies at utwente.nl > *Sent:* 15 August 2014 11:53 > *To:* ghc-devs at haskell.org > *Subject:* Unique as special boxing type & hidden constructors > > > > Dear all, > > > > I'm working with Alan to instantiate everything for Data.Data, so that we > can do better SYB-traversals (which should also help newcomers > significantly to get into the GHC code base). Alan's looking at the AST > types, I'm looking at the basic types in the compiler. > > > > Right now, I'm looking at Unique and two questions come up: > > > > > data Unique = MkUnique FastInt > > > > 1) As someone already commented: Is there a specific reason (other than > history) that this isn't simply a newtype around an Int? If we're boxing > anyway, we may as well use the default Int boxing and newtype-coerce to the > specific purpose of Unique, no? > > > > 2) As a general question for GHC hacking style; what is the reason for > hiding the constructors in the first place? > > > > I understand about abstraction and there are reasons for hiding, but > there's a "public GHC API" and then there are all these modules that people > can import at their own peril. Nothing is guaranteed about their > consistency from version to version of GHC. I don't really see the point > about hiding constructors (getting in the way of automatically deriving > things) and then giving extra functions like (in the case of Unique): > > > > > getKeyFastInt (MkUnique x) = x > > > mkUniqueGrimily x = MkUnique (iUnbox x) > > > > I would propose to just make Unique a newtype for an Int and making the > constructor visible. 
> > Regards, > > Philip > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eir at cis.upenn.edu Wed Aug 20 12:08:50 2014 From: eir at cis.upenn.edu (Richard Eisenberg) Date: Wed, 20 Aug 2014 08:08:50 -0400 Subject: Unifying inferred and declared *existential* type variables In-Reply-To: References: Message-ID: <3557EA9A-A3CC-49F1-96FD-20399A40CA25@cis.upenn.edu> Hi Gergo, There is indeed something strange about that output. The good news is that I'm not convinced you need `unifyType`, but I'm not sure exactly what you do need. - There is nothing, in general, inside the `cobox`es. Those are just variables. In the Core you include, they're out of scope, so if that type error weren't reported, Core Lint would complain. Ids with the name "cobox" get created in two places: newEvVar and newEq. newEvVar is called when GHC needs to quantify over "given" equalities and class predicates. I don't think that's what's happening here though. newEq is called in the bowels of unifyType, when unifyType fails to unify two types. In this case, the TcCoercion that unifyType produces just wraps the cobox variable, and it is expected that the caller of unifyType will call the constraint simplifier (one of the functions in TcSimplify) with appropriate arguments to figure out what the cobox should be. Then, when desugaring to Core, the cobox is expanded, and we do not have the out-of-scope cobox seen in your output. It seems this call to the simplifier is not happening. (But, I don't think it needs to! Keep reading.) - Again, looking at the Core, `cont` can be called *without any coercions*. (Well, almost.) Currently, your Core has `cont b $dEq_aCr x y`. The first parameter to `cont` is the type variable. 
If you pass in `a` (instead of the out-of-scope `b`), `cont`'s type will be instantiated with `a` and the types will then line up. The "Well, almost" is because you pass in an out-of-scope `$dEq_aCr`, where I would expect the in-scope `dEq_aCt` (note the different Unique!). Not sure what's going on here. - The example code you're trying to process is easy, in that the pattern type signature and the datacon type signature are identical. When this is not the case, I realize further analysis will be required. But, my hunch (which could very well be wrong) is that you want the functions in types/Unify.lhs, not typecheck/TcUnify.lhs. The former walks over types and produces a substitution from one to the other instead of a coercion witnessing equality. Substitution may be all you need here. I hope this is helpful in spurring on progress! Richard On Aug 16, 2014, at 4:29 AM, Dr. ERDI Gergo wrote: > Hi, > > Background: > > Type signatures for pattern synonyms are / can be explicit about the existentially-bound type variables of the pattern. For example, given the following definitions: > > data T where > C :: (Eq a) => [a] -> (a, Bool) -> T > > pattern P x y = C x y > > the inferred type of P (with explicit foralls printed) is > > pattern type forall a. Eq a => P [a] (a, Bool) :: T > > > My problem: > > Ticket #8968 is a good example of a situation where we need this pattern type signature to be entered by the user. So continuing with the previous example, the user should be able to write, e.g. > > pattern type forall b. Eq b => P [b] (b, Bool) : T > > So in this case, I have to unify the argument types [b] ~ [a] and (b, Bool) ~ (a, Bool), and then use the resulting coercions of the existentially-bound variables before calling the success continuation. > > So I generate a pattern synonym matcher as such (going with the previous example) (I've pushed my code to wip/T8584): > > $mP{v r0} :: forall t [sk]. > T > -> (forall b [sk]. 
Eq b [sk] => [b [sk]] -> (b [sk], Bool) -> t [sk]) > -> t [sk] > -> t [sk] > $mP{v r0} > = /\(@ t [sk]). > \ ((scrut [lid] :: T)) > ((cont [lid] :: forall b [sk]. Eq b [sk] => [b [sk]] -> (b [sk], Bool) -> t [sk])) > ((fail [lid] :: t [sk])) > -> case scrut > of { > C {@ a [ssk] ($dEq_aCt [lid] :: Eq a [ssk]) EvBindsVar} > (x [lid] :: [a [ssk]]) > (y [lid] :: (a [ssk], Bool)) > -> cont b $dEq_aCr x y > |> (cobox{v} [lid], _N)_N > |> [cobox{v} [lid]]_N } > <>} > > The two 'cobox'es are the results of unifyType'ing [a] with [b] and (a, Bool) with (b, Bool). So basically what I hoped to do was to pattern-match on 'C{@ a $dEqA} x y' and pass that to cont as 'b' and '$dEqB' by rewriting them with the coercions. (It's unfortunate that even with full -dppr-debug output, I can't see what's inside the 'cobox'es). > > However, when I try doing this, I end up with the error message > > SigGADT2.hs:10:9: > Couldn't match type ?a [ssk]? with ?b [sk]? > because type variable ?b [sk]? would escape its scope > This (rigid, skolem) type variable is bound by > the type signature for > P :: [b [sk]] -> (b [sk], Bool) -> T > at SigGADT2.hs:10:9 > Expected type: [b [sk]] > Actual type: [a [ssk]] > > Also, while the result of unifying '[b]' ~ '[a]' and '(b, Bool)' ~ > '(a, Bool)' should take care of turning the 'a' bound by the constructor into the 'b' expected by the continuation function, it seems to me I'll need to do some extra magic to also turn the bound 'Eq a' evidence variable into the 'Eq b'. > > Obviously, I am missing a ton of stuff here. Can someone help me out? > > Thanks, > Gergo > > -- > > .--= ULLA! =-----------------. 
> \ http://gergo.erdi.hu \ > `---= gergo at erdi.hu =-------' > I love vegetarians - some of my favorite foods are vegetarians._______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs From p.k.f.holzenspies at utwente.nl Wed Aug 20 13:07:12 2014 From: p.k.f.holzenspies at utwente.nl (p.k.f.holzenspies at utwente.nl) Date: Wed, 20 Aug 2014 13:07:12 +0000 Subject: Unique as special boxing type & hidden constructors In-Reply-To: References: <13aaa2dd98944a3e95cc03c5139fbbb7@EXMBX31.ad.utwente.nl> <618BE556AADD624C9C918AA5D5911BEF221BCE08@DB3PRD3001MB020.064d.mgd.msft.net> <276cee2de3a842faa3696d20646e23a2@EXMBX31.ad.utwente.nl> <618BE556AADD624C9C918AA5D5911BEF221C4977@DBXPRD3001MB024.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF221E71FC@DB3PRD3001MB020.064d.mgd.msft.net> <92f3259cee3940febdd157aede4b423f@EXMBX31.ad.utwente.nl>, Message-ID: <725e38c4b7c14474ae99fb75651b805f@EXMBX31.ad.utwente.nl> On Wed, Aug 20, 2014 at 1:47 PM, > wrote: Is the thread id deterministic between runs? If not, please do not use this layout. I remember vaguely Unique being relevant to ghc not having deterministic builds, my most wanted ghc feature: https://ghc.haskell.org/trac/ghc/ticket/4012 I think this depends on the policy GHC *will* have (there is not parallel build atm) wrt. the forking of threads. An actual Control.Concurrent.ThreadId might be as large as 64 bits, so, of course, we won't be using that, but rather the sequence number in which the UniqueSupply was "split off" for a new thread. In other words, if the decision to fork threads is deterministic, so are the Uniques with this layout. Mind you, I imagine a parallel GHC would still have at most one thread working on a single module. 
I don't know too much about what makes it into the interface file of a module (I can't imagine the exact Uniques end up there, because they would overlap with other modules - with per-module compilation - and conflict that way). Can you comment on how (the layout of) Uniques relate to #4012 in a little more detail? It seems that if the Uniques that somehow end up in the interface files could simply be stripped of the thread id, in which case, the problem reduces to the current one. Ph. -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexander.kjeldaas at gmail.com Wed Aug 20 13:48:44 2014 From: alexander.kjeldaas at gmail.com (Alexander Kjeldaas) Date: Wed, 20 Aug 2014 15:48:44 +0200 Subject: Unique as special boxing type & hidden constructors In-Reply-To: <725e38c4b7c14474ae99fb75651b805f@EXMBX31.ad.utwente.nl> References: <13aaa2dd98944a3e95cc03c5139fbbb7@EXMBX31.ad.utwente.nl> <618BE556AADD624C9C918AA5D5911BEF221BCE08@DB3PRD3001MB020.064d.mgd.msft.net> <276cee2de3a842faa3696d20646e23a2@EXMBX31.ad.utwente.nl> <618BE556AADD624C9C918AA5D5911BEF221C4977@DBXPRD3001MB024.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF221E71FC@DB3PRD3001MB020.064d.mgd.msft.net> <92f3259cee3940febdd157aede4b423f@EXMBX31.ad.utwente.nl> <725e38c4b7c14474ae99fb75651b805f@EXMBX31.ad.utwente.nl> Message-ID: On Wed, Aug 20, 2014 at 3:07 PM, wrote: > On Wed, Aug 20, 2014 at 1:47 PM, wrote: > >> >> >> > >> > Is the thread id deterministic between runs? If not, please do not > use this layout. I remember vaguely Unique being relevant to ghc not > having deterministic builds, my most wanted ghc feature: > > https://ghc.haskell.org/trac/ghc/ticket/4012 > > > I think this depends on the policy GHC *will* have (there is not > parallel build atm) wrt. the forking of threads. 
An actual > Control.Concurrent.ThreadId might be as large as 64 bits, so, of course, we > won't be using that, but rather the sequence number in which the > UniqueSupply was "split off" for a new thread. In other words, if the > decision to fork threads is deterministic, so are the Uniques with this > layout. > > Mind you, I imagine a parallel GHC would still have at most one thread > working on a single module. I don't know too much about what makes it into > the interface file of a module (I can't imagine the exact Uniques end up > there, because they would overlap with other modules - with per-module > compilation - and conflict that way). > > Can you comment on how (the layout of) Uniques relate to #4012 in a > little more detail? It seems that if the Uniques that somehow end up in the > interface files could simply be stripped of the thread id, in which case, > the problem reduces to the current one. > > I frankly don't know. I just think it's better to keep ThreadId out of data that can bleed into symbols and what not. As you can see, the thread id is just a counter, and as forkIO in a threaded runtime will be racy between threads, they aren't deterministic. http://stackoverflow.com/questions/24995262/how-can-i-build-a-threadid-given-that-i-know-the-actual-number Alexander -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From p.k.f.holzenspies at utwente.nl Wed Aug 20 14:47:41 2014 From: p.k.f.holzenspies at utwente.nl (p.k.f.holzenspies at utwente.nl) Date: Wed, 20 Aug 2014 14:47:41 +0000 Subject: Unique as special boxing type & hidden constructors In-Reply-To: References: <13aaa2dd98944a3e95cc03c5139fbbb7@EXMBX31.ad.utwente.nl> <618BE556AADD624C9C918AA5D5911BEF221BCE08@DB3PRD3001MB020.064d.mgd.msft.net> <276cee2de3a842faa3696d20646e23a2@EXMBX31.ad.utwente.nl> <618BE556AADD624C9C918AA5D5911BEF221C4977@DBXPRD3001MB024.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF221E71FC@DB3PRD3001MB020.064d.mgd.msft.net> <92f3259cee3940febdd157aede4b423f@EXMBX31.ad.utwente.nl> <725e38c4b7c14474ae99fb75651b805f@EXMBX31.ad.utwente.nl>, Message-ID: <88a65a9e33384010b3881d46816638a0@EXMBX31.ad.utwente.nl> Dear Max, et al, Here's hoping either you are still on the mailing list, or the address I found on your website (which says you're a Ph.D. student, so it's starting to smell) is still operational. I'm working on redoing some Unique-stuff in GHC. Mostly, the code uses Unique's API in a well-behaved fashion. The only awkward bit I found is in BinIface.getSymtabName, which git blames you for ;) I just wanted to ask: Why does this function do all the bit-masking and shifting stuff directly and with different masks than anything in Unique? Is there a reason why this doesn't use unpkUnique? The comments in Unique state that mkUnique is NOT EXPORTED (the caps are in the comments, I'm not shouting), but it is, it seems, exported specifically for BinIface. I would like to get rid of this, but dare not hack away in the dark. Regards, Philip ________________________________ From: Alexander Kjeldaas Sent: 20 August 2014 15:48 To: Holzenspies, P.K.F. (EWI) Cc: Simon Peyton Jones; ghc-devs Subject: Re: Unique as special boxing type & hidden constructors On Wed, Aug 20, 2014 at 3:07 PM, > wrote: On Wed, Aug 20, 2014 at 1:47 PM, > wrote: Is the thread id deterministic between runs? 
If not, please do not use this layout. I remember vaguely Unique being relevant to ghc not having deterministic builds, my most wanted ghc feature: https://ghc.haskell.org/trac/ghc/ticket/4012 I think this depends on the policy GHC *will* have (there is not parallel build atm) wrt. the forking of threads. An actual Control.Concurrent.ThreadId might be as large as 64 bits, so, of course, we won't be using that, but rather the sequence number in which the UniqueSupply was "split off" for a new thread. In other words, if the decision to fork threads is deterministic, so are the Uniques with this layout. Mind you, I imagine a parallel GHC would still have at most one thread working on a single module. I don't know too much about what makes it into the interface file of a module (I can't imagine the exact Uniques end up there, because they would overlap with other modules - with per-module compilation - and conflict that way). Can you comment on how (the layout of) Uniques relate to #4012 in a little more detail? It seems that if the Uniques that somehow end up in the interface files could simply be stripped of the thread id, in which case, the problem reduces to the current one. I frankly don't know. I just think it's better to keep ThreadId out of data that can bleed into symbols and what not. As you can see, the thread id is just a counter, and as forkIO in a threaded runtime will be racy between threads, they aren't deterministic. http://stackoverflow.com/questions/24995262/how-can-i-build-a-threadid-given-that-i-know-the-actual-number Alexander -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mail at joachim-breitner.de Wed Aug 20 14:59:22 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Wed, 20 Aug 2014 07:59:22 -0700 Subject: Status updates In-Reply-To: References: Message-ID: <1408546762.2434.6.camel@joachim-breitner.de> Hi, On Tuesday, 19.08.2014, at 08:39 -0500, Austin Seipp wrote: > - Phabricator now has significantly better build integration, as I'm > sure many of you have seen. It is less noisy (and doesn't email you as > much), has better logging support (that actually works), and it now > builds commits AND patches! It's been in-production since last week > and a lot more reliable than the crap I wrote before. Can this fully replace Travis (which would be nice, given the problems caused by Travis's resource constraints)? Greetings, Joachim -- Joachim 'nomeata' Breitner mail at joachim-breitner.de · http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de · GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From iavor.diatchki at gmail.com Wed Aug 20 20:32:26 2014 From: iavor.diatchki at gmail.com (Iavor Diatchki) Date: Wed, 20 Aug 2014 13:32:26 -0700 Subject: Status of GHC targetting ARM? Message-ID: Hello, Does anyone have information about the status of GHC cross-compiling to ARM architectures? More specifically, I am interested in finding out what works and what still needs to be done before we can get GHC to generate binaries that can run on the various mobile platforms (Android, iOS). -Iavor -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Wed Aug 20 22:47:41 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 20 Aug 2014 22:47:41 +0000 Subject: Windows build fails -- again! 
In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF221E3E02@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF221DC620@DBXPRD3001MB024.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF221E3E02@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <618BE556AADD624C9C918AA5D5911BEF221E9E1C@DB3PRD3001MB020.064d.mgd.msft.net> Help! My Windows build is still falling over as below. Andreas, you seem to be the author of the commit that broke this. I'd really appreciate a fix. (From anyone!) thank you Simon | -----Original Message----- | From: Simon Peyton Jones | Sent: 20 August 2014 09:26 | To: Gabor Greif; ghc-devs at haskell.org | Subject: RE: Windows build fails -- again! | | Thanks Gabor. But it makes no difference. Your change is inside an | #ifdef that checks for windows, and your change is in the no-windows | branch only. | | Also there are two IOManager.h file | includes/rts/IOManager.h | rts/win32/IOManager.h | | Should there be? It seems terribly confusing, and I have no idea which | will win when it is #included. | | Thanks | | Simon | | | -----Original Message----- | | From: Gabor Greif [mailto:ggreif at gmail.com] | | Sent: 19 August 2014 23:38 | | To: Simon Peyton Jones | | Subject: Re: Windows build fails -- again! | | | | Simon, | | | | try this (attached) patch: | | | | $ git am 0001-Make-sure-that-a-prototype-is-included-for- | | setIOMana.patch | | | | Cheers, | | | | Gabor | | | | PS: on MacOS all is good, so I could not test it at all | | | | On 8/20/14, Simon Peyton Jones wrote: | | > Aaargh! My windows build is broken, again. | | > It's very painful that this keeps happening. | | > Can anyone help? 
| | > Simon | | > | | > "inplace/bin/ghc-stage1.exe" -optc-U__i686 -optc-march=i686 | | > -optc-fno-stack-protector -optc-Werror -optc-Wall -optc-Wall | | > -optc-Wextra -optc-Wstrict-prototypes -optc-Wmissing-prototypes | | > -optc-Wmissing-declarations -optc-Winline -optc-Waggregate-return | | > -optc-Wpointer-arith -optc-Wmissing-noreturn -optc-Wnested-externs | | > -optc-Wredundant-decls -optc-Iincludes -optc-Iincludes/dist | | > -optc-Iincludes/dist-derivedconstants/header | | > -optc-Iincludes/dist-ghcconstants/header -optc-Irts | | > -optc-Irts/dist/build -optc-DCOMPILING_RTS -optc-fno-strict-aliasing | | > -optc-fno-common -optc-O2 -optc-fomit-frame-pointer | | > -optc-DRtsWay=\"rts_v\" -static -H32m -O -Werror -Wall -H64m -O0 | | > -Iincludes -Iincludes/dist -Iincludes/dist-derivedconstants/header | | > -Iincludes/dist-ghcconstants/header | | > -Irts -Irts/dist/build -DCOMPILING_RTS -this-package-key rts | | > -dcmm-lint -i -irts -irts/dist/build -irts/dist/build/autogen - | | Irts/dist/build | | > -Irts/dist/build/autogen -O2 -c rts/Task.c -o | | > rts/dist/build/Task.o | | > | | > cc1.exe: warnings being treated as errors | | > | | > | | > | | > rts\Capability.c:1080:6: | | > | | > error: no previous prototype for 'setIOManagerControlFd' | | > | | > rts/ghc.mk:236: recipe for target 'rts/dist/build/Capability.o' | | failed | | > | | > make[1]: *** [rts/dist/build/Capability.o] Error 1 | | > | | > make[1]: *** Waiting for unfinished jobs.... | | > | | > Makefile:71: recipe for target 'all' failed | | > | | > make: *** [all] Error 2 | | > | | > HEAD (master)$ | | > | | > | | > From lukexipd at gmail.com Thu Aug 21 11:23:55 2014 From: lukexipd at gmail.com (Luke Iannini) Date: Thu, 21 Aug 2014 04:23:55 -0700 Subject: Status of GHC targetting ARM? In-Reply-To: References: Message-ID: Hi Iavor! I work with/on GHC for iOS daily and have been using it on a large near-production project for ~2 years without trouble. 
We released binaries for iOS compilers as part of 7.8: http://www.haskell.org/ghc/download_ghc_7_8_3#ios along with these toolchain scripts https://github.com/ghc-ios/ghc-ios-scripts and Cabal works well with it too. I've recently completed a patchset for LLVM and GHC to add ARM64 support that I'll be submitting soon. As far as building GHC for iOS, all of the patches are in mainline except for an out-of-date libffi, and some tweaks to the bindist system when generating redistributable binaries. The only major missing piece is Template Haskell support. Best Luke On Wed, Aug 20, 2014 at 1:32 PM, Iavor Diatchki wrote: > Hello, > > Does anyone have information about the status of GHC cross-compiling to > ARM architectures? More specifically, I am interested in finding out what > works and what still needs to be done before we can get GHC to generate > binaries that can run on the various mobile platforms (Android, iOS). > > -Iavor > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From scpmw at leeds.ac.uk Thu Aug 21 19:16:41 2014 From: scpmw at leeds.ac.uk (Peter Wortmann) Date: Thu, 21 Aug 2014 20:16:41 +0100 Subject: How's the integration of DWARF support coming along? In-Reply-To: References: <53EBA10E.8060909@student.chalmers.se> Message-ID: <464A8583-5A46-4488-B736-E2FDC7752BE3@leeds.ac.uk> Okay, I have uploaded the "core" set of patches to Phab: https://phabricator.haskell.org/D155 Not entirely sure this is the best way to go about it - even though this barely covers the essentials, it is still a huge patch. If that makes more sense, I might try to set it up as a series of dependent diffs. As usual, the "full"
stack of patches is on GitHub: http://github.com/scpmw/ghc/commits/profiling-import Greetings, Peter From howard_b_golden at yahoo.com Thu Aug 21 21:29:38 2014 From: howard_b_golden at yahoo.com (Howard B. Golden) Date: Thu, 21 Aug 2014 14:29:38 -0700 Subject: Suggestion for GHC System User's Guide documentation change Message-ID: <1408656578.37744.YahooMailNeo@web120804.mail.ne1.yahoo.com> I suggest changing the User's Guide extensions documentation to consistently use the LANGUAGE pragma form to specify extensions and code examples, rather than a combination of LANGUAGE pragmas and -XExtension flags. I find the combination of the two confusing. Also, the reader copying code examples which require a specific LANGUAGE to compile will be assisted by including the LANGUAGE pragma in the code examples. For example, in section 7.3, I would change: -------------------------------------------- 7.3. Syntactic extensions 7.3.1. Unicode syntax The language extension -XUnicodeSyntax enables Unicode characters to be used to stand for certain ASCII character sequences. -------------------------------------------- To: -------------------------------------------- 7.3. Syntactic extensions 7.3.1. Unicode syntax The language extension {-# LANGUAGE UnicodeSyntax #-} enables Unicode characters to be used to stand for certain ASCII character sequences. -------------------------------------------- Similarly, I would include the required LANGUAGE pragma(s) in _all_ code examples. For example, in section 7.3.7, I would change: -------------------------------------------- type Typ data TypView = Unit | Arrow Typ Typ view :: Typ -> TypView -- additional operations for constructing Typ's ... -------------------------------------------- To: -------------------------------------------- {-# LANGUAGE ViewPatterns #-} type Typ data TypView = Unit | Arrow Typ Typ view :: Typ -> TypView -- additional operations for constructing Typ's ...
-------------------------------------------- I realize that LANGUAGE pragmas must be in file headers. While it is possible that users may be confused if they try to put pragmas in the body of a source file, I believe this will be outweighed by the benefit of making the examples clearer about the extensions necessary to use them. If this change is accepted, I volunteer to make the necessary documentation patches to implement it. Howard B. Golden Northridge, CA USA From bgamari.foss at gmail.com Thu Aug 21 23:00:40 2014 From: bgamari.foss at gmail.com (Ben Gamari) Date: Thu, 21 Aug 2014 19:00:40 -0400 Subject: How's the integration of DWARF support coming along? In-Reply-To: <464A8583-5A46-4488-B736-E2FDC7752BE3@leeds.ac.uk> References: <53EBA10E.8060909@student.chalmers.se> <464A8583-5A46-4488-B736-E2FDC7752BE3@leeds.ac.uk> Message-ID: <8738cptplz.fsf@gmail.com> Peter Wortmann writes: > Okay, I have uploaded the "core" set of patches to Phab: > > https://phabricator.haskell.org/D155 > Surely you mean D169 [1]? Cheers, - Ben [1] https://phabricator.haskell.org/D169 -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From simonpj at microsoft.com Fri Aug 22 07:37:07 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 22 Aug 2014 07:37:07 +0000 Subject: Suggestion for GHC System User's Guide documentation change In-Reply-To: <1408656578.37744.YahooMailNeo@web120804.mail.ne1.yahoo.com> References: <1408656578.37744.YahooMailNeo@web120804.mail.ne1.yahoo.com> Message-ID: <618BE556AADD624C9C918AA5D5911BEF221ED15F@DB3PRD3001MB020.064d.mgd.msft.net> I'd be ok with this. It's a bit more verbose, but if it's less confusing for our users, then go for it. Thanks for offering to make a patch! Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of | Howard B.
Golden | Sent: 21 August 2014 22:30 | To: ghc-devs at haskell.org | Subject: Suggestion for GHC System User's Guide documentation change | | I suggest changing the User's Guide extensions documentation to | consistently use the LANGUAGE pragma form to specify extensions and | code examples, rather than a combination of LANGUAGE pragmas and - | XExtension flags. I find the combination of the two confusing. Also, | the reader copying code examples which require a specific LANGUAGE to | compile will be assisted by including the LANGUAGE pragma in the code | examples. | | | For example, in section 7.3, I would change: | -------------------------------------------- | | 7.3. Syntactic extensions | 7.3.1. Unicode syntax | | The language extension -XUnicodeSyntax enables Unicode characters to be | used to stand for certain ASCII character sequences. | -------------------------------------------- | | | To: | -------------------------------------------- | 7.3. Syntactic extensions | 7.3.1. Unicode syntax | | The language extension {-# LANGUAGE UnicodeSyntax #-} enables Unicode | characters to be used to stand for certain ASCII character sequences. | -------------------------------------------- | | | | Similarly, I would include the required LANGUAGE pragma(s) in _all_ | code examples. For example, in section 7.3.7, I would change: | -------------------------------------------- | | type Typ | | data TypView = Unit | | Arrow Typ Typ | | view :: Typ -> TypView | | -- additional operations for constructing Typ's ... | -------------------------------------------- | | | To: | -------------------------------------------- | | | {-# LANGUAGE ViewPatterns #-} | type Typ | | data TypView = Unit | | Arrow Typ Typ | | view :: Typ -> TypView | | -- additional operations for constructing Typ's ... | -------------------------------------------- | | I realize that LANGUAGE pragmas must be in file headers.
While it is | possible that users may be confused if they try to put pragmas in the | body of a source file, I believe this will be outweighed by the benefit | of making the examples clearer about the extensions necessary to use | them. | | If this change is accepted, I volunteer to make the necessary | documentation patches to implement it. | | | Howard B. Golden | Northridge, CA USA | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | http://www.haskell.org/mailman/listinfo/ghc-devs From alexander at plaimi.net Fri Aug 22 10:59:41 2014 From: alexander at plaimi.net (Alexander Berntsen) Date: Fri, 22 Aug 2014 12:59:41 +0200 Subject: The formal definition of a crash in GHC In-Reply-To: <59543203684B2244980D7E4057D5FBC1487CE8E1@DB3EX14MBXC306.europe.corp.microsoft.com> References: <531066B3.7020609@plaimi.net> <59543203684B2244980D7E4057D5FBC1487CE8E1@DB3EX14MBXC306.europe.corp.microsoft.com> Message-ID: <53F7229D.4030808@plaimi.net> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 On 28/02/14 13:41, Simon Peyton Jones wrote: > Crashing is usually formalised by the "progress" and "type > preservation" theorems that papers about statically-typed > programming languages usually offer. You will find many examples > of such theorems (and their proofs) in the papers about GHC's > intermediate language > http://research.microsoft.com/en-us/um/people/simonpj/papers/ext-f/ > > A "crash" would mean that execution get stuck, and the progress > theorem guarantees that cannot happen. Would you, or anyone else, be able to make a wiki article on these theorems? So that we have a centralised resource that we can refer to. 
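[For readers wanting the gist before such a wiki page exists, the two theorems are standardly stated roughly as follows; this is the textbook formulation, not a GHC-specific one.]

```latex
\textbf{Progress:} \quad
  \text{if } \vdash e : \tau, \text{ then either } e \text{ is a value, or }
  \exists e'.\; e \longrightarrow e'.

\textbf{Preservation:} \quad
  \text{if } \vdash e : \tau \text{ and } e \longrightarrow e',
  \text{ then } \vdash e' : \tau.
```

Together these give type safety: by preservation every step stays well typed, and by progress a well-typed term is never stuck, so evaluation can never reach a "crashed" state.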
- -- Alexander alexander at plaimi.net https://secure.plaimi.net/~alexander -----BEGIN PGP SIGNATURE----- Version: GnuPG v2 Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/ iF4EAREIAAYFAlP3Ip0ACgkQRtClrXBQc7WfAAD/VoQHW/bZLgdgoLCkSsLBGyIO B4F30jFLGuEB7NRvoIQBAKm0V4y2eutGUiSii0lTnzsP8Md3hKbZpUlcrqFmEQzf =OBHG -----END PGP SIGNATURE----- From simonpj at microsoft.com Fri Aug 22 12:07:06 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 22 Aug 2014 12:07:06 +0000 Subject: Windows build fails -- again! References: <618BE556AADD624C9C918AA5D5911BEF221DC620@DBXPRD3001MB024.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF221E3E02@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <618BE556AADD624C9C918AA5D5911BEF221ED7F6@DB3PRD3001MB020.064d.mgd.msft.net> Friends My Windows build is still broken, and has been since Andreas's patch commit f9f89b7884ccc8ee5047cf4fffdf2b36df6832df on Tues 19th. Please can someone help? I'm begging. I suppose that if I hear nothing I can simply revert his patch but that seems like the Wrong Solution Thanks Simon | -----Original Message----- | From: Simon Peyton Jones | Sent: 20 August 2014 23:48 | To: 'Gabor Greif'; 'ghc-devs at haskell.org'; 'Andreas Voellmy' | Subject: RE: Windows build fails -- again! | | Help! My Windows build is still falling over as below. | | Andreas, you seem to be the author of the commit that broke this. I'd | really appreciate a fix. (From anyone!) | | thank you | | Simon | | | -----Original Message----- | | From: Simon Peyton Jones | | Sent: 20 August 2014 09:26 | | To: Gabor Greif; ghc-devs at haskell.org | | Subject: RE: Windows build fails -- again! | | | | Thanks Gabor. But it makes no difference. Your change is inside an | | #ifdef that checks for windows, and your change is in the no-windows | | branch only. | | | | Also there are two IOManager.h file | | includes/rts/IOManager.h | | rts/win32/IOManager.h | | | | Should there be? 
It seems terribly confusing, and I have no idea which | | will win when it is #included. | | | | Thanks | | | | Simon | | | | | -----Original Message----- | | | From: Gabor Greif [mailto:ggreif at gmail.com] | | | Sent: 19 August 2014 23:38 | | | To: Simon Peyton Jones | | | Subject: Re: Windows build fails -- again! | | | | | | Simon, | | | | | | try this (attached) patch: | | | | | | $ git am 0001-Make-sure-that-a-prototype-is-included-for- | | | setIOMana.patch | | | | | | Cheers, | | | | | | Gabor | | | | | | PS: on MacOS all is good, so I could not test it at all | | | | | | On 8/20/14, Simon Peyton Jones wrote: | | | > Aaargh! My windows build is broken, again. | | | > It's very painful that this keeps happening. | | | > Can anyone help? | | | > Simon | | | > | | | > "inplace/bin/ghc-stage1.exe" -optc-U__i686 -optc-march=i686 | | | > -optc-fno-stack-protector -optc-Werror -optc-Wall -optc-Wall | | | > -optc-Wextra -optc-Wstrict-prototypes -optc-Wmissing-prototypes | | | > -optc-Wmissing-declarations -optc-Winline -optc-Waggregate-return | | | > -optc-Wpointer-arith -optc-Wmissing-noreturn -optc-Wnested-externs | | | > -optc-Wredundant-decls -optc-Iincludes -optc-Iincludes/dist | | | > -optc-Iincludes/dist-derivedconstants/header | | | > -optc-Iincludes/dist-ghcconstants/header -optc-Irts | | | > -optc-Irts/dist/build -optc-DCOMPILING_RTS -optc-fno-strict- | aliasing | | | > -optc-fno-common -optc-O2 -optc-fomit-frame-pointer | | | > -optc-DRtsWay=\"rts_v\" -static -H32m -O -Werror -Wall -H64m -O0 | | | > -Iincludes -Iincludes/dist -Iincludes/dist-derivedconstants/header | | | > -Iincludes/dist-ghcconstants/header | | | > -Irts -Irts/dist/build -DCOMPILING_RTS -this-package-key rts | | | > -dcmm-lint -i -irts -irts/dist/build -irts/dist/build/autogen - | | | Irts/dist/build | | | > -Irts/dist/build/autogen -O2 -c rts/Task.c -o | | | > rts/dist/build/Task.o | | | > | | | > cc1.exe: warnings being treated as errors | | | > | | | > | | | > | | | > 
rts\Capability.c:1080:6: | | | > | | | > error: no previous prototype for 'setIOManagerControlFd' | | | > | | | > rts/ghc.mk:236: recipe for target 'rts/dist/build/Capability.o' | | | failed | | | > | | | > make[1]: *** [rts/dist/build/Capability.o] Error 1 | | | > | | | > make[1]: *** Waiting for unfinished jobs.... | | | > | | | > Makefile:71: recipe for target 'all' failed | | | > | | | > make: *** [all] Error 2 | | | > | | | > HEAD (master)$ | | | > | | | > | | | > From johan.tibell at gmail.com Fri Aug 22 12:17:28 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Fri, 22 Aug 2014 14:17:28 +0200 Subject: Windows build fails -- again! In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF221ED7F6@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF221DC620@DBXPRD3001MB024.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF221E3E02@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF221ED7F6@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: I think reverting the patch (and notifying the author) is an OK course of action if it's affecting other people's work. I believe it's common practice at many other places. The patch can always be replied later, after fixing it. P.S. We should re-open the bug if we revert the patch (i.e. by git revert SHA1 && git push). On Fri, Aug 22, 2014 at 2:07 PM, Simon Peyton Jones wrote: > Friends > > My Windows build is still broken, and has been since Andreas's patch > commit f9f89b7884ccc8ee5047cf4fffdf2b36df6832df > on Tues 19th. > > Please can someone help? I'm begging. > > I suppose that if I hear nothing I can simply revert his patch but that > seems like the Wrong Solution > > Thanks > > Simon > > | -----Original Message----- > | From: Simon Peyton Jones > | Sent: 20 August 2014 23:48 > | To: 'Gabor Greif'; 'ghc-devs at haskell.org'; 'Andreas Voellmy' > | Subject: RE: Windows build fails -- again! > | > | Help! My Windows build is still falling over as below. 
> | > | Andreas, you seem to be the author of the commit that broke this. I'd > | really appreciate a fix. (From anyone!) > | > | thank you > | > | Simon > | > | | -----Original Message----- > | | From: Simon Peyton Jones > | | Sent: 20 August 2014 09:26 > | | To: Gabor Greif; ghc-devs at haskell.org > | | Subject: RE: Windows build fails -- again! > | | > | | Thanks Gabor. But it makes no difference. Your change is inside an > | | #ifdef that checks for windows, and your change is in the no-windows > | | branch only. > | | > | | Also there are two IOManager.h file > | | includes/rts/IOManager.h > | | rts/win32/IOManager.h > | | > | | Should there be? It seems terribly confusing, and I have no idea which > | | will win when it is #included. > | | > | | Thanks > | | > | | Simon > | | > | | | -----Original Message----- > | | | From: Gabor Greif [mailto:ggreif at gmail.com] > | | | Sent: 19 August 2014 23:38 > | | | To: Simon Peyton Jones > | | | Subject: Re: Windows build fails -- again! > | | | > | | | Simon, > | | | > | | | try this (attached) patch: > | | | > | | | $ git am 0001-Make-sure-that-a-prototype-is-included-for- > | | | setIOMana.patch > | | | > | | | Cheers, > | | | > | | | Gabor > | | | > | | | PS: on MacOS all is good, so I could not test it at all > | | | > | | | On 8/20/14, Simon Peyton Jones wrote: > | | | > Aaargh! My windows build is broken, again. > | | | > It's very painful that this keeps happening. > | | | > Can anyone help? 
> | | | > Simon > | | | > > | | | > "inplace/bin/ghc-stage1.exe" -optc-U__i686 -optc-march=i686 > | | | > -optc-fno-stack-protector -optc-Werror -optc-Wall -optc-Wall > | | | > -optc-Wextra -optc-Wstrict-prototypes -optc-Wmissing-prototypes > | | | > -optc-Wmissing-declarations -optc-Winline -optc-Waggregate-return > | | | > -optc-Wpointer-arith -optc-Wmissing-noreturn -optc-Wnested-externs > | | | > -optc-Wredundant-decls -optc-Iincludes -optc-Iincludes/dist > | | | > -optc-Iincludes/dist-derivedconstants/header > | | | > -optc-Iincludes/dist-ghcconstants/header -optc-Irts > | | | > -optc-Irts/dist/build -optc-DCOMPILING_RTS -optc-fno-strict- > | aliasing > | | | > -optc-fno-common -optc-O2 -optc-fomit-frame-pointer > | | | > -optc-DRtsWay=\"rts_v\" -static -H32m -O -Werror -Wall -H64m -O0 > | | | > -Iincludes -Iincludes/dist -Iincludes/dist-derivedconstants/header > | | | > -Iincludes/dist-ghcconstants/header > | | | > -Irts -Irts/dist/build -DCOMPILING_RTS -this-package-key rts > | | | > -dcmm-lint -i -irts -irts/dist/build -irts/dist/build/autogen - > | | | Irts/dist/build > | | | > -Irts/dist/build/autogen -O2 -c rts/Task.c -o > | | | > rts/dist/build/Task.o > | | | > > | | | > cc1.exe: warnings being treated as errors > | | | > > | | | > > | | | > > | | | > rts\Capability.c:1080:6: > | | | > > | | | > error: no previous prototype for 'setIOManagerControlFd' > | | | > > | | | > rts/ghc.mk:236: recipe for target 'rts/dist/build/Capability.o' > | | | failed > | | | > > | | | > make[1]: *** [rts/dist/build/Capability.o] Error 1 > | | | > > | | | > make[1]: *** Waiting for unfinished jobs.... 
> | | | > > | | | > Makefile:71: recipe for target 'all' failed > | | | > > | | | > make: *** [all] Error 2 > | | | > > | | | > HEAD (master)$ > | | | > > | | | > > | | | > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.k.f.holzenspies at utwente.nl Fri Aug 22 12:38:57 2014 From: p.k.f.holzenspies at utwente.nl (p.k.f.holzenspies at utwente.nl) Date: Fri, 22 Aug 2014 12:38:57 +0000 Subject: Suggestion for GHC System User's Guide documentation change In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF221ED15F@DB3PRD3001MB020.064d.mgd.msft.net> References: <1408656578.37744.YahooMailNeo@web120804.mail.ne1.yahoo.com>, <618BE556AADD624C9C918AA5D5911BEF221ED15F@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <7647da15cc624842a8888ac2ecce6901@EXMBX31.ad.utwente.nl> Marginally less verbose; why not use the language extension *only* in running text? Preferably with a link to the documentation of that language extension. In your example: | The language extension UnicodeSyntax enables Unicode characters to be | used to stand for certain ASCII character sequences.? With regards to code examples: Ideally any explicit code example could just be copy-pasted into a .hs-file and loaded into ghci / compiled with ghc without special switches. Just my two cents ;) Ph. ________________________________ From: Simon Peyton Jones Sent: 22 August 2014 09:37 To: Howard B. Golden; ghc-devs at haskell.org Subject: RE: Suggestion for GHC System User's Guide documentation change I'd be ok with this. It's a bit more verbose, but if it's less confusing for our users, then go for it. Thanks for offering to make a patch! SImon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of | Howard B. 
Golden | Sent: 21 August 2014 22:30 | To: ghc-devs at haskell.org | Subject: Suggestion for GHC System User's Guide documentation change | | I suggest changing the User's Guide extensions documentation to | consistently use the LANGUAGE pragma form to specify extensions and | code examples, rather than a combination of LANGUAGE pragmas and - | XExtension flags. I find the combination of the two confusing. Also, | the reader copying code examples which require a specific LANGUAGE to | compile will be assisted by including the LANGUAGE pragma in the code | examples. | | | For example, in section 7.3, I would change: | -------------------------------------------- | | 7.3. Syntactic extensions | 7.3.1. Unicode syntax | | The language extension -XUnicodeSyntax enables Unicode characters to be | used to stand for certain ASCII character sequences. | -------------------------------------------- | | | To: | -------------------------------------------- | 7.3. Syntactic extensions | 7.3.1. Unicode syntax | | The language extension {-# LANGUAGE UnicodeSyntax #-} enables Unicode | characters to be used to stand for certain ASCII character sequences. | -------------------------------------------- | | | | Similarly, I would include the required LANGUAGE pragma(s) in _all_ | code examples. For example, in section 7.3.7, I would change: | -------------------------------------------- | | type Typ | | data TypView = Unit | | Arrow Typ Typ | | view :: Typ -> TypView | | -- additional operations for constructing Typ's ... | -------------------------------------------- | | | To: | -------------------------------------------- | | | {-# LANGUAGE ViewPatterns #-} | type Typ | | data TypView = Unit | | Arrow Typ Typ | | view :: Typ -> TypView | | -- additional operations for constructing Typ's ... | -------------------------------------------- | | I realize that LANGUAGE pragmas must be in file headers. 
While it is | possible that users may be confused if they try to put pragmas in the | body of a source file, I believe this will be outweighed by the benefit | of making the examples clearer about the extensions necessary to use | them. | | If this change is accepted, I volunteer to make the necessary | documentation patches to implement it. | | | Howard B. Golden | Northridge, CA USA | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | http://www.haskell.org/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From kyrab at mail.ru Fri Aug 22 13:04:43 2014 From: kyrab at mail.ru (kyra) Date: Fri, 22 Aug 2014 17:04:43 +0400 Subject: Windows build fails -- again! In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF221ED7F6@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF221DC620@DBXPRD3001MB024.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF221E3E02@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF221ED7F6@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <53F73FEB.3010908@mail.ru> I've looked into this patch, it looks like this patch was intended to touch only linuxish IO Manager, but in fact it touched common (os unrelated) code here and there extensively and there are almost no chances somebody else (not the author) can fix the things. So, almost the only way to unbreak the build it to revert the patch. Regards, Kyra On 8/22/2014 16:07, Simon Peyton Jones wrote: > Friends > > My Windows build is still broken, and has been since Andreas's patch > commit f9f89b7884ccc8ee5047cf4fffdf2b36df6832df > on Tues 19th. > > Please can someone help? I'm begging. 
> > I suppose that if I hear nothing I can simply revert his patch but that seems like the Wrong Solution > > Thanks > > Simon > > | -----Original Message----- > | From: Simon Peyton Jones > | Sent: 20 August 2014 23:48 > | To: 'Gabor Greif'; 'ghc-devs at haskell.org'; 'Andreas Voellmy' > | Subject: RE: Windows build fails -- again! > | > | Help! My Windows build is still falling over as below. > | > | Andreas, you seem to be the author of the commit that broke this. I'd > | really appreciate a fix. (From anyone!) > | > | thank you > | > | Simon > | > | | -----Original Message----- > | | From: Simon Peyton Jones > | | Sent: 20 August 2014 09:26 > | | To: Gabor Greif; ghc-devs at haskell.org > | | Subject: RE: Windows build fails -- again! > | | > | | Thanks Gabor. But it makes no difference. Your change is inside an > | | #ifdef that checks for windows, and your change is in the no-windows > | | branch only. > | | > | | Also there are two IOManager.h file > | | includes/rts/IOManager.h > | | rts/win32/IOManager.h > | | > | | Should there be? It seems terribly confusing, and I have no idea which > | | will win when it is #included. > | | > | | Thanks > | | > | | Simon > | | > | | | -----Original Message----- > | | | From: Gabor Greif [mailto:ggreif at gmail.com] > | | | Sent: 19 August 2014 23:38 > | | | To: Simon Peyton Jones > | | | Subject: Re: Windows build fails -- again! > | | | > | | | Simon, > | | | > | | | try this (attached) patch: > | | | > | | | $ git am 0001-Make-sure-that-a-prototype-is-included-for- > | | | setIOMana.patch > | | | > | | | Cheers, > | | | > | | | Gabor > | | | > | | | PS: on MacOS all is good, so I could not test it at all > | | | > | | | On 8/20/14, Simon Peyton Jones wrote: > | | | > Aaargh! My windows build is broken, again. > | | | > It's very painful that this keeps happening. > | | | > Can anyone help? 
> | | | > Simon > | | | > > | | | > "inplace/bin/ghc-stage1.exe" -optc-U__i686 -optc-march=i686 > | | | > -optc-fno-stack-protector -optc-Werror -optc-Wall -optc-Wall > | | | > -optc-Wextra -optc-Wstrict-prototypes -optc-Wmissing-prototypes > | | | > -optc-Wmissing-declarations -optc-Winline -optc-Waggregate-return > | | | > -optc-Wpointer-arith -optc-Wmissing-noreturn -optc-Wnested-externs > | | | > -optc-Wredundant-decls -optc-Iincludes -optc-Iincludes/dist > | | | > -optc-Iincludes/dist-derivedconstants/header > | | | > -optc-Iincludes/dist-ghcconstants/header -optc-Irts > | | | > -optc-Irts/dist/build -optc-DCOMPILING_RTS -optc-fno-strict- > | aliasing > | | | > -optc-fno-common -optc-O2 -optc-fomit-frame-pointer > | | | > -optc-DRtsWay=\"rts_v\" -static -H32m -O -Werror -Wall -H64m -O0 > | | | > -Iincludes -Iincludes/dist -Iincludes/dist-derivedconstants/header > | | | > -Iincludes/dist-ghcconstants/header > | | | > -Irts -Irts/dist/build -DCOMPILING_RTS -this-package-key rts > | | | > -dcmm-lint -i -irts -irts/dist/build -irts/dist/build/autogen - > | | | Irts/dist/build > | | | > -Irts/dist/build/autogen -O2 -c rts/Task.c -o > | | | > rts/dist/build/Task.o > | | | > > | | | > cc1.exe: warnings being treated as errors > | | | > > | | | > > | | | > > | | | > rts\Capability.c:1080:6: > | | | > > | | | > error: no previous prototype for 'setIOManagerControlFd' > | | | > > | | | > rts/ghc.mk:236: recipe for target 'rts/dist/build/Capability.o' > | | | failed > | | | > > | | | > make[1]: *** [rts/dist/build/Capability.o] Error 1 > | | | > > | | | > make[1]: *** Waiting for unfinished jobs.... 
> | | | > > | | | > Makefile:71: recipe for target 'all' failed > | | | > > | | | > make: *** [all] Error 2 > | | | > > | | | > HEAD (master)$ > | | | > > | | | > > | | | > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > From david.feuer at gmail.com Fri Aug 22 13:38:13 2014 From: david.feuer at gmail.com (David Feuer) Date: Fri, 22 Aug 2014 09:38:13 -0400 Subject: [GHC] #9496: Simplify primitives for short cut fusion In-Reply-To: <060.cc818e78c7c4486a1985158156722601@haskell.org> References: <045.ac67b2e89db14da0e2a7c577a8db7e03@haskell.org> <060.cc818e78c7c4486a1985158156722601@haskell.org> Message-ID: Yes, I meant "producer" there. On Fri, Aug 22, 2014 at 9:36 AM, GHC wrote: > #9496: Simplify primitives for short cut fusion > -------------------------------------+------------------------------------- > Reporter: dfeuer | Owner: dfeuer > Type: task | Status: new > Priority: normal | Milestone: > Component: | Version: 7.8.3 > libraries/base | Keywords: fusion > Resolution: | Architecture: Unknown/Multiple > Operating System: | Difficulty: Unknown > Unknown/Multiple | Blocked By: > Type of failure: Other | Related Tickets: > Test Case: | > Blocking: | > Differential Revisions: | > -------------------------------------+------------------------------------- > > Comment (by dfeuer): > > Replying to [comment:2 simonpj]: > > I believe that there are good reasons for distinguishing build and > augment. [http://research.microsoft.com/en-us/um/people/simonpj/papers > /andy-thesis.ps.gz Andy Gill's thesis] would be a good place to look. > But perhaps one could do everything in terms of augment; I'm not sure. > Worth a try. > > > > I think there is really only one primitive consumer, foldr. I thought > we rewrote into foldr and then back. If that is not done for or, any, > etc, I'm not sure why. Again, perhaps worth investigation. 
> > > > Certainly the original goal of the foldr/build paper was to say "ONE
> rule, not n*m rules".
> >
> > Simon
>
> An aside: Just last night I saw a bit of the work Takano Akio has done on
> incorporating a worker/wrapper transformation into the framework (although
> I don't quite understand how it works yet). It doesn't seem to be quite
> ready for prime time (there were apparently some issues with one NoFib
> benchmark), but we might want to keep it in mind.
>
> I think the one-rule concept is great. If that can be made to really work,
> that would be ''ideal''. Unfortunately, the need to wrangle the inliner as
> it currently works turns the one-rule concept into an n*m-rule concept,
> where m is certainly at least 1, but currently 2 (the rewrite-back rule
> clearly seems necessary for now - I don't yet understand things deeply
> enough to know for sure if the rewrite-to rule is strictly necessary in
> all cases). I would speculate that the and/or/any/head/... rules came
> about because someone thought to themselves "There's only one [sic]
> consumer, `build`, so we can skip this difficult and invasive rewrite
> to/from process and just fuse with `build`. That's easy!" Well, they were
> a little wrong, but I'm not sure they were very wrong.
>
> I haven't had a chance to read the thesis yet, but from a purely practical
> perspective, I don't see any difference between `build g` and `augment g
> []`. I don't ''think'' anyone's tossing around partially applied
> `augment`s or anything.
>
> --
> Ticket URL: 
> GHC 
> The Glasgow Haskell Compiler

From howard_b_golden at yahoo.com Fri Aug 22 16:47:26 2014
From: howard_b_golden at yahoo.com (Howard B.
Golden) Date: Fri, 22 Aug 2014 09:47:26 -0700
Subject: Suggestion for GHC System User's Guide documentation change
In-Reply-To: <7647da15cc624842a8888ac2ecce6901@EXMBX31.ad.utwente.nl>
References: <1408656578.37744.YahooMailNeo@web120804.mail.ne1.yahoo.com>,
 <618BE556AADD624C9C918AA5D5911BEF221ED15F@DB3PRD3001MB020.064d.mgd.msft.net>
 <7647da15cc624842a8888ac2ecce6901@EXMBX31.ad.utwente.nl>
Message-ID: <1408726046.22840.YahooMailNeo@web120802.mail.ne1.yahoo.com>

p.k.f.,

I like your less verbose suggestion better than my original. I don't understand your comment about code examples: Are you supporting or opposing the inclusion of the LANGUAGE pragmas in the examples?

Howard

________________________________
From: "p.k.f.holzenspies at utwente.nl" 
To: simonpj at microsoft.com; howard_b_golden at yahoo.com; ghc-devs at haskell.org
Sent: Friday, August 22, 2014 5:38 AM
Subject: RE: Suggestion for GHC System User's Guide documentation change

Marginally less verbose; why not use the language extension *only* in running text? Preferably with a link to the documentation of that language extension. In your example:

| The language extension UnicodeSyntax enables Unicode characters to be
| used to stand for certain ASCII character sequences.

With regards to code examples: Ideally any explicit code example could just be copy-pasted into a .hs-file and loaded into ghci / compiled with ghc without special switches.

Just my two cents ;)
Ph.

From ezyang at mit.edu Fri Aug 22 17:01:58 2014
From: ezyang at mit.edu (Edward Z. Yang)
Date: Fri, 22 Aug 2014 18:01:58 +0100
Subject: Proposal: run GHC API tests on fast
In-Reply-To: <1401411058-sup-8260@sabre>
References: <1401411058-sup-8260@sabre>
Message-ID: <1408726908-sup-9449@sabre>

OK, I've gone ahead and done this.

Edward

Excerpts from Edward Z. Yang's message of 2014-05-30 01:55:35 +0100:
> Currently, most GHC API tests are not run on 'make fast',
> ostensibly because linking against the GHC API can take a while.
> I propose that we change this, and run GHC API tests by default.
> Reasons:
>
> 1. The GHC API is closely tied to a lot of the internal structure of GHC, so
> it's very easy to make a change, track it through the rest of the
> compiler, but forget to update the tests/documentation.
>
> 2. We can boost this into poor man's testable documentation. The idea
> is to duplicate all GHC API examples in the manual in the test suite,
> and have a comment on all of the examples asking the developer to update
> the manual. (Or we could automatically extract the snippets from the
> manual, but that's work, and this I could do in a few minutes.)
>
> 3. I don't think running these tests will add that much extra run
> time to the test suite; certainly, interactively, the time spent linking
> is unnoticeable.
>
> Let's set a one week discussion period for this proposal.
>
> Thanks,
> Edward

From andreas.voellmy at gmail.com Fri Aug 22 21:00:06 2014
From: andreas.voellmy at gmail.com (Andreas Voellmy)
Date: Fri, 22 Aug 2014 16:00:06 -0500
Subject: Windows build fails -- again!
In-Reply-To: <53F73FEB.3010908@mail.ru>
References: <618BE556AADD624C9C918AA5D5911BEF221DC620@DBXPRD3001MB024.064d.mgd.msft.net>
 <618BE556AADD624C9C918AA5D5911BEF221E3E02@DB3PRD3001MB020.064d.mgd.msft.net>
 <618BE556AADD624C9C918AA5D5911BEF221ED7F6@DB3PRD3001MB020.064d.mgd.msft.net>
 <53F73FEB.3010908@mail.ru>
Message-ID: 

I'm just noticing this thread now... sorry about the delay and the problems! I'll look into what happened here.

On Fri, Aug 22, 2014 at 8:04 AM, kyra wrote:
> I've looked into this patch. It looks like this patch was intended to
> touch only the linuxish IO manager, but in fact it touched common (OS
> unrelated) code here and there extensively, and there is almost no chance
> somebody else (not the author) can fix things.
>
> So, almost the only way to unbreak the build is to revert the patch.
> > Regards, > Kyra > > > On 8/22/2014 16:07, Simon Peyton Jones wrote: > >> Friends >> >> My Windows build is still broken, and has been since Andreas's patch >> commit f9f89b7884ccc8ee5047cf4fffdf2b36df6832df >> on Tues 19th. >> >> Please can someone help? I'm begging. >> >> I suppose that if I hear nothing I can simply revert his patch but that >> seems like the Wrong Solution >> >> Thanks >> >> Simon >> >> | -----Original Message----- >> | From: Simon Peyton Jones >> | Sent: 20 August 2014 23:48 >> | To: 'Gabor Greif'; 'ghc-devs at haskell.org'; 'Andreas Voellmy' >> | Subject: RE: Windows build fails -- again! >> | >> | Help! My Windows build is still falling over as below. >> | >> | Andreas, you seem to be the author of the commit that broke this. I'd >> | really appreciate a fix. (From anyone!) >> | >> | thank you >> | >> | Simon >> | >> | | -----Original Message----- >> | | From: Simon Peyton Jones >> | | Sent: 20 August 2014 09:26 >> | | To: Gabor Greif; ghc-devs at haskell.org >> | | Subject: RE: Windows build fails -- again! >> | | >> | | Thanks Gabor. But it makes no difference. Your change is inside an >> | | #ifdef that checks for windows, and your change is in the no-windows >> | | branch only. >> | | >> | | Also there are two IOManager.h file >> | | includes/rts/IOManager.h >> | | rts/win32/IOManager.h >> | | >> | | Should there be? It seems terribly confusing, and I have no idea >> which >> | | will win when it is #included. >> | | >> | | Thanks >> | | >> | | Simon >> | | >> | | | -----Original Message----- >> | | | From: Gabor Greif [mailto:ggreif at gmail.com] >> | | | Sent: 19 August 2014 23:38 >> | | | To: Simon Peyton Jones >> | | | Subject: Re: Windows build fails -- again! 
>> | | | >> | | | Simon, >> | | | >> | | | try this (attached) patch: >> | | | >> | | | $ git am 0001-Make-sure-that-a-prototype-is-included-for- >> | | | setIOMana.patch >> | | | >> | | | Cheers, >> | | | >> | | | Gabor >> | | | >> | | | PS: on MacOS all is good, so I could not test it at all >> | | | >> | | | On 8/20/14, Simon Peyton Jones wrote: >> | | | > Aaargh! My windows build is broken, again. >> | | | > It's very painful that this keeps happening. >> | | | > Can anyone help? >> | | | > Simon >> | | | > >> | | | > "inplace/bin/ghc-stage1.exe" -optc-U__i686 -optc-march=i686 >> | | | > -optc-fno-stack-protector -optc-Werror -optc-Wall -optc-Wall >> | | | > -optc-Wextra -optc-Wstrict-prototypes -optc-Wmissing-prototypes >> | | | > -optc-Wmissing-declarations -optc-Winline -optc-Waggregate-return >> | | | > -optc-Wpointer-arith -optc-Wmissing-noreturn -optc-Wnested-externs >> | | | > -optc-Wredundant-decls -optc-Iincludes -optc-Iincludes/dist >> | | | > -optc-Iincludes/dist-derivedconstants/header >> | | | > -optc-Iincludes/dist-ghcconstants/header -optc-Irts >> | | | > -optc-Irts/dist/build -optc-DCOMPILING_RTS -optc-fno-strict- >> | aliasing >> | | | > -optc-fno-common -optc-O2 -optc-fomit-frame-pointer >> | | | > -optc-DRtsWay=\"rts_v\" -static -H32m -O -Werror -Wall -H64m -O0 >> | | | > -Iincludes -Iincludes/dist -Iincludes/dist- >> derivedconstants/header >> | | | > -Iincludes/dist-ghcconstants/header >> | | | > -Irts -Irts/dist/build -DCOMPILING_RTS -this-package-key rts >> | | | > -dcmm-lint -i -irts -irts/dist/build -irts/dist/build/autogen - >> | | | Irts/dist/build >> | | | > -Irts/dist/build/autogen -O2 -c rts/Task.c -o >> | | | > rts/dist/build/Task.o >> | | | > >> | | | > cc1.exe: warnings being treated as errors >> | | | > >> | | | > >> | | | > >> | | | > rts\Capability.c:1080:6: >> | | | > >> | | | > error: no previous prototype for 'setIOManagerControlFd' >> | | | > >> | | | > rts/ghc.mk:236: recipe for target 'rts/dist/build/Capability.o' >> | 
| | failed >> | | | > >> | | | > make[1]: *** [rts/dist/build/Capability.o] Error 1 >> | | | > >> | | | > make[1]: *** Waiting for unfinished jobs.... >> | | | > >> | | | > Makefile:71: recipe for target 'all' failed >> | | | > >> | | | > make: *** [all] Error 2 >> | | | > >> | | | > HEAD (master)$ >> | | | > >> | | | > >> | | | > >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> >> > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From austin at well-typed.com Fri Aug 22 21:01:37 2014 From: austin at well-typed.com (Austin Seipp) Date: Fri, 22 Aug 2014 16:01:37 -0500 Subject: Windows build fails -- again! In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF221DC620@DBXPRD3001MB024.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF221E3E02@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF221ED7F6@DB3PRD3001MB020.064d.mgd.msft.net> <53F73FEB.3010908@mail.ru> Message-ID: I've reverted it in the mean time (4748f5936fe72d96edfa17b153dbfd84f2c4c053), sorry about that. I've spent some time working on a Windows Phabricator machine, so stay tuned! Andreas, if you need a Windows VM or something temporarily to test, do let me know. On Fri, Aug 22, 2014 at 4:00 PM, Andreas Voellmy wrote: > I'm just noticing this thread now... sorry about the delay and the problems! > I'll look into what happened here. > > > On Fri, Aug 22, 2014 at 8:04 AM, kyra wrote: >> >> I've looked into this patch, it looks like this patch was intended to >> touch only linuxish IO Manager, but in fact it touched common (os unrelated) >> code here and there extensively and there are almost no chances somebody >> else (not the author) can fix the things. 
>> >> So, almost the only way to unbreak the build it to revert the patch. >> >> Regards, >> Kyra >> >> >> On 8/22/2014 16:07, Simon Peyton Jones wrote: >>> >>> Friends >>> >>> My Windows build is still broken, and has been since Andreas's patch >>> commit f9f89b7884ccc8ee5047cf4fffdf2b36df6832df >>> on Tues 19th. >>> >>> Please can someone help? I'm begging. >>> >>> I suppose that if I hear nothing I can simply revert his patch but that >>> seems like the Wrong Solution >>> >>> Thanks >>> >>> Simon >>> >>> | -----Original Message----- >>> | From: Simon Peyton Jones >>> | Sent: 20 August 2014 23:48 >>> | To: 'Gabor Greif'; 'ghc-devs at haskell.org'; 'Andreas Voellmy' >>> | Subject: RE: Windows build fails -- again! >>> | >>> | Help! My Windows build is still falling over as below. >>> | >>> | Andreas, you seem to be the author of the commit that broke this. I'd >>> | really appreciate a fix. (From anyone!) >>> | >>> | thank you >>> | >>> | Simon >>> | >>> | | -----Original Message----- >>> | | From: Simon Peyton Jones >>> | | Sent: 20 August 2014 09:26 >>> | | To: Gabor Greif; ghc-devs at haskell.org >>> | | Subject: RE: Windows build fails -- again! >>> | | >>> | | Thanks Gabor. But it makes no difference. Your change is inside an >>> | | #ifdef that checks for windows, and your change is in the no-windows >>> | | branch only. >>> | | >>> | | Also there are two IOManager.h file >>> | | includes/rts/IOManager.h >>> | | rts/win32/IOManager.h >>> | | >>> | | Should there be? It seems terribly confusing, and I have no idea >>> which >>> | | will win when it is #included. >>> | | >>> | | Thanks >>> | | >>> | | Simon >>> | | >>> | | | -----Original Message----- >>> | | | From: Gabor Greif [mailto:ggreif at gmail.com] >>> | | | Sent: 19 August 2014 23:38 >>> | | | To: Simon Peyton Jones >>> | | | Subject: Re: Windows build fails -- again! 
>>> | | | >>> | | | Simon, >>> | | | >>> | | | try this (attached) patch: >>> | | | >>> | | | $ git am 0001-Make-sure-that-a-prototype-is-included-for- >>> | | | setIOMana.patch >>> | | | >>> | | | Cheers, >>> | | | >>> | | | Gabor >>> | | | >>> | | | PS: on MacOS all is good, so I could not test it at all >>> | | | >>> | | | On 8/20/14, Simon Peyton Jones wrote: >>> | | | > Aaargh! My windows build is broken, again. >>> | | | > It's very painful that this keeps happening. >>> | | | > Can anyone help? >>> | | | > Simon >>> | | | > >>> | | | > "inplace/bin/ghc-stage1.exe" -optc-U__i686 -optc-march=i686 >>> | | | > -optc-fno-stack-protector -optc-Werror -optc-Wall -optc-Wall >>> | | | > -optc-Wextra -optc-Wstrict-prototypes -optc-Wmissing-prototypes >>> | | | > -optc-Wmissing-declarations -optc-Winline -optc-Waggregate-return >>> | | | > -optc-Wpointer-arith -optc-Wmissing-noreturn >>> -optc-Wnested-externs >>> | | | > -optc-Wredundant-decls -optc-Iincludes -optc-Iincludes/dist >>> | | | > -optc-Iincludes/dist-derivedconstants/header >>> | | | > -optc-Iincludes/dist-ghcconstants/header -optc-Irts >>> | | | > -optc-Irts/dist/build -optc-DCOMPILING_RTS -optc-fno-strict- >>> | aliasing >>> | | | > -optc-fno-common -optc-O2 -optc-fomit-frame-pointer >>> | | | > -optc-DRtsWay=\"rts_v\" -static -H32m -O -Werror -Wall -H64m -O0 >>> | | | > -Iincludes -Iincludes/dist >>> -Iincludes/dist-derivedconstants/header >>> | | | > -Iincludes/dist-ghcconstants/header >>> | | | > -Irts -Irts/dist/build -DCOMPILING_RTS -this-package-key rts >>> | | | > -dcmm-lint -i -irts -irts/dist/build -irts/dist/build/autogen - >>> | | | Irts/dist/build >>> | | | > -Irts/dist/build/autogen -O2 -c rts/Task.c -o >>> | | | > rts/dist/build/Task.o >>> | | | > >>> | | | > cc1.exe: warnings being treated as errors >>> | | | > >>> | | | > >>> | | | > >>> | | | > rts\Capability.c:1080:6: >>> | | | > >>> | | | > error: no previous prototype for 'setIOManagerControlFd' >>> | | | > >>> | | | > rts/ghc.mk:236: 
recipe for target 'rts/dist/build/Capability.o' >>> | | | failed >>> | | | > >>> | | | > make[1]: *** [rts/dist/build/Capability.o] Error 1 >>> | | | > >>> | | | > make[1]: *** Waiting for unfinished jobs.... >>> | | | > >>> | | | > Makefile:71: recipe for target 'all' failed >>> | | | > >>> | | | > make: *** [all] Error 2 >>> | | | > >>> | | | > HEAD (master)$ >>> | | | > >>> | | | > >>> | | | > >>> _______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://www.haskell.org/mailman/listinfo/ghc-devs >>> >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From rrnewton at gmail.com Fri Aug 22 21:58:15 2014 From: rrnewton at gmail.com (Ryan Newton) Date: Fri, 22 Aug 2014 17:58:15 -0400 Subject: Random maintainership -- Was: [core libraries] RE: Core libraries bug tracker Message-ID: Dear core library folks & others, > On Tue, Aug 19, 2014 at 10:31 AM, Simon Peyton Jones < simonpj at microsoft.com> wrote: > Some core libraries (e.g. random) have a maintainer that isn?t the committee. Ah, since it came up, maybe this is a good time to discuss that particular maintainership. I'm afraid that since it isn't close to my current work (and I'm pre-tenure!) I haven't been able to really push the random library forward the way it deserves to be pushed these last three years. Shall we move maintainership of it to the core libraries committee? Also/alternatively "Thomas Miedema " has stepped forward as a volunteer for taking over maintainership. 
The library was in limbo in part because it was clear that some API changes needed to be made, but there wasn't a major consensus-building design effort around that topic. One thing that was already agreed upon via the libraries list decision process was to separate out SplittableGen. Duncan Coutts was in favor of this and also (I think) had some other ideas about API changes that should be made.

On the implementation front, my hope was that "tf-random" could replace random as the default/standard library. Koen and Michal support this, but I think they didn't want to become the maintainers themselves yet. (I think that was to maintain some separation, and get buy-in from someone other than them, the implementors, before/during the transition.)

Best,
-Ryan

On Tue, Aug 19, 2014 at 5:55 PM, Simon Peyton Jones wrote:
> > If you don't mind the extra traffic in the ghc trac, I'm open to the plan to work there.
>
> OK great.
>
> Let's agree that:
>
> - The 'owner' of a Core Libraries ticket is the person responsible for progressing it - or 'Core Libraries Committee' as one possibility.
>
> - The 'component' should identify the ticket as belonging to the core libraries committee, not GHC. We have a bunch of components like 'libraries/base', 'libraries/directory', etc, but I'm sure that doesn't cover all the core libraries, and even if it did, it's probably too fine grain. I suggest having just 'Core Libraries'.
>
> Actions:
>
> - Edward: update the Core Libraries home page (where is that?) to point people to the Trac, tell them how to correctly submit a ticket, etc.
>
> - Edward: send email to tell everyone about the new plan.
>
> - Austin: add the same guidance to the GHC bug tracker.
>
> - Austin: add 'core libraries committee' as something that can be an owner.
>
> - Austin: change the 'components' list to replace all the 'libraries/*' stuff with 'Core Libraries'.
>
> Thanks
>
> Simon
>
> From: haskell-core-libraries at googlegroups.com [mailto:haskell-core-libraries at googlegroups.com] On Behalf Of Edward Kmett
> Sent: 19 August 2014 16:23
> To: Simon Peyton Jones
> Cc: core-libraries-committee at haskell.org; ghc-devs at haskell.org
> Subject: Re: [core libraries] RE: Core libraries bug tracker
>
> Hi Simon,
>
> If you don't mind the extra traffic in the ghc trac, I'm open to the plan to work there.
>
> I was talking to Eric Mertens a few days ago about this and he agreed to take lead on getting us set up to actually build tickets for items that go into the libraries@ proposal process, so we have something helping to force us to come to a definitive conclusion rather than letting things trail off.
>
> -Edward
>
> On Tue, Aug 19, 2014 at 10:31 AM, Simon Peyton Jones <simonpj at microsoft.com> wrote:
>
> Edward, and core library colleagues,
>
> Any views on this? It would be good to make progress.
>
> Thanks
>
> Simon
>
> From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Simon Peyton Jones
> Sent: 04 August 2014 16:01
> To: core-libraries-committee at haskell.org
> Cc: ghc-devs at haskell.org
> Subject: Core libraries bug tracker
>
> Edward, and core library colleagues,
>
> This came up in our weekly GHC discussion.
>
> - Does the Core Libraries Committee have a Trac? Surely, surely you should, else you'll lose track of issues.
>
> - Would you like to use GHC's Trac for the purpose? Advantages:
>
> o People often report core library issues on GHC's Trac anyway, so telling them to move it somewhere else just creates busy-work --- and maybe they won't bother, which leaves it in our pile.
>
> o Several of these libraries are closely coupled to GHC, and you might want to milestone some library tickets with an upcoming GHC release.
>
> - If so we'd need a canonical way to identify tickets as CLC issues. Perhaps by making 'core-libraries' the owner? Or perhaps the 'Component' field?
>
> - Some core libraries (e.g. random) have a maintainer that isn't the committee. So that maintainer should be the owner of the ticket. Or the CLC might like a particular member to own a ticket. Either way, that suggests using the 'Component' field to identify CLC tickets.
>
> - Or maybe you want a Trac of your own?
>
> The underlying issue from our end is that we'd like a way to
>
> - filter out tickets that you are dealing with
>
> - and be sure you are dealing with them
>
> - without losing track of milestones - i.e. when building a release we want to be sure that important tickets are indeed fixed before releasing.
>
> Simon
>
> --
> You received this message because you are subscribed to the Google Groups "haskell-core-libraries" group.
> To unsubscribe from this group and stop receiving emails from it, send an email to haskell-core-libraries+unsubscribe at googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>
> --
> You received this message because you are subscribed to the Google Groups "haskell-core-libraries" group.
> To unsubscribe from this group and stop receiving emails from it, send an email to haskell-core-libraries+unsubscribe at googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.
>
> _______________________________________________
> ghc-devs mailing list
> ghc-devs at haskell.org
> http://www.haskell.org/mailman/listinfo/ghc-devs
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ekmett at gmail.com Fri Aug 22 23:25:19 2014
From: ekmett at gmail.com (Edward Kmett)
Date: Fri, 22 Aug 2014 19:25:19 -0400
Subject: Random maintainership -- Was: [core libraries] RE: Core libraries bug tracker
In-Reply-To: 
References: 
Message-ID: 

I'm pretty sure we'd be up for taking ownership of it as it is a rather fundamental piece of infrastructure in the community, and easily falls within our purview.
That said, if you're concerned that you haven't been able to really push the random library forward the way it deserves to be pushed, realize that handing it to the committee is going to trade having you as a passionate but very distracted maintainer for several folks who will mostly act to keep things alive, that aren't likely to go make big sweeping changes to it. -Edward On Fri, Aug 22, 2014 at 5:58 PM, Ryan Newton wrote: > Dear core library folks & others, > > > On Tue, Aug 19, 2014 at 10:31 AM, Simon Peyton Jones < > simonpj at microsoft.com> wrote: > > Some core libraries (e.g. random) have a maintainer that isn?t the > committee. > > Ah, since it came up, maybe this is a good time to discuss that particular > maintainership. I'm afraid that since it isn't close to my current work > (and I'm pre-tenure!) I haven't been able to really push the random library > forward the way it deserves to be pushed these last three years. Shall we > move maintainership of it to the core libraries committee? > > Also/alternatively "Thomas Miedema " has stepped > forward as a volunteer for taking over maintainership. > > The library was in limbo in part because it was clear that some API > changes needed to be made and but there wasn't a major consensus building > design effort around that topic. One thing that was already agreed upon on > via the libraries list decision process was to separate out SplittableGen. > Duncan Coutts was in favor of this and also (I think) had some other ideas > about API changes that should be made. > > On the implementation front, my hope was that "tf-random" could replace > random as the default/standard library. Koen and Michal support this, but I > think they didn't want to become the maintainers themselves yet. (I think > that was to maintain some separation, and get buy-in from someone other > than them, the implementors, before/during the transition). 
> > Best,
> -Ryan
>
> On Tue, Aug 19, 2014 at 5:55 PM, Simon Peyton Jones wrote:
> > [quoted thread snipped]
>
> _______________________________________________
> ghc-devs mailing list
> ghc-devs at haskell.org
> http://www.haskell.org/mailman/listinfo/ghc-devs
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mail at joachim-breitner.de Sat Aug 23 14:44:48 2014
From: mail at joachim-breitner.de (Joachim Breitner)
Date: Sat, 23 Aug 2014 07:44:48 -0700
Subject: Proposal: run GHC API tests on fast
In-Reply-To: <1408726908-sup-9449@sabre>
References: <1401411058-sup-8260@sabre> <1408726908-sup-9449@sabre>
Message-ID: <1408805088.2033.3.camel@joachim-breitner.de>

Hi Edward,

On Friday, 22.08.2014, 18:01 +0100, Edward Z. Yang wrote:
> OK, I've gone ahead and done this.

I'm seeing errors like this on travis, with DEBUG_STAGE2=YES, but with varying test cases from ghc-api:

Compile failed (status 256) errors were:
[1 of 1] Compiling Main ( ghcApi.hs, ghcApi.o )
Linking ghcApi ...
/usr/bin/ld: reopening ghcApi.o: No such file or directory
/usr/bin/ld:ghcApi.o: bfd_stat failed: No such file or directory
/usr/bin/ld: reopening ghcApi.o: No such file or directory
/usr/bin/ld: BFD (GNU Binutils for Ubuntu) 2.22 internal error, aborting at ../../bfd/merge.c line 873 in _bfd_merged_section_offset
/usr/bin/ld: Please report this bug.
collect2: ld returned 1 exit status
*** unexpected failure for ghcApi(normal)

Wrong exit code (expected 0 , actual 2 )
Stdout:

Stderr:
/usr/bin/ld: reopening T6145.o: No such file or directory
/usr/bin/ld:T6145.o: bfd_stat failed: No such file or directory
/usr/bin/ld: reopening T6145.o: No such file or directory
/usr/bin/ld: BFD (GNU Binutils for Ubuntu) 2.22 internal error, aborting at ../../bfd/merge.c line 873 in _bfd_merged_section_offset
/usr/bin/ld: Please report this bug.
collect2: ld returned 1 exit status
make[3]: *** [T6145] Error 1
*** unexpected failure for T6145(normal)

e.g.
https://s3.amazonaws.com/archive.travis-ci.org/jobs/33351170/log.txt
https://s3.amazonaws.com/archive.travis-ci.org/jobs/33356796/log.txt

Any idea?

Greetings,
Joachim

--
Joachim 'nomeata' Breitner
mail at joachim-breitner.de - http://www.joachim-breitner.de/
Jabber: nomeata at joachim-breitner.de - GPG-Key: 0xF0FBF51F
Debian Developer: nomeata at debian.org

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 819 bytes
Desc: This is a digitally signed message part
URL: 

From ezyang at mit.edu Sat Aug 23 14:51:59 2014
From: ezyang at mit.edu (Edward Z. Yang)
Date: Sat, 23 Aug 2014 15:51:59 +0100
Subject: Proposal: run GHC API tests on fast
In-Reply-To: <1408805088.2033.3.camel@joachim-breitner.de>
References: <1401411058-sup-8260@sabre> <1408726908-sup-9449@sabre>
 <1408805088.2033.3.camel@joachim-breitner.de>
Message-ID: <1408805510-sup-7602@sabre>

I'll go ahead and try to reproduce. Sounds like a bug!
Excerpts from Joachim Breitner's message of 2014-08-23 15:44:48 +0100: > Hi Edward, > > Am Freitag, den 22.08.2014, 18:01 +0100 schrieb Edward Z. Yang: > > OK, I've gone ahead and done this. > > I'm seeing errors like this on travis, with DEBUG_STAGE2=YES, but with > varying test cases from ghc-api: > > Compile failed (status 256) errors were: > [1 of 1] Compiling Main ( ghcApi.hs, ghcApi.o ) > Linking ghcApi ... > /usr/bin/ld: reopening ghcApi.o: No such file or directory > > /usr/bin/ld:ghcApi.o: bfd_stat failed: No such file or directory > /usr/bin/ld: reopening ghcApi.o: No such file or directory > > /usr/bin/ld: BFD (GNU Binutils for Ubuntu) 2.22 internal error, aborting at ../../bfd/merge.c line 873 in _bfd_merged_section_offset > > /usr/bin/ld: Please report this bug. > > collect2: ld returned 1 exit status > > *** unexpected failure for ghcApi(normal) > > > > Wrong exit code (expected 0 , actual 2 ) > Stdout: > > Stderr: > /usr/bin/ld: reopening T6145.o: No such file or directory > > /usr/bin/ld:T6145.o: bfd_stat failed: No such file or directory > /usr/bin/ld: reopening T6145.o: No such file or directory > > /usr/bin/ld: BFD (GNU Binutils for Ubuntu) 2.22 internal error, aborting at ../../bfd/merge.c line 873 in _bfd_merged_section_offset > > /usr/bin/ld: Please report this bug. > > collect2: ld returned 1 exit status > make[3]: *** [T6145] Error 1 > > *** unexpected failure for T6145(normal) > > e.g. > https://s3.amazonaws.com/archive.travis-ci.org/jobs/33351170/log.txt > https://s3.amazonaws.com/archive.travis-ci.org/jobs/33356796/log.txt > > Any idea? > > > Greetings, > Joachim > From andreas.voellmy at gmail.com Sat Aug 23 17:20:34 2014 From: andreas.voellmy at gmail.com (Andreas Voellmy) Date: Sat, 23 Aug 2014 13:20:34 -0400 Subject: Windows build fails -- again!
In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF221DC620@DBXPRD3001MB024.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF221E3E02@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF221ED7F6@DB3PRD3001MB020.064d.mgd.msft.net> <53F73FEB.3010908@mail.ru> Message-ID: I think the problem with the commit was that setIOManagerControlFd() was defined for all OS types, whereas the prototype was defined only when mingw32_HOST_OS is not defined. I think the resolution is to only define setIOManagerControlFd when mingw32_HOST_OS is not defined, since if I understand correctly, on Windows we don't use the IO manager in GHC.Event anyway. I created a Phab diff that reverts the revert of the original commit and also fixes it as explained above. I tried to update the previous diff (D129), but I couldn't because it is closed. So I created a new one (D174). Austin: can you help me validate it on Windows? I don't have a Windows machine available. The patch also avoids defining the io_manager_control_wr_fd field in Capability struct when mingw32_HOST_OS is not defined. While we are at it, where should the prototype for setIOManagerControlFd be? It is currently in includes/rts/IOManager.h, whereas the function is defined in rts/Schedule.c. Should the prototype be moved to rts/Schedule.h? Or should the setIOManagerControlFd definition be moved somewhere else instead? Andi On Fri, Aug 22, 2014 at 5:01 PM, Austin Seipp wrote: > I've reverted it in the mean time > (4748f5936fe72d96edfa17b153dbfd84f2c4c053), sorry about that. I've > spent some time working on a Windows Phabricator machine, so stay > tuned! > > Andreas, if you need a Windows VM or something temporarily to test, do > let me know. > > On Fri, Aug 22, 2014 at 4:00 PM, Andreas Voellmy > wrote: > > I'm just noticing this thread now... sorry about the delay and the > problems! > > I'll look into what happened here.
> > > > > > On Fri, Aug 22, 2014 at 8:04 AM, kyra wrote: > >> > >> I've looked into this patch, it looks like this patch was intended to > >> touch only linuxish IO Manager, but in fact it touched common (os > unrelated) > >> code here and there extensively and there are almost no chances somebody > >> else (not the author) can fix the things. > >> > >> So, almost the only way to unbreak the build it to revert the patch. > >> > >> Regards, > >> Kyra > >> > >> > >> On 8/22/2014 16:07, Simon Peyton Jones wrote: > >>> > >>> Friends > >>> > >>> My Windows build is still broken, and has been since Andreas's patch > >>> commit f9f89b7884ccc8ee5047cf4fffdf2b36df6832df > >>> on Tues 19th. > >>> > >>> Please can someone help? I'm begging. > >>> > >>> I suppose that if I hear nothing I can simply revert his patch but that > >>> seems like the Wrong Solution > >>> > >>> Thanks > >>> > >>> Simon > >>> > >>> | -----Original Message----- > >>> | From: Simon Peyton Jones > >>> | Sent: 20 August 2014 23:48 > >>> | To: 'Gabor Greif'; 'ghc-devs at haskell.org'; 'Andreas Voellmy' > >>> | Subject: RE: Windows build fails -- again! > >>> | > >>> | Help! My Windows build is still falling over as below. > >>> | > >>> | Andreas, you seem to be the author of the commit that broke this. > I'd > >>> | really appreciate a fix. (From anyone!) > >>> | > >>> | thank you > >>> | > >>> | Simon > >>> | > >>> | | -----Original Message----- > >>> | | From: Simon Peyton Jones > >>> | | Sent: 20 August 2014 09:26 > >>> | | To: Gabor Greif; ghc-devs at haskell.org > >>> | | Subject: RE: Windows build fails -- again! > >>> | | > >>> | | Thanks Gabor. But it makes no difference. Your change is inside > an > >>> | | #ifdef that checks for windows, and your change is in the > no-windows > >>> | | branch only. > >>> | | > >>> | | Also there are two IOManager.h file > >>> | | includes/rts/IOManager.h > >>> | | rts/win32/IOManager.h > >>> | | > >>> | | Should there be? 
It seems terribly confusing, and I have no idea > >>> which > >>> | | will win when it is #included. > >>> | | > >>> | | Thanks > >>> | | > >>> | | Simon > >>> | | > >>> | | | -----Original Message----- > >>> | | | From: Gabor Greif [mailto:ggreif at gmail.com] > >>> | | | Sent: 19 August 2014 23:38 > >>> | | | To: Simon Peyton Jones > >>> | | | Subject: Re: Windows build fails -- again! > >>> | | | > >>> | | | Simon, > >>> | | | > >>> | | | try this (attached) patch: > >>> | | | > >>> | | | $ git am 0001-Make-sure-that-a-prototype-is-included-for- > >>> | | | setIOMana.patch > >>> | | | > >>> | | | Cheers, > >>> | | | > >>> | | | Gabor > >>> | | | > >>> | | | PS: on MacOS all is good, so I could not test it at all > >>> | | | > >>> | | | On 8/20/14, Simon Peyton Jones wrote: > >>> | | | > Aaargh! My windows build is broken, again. > >>> | | | > It's very painful that this keeps happening. > >>> | | | > Can anyone help? > >>> | | | > Simon > >>> | | | > > >>> | | | > "inplace/bin/ghc-stage1.exe" -optc-U__i686 -optc-march=i686 > >>> | | | > -optc-fno-stack-protector -optc-Werror -optc-Wall -optc-Wall > >>> | | | > -optc-Wextra -optc-Wstrict-prototypes -optc-Wmissing-prototypes > >>> | | | > -optc-Wmissing-declarations -optc-Winline > -optc-Waggregate-return > >>> | | | > -optc-Wpointer-arith -optc-Wmissing-noreturn > >>> -optc-Wnested-externs > >>> | | | > -optc-Wredundant-decls -optc-Iincludes -optc-Iincludes/dist > >>> | | | > -optc-Iincludes/dist-derivedconstants/header > >>> | | | > -optc-Iincludes/dist-ghcconstants/header -optc-Irts > >>> | | | > -optc-Irts/dist/build -optc-DCOMPILING_RTS -optc-fno-strict- > >>> | aliasing > >>> | | | > -optc-fno-common -optc-O2 -optc-fomit-frame-pointer > >>> | | | > -optc-DRtsWay=\"rts_v\" -static -H32m -O -Werror -Wall -H64m > -O0 > >>> | | | > -Iincludes -Iincludes/dist > >>> -Iincludes/dist-derivedconstants/header > >>> | | | > -Iincludes/dist-ghcconstants/header > >>> | | | > -Irts -Irts/dist/build -DCOMPILING_RTS 
-this-package-key rts > >>> | | | > -dcmm-lint -i -irts -irts/dist/build -irts/dist/build/autogen - > >>> | | | Irts/dist/build > >>> | | | > -Irts/dist/build/autogen -O2 -c rts/Task.c -o > >>> | | | > rts/dist/build/Task.o > >>> | | | > > >>> | | | > cc1.exe: warnings being treated as errors > >>> | | | > > >>> | | | > > >>> | | | > > >>> | | | > rts\Capability.c:1080:6: > >>> | | | > > >>> | | | > error: no previous prototype for 'setIOManagerControlFd' > >>> | | | > > >>> | | | > rts/ghc.mk:236: recipe for target > 'rts/dist/build/Capability.o' > >>> | | | failed > >>> | | | > > >>> | | | > make[1]: *** [rts/dist/build/Capability.o] Error 1 > >>> | | | > > >>> | | | > make[1]: *** Waiting for unfinished jobs.... > >>> | | | > > >>> | | | > Makefile:71: recipe for target 'all' failed > >>> | | | > > >>> | | | > make: *** [all] Error 2 > >>> | | | > > >>> | | | > HEAD (master)$ > >>> | | | > > >>> | | | > > >>> | | | > > >>> _______________________________________________ > >>> ghc-devs mailing list > >>> ghc-devs at haskell.org > >>> http://www.haskell.org/mailman/listinfo/ghc-devs > >>> > >> > >> _______________________________________________ > >> ghc-devs mailing list > >> ghc-devs at haskell.org > >> http://www.haskell.org/mailman/listinfo/ghc-devs > > > > > > > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://www.haskell.org/mailman/listinfo/ghc-devs > > > > > > -- > Regards, > > Austin Seipp, Haskell Consultant > Well-Typed LLP, http://www.well-typed.com/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ezyang at mit.edu Sat Aug 23 17:57:44 2014 From: ezyang at mit.edu (Edward Z. 
Yang) Date: Sat, 23 Aug 2014 18:57:44 +0100 Subject: Proposal: run GHC API tests on fast In-Reply-To: <1408805088.2033.3.camel@joachim-breitner.de> References: <1401411058-sup-8260@sabre> <1408726908-sup-9449@sabre> <1408805088.2033.3.camel@joachim-breitner.de> Message-ID: <1408816584-sup-9084@sabre> I couldn't reproduce this error on x86_64 with BuildFlavour = devel2. Is perhaps parallelism involved? Edward Excerpts from Joachim Breitner's message of 2014-08-23 15:44:48 +0100: > Hi Edward, > > Am Freitag, den 22.08.2014, 18:01 +0100 schrieb Edward Z. Yang: > > OK, I've gone ahead and done this. > > I'm seeing errors like this on travis, with DEBUG_STAGE2=YES, but with > varying test cases from ghc-api: > > Compile failed (status 256) errors were: > [1 of 1] Compiling Main ( ghcApi.hs, ghcApi.o ) > Linking ghcApi ... > /usr/bin/ld: reopening ghcApi.o: No such file or directory > > /usr/bin/ld:ghcApi.o: bfd_stat failed: No such file or directory > /usr/bin/ld: reopening ghcApi.o: No such file or directory > > /usr/bin/ld: BFD (GNU Binutils for Ubuntu) 2.22 internal error, aborting at ../../bfd/merge.c line 873 in _bfd_merged_section_offset > > /usr/bin/ld: Please report this bug. > > collect2: ld returned 1 exit status > > *** unexpected failure for ghcApi(normal) > > > > Wrong exit code (expected 0 , actual 2 ) > Stdout: > > Stderr: > /usr/bin/ld: reopening T6145.o: No such file or directory > > /usr/bin/ld:T6145.o: bfd_stat failed: No such file or directory > /usr/bin/ld: reopening T6145.o: No such file or directory > > /usr/bin/ld: BFD (GNU Binutils for Ubuntu) 2.22 internal error, aborting at ../../bfd/merge.c line 873 in _bfd_merged_section_offset > > /usr/bin/ld: Please report this bug. > > collect2: ld returned 1 exit status > make[3]: *** [T6145] Error 1 > > *** unexpected failure for T6145(normal) > > e.g.
> https://s3.amazonaws.com/archive.travis-ci.org/jobs/33351170/log.txt > https://s3.amazonaws.com/archive.travis-ci.org/jobs/33356796/log.txt > > Any idea? > > > Greetings, > Joachim > From mail at joachim-breitner.de Sat Aug 23 17:59:02 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Sat, 23 Aug 2014 10:59:02 -0700 Subject: Proposal: run GHC API tests on fast In-Reply-To: <1408816584-sup-9084@sabre> References: <1401411058-sup-8260@sabre> <1408726908-sup-9449@sabre> <1408805088.2033.3.camel@joachim-breitner.de> <1408816584-sup-9084@sabre> Message-ID: <1408816742.4687.2.camel@joachim-breitner.de> Hi, Am Samstag, den 23.08.2014, 18:57 +0100 schrieb Edward Z. Yang: > I couldn't reproduce this error on x86_64 with BuildFlavour = devel2. > Is perhaps parallelism involved? likely. Did you try validating with "CPUS=2"? (But then, I believe that is actually the default). Greetings, Joachim -- Joachim "nomeata" Breitner mail at joachim-breitner.de • http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de • GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From rwbarton at gmail.com Sat Aug 23 18:07:40 2014 From: rwbarton at gmail.com (Reid Barton) Date: Sat, 23 Aug 2014 14:07:40 -0400 Subject: Proposal: run GHC API tests on fast In-Reply-To: <1408816742.4687.2.camel@joachim-breitner.de> References: <1401411058-sup-8260@sabre> <1408726908-sup-9449@sabre> <1408805088.2033.3.camel@joachim-breitner.de> <1408816584-sup-9084@sabre> <1408816742.4687.2.camel@joachim-breitner.de> Message-ID: I have seen this too just running "make THREADS=8". Looks like it's because the other tests in this directory are cleaning too aggressively. From the Makefile: ...
clean: rm -f *.o *.hi T6145: clean '$(TEST_HC)' $(TEST_HC_OPTS) --make -v0 -package ghc T6145 ./T6145 "`'$(TEST_HC)' $(TEST_HC_OPTS) --print-libdir | tr -d '\r'`" ... so ghcApi.o is getting removed before the final link step, I would guess. Regards, Reid Barton -------------- next part -------------- An HTML attachment was scrubbed... URL: From scpmw at leeds.ac.uk Sat Aug 23 21:19:38 2014 From: scpmw at leeds.ac.uk (Peter Wortmann) Date: Sat, 23 Aug 2014 22:19:38 +0100 Subject: How's the integration of DWARF support coming along? In-Reply-To: <8738cptplz.fsf@gmail.com> References: <53EBA10E.8060909@student.chalmers.se> <464A8583-5A46-4488-B736-E2FDC7752BE3@leeds.ac.uk> <8738cptplz.fsf@gmail.com> Message-ID: <75A3537C-C494-4F98-A5F2-FA49362034A1@leeds.ac.uk> Er, yes. Copied it from the wrong browser window? Greetings, Peter From p.k.f.holzenspies at utwente.nl Mon Aug 25 08:42:15 2014 From: p.k.f.holzenspies at utwente.nl (p.k.f.holzenspies at utwente.nl) Date: Mon, 25 Aug 2014 08:42:15 +0000 Subject: Suggestion for GHC System User's Guide documentation change In-Reply-To: <1408726046.22840.YahooMailNeo@web120802.mail.ne1.yahoo.com> References: <1408656578.37744.YahooMailNeo@web120804.mail.ne1.yahoo.com>, <618BE556AADD624C9C918AA5D5911BEF221ED15F@DB3PRD3001MB020.064d.mgd.msft.net> <7647da15cc624842a8888ac2ecce6901@EXMBX31.ad.utwente.nl>, <1408726046.22840.YahooMailNeo@web120802.mail.ne1.yahoo.com> Message-ID: <3d0934e7c7394ebf8a18a41a9e892106@EXMBX31.ad.utwente.nl> Dear Howard, Yes, emphatically so! Any examples should be copy-paste-runnable if reasonably possible without any further switches, so that means the pragmas *should* be included! Regards, Philip ________________________________________ From: Howard B. Golden Sent: 22 August 2014 18:47 To: Holzenspies, P.K.F. (EWI); simonpj at microsoft.com; ghc-devs at haskell.org Subject: Re: Suggestion for GHC System User's Guide documentation change p.k.f., I like your less verbose suggestion better than my original. 
I don't understand your comment about code examples: Are you supporting or opposing the inclusion of the LANGUAGE pragmas in the examples? Howard ________________________________ From: "p.k.f.holzenspies at utwente.nl" To: simonpj at microsoft.com; howard_b_golden at yahoo.com; ghc-devs at haskell.org Sent: Friday, August 22, 2014 5:38 AM Subject: RE: Suggestion for GHC System User's Guide documentation change Marginally less verbose; why not use the language extension *only* in running text? Preferably with a link to the documentation of that language extension. In your example: | The language extension UnicodeSyntax enables Unicode characters to be | used to stand for certain ASCII character sequences. With regards to code examples: Ideally any explicit code example could just be copy-pasted into a .hs-file and loaded into ghci / compiled with ghc without special switches. Just my two cents ;) Ph. From karel.gardas at centrum.cz Mon Aug 25 08:45:02 2014 From: karel.gardas at centrum.cz (Karel Gardas) Date: Mon, 25 Aug 2014 10:45:02 +0200 Subject: windows-x86-head (Windows/x86 HEAD (Gabor Pali)), build 2, Success In-Reply-To: <53f8279a.c157b40a.7401.6b9e@mx.google.com> References: <53f8279a.c157b40a.7401.6b9e@mx.google.com> Message-ID: <53FAF78E.8050406@centrum.cz> Gabor, thanks a lot for your fantastic job on getting the windows builder running. It's great to have that in the pool and not need to speculate whether a change breaks the windows build sometimes in the future when someone attempts to build it. Now it's a one-night turnaround and this is great. Thanks a lot!
Karel On 08/23/14 07:33 AM, Builder wrote: > windows-x86-head (Windows/x86 HEAD (Gabor Pali)), build 2 > > Build succeeded > Details: http://haskell.inf.elte.hu/builders/windows-x86-head/2.html > > git clone | Success > create mk/build.mk | Success > get subrepos | Success > repo versions | Success > touching clean-check files | Success > setting version date | Success > booting | Success > configuring | Success > creating check-remove-before | Success > compiling | Success > creating check-remove-after | Success > compiling testremove | Success > simulating clean | Success > checking clean | Success > making bindist | Success > making srcdist | Success > uploading bindist | Success > uploading srcdist | Success > uploading windows extra src tarball | Success > uploading tarball source | Success > testing bindist | Success > testing | Success > testsuite summary | Success > > Build succeeded > Details: http://haskell.inf.elte.hu/builders/windows-x86-head/2.html > > > > > _______________________________________________ > ghc-builds mailing list > ghc-builds at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-builds From pali.gabor at gmail.com Mon Aug 25 09:19:15 2014 From: pali.gabor at gmail.com (=?UTF-8?B?UMOhbGkgR8OhYm9yIErDoW5vcw==?=) Date: Mon, 25 Aug 2014 11:19:15 +0200 Subject: windows-x86-head (Windows/x86 HEAD (Gabor Pali)), build 2, Success In-Reply-To: <53FAF78E.8050406@centrum.cz> References: <53f8279a.c157b40a.7401.6b9e@mx.google.com> <53FAF78E.8050406@centrum.cz> Message-ID: 2014-08-25 10:45 GMT+02:00 Karel Gardas : > thanks a lot for your fantastic job on getting windows builder running. You are most welcome. > It's great to have that in the pool and not need to speculate if the change > breaks windows build or not sometimes in the future when someone attempts > to build that. Now it's one night turn over and this is great. Though, I think it is still a bit bumpy. 
I will have to fix the build environment to avoid the build failing in odd ways, such as today's problem [1]. By digging into the config.log mentioned in the log, it appears to be some unrelated permission issue. (That I am hoping to fix locally.) On that note, I had a problem with the lndir utility in both MinGW [2]. For some reason, lndir does not like when the path (of the directory hierarchy to be mirrored, its first parameter) starts with "C:/". Instead, it prefers the traditional UNIX-ish pathname. That is, omitting calling ghc-pwd (and replacing for pwd) for setting the TOP make(1) variable in mk/config.mk would make it work fine, I guess. For now, I wrapped lndir to normalize the pathname it gets, but this is a band-aid solution only. I am pondering if anybody else has faced this problem before. All you have to do is to invoke the "sdist" target after the build is done. Otherwise it will not cause any other difficulties. [1] http://haskell.inf.elte.hu/builders/windows-x86-head/4/10.html [2] http://haskell.inf.elte.hu/builders/windows-x86_64-head/2/16.html From alan.zimm at gmail.com Mon Aug 25 12:21:50 2014 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Mon, 25 Aug 2014 14:21:50 +0200 Subject: Moving Haddock *development* out of GHC tree In-Reply-To: <87ha1ca1nt.fsf@gmail.com> References: <53E45F2D.9000806@fuuzetsu.co.uk> <53EBE224.1060103@fuuzetsu.co.uk> <618BE556AADD624C9C918AA5D5911BEF221AE385@DB3PRD3001MB020.064d.mgd.msft.net> <53EF71E7.5090804@fuuzetsu.co.uk> <87ha1ca1nt.fsf@gmail.com> Message-ID: What happens in the case of a change to the dev branch of ghc that requires a patch to haddock as well, how does that patch get added to phabricator, or is there a separate process? 
A case in point is https://phabricator.haskell.org/D157 with matching change at https://github.com/alanz/haddock/tree/wip/landmine-param-family Regards Alan On Sat, Aug 16, 2014 at 5:34 PM, Herbert Valerio Riedel wrote: > On 2014-08-16 at 16:59:51 +0200, Mateusz Kowalczyk wrote: > > [...] > > > Herbert kindly updated the sync-all script that > > defaults to the new branch so I think we're covered. > > Minor correction: I did not touch the sync-all script at all. I merely > declared a default branch in the .gitmodules file: > > > http://git.haskell.org/ghc.git/commitdiff/03a8003e5d3aec97b3a14b2d3c774aad43e0456e > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From austin at well-typed.com Mon Aug 25 13:44:47 2014 From: austin at well-typed.com (Austin Seipp) Date: Mon, 25 Aug 2014 08:44:47 -0500 Subject: Status updates Message-ID: Hello *, Here are some notes from what's happened this week: - I've rejiggered some of the wiki pages a bit more, including updating the BugTracker page[1], Phabricator/Harbormaster docs[2], -- The bugtracker page mostly saw some minor updates in relation to the updates from *last* week; notably the new 'upstream' status. -- I split the Phabricator page up a bit to be easier to read, so Harbormaster is its own page now (but see more below). - I spent a day or two working on a Windows buildbot for Phabricator. Good news: I think I have a reliable set of steps to get a windows SSH instance running. However, I have not yet gotten it to build GHC through msys2, so it's not working yet. - I spent time on Friday getting a fix for #7602 ready for real this time. I'll put a review on Phabricator soon (my other machine doesn't have `arc` credentials.) Hopefully OS X users can soon rejoice with a solid performance boost. 
- I fiddled with Applicative-Monad a tiny bit, but haven't made much progress still. Like last time, it would be really amazing if anyone would like to help me out and try the patch yourself (feel free to email/IRC, or see last week's email for more). Other things: - Thomas Miedema helped out Herbert by writing a Trac anti-spam plugin for us - Thank you so much Thomas!! Hopefully the spam will go away soon. I am not yet sure if Thomas's new plugin is installed yet - Herbert? Upcoming: - #7602 will go up for review. - I will land D165 today probably since it's ready and other refactorings can come later. D166 (faster copies) is not yet ready. - I haven't done this yet, but I'm going to try to turn on `--slow` ./validate mode in the next day or two for Phabricator. At first, I'm only going to configure this for *commits*, and perhaps patches will follow once we have more build machines. That means Phabricator is going to start annoying you with consistent failures (I think `--slow` has a few right now), but putting on pressure is the best way to fix it, I think. I'll send another email about this shortly. - I will probably rejigger the Phabricator page again to be smaller. I've had some complaints it's getting a bit large (due to the images, mostly), so I'll probably move the hierarchy around a bit. - Sit down and do some thorough code review. We have about 3 major features sitting on Phabricator at the moment which are going to need extensive review before landing. I expect this will take a while. See: -- D169, source code notes: https://phabricator.haskell.org/D169 -- D168: Partial type sigs: https://phabricator.haskell.org/D168 -- D119: StaticValues extension: https://phabricator.haskell.org/D119 Please please please - feel free to review these patches! Even if they are not your area of expertise, doing so will A) help you learn more hopefully! and B) you can surely help still (pointing out typos, needed docs, lint violations, suggestions) etc.
That would be really useful to help increase the 'shared ownership' we all have, I think. [1] https://ghc.haskell.org/trac/ghc/wiki/WorkingConventions/BugTracker [2] https://ghc.haskell.org/trac/ghc/wiki/Phabricator/Harbormaster -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From ker0sin at ya.ru Mon Aug 25 14:35:11 2014 From: ker0sin at ya.ru (Alexander Pakhomov) Date: Mon, 25 Aug 2014 18:35:11 +0400 Subject: nofib external dependencies Message-ID: <1633081408977311@web2g.yandex.ru> Hi all! I've noticed nofib depends on external packages, such as vector. So when vector is updated we get a benchmark result change even without compiler changes. I guess we need to use a fixed version. And when some package we have to pick up has major updates, we should rename the benchmark, because it's not the same benchmark any more. From igloo at earth.li Mon Aug 25 15:27:56 2014 From: igloo at earth.li (Ian Lynagh) Date: Mon, 25 Aug 2014 16:27:56 +0100 Subject: Wired-in data-constructors with UNPACKed fields In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF221C5A6A@DBXPRD3001MB024.064d.mgd.msft.net> References: <87a973z27g.fsf@gnu.org> <618BE556AADD624C9C918AA5D5911BEF221BCDCC@DB3PRD3001MB020.064d.mgd.msft.net> <87y4um1b9w.fsf@gmail.com> <618BE556AADD624C9C918AA5D5911BEF221C5A6A@DBXPRD3001MB024.064d.mgd.msft.net> Message-ID: <20140825152755.GA22852@matrix.chaos.earth.li> On Mon, Aug 18, 2014 at 10:01:17PM +0000, Simon Peyton-Jones wrote: > > My recommendation would be to try (3) first. Ian Lynagh (cc'd) may be able to comment about why the inconsistency above arose in the first place, and why we can't simply fix it. I don't know of any reason we can't. I think we didn't before because we didn't need to change S#, and didn't realise that there would be any benefit to doing so.
Thanks Ian From mail at joachim-breitner.de Mon Aug 25 16:42:40 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Mon, 25 Aug 2014 09:42:40 -0700 Subject: nofib external dependencies In-Reply-To: <1633081408977311@web2g.yandex.ru> References: <1633081408977311@web2g.yandex.ru> Message-ID: <1408984960.2025.7.camel@joachim-breitner.de> Hi, Am Montag, den 25.08.2014, 18:35 +0400 schrieb Alexander Pakhomov: > I've noticed nofib depends on external packages, such as vector. > So when vector is updated we have a benchmark result change even without compiler changes. > I guess we need to use fixed version. When some package has major updates we do have to pick > we should rename benchmark because it's not the same benchmark. well, the version of vector we use is fixed via git submodules. So in a sense, GHC contains vector, and if someone bumps the vector version via a git commit, this commit has an effect on our benchmarks – not much difference to other commits changing GHC code. Also note that nofib depends on base and the Prelude. Again, not much difference. Greetings, Joachim -- Joachim "nomeata" Breitner mail at joachim-breitner.de • http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de • GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From fuuzetsu at fuuzetsu.co.uk Tue Aug 26 09:23:22 2014 From: fuuzetsu at fuuzetsu.co.uk (Mateusz Kowalczyk) Date: Tue, 26 Aug 2014 10:23:22 +0100 Subject: Moving Haddock *development* out of GHC tree In-Reply-To: References: <53E45F2D.9000806@fuuzetsu.co.uk> <53EBE224.1060103@fuuzetsu.co.uk> <618BE556AADD624C9C918AA5D5911BEF221AE385@DB3PRD3001MB020.064d.mgd.msft.net> <53EF71E7.5090804@fuuzetsu.co.uk> <87ha1ca1nt.fsf@gmail.com> Message-ID: <53FC520A.1070100@fuuzetsu.co.uk> On 08/25/2014 01:21 PM, Alan & Kim Zimmerman wrote: > What happens in the case of a change to the dev branch of ghc that requires > a patch to haddock as well, how does that patch get added to phabricator, > or is there a separate process? > > A case in point is https://phabricator.haskell.org/D157 with matching > change at https://github.com/alanz/haddock/tree/wip/landmine-param-family > > Regards > Alan > You need to push the patch against the Haddock ghc-head branch and update the submodule reference to point at your patch. I don't think that you need to do anything special for Phabricator unless it does some weird checking out instead of using whatever references GHC points to. -- Mateusz K. From alan.zimm at gmail.com Tue Aug 26 14:20:03 2014 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Tue, 26 Aug 2014 16:20:03 +0200 Subject: Moving Haddock *development* out of GHC tree In-Reply-To: <53FC520A.1070100@fuuzetsu.co.uk> References: <53E45F2D.9000806@fuuzetsu.co.uk> <53EBE224.1060103@fuuzetsu.co.uk> <618BE556AADD624C9C918AA5D5911BEF221AE385@DB3PRD3001MB020.064d.mgd.msft.net> <53EF71E7.5090804@fuuzetsu.co.uk> <87ha1ca1nt.fsf@gmail.com> <53FC520A.1070100@fuuzetsu.co.uk> Message-ID: Ok thanks. I am travelling at the moment, will try this in a few days. 
Alan On 26 Aug 2014 11:23 AM, "Mateusz Kowalczyk" wrote: > On 08/25/2014 01:21 PM, Alan & Kim Zimmerman wrote: > > What happens in the case of a change to the dev branch of ghc that > requires > > a patch to haddock as well, how does that patch get added to phabricator, > > or is there a separate process? > > > > A case in point is https://phabricator.haskell.org/D157 with matching > > change at > https://github.com/alanz/haddock/tree/wip/landmine-param-family > > > > Regards > > Alan > > > > You need to push the patch against the Haddock ghc-head branch and > update the submodule reference to point at your patch. I don't think > that you need to do anything special for Phabricator unless it does some > weird checking out instead of using whatever references GHC points to. > > > -- > Mateusz K. > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.feuer at gmail.com Wed Aug 27 03:49:50 2014 From: david.feuer at gmail.com (David Feuer) Date: Tue, 26 Aug 2014 23:49:50 -0400 Subject: Why isn't ($) inlining when I want? Message-ID: tl;dr I added a simplifier run with inlining enabled between specialization and floating out. It seems incapable of inlining saturated applications of ($), and I can't figure out why. These are inlined later, when phase 2 runs. Am I running the simplifier wrong or something? I'm working on this simple little fusion pipeline:

{-# INLINE takeWhile #-}
takeWhile p xs = build builder
  where
    builder c n = foldr go n xs
      where
        go x r = if p x then x `c` r else n

foo c n x = foldr c n . takeWhile (/= (1::Int)) $ [-9..10]
Part of that problem (but not all of it) is that the simplifier doesn't run to apply rules between specialization and full laziness, so there's no opportunity for the specialization of enumFromTo to Int to trigger the rewrite to a build form and fusion with foldr before full laziness tears things apart. The other problem is that inlining doesn't happen at all before full laziness, so things defined using foldr and/or build aren't actually exposed as such until afterwards. Therefore I decided to try adding a simplifier run with inlining between specialization and floating out: runWhen do_specialise CoreDoSpecialising, runWhen full_laziness $ CoreDoSimplify max_iter (base_mode { sm_phase = InitialPhase , sm_names = ["PostGentle"] , sm_rules = rules_on , sm_inline = True , sm_case_case = False }), runWhen full_laziness $ CoreDoFloatOutwards FloatOutSwitches { floatOutLambdas = Just 0, floatOutConstants = True, floatOutPartialApplications = False }, The weird thing is that for some reason this doesn't inline ($), even though it appears to be saturated. Using the modified thing with (my version of) unfoldr: foo c n x = (foldr c n . takeWhile (/= (1::Int))) $ unfoldr (potato 10) (-9) potato :: Int -> Int -> Maybe (Int, Int) potato n m | m <= n = Just (m, m) | otherwise = Nothing I get this out of the specializer: foo foo = \ @ t_a1Za @ c_a1Zb c_a1HT n_a1HU _ -> $ (. 
(foldr c_a1HT n_a1HU) (takeWhile (let { ds_s21z ds_s21z = I# 1 } in \ ds_d1Zw -> neInt ds_d1Zw ds_s21z))) (let { n_s21x n_s21x = I# 10 } in unfoldr (\ m_a1U7 -> case leInt m_a1U7 n_s21x of _ { False -> Nothing; True -> Just (m_a1U7, m_a1U7) }) ($fNumInt_$cnegate (I# 9))) and then I get this out of my extra simplifier run: foo foo = \ @ t_a1Za @ c_a1Zb c_a1HT n_a1HU _ -> $ (\ x_a20f -> foldr (\ x_a1HR r_a1HS -> case case x_a1HR of _ { I# x_a20R -> tagToEnum# (case x_a20R of _ { __DEFAULT -> 1; 1 -> 0 }) } of _ { False -> n_a1HU; True -> c_a1HT x_a1HR r_a1HS }) n_a1HU x_a20f) (let { b'_a1ZS b'_a1ZS = $fNumInt_$cnegate (I# 9) } in $ (build) (\ @ b1_a1ZU c_a1ZV n_a1ZW -> letrec { go_a1ZX go_a1ZX = \ b2_a1ZY -> case case case b2_a1ZY of _ { I# x_a218 -> tagToEnum# (<=# x_a218 10) } of _ { False -> Nothing; True -> Just (b2_a1ZY, b2_a1ZY) } of _ { Nothing -> n_a1ZW; Just ds_a203 -> case ds_a203 of _ { (a1_a207, new_b_a208) -> c_a1ZV a1_a207 (go_a1ZX new_b_a208) } }; } in go_a1ZX b'_a1ZS)) That is, neither the $ in the code nor the $ that was inserted when inlining unfoldr got inlined themselves, even though both appear to be saturated. As a result, foldr/build doesn't fire, and full laziness tears things apart. Later on, in simplifier phase 2, $ gets inlined. What's preventing this from happening in the PostGentle phase I added? David Feuer From simonpj at microsoft.com Wed Aug 27 08:03:07 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 27 Aug 2014 08:03:07 +0000 Subject: Why isn't ($) inlining when I want? In-Reply-To: References: Message-ID: <618BE556AADD624C9C918AA5D5911BEF221F29B7@DB3PRD3001MB020.064d.mgd.msft.net> It's hard to tell since you are using a modified compiler. Try running with -ddump-occur-anal -dverbose-core2core -ddump-inlinings. That will show you every inlining, whether failed or successful. You can see the attempt to inline ($) and there is some info with the output that may help to explain why it did or did not work. 
Try that

Simon

| -----Original Message-----
| From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of David
| Feuer
| Sent: 27 August 2014 04:50
| To: ghc-devs; Carter Schonwald
| Subject: Why isn't ($) inlining when I want?
|
| tl;dr I added a simplifier run with inlining enabled between
| specialization and floating out. It seems incapable of inlining
| saturated applications of ($), and I can't figure out why. These are
| inlined later, when phase 2 runs. Am I running the simplifier wrong or
| something?
|
| David Feuer

From david.feuer at gmail.com  Wed Aug 27 16:21:34 2014
From: david.feuer at gmail.com (David Feuer)
Date: Wed, 27 Aug 2014 12:21:34 -0400
Subject: Why isn't ($) inlining when I want?
In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF221F29B7@DB3PRD3001MB020.064d.mgd.msft.net>
References: <618BE556AADD624C9C918AA5D5911BEF221F29B7@DB3PRD3001MB020.064d.mgd.msft.net>
Message-ID: 

I just ran that (results attached), and as far as I can tell, it
doesn't even *consider* inlining ($) until phase 2.

On Wed, Aug 27, 2014 at 4:03 AM, Simon Peyton Jones wrote:
> It's hard to tell since you are using a modified compiler. Try running
> with -ddump-occur-anal -dverbose-core2core -ddump-inlinings. That will
> show you every inlining, whether failed or successful. You can see the
> attempt to inline ($) and there is some info with the output that may
> help to explain why it did or did not work.
>
> Try that
>
> Simon
>
> | -----Original Message-----
> | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of David
> | Feuer
> | Sent: 27 August 2014 04:50
> | To: ghc-devs; Carter Schonwald
> | Subject: Why isn't ($) inlining when I want?
> |
> | tl;dr I added a simplifier run with inlining enabled between
> | specialization and floating out. It seems incapable of inlining
> | saturated applications of ($), and I can't figure out why.
> | These are inlined later, when phase 2 runs. Am I running the
> | simplifier wrong or something?
> | > | David Feuer > | _______________________________________________ > | ghc-devs mailing list > | ghc-devs at haskell.org > | http://www.haskell.org/mailman/listinfo/ghc-devs -------------- next part -------------- Glasgow Haskell Compiler, Version 7.9.20140818, stage 2 booted by GHC version 7.8.3 Using binary package database: /home/dfeuer/src/ghc/inplace/lib/package.conf.d/package.cache wired-in package ghc-prim mapped to ghc-prim-0.3.1.0-inplace wired-in package integer-gmp mapped to integer-gmp-0.5.1.0-inplace wired-in package base mapped to base-4.7.1.0-inplace wired-in package rts mapped to builtin_rts wired-in package template-haskell mapped to template-haskell-2.10.0.0-inplace wired-in package ghc mapped to ghc-7.9.20140818-inplace wired-in package dph-seq not found. wired-in package dph-par not found. wired-in package ghc-prim mapped to ghc-prim-0.3.1.0-inplace wired-in package integer-gmp mapped to integer-gmp-0.5.1.0-inplace wired-in package base mapped to base-4.7.1.0-inplace wired-in package rts mapped to builtin_rts wired-in package template-haskell mapped to template-haskell-2.10.0.0-inplace wired-in package ghc mapped to ghc-7.9.20140818-inplace wired-in package dph-seq not found. wired-in package dph-par not found. *** Chasing dependencies: Chasing modules from: *testTakeWhile.hs Stable obj: [] Stable BCO: [] Ready for upsweep [NONREC ModSummary { ms_hs_date = 2014-08-27 03:44:20.75863229 UTC ms_mod = main at main:Foo, ms_textual_imps = [import Prelude hiding ( takeWhile ), import Data.List ( unfoldr ), import GHC.Exts] ms_srcimps = [] }] *** Deleting temp files: compile: input file testTakeWhile.hs Created temporary directory: /tmp/ghc27658_0 *** Checking old interface for Foo: [1 of 1] Compiling Foo ( testTakeWhile.hs, testTakeWhile.o ) *** Parser: *** Renamer/typechecker: *** Desugar: ==================== Occurrence analysis ==================== Foo.takeWhile [InlPrag=INLINE (sat-args=2), Occ=OnceL!] :: forall a_a1Vj. 
(a_a1Vj -> GHC.Types.Bool) -> [a_a1Vj] -> [a_a1Vj] [LclId, Str=DmdType, Unf=Unf{Src=InlineStable, TopLvl=True, Arity=2, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=ALWAYS_IF(unsat_ok=False,boring_ok=False) Tmpl= \ (@ a_a1Vl) (p_a1HL [Occ=OnceL!] :: a_a1Vl -> GHC.Types.Bool) (xs_a1HM [Occ=Once] :: [a_a1Vl]) -> GHC.Base.build @ a_a1Vl (\ (@ b_a1Vd) -> (\ (@ b_a1V3) (c_a1HO [Occ=OnceL!, OS=OneShot] :: a_a1Vl -> b_a1V3 -> b_a1V3) (n_a1HP [OS=OneShot] :: b_a1V3) -> GHC.Base.foldr @ a_a1Vl @ b_a1V3 (\ (x_a1HR :: a_a1Vl) (r_a1HS [Occ=Once, OS=OneShot] :: b_a1V3) -> case p_a1HL x_a1HR of _ [Occ=Dead] { GHC.Types.False -> n_a1HP; GHC.Types.True -> c_a1HO x_a1HR r_a1HS }) n_a1HP xs_a1HM) @ b_a1Vd)}] Foo.takeWhile = \ (@ a_a1Vl) (eta_B2 [Occ=Once] :: a_a1Vl -> GHC.Types.Bool) (eta_B1 [Occ=Once] :: [a_a1Vl]) -> (\ (@ a_a1Vj) -> let { cobox_a1Vf [Occ=OnceL!] :: a_a1Vj ~ a_a1Vj [LclId, Str=DmdType] cobox_a1Vf = GHC.Types.Eq# @ * @ a_a1Vj @ a_a1Vj @~ _N } in let { takeWhile_a1UB [Occ=Once] :: (a_a1Vj -> GHC.Types.Bool) -> [a_a1Vj] -> [a_a1Vj] [LclId, Str=DmdType] takeWhile_a1UB = \ (p_a1HL [Occ=OnceL!] :: a_a1Vj -> GHC.Types.Bool) (xs_a1HM [Occ=OnceL] :: [a_a1Vj]) -> let { builder_a1HN [Occ=Once] :: forall b_a1V1. (a_a1Vj -> b_a1V1 -> b_a1V1) -> b_a1V1 -> b_a1V1 [LclId, Str=DmdType] builder_a1HN = \ (@ b_a1V3) -> (\ (@ b_a1V1) -> let { builder_a1UG [Occ=Once] :: (a_a1Vj -> b_a1V1 -> b_a1V1) -> b_a1V1 -> b_a1V1 [LclId, Str=DmdType] builder_a1UG = \ (c_a1HO [Occ=OnceL!] 
:: a_a1Vj -> b_a1V1 -> b_a1V1) (n_a1HP :: b_a1V1) -> let { go_a1HQ [Occ=Once] :: a_a1Vj -> b_a1V1 -> b_a1V1 [LclId, Str=DmdType] go_a1HQ = let { go_a1UL [Occ=Once] :: a_a1Vj -> b_a1V1 -> b_a1V1 [LclId, Str=DmdType] go_a1UL = \ (x_a1HR :: a_a1Vj) (r_a1HS [Occ=Once] :: b_a1V1) -> case p_a1HL x_a1HR of _ [Occ=Dead] { GHC.Types.False -> n_a1HP; GHC.Types.True -> c_a1HO x_a1HR r_a1HS } } in go_a1UL } in GHC.Base.foldr @ a_a1Vj @ b_a1V1 go_a1HQ n_a1HP xs_a1HM } in builder_a1UG) @ b_a1V3 } in GHC.Base.build @ a_a1Vj (\ (@ b_a1Vd) -> case cobox_a1Vf of _ [Occ=Dead] { GHC.Types.Eq# _ [Occ=Dead] -> builder_a1HN @ b_a1Vd }) } in takeWhile_a1UB) @ a_a1Vl eta_B2 eta_B1 $dOrd_a1Uw [Occ=OnceL] :: GHC.Classes.Ord GHC.Types.Int [LclId, Str=DmdType] $dOrd_a1Uw = GHC.Classes.$fOrdInt Foo.potato [Occ=OnceL!] :: GHC.Types.Int -> GHC.Types.Int -> Data.Maybe.Maybe (GHC.Types.Int, GHC.Types.Int) [LclId, Str=DmdType] Foo.potato = let { potato_a1Ui [Occ=Once] :: GHC.Types.Int -> GHC.Types.Int -> Data.Maybe.Maybe (GHC.Types.Int, GHC.Types.Int) [LclId, Str=DmdType] potato_a1Ui = \ (n_a1U6 [Occ=Once] :: GHC.Types.Int) (m_a1U7 :: GHC.Types.Int) -> let { fail_d1Zr [Occ=Once!] :: GHC.Prim.Void# -> Data.Maybe.Maybe (GHC.Types.Int, GHC.Types.Int) [LclId, Str=DmdType] fail_d1Zr = \ _ [Occ=Dead, OS=OneShot] -> Data.Maybe.Nothing @ (GHC.Types.Int, GHC.Types.Int) } in case GHC.Classes.<= @ GHC.Types.Int $dOrd_a1Uw m_a1U7 n_a1U6 of _ [Occ=Dead] { GHC.Types.False -> fail_d1Zr GHC.Prim.void#; GHC.Types.True -> Data.Maybe.Just @ (GHC.Types.Int, GHC.Types.Int) (m_a1U7, m_a1U7) } } in potato_a1Ui Foo.foo :: forall t_a1Z7 c_a1Z8. 
(GHC.Types.Int -> c_a1Z8 -> c_a1Z8) -> c_a1Z8 -> t_a1Z7 -> c_a1Z8 [LclIdX, Str=DmdType] Foo.foo = \ (@ t_a1Za) (@ c_a1Zb) -> (\ (@ t_a1Z7) (@ c_a1Z8) -> let { $dNum_a1X1 [Occ=OnceL] :: GHC.Num.Num GHC.Types.Int [LclId, Str=DmdType] $dNum_a1X1 = GHC.Num.$fNumInt } in let { $dEq_a1VO [Occ=OnceL] :: GHC.Classes.Eq GHC.Types.Int [LclId, Str=DmdType] $dEq_a1VO = GHC.Classes.$fEqInt } in let { foo_a1Vr [Occ=Once] :: (GHC.Types.Int -> c_a1Z8 -> c_a1Z8) -> c_a1Z8 -> t_a1Z7 -> c_a1Z8 [LclId, Str=DmdType] foo_a1Vr = \ (c_a1HT [Occ=Once] :: GHC.Types.Int -> c_a1Z8 -> c_a1Z8) (n_a1HU [Occ=Once] :: c_a1Z8) _ [Occ=Dead] -> GHC.Base.$ @ [GHC.Types.Int] @ c_a1Z8 (GHC.Base.. @ [GHC.Types.Int] @ c_a1Z8 @ [GHC.Types.Int] (GHC.Base.foldr @ GHC.Types.Int @ c_a1Z8 c_a1HT n_a1HU) (Foo.takeWhile @ GHC.Types.Int (let { ds_d1Zx [Occ=OnceL] :: GHC.Types.Int [LclId, Str=DmdType] ds_d1Zx = GHC.Types.I# 1 } in \ (ds_d1Zw [Occ=Once] :: GHC.Types.Int) -> GHC.Classes./= @ GHC.Types.Int $dEq_a1VO ds_d1Zw ds_d1Zx))) (Data.List.unfoldr @ GHC.Types.Int @ GHC.Types.Int (Foo.potato (GHC.Types.I# 10)) (GHC.Num.negate @ GHC.Types.Int $dNum_a1X1 (GHC.Types.I# 9))) } in foo_a1Vr) @ t_a1Za @ c_a1Zb ==================== Desugar (after optimization) ==================== Result size of Desugar (after optimization) = {terms: 72, types: 80, coercions: 0} Foo.takeWhile [InlPrag=INLINE (sat-args=2), Occ=OnceL!] :: forall a_a1Vj. (a_a1Vj -> GHC.Types.Bool) -> [a_a1Vj] -> [a_a1Vj] [LclId, Str=DmdType, Unf=Unf{Src=InlineStable, TopLvl=True, Arity=2, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=ALWAYS_IF(unsat_ok=False,boring_ok=False) Tmpl= \ (@ a_a1Vl) (p_a1HL [Occ=OnceL!] 
:: a_a1Vl -> GHC.Types.Bool) (xs_a1HM [Occ=Once] :: [a_a1Vl]) -> GHC.Base.build @ a_a1Vl (\ (@ b_a1Vd) -> (\ (@ b_a1V3) (c_a1HO [Occ=OnceL!, OS=OneShot] :: a_a1Vl -> b_a1V3 -> b_a1V3) (n_a1HP [OS=OneShot] :: b_a1V3) -> GHC.Base.foldr @ a_a1Vl @ b_a1V3 (\ (x_a1HR :: a_a1Vl) (r_a1HS [Occ=Once, OS=OneShot] :: b_a1V3) -> case p_a1HL x_a1HR of _ [Occ=Dead] { GHC.Types.False -> n_a1HP; GHC.Types.True -> c_a1HO x_a1HR r_a1HS }) n_a1HP xs_a1HM) @ b_a1Vd)}] Foo.takeWhile = \ (@ a_a1Vl) (eta_B2 :: a_a1Vl -> GHC.Types.Bool) (eta_B1 :: [a_a1Vl]) -> (\ (p_a1HL :: a_a1Vl -> GHC.Types.Bool) (xs_a1HM :: [a_a1Vl]) -> GHC.Base.build @ a_a1Vl (\ (@ b_a1Vd) -> (\ (@ b_a1V3) (c_a1HO :: a_a1Vl -> b_a1V3 -> b_a1V3) (n_a1HP :: b_a1V3) -> GHC.Base.foldr @ a_a1Vl @ b_a1V3 (\ (x_a1HR :: a_a1Vl) (r_a1HS :: b_a1V3) -> case p_a1HL x_a1HR of _ [Occ=Dead] { GHC.Types.False -> n_a1HP; GHC.Types.True -> c_a1HO x_a1HR r_a1HS }) n_a1HP xs_a1HM) @ b_a1Vd)) eta_B2 eta_B1 Foo.potato :: GHC.Types.Int -> GHC.Types.Int -> Data.Maybe.Maybe (GHC.Types.Int, GHC.Types.Int) [LclId, Str=DmdType] Foo.potato = \ (n_a1U6 :: GHC.Types.Int) (m_a1U7 :: GHC.Types.Int) -> case GHC.Classes.<= @ GHC.Types.Int GHC.Classes.$fOrdInt m_a1U7 n_a1U6 of _ [Occ=Dead] { GHC.Types.False -> (\ _ [Occ=Dead, OS=OneShot] -> Data.Maybe.Nothing @ (GHC.Types.Int, GHC.Types.Int)) GHC.Prim.void#; GHC.Types.True -> Data.Maybe.Just @ (GHC.Types.Int, GHC.Types.Int) (m_a1U7, m_a1U7) } Foo.foo :: forall t_a1Z7 c_a1Z8. (GHC.Types.Int -> c_a1Z8 -> c_a1Z8) -> c_a1Z8 -> t_a1Z7 -> c_a1Z8 [LclIdX, Str=DmdType] Foo.foo = \ (@ t_a1Za) (@ c_a1Zb) (c_a1HT :: GHC.Types.Int -> c_a1Zb -> c_a1Zb) (n_a1HU :: c_a1Zb) _ [Occ=Dead] -> GHC.Base.$ @ [GHC.Types.Int] @ c_a1Zb (GHC.Base.. 
@ [GHC.Types.Int] @ c_a1Zb @ [GHC.Types.Int] (GHC.Base.foldr @ GHC.Types.Int @ c_a1Zb c_a1HT n_a1HU) (Foo.takeWhile @ GHC.Types.Int (let { ds_d1Zx :: GHC.Types.Int [LclId, Str=DmdType] ds_d1Zx = GHC.Types.I# 1 } in \ (ds_d1Zw :: GHC.Types.Int) -> GHC.Classes./= @ GHC.Types.Int GHC.Classes.$fEqInt ds_d1Zw ds_d1Zx))) (Data.List.unfoldr @ GHC.Types.Int @ GHC.Types.Int (Foo.potato (GHC.Types.I# 10)) (GHC.Num.negate @ GHC.Types.Int GHC.Num.$fNumInt (GHC.Types.I# 9))) *** Simplifier: ==================== Occurrence analysis ==================== Foo.takeWhile [InlPrag=INLINE (sat-args=2), Occ=OnceL!] :: forall a_a1Vj. (a_a1Vj -> GHC.Types.Bool) -> [a_a1Vj] -> [a_a1Vj] [LclId, Str=DmdType, Unf=Unf{Src=InlineStable, TopLvl=True, Arity=2, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=ALWAYS_IF(unsat_ok=False,boring_ok=False) Tmpl= \ (@ a_a1Vl) (p_a1HL [Occ=OnceL!] :: a_a1Vl -> GHC.Types.Bool) (xs_a1HM [Occ=Once] :: [a_a1Vl]) -> GHC.Base.build @ a_a1Vl (\ (@ b_a1Vd) -> (\ (@ b_a1V3) (c_a1HO [Occ=OnceL!, OS=OneShot] :: a_a1Vl -> b_a1V3 -> b_a1V3) (n_a1HP [OS=OneShot] :: b_a1V3) -> GHC.Base.foldr @ a_a1Vl @ b_a1V3 (\ (x_a1HR :: a_a1Vl) (r_a1HS [Occ=Once, OS=OneShot] :: b_a1V3) -> case p_a1HL x_a1HR of _ [Occ=Dead] { GHC.Types.False -> n_a1HP; GHC.Types.True -> c_a1HO x_a1HR r_a1HS }) n_a1HP xs_a1HM) @ b_a1Vd)}] Foo.takeWhile = \ (@ a_a1Vl) (eta_B2 [Occ=Once] :: a_a1Vl -> GHC.Types.Bool) (eta_B1 [Occ=Once] :: [a_a1Vl]) -> (\ (p_a1HL [Occ=OnceL!, OS=OneShot] :: a_a1Vl -> GHC.Types.Bool) (xs_a1HM [Occ=Once, OS=OneShot] :: [a_a1Vl]) -> GHC.Base.build @ a_a1Vl (\ (@ b_a1Vd) -> (\ (@ b_a1V3) (c_a1HO [Occ=OnceL!, OS=OneShot] :: a_a1Vl -> b_a1V3 -> b_a1V3) (n_a1HP [OS=OneShot] :: b_a1V3) -> GHC.Base.foldr @ a_a1Vl @ b_a1V3 (\ (x_a1HR :: a_a1Vl) (r_a1HS [Occ=Once, OS=OneShot] :: b_a1V3) -> case p_a1HL x_a1HR of _ [Occ=Dead] { GHC.Types.False -> n_a1HP; GHC.Types.True -> c_a1HO x_a1HR r_a1HS }) n_a1HP xs_a1HM) @ b_a1Vd)) eta_B2 eta_B1 Foo.potato [Occ=OnceL!] 
:: GHC.Types.Int -> GHC.Types.Int -> Data.Maybe.Maybe (GHC.Types.Int, GHC.Types.Int) [LclId, Str=DmdType] Foo.potato = \ (n_a1U6 [Occ=Once] :: GHC.Types.Int) (m_a1U7 :: GHC.Types.Int) -> case GHC.Classes.<= @ GHC.Types.Int GHC.Classes.$fOrdInt m_a1U7 n_a1U6 of _ [Occ=Dead] { GHC.Types.False -> (\ _ [Occ=Dead, OS=OneShot] -> Data.Maybe.Nothing @ (GHC.Types.Int, GHC.Types.Int)) GHC.Prim.void#; GHC.Types.True -> Data.Maybe.Just @ (GHC.Types.Int, GHC.Types.Int) (m_a1U7, m_a1U7) } Foo.foo :: forall t_a1Z7 c_a1Z8. (GHC.Types.Int -> c_a1Z8 -> c_a1Z8) -> c_a1Z8 -> t_a1Z7 -> c_a1Z8 [LclIdX, Str=DmdType] Foo.foo = \ (@ t_a1Za) (@ c_a1Zb) (c_a1HT [Occ=Once] :: GHC.Types.Int -> c_a1Zb -> c_a1Zb) (n_a1HU [Occ=Once] :: c_a1Zb) _ [Occ=Dead] -> GHC.Base.$ @ [GHC.Types.Int] @ c_a1Zb (GHC.Base.. @ [GHC.Types.Int] @ c_a1Zb @ [GHC.Types.Int] (GHC.Base.foldr @ GHC.Types.Int @ c_a1Zb c_a1HT n_a1HU) (Foo.takeWhile @ GHC.Types.Int (let { ds_d1Zx [Occ=OnceL] :: GHC.Types.Int [LclId, Str=DmdType] ds_d1Zx = GHC.Types.I# 1 } in \ (ds_d1Zw [Occ=Once] :: GHC.Types.Int) -> GHC.Classes./= @ GHC.Types.Int GHC.Classes.$fEqInt ds_d1Zw ds_d1Zx))) (Data.List.unfoldr @ GHC.Types.Int @ GHC.Types.Int (Foo.potato (GHC.Types.I# 10)) (GHC.Num.negate @ GHC.Types.Int GHC.Num.$fNumInt (GHC.Types.I# 9))) SimplBind takeWhile Inactive unfolding: build Inactive unfolding: foldr Inactive unfolding: build Inactive unfolding: foldr SimplBind potato SimplBind foo Inactive unfolding: $ Inactive unfolding: . 
Inactive unfolding: foldr Inactive unfolding: takeWhile Inactive unfolding: ds_d1Zx Rule fired Rule: Class op /= Before: GHC.Classes./= (TYPE GHC.Types.Int) GHC.Classes.$fEqInt ds_d1Zw ds_d1Zx After: GHC.Classes.neInt Cont: Stop[BoringCtxt] GHC.Types.Bool Inactive unfolding: neInt Inactive unfolding: unfoldr Inactive unfolding: n Rule fired Rule: Class op <= Before: GHC.Classes.<= (TYPE GHC.Types.Int) GHC.Classes.$fOrdInt m_a1U7 n_a1U6 After: GHC.Classes.leInt Cont: Select nodup wild_00 [] [(GHC.Types.False, [], (\ _ [Occ=Dead, OS=OneShot] -> Data.Maybe.Nothing @ (GHC.Types.Int, GHC.Types.Int)) GHC.Prim.void#), (GHC.Types.True, [], Data.Maybe.Just @ (GHC.Types.Int, GHC.Types.Int) (m_a1U7, m_a1U7))] Stop[BoringCtxt] Data.Maybe.Maybe (GHC.Types.Int, GHC.Types.Int) Inactive unfolding: leInt Rule fired Rule: Class op negate Before: GHC.Num.negate (TYPE GHC.Types.Int) GHC.Num.$fNumInt (GHC.Types.I# 9) After: GHC.Num.$fNumInt_$cnegate Cont: Stop[BoringCtxt] GHC.Types.Int Inactive unfolding: $fNumInt_$cnegate Result size of Simplifier iteration=1 = {terms: 60, types: 64, coercions: 0} ==================== Occurrence analysis ==================== Foo.takeWhile [InlPrag=INLINE (sat-args=2), Occ=OnceL!] :: forall a_a1Vj. (a_a1Vj -> GHC.Types.Bool) -> [a_a1Vj] -> [a_a1Vj] [LclId, Arity=2, Str=DmdType, Unf=Unf{Src=InlineStable, TopLvl=True, Arity=2, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=ALWAYS_IF(unsat_ok=False,boring_ok=False) Tmpl= \ (@ a_a1Vl) (p_a1HL [Occ=OnceL!] :: a_a1Vl -> GHC.Types.Bool) (xs_a1HM [Occ=Once] :: [a_a1Vl]) -> GHC.Base.build @ a_a1Vl (\ (@ b_a1Vd) (c_a1HO [Occ=OnceL!, OS=OneShot] :: a_a1Vl -> b_a1Vd -> b_a1Vd) (n_a1HP [OS=OneShot] :: b_a1Vd) -> GHC.Base.foldr @ a_a1Vl @ b_a1Vd (\ (x_a1HR :: a_a1Vl) (r_a1HS [Occ=Once, OS=OneShot] :: b_a1Vd) -> case p_a1HL x_a1HR of _ [Occ=Dead] { GHC.Types.False -> n_a1HP; GHC.Types.True -> c_a1HO x_a1HR r_a1HS }) n_a1HP xs_a1HM)}] Foo.takeWhile = \ (@ a_a1Vl) (eta_B2 [Occ=OnceL!] 
:: a_a1Vl -> GHC.Types.Bool) (eta_B1 [Occ=Once] :: [a_a1Vl]) -> GHC.Base.build @ a_a1Vl (\ (@ b_a1Vd) (c_a1HO [Occ=OnceL!, OS=OneShot] :: a_a1Vl -> b_a1Vd -> b_a1Vd) (n_a1HP [OS=OneShot] :: b_a1Vd) -> GHC.Base.foldr @ a_a1Vl @ b_a1Vd (\ (x_a1HR :: a_a1Vl) (r_a1HS [Occ=Once, OS=OneShot] :: b_a1Vd) -> case eta_B2 x_a1HR of _ [Occ=Dead] { GHC.Types.False -> n_a1HP; GHC.Types.True -> c_a1HO x_a1HR r_a1HS }) n_a1HP eta_B1) Foo.foo :: forall t_a1Z7 c_a1Z8. (GHC.Types.Int -> c_a1Z8 -> c_a1Z8) -> c_a1Z8 -> t_a1Z7 -> c_a1Z8 [LclIdX, Arity=3, Str=DmdType, Unf=Unf{Src=, TopLvl=True, Arity=3, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [0 0 0] 330 0}] Foo.foo = \ (@ t_a1Za) (@ c_a1Zb) (c_a1HT [Occ=Once] :: GHC.Types.Int -> c_a1Zb -> c_a1Zb) (n_a1HU [Occ=Once] :: c_a1Zb) _ [Occ=Dead] -> GHC.Base.$ @ [GHC.Types.Int] @ c_a1Zb (GHC.Base.. @ [GHC.Types.Int] @ c_a1Zb @ [GHC.Types.Int] (GHC.Base.foldr @ GHC.Types.Int @ c_a1Zb c_a1HT n_a1HU) (Foo.takeWhile @ GHC.Types.Int (let { ds_d1Zx [Occ=OnceL] :: GHC.Types.Int [LclId, Str=DmdType, Unf=Unf{Src=, TopLvl=False, Arity=0, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [] 10 20}] ds_d1Zx = GHC.Types.I# 1 } in \ (ds_d1Zw [Occ=Once] :: GHC.Types.Int) -> GHC.Classes.neInt ds_d1Zw ds_d1Zx))) (let { n_a1U6 [Occ=OnceL] :: GHC.Types.Int [LclId, Str=DmdType, Unf=Unf{Src=, TopLvl=False, Arity=0, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [] 10 20}] n_a1U6 = GHC.Types.I# 10 } in Data.List.unfoldr @ GHC.Types.Int @ GHC.Types.Int (\ (m_a1U7 :: GHC.Types.Int) -> case GHC.Classes.leInt m_a1U7 n_a1U6 of _ [Occ=Dead] { GHC.Types.False -> Data.Maybe.Nothing @ (GHC.Types.Int, GHC.Types.Int); GHC.Types.True -> Data.Maybe.Just @ (GHC.Types.Int, GHC.Types.Int) (m_a1U7, m_a1U7) }) (GHC.Num.$fNumInt_$cnegate (GHC.Types.I# 9))) SimplBind takeWhile Inactive unfolding: build Inactive unfolding: foldr Inactive unfolding: build Inactive unfolding: foldr SimplBind foo 
Inactive unfolding: $ Inactive unfolding: . Inactive unfolding: foldr Inactive unfolding: takeWhile Inactive unfolding: neInt Inactive unfolding: ds_d1Zx Inactive unfolding: unfoldr Inactive unfolding: leInt Inactive unfolding: n Inactive unfolding: $fNumInt_$cnegate ==================== Simplifier ==================== Max iterations = 4 SimplMode {Phase = InitialPhase [Gentle], no inline, rules, eta-expand, no case-of-case} Result size of Simplifier = {terms: 60, types: 64, coercions: 0} Foo.takeWhile [InlPrag=INLINE (sat-args=2)] :: forall a_a1Vj. (a_a1Vj -> GHC.Types.Bool) -> [a_a1Vj] -> [a_a1Vj] [LclId, Arity=2, Str=DmdType, Unf=Unf{Src=InlineStable, TopLvl=True, Arity=2, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=ALWAYS_IF(unsat_ok=False,boring_ok=False) Tmpl= \ (@ a_a1Vl) (p_a1HL [Occ=OnceL!] :: a_a1Vl -> GHC.Types.Bool) (xs_a1HM [Occ=Once] :: [a_a1Vl]) -> GHC.Base.build @ a_a1Vl (\ (@ b_a1Vd) (c_a1HO [Occ=OnceL!, OS=OneShot] :: a_a1Vl -> b_a1Vd -> b_a1Vd) (n_a1HP [OS=OneShot] :: b_a1Vd) -> GHC.Base.foldr @ a_a1Vl @ b_a1Vd (\ (x_a1HR :: a_a1Vl) (r_a1HS [Occ=Once, OS=OneShot] :: b_a1Vd) -> case p_a1HL x_a1HR of _ [Occ=Dead] { GHC.Types.False -> n_a1HP; GHC.Types.True -> c_a1HO x_a1HR r_a1HS }) n_a1HP xs_a1HM)}] Foo.takeWhile = \ (@ a_a1Vl) (eta_B2 :: a_a1Vl -> GHC.Types.Bool) (eta_B1 :: [a_a1Vl]) -> GHC.Base.build @ a_a1Vl (\ (@ b_a1Vd) (c_a1HO [OS=OneShot] :: a_a1Vl -> b_a1Vd -> b_a1Vd) (n_a1HP [OS=OneShot] :: b_a1Vd) -> GHC.Base.foldr @ a_a1Vl @ b_a1Vd (\ (x_a1HR :: a_a1Vl) (r_a1HS [OS=OneShot] :: b_a1Vd) -> case eta_B2 x_a1HR of _ [Occ=Dead] { GHC.Types.False -> n_a1HP; GHC.Types.True -> c_a1HO x_a1HR r_a1HS }) n_a1HP eta_B1) Foo.foo :: forall t_a1Z7 c_a1Z8. 
(GHC.Types.Int -> c_a1Z8 -> c_a1Z8) -> c_a1Z8 -> t_a1Z7 -> c_a1Z8 [LclIdX, Arity=3, Str=DmdType, Unf=Unf{Src=, TopLvl=True, Arity=3, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [0 0 0] 330 0}] Foo.foo = \ (@ t_a1Za) (@ c_a1Zb) (c_a1HT :: GHC.Types.Int -> c_a1Zb -> c_a1Zb) (n_a1HU :: c_a1Zb) _ [Occ=Dead] -> GHC.Base.$ @ [GHC.Types.Int] @ c_a1Zb (GHC.Base.. @ [GHC.Types.Int] @ c_a1Zb @ [GHC.Types.Int] (GHC.Base.foldr @ GHC.Types.Int @ c_a1Zb c_a1HT n_a1HU) (Foo.takeWhile @ GHC.Types.Int (let { ds_d1Zx :: GHC.Types.Int [LclId, Str=DmdType, Unf=Unf{Src=, TopLvl=False, Arity=0, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [] 10 20}] ds_d1Zx = GHC.Types.I# 1 } in \ (ds_d1Zw :: GHC.Types.Int) -> GHC.Classes.neInt ds_d1Zw ds_d1Zx))) (let { n_a1U6 :: GHC.Types.Int [LclId, Str=DmdType, Unf=Unf{Src=, TopLvl=False, Arity=0, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [] 10 20}] n_a1U6 = GHC.Types.I# 10 } in Data.List.unfoldr @ GHC.Types.Int @ GHC.Types.Int (\ (m_a1U7 :: GHC.Types.Int) -> case GHC.Classes.leInt m_a1U7 n_a1U6 of _ [Occ=Dead] { GHC.Types.False -> Data.Maybe.Nothing @ (GHC.Types.Int, GHC.Types.Int); GHC.Types.True -> Data.Maybe.Just @ (GHC.Types.Int, GHC.Types.Int) (m_a1U7, m_a1U7) }) (GHC.Num.$fNumInt_$cnegate (GHC.Types.I# 9))) *** Specialise: ==================== Specialise ==================== Result size of Specialise = {terms: 60, types: 64, coercions: 0} Foo.takeWhile [InlPrag=INLINE (sat-args=2)] :: forall a_a1Vj. (a_a1Vj -> GHC.Types.Bool) -> [a_a1Vj] -> [a_a1Vj] [LclId, Arity=2, Str=DmdType, Unf=Unf{Src=InlineStable, TopLvl=True, Arity=2, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=ALWAYS_IF(unsat_ok=False,boring_ok=False) Tmpl= \ (@ a_a1Vl) (p_a1HL [Occ=OnceL!] 
:: a_a1Vl -> GHC.Types.Bool) (xs_a1HM [Occ=Once] :: [a_a1Vl]) -> GHC.Base.build @ a_a1Vl (\ (@ b_a1Vd) (c_a1HO [Occ=OnceL!, OS=OneShot] :: a_a1Vl -> b_a1Vd -> b_a1Vd) (n_a1HP [OS=OneShot] :: b_a1Vd) -> GHC.Base.foldr @ a_a1Vl @ b_a1Vd (\ (x_a1HR :: a_a1Vl) (r_a1HS [Occ=Once, OS=OneShot] :: b_a1Vd) -> case p_a1HL x_a1HR of _ [Occ=Dead] { GHC.Types.False -> n_a1HP; GHC.Types.True -> c_a1HO x_a1HR r_a1HS }) n_a1HP xs_a1HM)}] Foo.takeWhile = \ (@ a_a1Vl) (eta_B2 :: a_a1Vl -> GHC.Types.Bool) (eta_B1 :: [a_a1Vl]) -> GHC.Base.build @ a_a1Vl (\ (@ b_a1Vd) (c_a1HO [OS=OneShot] :: a_a1Vl -> b_a1Vd -> b_a1Vd) (n_a1HP [OS=OneShot] :: b_a1Vd) -> GHC.Base.foldr @ a_a1Vl @ b_a1Vd (\ (x_a1HR :: a_a1Vl) (r_a1HS [OS=OneShot] :: b_a1Vd) -> case eta_B2 x_a1HR of _ [Occ=Dead] { GHC.Types.False -> n_a1HP; GHC.Types.True -> c_a1HO x_a1HR r_a1HS }) n_a1HP eta_B1) Foo.foo :: forall t_a1Z7 c_a1Z8. (GHC.Types.Int -> c_a1Z8 -> c_a1Z8) -> c_a1Z8 -> t_a1Z7 -> c_a1Z8 [LclIdX, Arity=3, Str=DmdType, Unf=Unf{Src=, TopLvl=True, Arity=3, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [0 0 0] 330 0}] Foo.foo = \ (@ t_a1Za) (@ c_a1Zb) (c_a1HT :: GHC.Types.Int -> c_a1Zb -> c_a1Zb) (n_a1HU :: c_a1Zb) _ [Occ=Dead] -> GHC.Base.$ @ [GHC.Types.Int] @ c_a1Zb (GHC.Base.. 
@ [GHC.Types.Int] @ c_a1Zb @ [GHC.Types.Int] (GHC.Base.foldr @ GHC.Types.Int @ c_a1Zb c_a1HT n_a1HU) (Foo.takeWhile @ GHC.Types.Int (let { ds_s21z :: GHC.Types.Int [LclId, Str=DmdType] ds_s21z = GHC.Types.I# 1 } in \ (ds_d1Zw :: GHC.Types.Int) -> GHC.Classes.neInt ds_d1Zw ds_s21z))) (let { n_s21x :: GHC.Types.Int [LclId, Str=DmdType] n_s21x = GHC.Types.I# 10 } in Data.List.unfoldr @ GHC.Types.Int @ GHC.Types.Int (\ (m_a1U7 :: GHC.Types.Int) -> case GHC.Classes.leInt m_a1U7 n_s21x of _ [Occ=Dead] { GHC.Types.False -> Data.Maybe.Nothing @ (GHC.Types.Int, GHC.Types.Int); GHC.Types.True -> Data.Maybe.Just @ (GHC.Types.Int, GHC.Types.Int) (m_a1U7, m_a1U7) }) (GHC.Num.$fNumInt_$cnegate (GHC.Types.I# 9))) *** Simplifier: ==================== Occurrence analysis ==================== Foo.takeWhile [InlPrag=INLINE (sat-args=2), Occ=OnceL!] :: forall a_a1Vj. (a_a1Vj -> GHC.Types.Bool) -> [a_a1Vj] -> [a_a1Vj] [LclId, Arity=2, Str=DmdType, Unf=Unf{Src=InlineStable, TopLvl=True, Arity=2, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=ALWAYS_IF(unsat_ok=False,boring_ok=False) Tmpl= \ (@ a_a1Vl) (p_a1HL [Occ=OnceL!] :: a_a1Vl -> GHC.Types.Bool) (xs_a1HM [Occ=Once] :: [a_a1Vl]) -> GHC.Base.build @ a_a1Vl (\ (@ b_a1Vd) (c_a1HO [Occ=OnceL!, OS=OneShot] :: a_a1Vl -> b_a1Vd -> b_a1Vd) (n_a1HP [OS=OneShot] :: b_a1Vd) -> GHC.Base.foldr @ a_a1Vl @ b_a1Vd (\ (x_a1HR :: a_a1Vl) (r_a1HS [Occ=Once, OS=OneShot] :: b_a1Vd) -> case p_a1HL x_a1HR of _ [Occ=Dead] { GHC.Types.False -> n_a1HP; GHC.Types.True -> c_a1HO x_a1HR r_a1HS }) n_a1HP xs_a1HM)}] Foo.takeWhile = \ (@ a_a1Vl) (eta_B2 [Occ=OnceL!] 
:: a_a1Vl -> GHC.Types.Bool) (eta_B1 [Occ=Once] :: [a_a1Vl]) -> GHC.Base.build @ a_a1Vl (\ (@ b_a1Vd) (c_a1HO [Occ=OnceL!, OS=OneShot] :: a_a1Vl -> b_a1Vd -> b_a1Vd) (n_a1HP [OS=OneShot] :: b_a1Vd) -> GHC.Base.foldr @ a_a1Vl @ b_a1Vd (\ (x_a1HR :: a_a1Vl) (r_a1HS [Occ=Once, OS=OneShot] :: b_a1Vd) -> case eta_B2 x_a1HR of _ [Occ=Dead] { GHC.Types.False -> n_a1HP; GHC.Types.True -> c_a1HO x_a1HR r_a1HS }) n_a1HP eta_B1) Foo.foo :: forall t_a1Z7 c_a1Z8. (GHC.Types.Int -> c_a1Z8 -> c_a1Z8) -> c_a1Z8 -> t_a1Z7 -> c_a1Z8 [LclIdX, Arity=3, Str=DmdType, Unf=Unf{Src=, TopLvl=True, Arity=3, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [0 0 0] 330 0}] Foo.foo = \ (@ t_a1Za) (@ c_a1Zb) (c_a1HT [Occ=Once] :: GHC.Types.Int -> c_a1Zb -> c_a1Zb) (n_a1HU [Occ=Once] :: c_a1Zb) _ [Occ=Dead] -> GHC.Base.$ @ [GHC.Types.Int] @ c_a1Zb (GHC.Base.. @ [GHC.Types.Int] @ c_a1Zb @ [GHC.Types.Int] (GHC.Base.foldr @ GHC.Types.Int @ c_a1Zb c_a1HT n_a1HU) (Foo.takeWhile @ GHC.Types.Int (let { ds_s21z [Occ=OnceL] :: GHC.Types.Int [LclId, Str=DmdType] ds_s21z = GHC.Types.I# 1 } in \ (ds_d1Zw [Occ=Once] :: GHC.Types.Int) -> GHC.Classes.neInt ds_d1Zw ds_s21z))) (let { n_s21x [Occ=OnceL] :: GHC.Types.Int [LclId, Str=DmdType] n_s21x = GHC.Types.I# 10 } in Data.List.unfoldr @ GHC.Types.Int @ GHC.Types.Int (\ (m_a1U7 :: GHC.Types.Int) -> case GHC.Classes.leInt m_a1U7 n_s21x of _ [Occ=Dead] { GHC.Types.False -> Data.Maybe.Nothing @ (GHC.Types.Int, GHC.Types.Int); GHC.Types.True -> Data.Maybe.Just @ (GHC.Types.Int, GHC.Types.Int) (m_a1U7, m_a1U7) }) (GHC.Num.$fNumInt_$cnegate (GHC.Types.I# 9))) SimplBind takeWhile Inactive unfolding: build Inactive unfolding: foldr Inactive unfolding: build Inactive unfolding: foldr SimplBind foo Inactive unfolding: $ Considering inlining: GHC.Base.. 
arg infos [ValueArg, ValueArg] uf arity 2 interesting continuation BoringCtxt some_benefit True is exp: True is work-free: True guidance ALWAYS_IF(unsat_ok=False,boring_ok=False) ANSWER = YES Inlining done: GHC.Base.. Inlined fn: \ (@ b) (@ c) (@ a) (f [Occ=Once!] :: b -> c) (g [Occ=Once!] :: a -> b) (x [Occ=Once] :: a) -> f (g x) Cont: ApplyTo nodup (TYPE [GHC.Types.Int]) ApplyTo nodup (TYPE c) ApplyTo nodup (TYPE [GHC.Types.Int]) ApplyTo nodup (GHC.Base.foldr @ GHC.Types.Int @ c c n) ApplyTo nodup (Foo.takeWhile @ GHC.Types.Int (let { ds_s21z [Occ=OnceL] :: GHC.Types.Int [LclId, Str=DmdType] ds_s21z = GHC.Types.I# 1 } in \ (ds_d1Zw [Occ=Once] :: GHC.Types.Int) -> GHC.Classes.neInt ds_d1Zw ds_s21z)) Stop[BoringCtxt] [GHC.Types.Int] -> c Inactive unfolding: foldr Considering inlining: Foo.takeWhile arg infos [ValueArg] uf arity 2 interesting continuation RhsCtxt some_benefit True is exp: True is work-free: True guidance ALWAYS_IF(unsat_ok=False,boring_ok=False) ANSWER = NO Considering inlining: GHC.Classes.neInt arg infos [TrivArg, ValueArg] uf arity 2 interesting continuation BoringCtxt some_benefit True is exp: True is work-free: True guidance ALWAYS_IF(unsat_ok=False,boring_ok=False) ANSWER = YES Inlining done: GHC.Classes.neInt Inlined fn: \ (ds [Occ=Once!] :: GHC.Types.Int) (ds1 [Occ=Once!] 
:: GHC.Types.Int) -> case ds of _ [Occ=Dead] { GHC.Types.I# x [Occ=Once] -> case ds1 of _ [Occ=Dead] { GHC.Types.I# y [Occ=Once] -> GHC.Prim.tagToEnum# @ GHC.Types.Bool (GHC.Prim./=# x y) } } Cont: ApplyTo nodup ds_d1Zw ApplyTo nodup ds_s21z Stop[BoringCtxt] GHC.Types.Bool Inactive unfolding: ds_s21z Rule fired Rule: /=# Before: GHC.Prim./=# x_a20R 1 After: case x_a20R of wild_00 { __DEFAULT -> 1; 1 -> 0 } Cont: StrictArg GHC.Prim.tagToEnum# Stop[BoringCtxt] GHC.Types.Bool Inactive unfolding: foldr Considering inlining: Foo.takeWhile arg infos [ValueArg, TrivArg] uf arity 2 interesting continuation RuleArgCtxt some_benefit True is exp: True is work-free: True guidance ALWAYS_IF(unsat_ok=False,boring_ok=False) ANSWER = YES Inlining done: Foo.takeWhile Inlined fn: \ (@ a) (p [Occ=OnceL!] :: a -> GHC.Types.Bool) (xs [Occ=Once] :: [a]) -> GHC.Base.build @ a (\ (@ b) (c [Occ=OnceL!, OS=OneShot] :: a -> b -> b) (n [OS=OneShot] :: b) -> GHC.Base.foldr @ a @ b (\ (x :: a) (r [Occ=Once, OS=OneShot] :: b) -> case p x of _ [Occ=Dead] { GHC.Types.False -> n; GHC.Types.True -> c x r }) n xs) Cont: ApplyTo nodup (TYPE GHC.Types.Int) ApplyTo nodup a_s21G ApplyTo nodup x StrictArg GHC.Base.foldr Stop[BoringCtxt] c Inactive unfolding: a_s21G Inactive unfolding: build Inactive unfolding: foldr Inactive unfolding: a_s21G Rule fired Rule: fold/build Before: GHC.Base.foldr (TYPE GHC.Types.Int) (TYPE c_a1Zb) c_a1HT n_a1HU (GHC.Base.build @ GHC.Types.Int (\ (@ b_a1Vd) (c_a1HO [OS=OneShot] :: GHC.Types.Int -> b_a1Vd -> b_a1Vd) (n_a1HP [OS=OneShot] :: b_a1Vd) -> GHC.Base.foldr @ GHC.Types.Int @ b_a1Vd (\ (x_a1HR :: GHC.Types.Int) (r_a1HS [OS=OneShot] :: b_a1Vd) -> case a_s21G x_a1HR of _ [Occ=Dead] { GHC.Types.False -> n_a1HP; GHC.Types.True -> c_a1HO x_a1HR r_a1HS }) n_a1HP x_a20f)) After: (\ (@ a_a20E) (@ b_a20F) (k_a20G [Occ=Once] :: a_a20E -> b_a20F -> b_a20F) (z_a20H [Occ=Once] :: b_a20F) (g_a20I [Occ=Once!] :: forall b1_a20J. 
(a_a20E -> b1_a20J -> b1_a20J) -> b1_a20J -> b1_a20J) -> g_a20I @ b_a20F k_a20G z_a20H) @ GHC.Types.Int @ c_a1Zb c_a1HT n_a1HU (\ (@ b_a1Vd) (c_a1HO [OS=OneShot] :: GHC.Types.Int -> b_a1Vd -> b_a1Vd) (n_a1HP [OS=OneShot] :: b_a1Vd) -> GHC.Base.foldr @ GHC.Types.Int @ b_a1Vd (\ (x_a1HR :: GHC.Types.Int) (r_a1HS [OS=OneShot] :: b_a1Vd) -> case a_s21G x_a1HR of _ [Occ=Dead] { GHC.Types.False -> n_a1HP; GHC.Types.True -> c_a1HO x_a1HR r_a1HS }) n_a1HP x_a20f) Cont: Stop[BoringCtxt] c_a1Zb Inactive unfolding: foldr Inactive unfolding: a_s21G Considering inlining: Data.List.unfoldr arg infos [ValueArg, NonTrivArg] uf arity 2 interesting continuation BoringCtxt some_benefit True is exp: True is work-free: True guidance ALWAYS_IF(unsat_ok=False,boring_ok=False) ANSWER = YES Inlining done: Data.List.unfoldr Inlined fn: \ (@ b) (@ a) (f [Occ=OnceL!] :: b -> Data.Maybe.Maybe (a, b)) (b' [Occ=OnceL] :: b) -> GHC.Base.$ @ (forall b1. (a -> b1 -> b1) -> b1 -> b1) @ [a] (GHC.Base.build @ a) (\ (@ b1) (c [Occ=OnceL!] :: a -> b1 -> b1) (n [Occ=OnceL] :: b1) -> letrec { go [Occ=LoopBreaker] :: b -> b1 [LclId, Arity=1, Str=DmdType] go = \ (b2 [Occ=Once] :: b) -> case f b2 of _ [Occ=Dead] { Data.Maybe.Nothing -> n; Data.Maybe.Just ds [Occ=Once!] 
-> case ds of _ [Occ=Dead] { (a1 [Occ=Once], new_b [Occ=Once]) -> c a1 (go new_b) } }; } in go b') Cont: ApplyTo nodup (TYPE GHC.Types.Int) ApplyTo nodup (TYPE GHC.Types.Int) ApplyTo nodup (\ (m :: GHC.Types.Int) -> case GHC.Classes.leInt m n of _ [Occ=Dead] { GHC.Types.False -> Data.Maybe.Nothing @ (GHC.Types.Int, GHC.Types.Int); GHC.Types.True -> Data.Maybe.Just @ (GHC.Types.Int, GHC.Types.Int) (m, m) }) ApplyTo nodup (GHC.Num.$fNumInt_$cnegate (GHC.Types.I# 9)) Stop[BoringCtxt] [GHC.Types.Int] Inactive unfolding: $fNumInt_$cnegate Inactive unfolding: $ Inactive unfolding: build SimplBind go Considering inlining: GHC.Classes.leInt arg infos [TrivArg, ValueArg] uf arity 2 interesting continuation CaseCtxt some_benefit True is exp: True is work-free: True guidance ALWAYS_IF(unsat_ok=False,boring_ok=False) ANSWER = YES Inlining done: GHC.Classes.leInt Inlined fn: \ (ds [Occ=Once!] :: GHC.Types.Int) (ds1 [Occ=Once!] :: GHC.Types.Int) -> case ds of _ [Occ=Dead] { GHC.Types.I# x [Occ=Once] -> case ds1 of _ [Occ=Dead] { GHC.Types.I# y [Occ=Once] -> GHC.Prim.tagToEnum# @ GHC.Types.Bool (GHC.Prim.<=# x y) } } Cont: ApplyTo nodup m ApplyTo nodup n Select nodup wild_Xd [] [(GHC.Types.False, [], Data.Maybe.Nothing @ (GHC.Types.Int, GHC.Types.Int)), (GHC.Types.True, [], Data.Maybe.Just @ (GHC.Types.Int, GHC.Types.Int) (m, m))] Stop[BoringCtxt] Data.Maybe.Maybe (GHC.Types.Int, GHC.Types.Int) Inactive unfolding: n Inactive unfolding: b' Result size of Simplifier iteration=1 = {terms: 101, types: 99, coercions: 0} ==================== Occurrence analysis ==================== Foo.foo :: forall t_a1Z7 c_a1Z8. (GHC.Types.Int -> c_a1Z8 -> c_a1Z8) -> c_a1Z8 -> t_a1Z7 -> c_a1Z8 [LclIdX, Arity=3, Str=DmdType, Unf=Unf{Src=, TopLvl=True, Arity=3, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [60 0 0] 523 0}] Foo.foo = \ (@ t_a1Za) (@ c_a1Zb) (c_a1HT [Occ=OnceL!] 
:: GHC.Types.Int -> c_a1Zb -> c_a1Zb) (n_a1HU :: c_a1Zb) _ [Occ=Dead] -> GHC.Base.$ @ [GHC.Types.Int] @ c_a1Zb (let { a_s21G [Occ=OnceL!] :: GHC.Types.Int -> GHC.Types.Bool [LclId, Str=DmdType, Unf=Unf{Src=, TopLvl=False, Arity=0, Value=True, ConLike=True, WorkFree=False, Expandable=True, Guidance=IF_ARGS [] 61 60}] a_s21G = \ (ds_d1Zw [Occ=Once!] :: GHC.Types.Int) -> case ds_d1Zw of _ [Occ=Dead] { GHC.Types.I# x_a20R [Occ=Once!] -> GHC.Prim.tagToEnum# @ GHC.Types.Bool (case x_a20R of _ [Occ=Dead] { __DEFAULT -> 1; 1 -> 0 }) } } in \ (x_a20f [Occ=Once] :: [GHC.Types.Int]) -> GHC.Base.foldr @ GHC.Types.Int @ c_a1Zb (\ (x_a1HR :: GHC.Types.Int) (r_a1HS [Occ=Once, OS=OneShot] :: c_a1Zb) -> case a_s21G x_a1HR of _ [Occ=Dead] { GHC.Types.False -> n_a1HU; GHC.Types.True -> c_a1HT x_a1HR r_a1HS }) n_a1HU x_a20f) (let { b'_a1ZS [Occ=OnceL] :: GHC.Types.Int [LclId, Str=DmdType, Unf=Unf{Src=, TopLvl=False, Arity=0, Value=False, ConLike=False, WorkFree=False, Expandable=False, Guidance=IF_ARGS [] 30 0}] b'_a1ZS = GHC.Num.$fNumInt_$cnegate (GHC.Types.I# 9) } in GHC.Base.$ @ (forall b1_a1ZT. (GHC.Types.Int -> b1_a1ZT -> b1_a1ZT) -> b1_a1ZT -> b1_a1ZT) @ [GHC.Types.Int] (GHC.Base.build @ GHC.Types.Int) (\ (@ b1_a1ZU) (c_a1ZV [Occ=OnceL!] :: GHC.Types.Int -> b1_a1ZU -> b1_a1ZU) (n_a1ZW [Occ=OnceL] :: b1_a1ZU) -> letrec { go_a1ZX [Occ=LoopBreaker] :: GHC.Types.Int -> b1_a1ZU [LclId, Arity=1, Str=DmdType, Unf=Unf{Src=, TopLvl=False, Arity=1, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [20] 132 0}] go_a1ZX = \ (b2_a1ZY :: GHC.Types.Int) -> case case case b2_a1ZY of _ [Occ=Dead] { GHC.Types.I# x_a218 [Occ=Once] -> GHC.Prim.tagToEnum# @ GHC.Types.Bool (GHC.Prim.<=# x_a218 10) } of _ [Occ=Dead] { GHC.Types.False -> Data.Maybe.Nothing @ (GHC.Types.Int, GHC.Types.Int); GHC.Types.True -> Data.Maybe.Just @ (GHC.Types.Int, GHC.Types.Int) (b2_a1ZY, b2_a1ZY) } of _ [Occ=Dead] { Data.Maybe.Nothing -> n_a1ZW; Data.Maybe.Just ds_a203 [Occ=Once!] 
-> case ds_a203 of _ [Occ=Dead] { (a1_a207 [Occ=Once], new_b_a208 [Occ=Once]) -> c_a1ZV a1_a207 (go_a1ZX new_b_a208) } }; } in go_a1ZX b'_a1ZS)) SimplBind foo Inactive unfolding: $ Inactive unfolding: foldr Inactive unfolding: $fNumInt_$cnegate Inactive unfolding: $ Inactive unfolding: build SimplBind go Inactive unfolding: b' Result size of Simplifier iteration=2 = {terms: 69, types: 71, coercions: 0} ==================== Occurrence analysis ==================== Foo.foo :: forall t_a1Z7 c_a1Z8. (GHC.Types.Int -> c_a1Z8 -> c_a1Z8) -> c_a1Z8 -> t_a1Z7 -> c_a1Z8 [LclIdX, Arity=3, Str=DmdType, Unf=Unf{Src=, TopLvl=True, Arity=3, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [60 0 0] 443 0}] Foo.foo = \ (@ t_a1Za) (@ c_a1Zb) (c_a1HT [Occ=OnceL!] :: GHC.Types.Int -> c_a1Zb -> c_a1Zb) (n_a1HU :: c_a1Zb) _ [Occ=Dead] -> GHC.Base.$ @ [GHC.Types.Int] @ c_a1Zb (\ (x_a20f [Occ=Once] :: [GHC.Types.Int]) -> GHC.Base.foldr @ GHC.Types.Int @ c_a1Zb (\ (x_a1HR :: GHC.Types.Int) (r_a1HS [Occ=Once, OS=OneShot] :: c_a1Zb) -> case case x_a1HR of _ [Occ=Dead] { GHC.Types.I# x_a20R [Occ=Once!] -> GHC.Prim.tagToEnum# @ GHC.Types.Bool (case x_a20R of _ [Occ=Dead] { __DEFAULT -> 1; 1 -> 0 }) } of _ [Occ=Dead] { GHC.Types.False -> n_a1HU; GHC.Types.True -> c_a1HT x_a1HR r_a1HS }) n_a1HU x_a20f) (let { b'_a1ZS [Occ=OnceL] :: GHC.Types.Int [LclId, Str=DmdType, Unf=Unf{Src=, TopLvl=False, Arity=0, Value=False, ConLike=False, WorkFree=False, Expandable=False, Guidance=IF_ARGS [] 30 0}] b'_a1ZS = GHC.Num.$fNumInt_$cnegate (GHC.Types.I# 9) } in GHC.Base.$ @ (forall b1_a1ZT. (GHC.Types.Int -> b1_a1ZT -> b1_a1ZT) -> b1_a1ZT -> b1_a1ZT) @ [GHC.Types.Int] (GHC.Base.build @ GHC.Types.Int) (\ (@ b1_a1ZU) (c_a1ZV [Occ=OnceL!] 
:: GHC.Types.Int -> b1_a1ZU -> b1_a1ZU) (n_a1ZW [Occ=OnceL] :: b1_a1ZU) -> letrec { go_a1ZX [Occ=LoopBreaker] :: GHC.Types.Int -> b1_a1ZU [LclId, Arity=1, Str=DmdType, Unf=Unf{Src=, TopLvl=False, Arity=1, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [20] 132 0}] go_a1ZX = \ (b2_a1ZY :: GHC.Types.Int) -> case case case b2_a1ZY of _ [Occ=Dead] { GHC.Types.I# x_a218 [Occ=Once] -> GHC.Prim.tagToEnum# @ GHC.Types.Bool (GHC.Prim.<=# x_a218 10) } of _ [Occ=Dead] { GHC.Types.False -> Data.Maybe.Nothing @ (GHC.Types.Int, GHC.Types.Int); GHC.Types.True -> Data.Maybe.Just @ (GHC.Types.Int, GHC.Types.Int) (b2_a1ZY, b2_a1ZY) } of _ [Occ=Dead] { Data.Maybe.Nothing -> n_a1ZW; Data.Maybe.Just ds_a203 [Occ=Once!] -> case ds_a203 of _ [Occ=Dead] { (a1_a207 [Occ=Once], new_b_a208 [Occ=Once]) -> c_a1ZV a1_a207 (go_a1ZX new_b_a208) } }; } in go_a1ZX b'_a1ZS)) SimplBind foo Inactive unfolding: $ Inactive unfolding: foldr Inactive unfolding: $fNumInt_$cnegate Inactive unfolding: $ Inactive unfolding: build SimplBind go Inactive unfolding: b' ==================== Simplifier ==================== Max iterations = 4 SimplMode {Phase = InitialPhase [PostGentle], inline, rules, eta-expand, no case-of-case} Result size of Simplifier = {terms: 69, types: 71, coercions: 0} Foo.foo :: forall t_a1Z7 c_a1Z8. 
(GHC.Types.Int -> c_a1Z8 -> c_a1Z8) -> c_a1Z8 -> t_a1Z7 -> c_a1Z8 [LclIdX, Arity=3, Str=DmdType, Unf=Unf{Src=, TopLvl=True, Arity=3, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [60 0 0] 443 0}] Foo.foo = \ (@ t_a1Za) (@ c_a1Zb) (c_a1HT :: GHC.Types.Int -> c_a1Zb -> c_a1Zb) (n_a1HU :: c_a1Zb) _ [Occ=Dead] -> GHC.Base.$ @ [GHC.Types.Int] @ c_a1Zb (\ (x_a20f :: [GHC.Types.Int]) -> GHC.Base.foldr @ GHC.Types.Int @ c_a1Zb (\ (x_a1HR :: GHC.Types.Int) (r_a1HS [OS=OneShot] :: c_a1Zb) -> case case x_a1HR of _ [Occ=Dead] { GHC.Types.I# x_a20R -> GHC.Prim.tagToEnum# @ GHC.Types.Bool (case x_a20R of _ [Occ=Dead] { __DEFAULT -> 1; 1 -> 0 }) } of _ [Occ=Dead] { GHC.Types.False -> n_a1HU; GHC.Types.True -> c_a1HT x_a1HR r_a1HS }) n_a1HU x_a20f) (let { b'_a1ZS :: GHC.Types.Int [LclId, Str=DmdType, Unf=Unf{Src=, TopLvl=False, Arity=0, Value=False, ConLike=False, WorkFree=False, Expandable=False, Guidance=IF_ARGS [] 30 0}] b'_a1ZS = GHC.Num.$fNumInt_$cnegate (GHC.Types.I# 9) } in GHC.Base.$ @ (forall b1_a1ZT. 
(GHC.Types.Int -> b1_a1ZT -> b1_a1ZT) -> b1_a1ZT -> b1_a1ZT) @ [GHC.Types.Int] (GHC.Base.build @ GHC.Types.Int) (\ (@ b1_a1ZU) (c_a1ZV :: GHC.Types.Int -> b1_a1ZU -> b1_a1ZU) (n_a1ZW :: b1_a1ZU) -> letrec { go_a1ZX [Occ=LoopBreaker] :: GHC.Types.Int -> b1_a1ZU [LclId, Arity=1, Str=DmdType, Unf=Unf{Src=, TopLvl=False, Arity=1, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [20] 132 0}] go_a1ZX = \ (b2_a1ZY :: GHC.Types.Int) -> case case case b2_a1ZY of _ [Occ=Dead] { GHC.Types.I# x_a218 -> GHC.Prim.tagToEnum# @ GHC.Types.Bool (GHC.Prim.<=# x_a218 10) } of _ [Occ=Dead] { GHC.Types.False -> Data.Maybe.Nothing @ (GHC.Types.Int, GHC.Types.Int); GHC.Types.True -> Data.Maybe.Just @ (GHC.Types.Int, GHC.Types.Int) (b2_a1ZY, b2_a1ZY) } of _ [Occ=Dead] { Data.Maybe.Nothing -> n_a1ZW; Data.Maybe.Just ds_a203 -> case ds_a203 of _ [Occ=Dead] { (a1_a207, new_b_a208) -> c_a1ZV a1_a207 (go_a1ZX new_b_a208) } }; } in go_a1ZX b'_a1ZS)) *** Float out(FOS {Lam = Just 0, Consts = True, PAPs = False}): ==================== Levels added: ==================== > > = \ > > > > > -> GHC.Base.$ @ [GHC.Types.Int] @ c_a1Zb (\ > -> GHC.Base.foldr @ GHC.Types.Int @ c_a1Zb (let { > > = \ > > -> case case x_a1HR of > { GHC.Types.I# > -> GHC.Prim.tagToEnum# @ GHC.Types.Bool (case x_a20R of > { __DEFAULT -> 1; 1 -> 0 }) } of > { GHC.Types.False -> n_a1HU; GHC.Types.True -> c_a1HT x_a1HR r_a1HS } } in lvl_s224) n_a1HU x_a20f) (let { > > = let { > > = GHC.Num.$fNumInt_$cnegate (let { > > = GHC.Types.I# 9 } in lvl_s225) } in GHC.Base.$ @ (forall b1_a1ZT. 
(GHC.Types.Int -> b1_a1ZT -> b1_a1ZT) -> b1_a1ZT -> b1_a1ZT) @ [GHC.Types.Int] (GHC.Base.build @ GHC.Types.Int) (let { > > = \ > > > -> letrec { > > = \ > -> case case case b2_a1ZY of > { GHC.Types.I# > -> GHC.Prim.tagToEnum# @ GHC.Types.Bool (GHC.Prim.<=# x_a218 10) } of > { GHC.Types.False -> Data.Maybe.Nothing @ (GHC.Types.Int, GHC.Types.Int); GHC.Types.True -> Data.Maybe.Just @ (GHC.Types.Int, GHC.Types.Int) (b2_a1ZY, b2_a1ZY) } of > { Data.Maybe.Nothing -> n_a1ZW; Data.Maybe.Just > -> case ds_a203 of > { (>, >) -> c_a1ZV a1_a207 (go_a1ZX new_b_a208) } }; } in go_a1ZX b'_s227 } in lvl_s228) } in lvl_s229) ==================== Float out(FOS {Lam = Just 0, Consts = True, PAPs = False}) ==================== Result size of Float out(FOS {Lam = Just 0, Consts = True, PAPs = False}) = {terms: 77, types: 83, coercions: 0} lvl_s225 :: GHC.Types.Int [LclId, Str=DmdType] lvl_s225 = GHC.Types.I# 9 b'_s227 :: GHC.Types.Int [LclId, Str=DmdType] b'_s227 = GHC.Num.$fNumInt_$cnegate lvl_s225 lvl_s228 :: forall b1_a1ZU. (GHC.Types.Int -> b1_a1ZU -> b1_a1ZU) -> b1_a1ZU -> b1_a1ZU [LclId, Str=DmdType] lvl_s228 = \ (@ b1_a1ZU) (c_a1ZV :: GHC.Types.Int -> b1_a1ZU -> b1_a1ZU) (n_a1ZW :: b1_a1ZU) -> letrec { go_a1ZX [Occ=LoopBreaker] :: GHC.Types.Int -> b1_a1ZU [LclId, Arity=1, Str=DmdType] go_a1ZX = \ (b2_a1ZY :: GHC.Types.Int) -> case case case b2_a1ZY of _ [Occ=Dead] { GHC.Types.I# x_a218 -> GHC.Prim.tagToEnum# @ GHC.Types.Bool (GHC.Prim.<=# x_a218 10) } of _ [Occ=Dead] { GHC.Types.False -> Data.Maybe.Nothing @ (GHC.Types.Int, GHC.Types.Int); GHC.Types.True -> Data.Maybe.Just @ (GHC.Types.Int, GHC.Types.Int) (b2_a1ZY, b2_a1ZY) } of _ [Occ=Dead] { Data.Maybe.Nothing -> n_a1ZW; Data.Maybe.Just ds_a203 -> case ds_a203 of _ [Occ=Dead] { (a1_a207, new_b_a208) -> c_a1ZV a1_a207 (go_a1ZX new_b_a208) } }; } in go_a1ZX b'_s227 lvl_s229 :: [GHC.Types.Int] [LclId, Str=DmdType] lvl_s229 = GHC.Base.$ @ (forall b1_a1ZT. 
(GHC.Types.Int -> b1_a1ZT -> b1_a1ZT) -> b1_a1ZT -> b1_a1ZT) @ [GHC.Types.Int] (GHC.Base.build @ GHC.Types.Int) lvl_s228 Foo.foo :: forall t_a1Z7 c_a1Z8. (GHC.Types.Int -> c_a1Z8 -> c_a1Z8) -> c_a1Z8 -> t_a1Z7 -> c_a1Z8 [LclIdX, Arity=3, Str=DmdType] Foo.foo = \ (@ t_a1Za) (@ c_a1Zb) (c_a1HT :: GHC.Types.Int -> c_a1Zb -> c_a1Zb) (n_a1HU :: c_a1Zb) _ [Occ=Dead] -> let { lvl_s224 :: GHC.Types.Int -> c_a1Zb -> c_a1Zb [LclId, Str=DmdType] lvl_s224 = \ (x_a1HR :: GHC.Types.Int) (r_a1HS [OS=OneShot] :: c_a1Zb) -> case case x_a1HR of _ [Occ=Dead] { GHC.Types.I# x_a20R -> GHC.Prim.tagToEnum# @ GHC.Types.Bool (case x_a20R of _ [Occ=Dead] { __DEFAULT -> 1; 1 -> 0 }) } of _ [Occ=Dead] { GHC.Types.False -> n_a1HU; GHC.Types.True -> c_a1HT x_a1HR r_a1HS } } in GHC.Base.$ @ [GHC.Types.Int] @ c_a1Zb (\ (x_a20f :: [GHC.Types.Int]) -> GHC.Base.foldr @ GHC.Types.Int @ c_a1Zb lvl_s224 n_a1HU x_a20f) lvl_s229 *** Float inwards: ==================== Float inwards ==================== Result size of Float inwards = {terms: 77, types: 83, coercions: 0} lvl_s225 :: GHC.Types.Int [LclId, Str=DmdType] lvl_s225 = GHC.Types.I# 9 b'_s227 :: GHC.Types.Int [LclId, Str=DmdType] b'_s227 = GHC.Num.$fNumInt_$cnegate lvl_s225 lvl_s228 :: forall b1_a1ZU. 
(GHC.Types.Int -> b1_a1ZU -> b1_a1ZU) -> b1_a1ZU -> b1_a1ZU [LclId, Str=DmdType] lvl_s228 = \ (@ b1_a1ZU) (c_a1ZV :: GHC.Types.Int -> b1_a1ZU -> b1_a1ZU) (n_a1ZW :: b1_a1ZU) -> (letrec { go_a1ZX [Occ=LoopBreaker] :: GHC.Types.Int -> b1_a1ZU [LclId, Arity=1, Str=DmdType] go_a1ZX = \ (b2_a1ZY :: GHC.Types.Int) -> case case case b2_a1ZY of _ [Occ=Dead] { GHC.Types.I# x_a218 -> GHC.Prim.tagToEnum# @ GHC.Types.Bool (GHC.Prim.<=# x_a218 10) } of _ [Occ=Dead] { GHC.Types.False -> Data.Maybe.Nothing @ (GHC.Types.Int, GHC.Types.Int); GHC.Types.True -> Data.Maybe.Just @ (GHC.Types.Int, GHC.Types.Int) (b2_a1ZY, b2_a1ZY) } of _ [Occ=Dead] { Data.Maybe.Nothing -> n_a1ZW; Data.Maybe.Just ds_a203 -> case ds_a203 of _ [Occ=Dead] { (a1_a207, new_b_a208) -> c_a1ZV a1_a207 (go_a1ZX new_b_a208) } }; } in go_a1ZX) b'_s227 lvl_s229 :: [GHC.Types.Int] [LclId, Str=DmdType] lvl_s229 = GHC.Base.$ @ (forall b1_a1ZT. (GHC.Types.Int -> b1_a1ZT -> b1_a1ZT) -> b1_a1ZT -> b1_a1ZT) @ [GHC.Types.Int] (GHC.Base.build @ GHC.Types.Int) lvl_s228 Foo.foo :: forall t_a1Z7 c_a1Z8. 
(GHC.Types.Int -> c_a1Z8 -> c_a1Z8) -> c_a1Z8 -> t_a1Z7 -> c_a1Z8 [LclIdX, Arity=3, Str=DmdType] Foo.foo = \ (@ t_a1Za) (@ c_a1Zb) (c_a1HT :: GHC.Types.Int -> c_a1Zb -> c_a1Zb) (n_a1HU :: c_a1Zb) _ [Occ=Dead] -> let { lvl_s224 :: GHC.Types.Int -> c_a1Zb -> c_a1Zb [LclId, Str=DmdType] lvl_s224 = \ (x_a1HR :: GHC.Types.Int) (r_a1HS [OS=OneShot] :: c_a1Zb) -> case case x_a1HR of _ [Occ=Dead] { GHC.Types.I# x_a20R -> GHC.Prim.tagToEnum# @ GHC.Types.Bool (case x_a20R of _ [Occ=Dead] { __DEFAULT -> 1; 1 -> 0 }) } of _ [Occ=Dead] { GHC.Types.False -> n_a1HU; GHC.Types.True -> c_a1HT x_a1HR r_a1HS } } in GHC.Base.$ @ [GHC.Types.Int] @ c_a1Zb (\ (x_a20f :: [GHC.Types.Int]) -> GHC.Base.foldr @ GHC.Types.Int @ c_a1Zb lvl_s224 n_a1HU x_a20f) lvl_s229 *** Simplifier: ==================== Occurrence analysis ==================== lvl_s225 [Occ=Once] :: GHC.Types.Int [LclId, Str=DmdType] lvl_s225 = GHC.Types.I# 9 b'_s227 [Occ=OnceL] :: GHC.Types.Int [LclId, Str=DmdType] b'_s227 = GHC.Num.$fNumInt_$cnegate lvl_s225 lvl_s228 [Occ=Once] :: forall b1_a1ZU. (GHC.Types.Int -> b1_a1ZU -> b1_a1ZU) -> b1_a1ZU -> b1_a1ZU [LclId, Str=DmdType] lvl_s228 = \ (@ b1_a1ZU) (c_a1ZV [Occ=OnceL!] :: GHC.Types.Int -> b1_a1ZU -> b1_a1ZU) (n_a1ZW [Occ=OnceL] :: b1_a1ZU) -> (letrec { go_a1ZX [Occ=LoopBreaker] :: GHC.Types.Int -> b1_a1ZU [LclId, Arity=1, Str=DmdType] go_a1ZX = \ (b2_a1ZY :: GHC.Types.Int) -> case case case b2_a1ZY of _ [Occ=Dead] { GHC.Types.I# x_a218 [Occ=Once] -> GHC.Prim.tagToEnum# @ GHC.Types.Bool (GHC.Prim.<=# x_a218 10) } of _ [Occ=Dead] { GHC.Types.False -> Data.Maybe.Nothing @ (GHC.Types.Int, GHC.Types.Int); GHC.Types.True -> Data.Maybe.Just @ (GHC.Types.Int, GHC.Types.Int) (b2_a1ZY, b2_a1ZY) } of _ [Occ=Dead] { Data.Maybe.Nothing -> n_a1ZW; Data.Maybe.Just ds_a203 [Occ=Once!] 
-> case ds_a203 of _ [Occ=Dead] { (a1_a207 [Occ=Once], new_b_a208 [Occ=Once]) -> c_a1ZV a1_a207 (go_a1ZX new_b_a208) } }; } in go_a1ZX) b'_s227 lvl_s229 [Occ=OnceL] :: [GHC.Types.Int] [LclId, Str=DmdType] lvl_s229 = GHC.Base.$ @ (forall b1_a1ZT. (GHC.Types.Int -> b1_a1ZT -> b1_a1ZT) -> b1_a1ZT -> b1_a1ZT) @ [GHC.Types.Int] (GHC.Base.build @ GHC.Types.Int) lvl_s228 Foo.foo :: forall t_a1Z7 c_a1Z8. (GHC.Types.Int -> c_a1Z8 -> c_a1Z8) -> c_a1Z8 -> t_a1Z7 -> c_a1Z8 [LclIdX, Arity=3, Str=DmdType] Foo.foo = \ (@ t_a1Za) (@ c_a1Zb) (c_a1HT [Occ=OnceL!] :: GHC.Types.Int -> c_a1Zb -> c_a1Zb) (n_a1HU :: c_a1Zb) _ [Occ=Dead] -> let { lvl_s224 [Occ=OnceL] :: GHC.Types.Int -> c_a1Zb -> c_a1Zb [LclId, Str=DmdType] lvl_s224 = \ (x_a1HR :: GHC.Types.Int) (r_a1HS [Occ=Once, OS=OneShot] :: c_a1Zb) -> case case x_a1HR of _ [Occ=Dead] { GHC.Types.I# x_a20R [Occ=Once!] -> GHC.Prim.tagToEnum# @ GHC.Types.Bool (case x_a20R of _ [Occ=Dead] { __DEFAULT -> 1; 1 -> 0 }) } of _ [Occ=Dead] { GHC.Types.False -> n_a1HU; GHC.Types.True -> c_a1HT x_a1HR r_a1HS } } in GHC.Base.$ @ [GHC.Types.Int] @ c_a1Zb (\ (x_a20f [Occ=Once] :: [GHC.Types.Int]) -> GHC.Base.foldr @ GHC.Types.Int @ c_a1Zb lvl_s224 n_a1HU x_a20f) lvl_s229 SimplBind lvl_s225 SimplBind b' Considering inlining: GHC.Num.$fNumInt_$cnegate arg infos [ValueArg] uf arity 1 interesting continuation RhsCtxt some_benefit True is exp: True is work-free: True guidance ALWAYS_IF(unsat_ok=True,boring_ok=False) ANSWER = YES Inlining done: GHC.Num.$fNumInt_$cnegate Inlined fn: \ (ds [Occ=Once!] 
:: GHC.Types.Int) -> case ds of _ [Occ=Dead] { GHC.Types.I# x [Occ=Once] -> GHC.Types.I# (GHC.Prim.negateInt# x) } Cont: ApplyTo nodup lvl_s225 Stop[RhsCtxt] GHC.Types.Int Rule fired Rule: negateInt# Before: GHC.Prim.negateInt# 9 After: (-9) Cont: StrictArg GHC.Types.I# Stop[RhsCtxt] GHC.Types.Int SimplBind lvl_s228 SimplBind lvl_s229 Considering inlining: GHC.Base.$ arg infos [ValueArg, ValueArg] uf arity 2 interesting continuation RhsCtxt some_benefit True is exp: True is work-free: True guidance ALWAYS_IF(unsat_ok=False,boring_ok=True) ANSWER = YES Inlining done: GHC.Base.$ Inlined fn: \ (@ a) (@ (b :: OpenKind)) (tpl_B1 [Occ=Once!] :: a -> b) (tpl_B2 [Occ=Once] :: a) -> tpl_B1 tpl_B2 Cont: ApplyTo nodup (TYPE forall b1. (GHC.Types.Int -> b1 -> b1) -> b1 -> b1) ApplyTo nodup (TYPE [GHC.Types.Int]) ApplyTo nodup (GHC.Base.build @ GHC.Types.Int) ApplyTo nodup lvl_s228 Stop[RhsCtxt] [GHC.Types.Int] Inactive unfolding: build SimplBind go Considering inlining: b2_a1ZY arg infos [] uf arity 0 interesting continuation BoringCtxt some_benefit False is exp: True is work-free: True guidance IF_ARGS [] 10 20 discounted size = 0 ANSWER = NO Considering inlining: b2_a1ZY arg infos [] uf arity 0 interesting continuation BoringCtxt some_benefit False is exp: True is work-free: True guidance IF_ARGS [] 10 20 discounted size = 0 ANSWER = NO Considering inlining: $j_s22f arg infos [ValueArg] uf arity 1 interesting continuation BoringCtxt some_benefit True is exp: True is work-free: True guidance IF_ARGS [20] 60 0 discounted size = 10 ANSWER = YES Inlining done: $j_s22f Inlined fn: \ (ds [Occ=Once!, OS=OneShot] :: (GHC.Types.Int, GHC.Types.Int)) -> case ds of _ [Occ=Dead] { (a1 [Occ=Once], new_b [Occ=Once]) -> c a1 (go new_b) } Cont: ApplyTo nodup ds Stop[BoringCtxt] b1 Considering inlining: ds_a203 arg infos [] uf arity 0 interesting continuation CaseCtxt some_benefit False is exp: True is work-free: True guidance IF_ARGS [] 10 30 discounted size = -45 ANSWER = NO Considering 
inlining: b2_a1ZY arg infos [] uf arity 0 interesting continuation BoringCtxt some_benefit False is exp: True is work-free: True guidance IF_ARGS [] 10 20 discounted size = 0 ANSWER = NO Considering inlining: b2_a1ZY arg infos [] uf arity 0 interesting continuation BoringCtxt some_benefit False is exp: True is work-free: True guidance IF_ARGS [] 10 20 discounted size = 0 ANSWER = NO Considering inlining: b'_s227 arg infos [] uf arity 0 interesting continuation BoringCtxt some_benefit False is exp: True is work-free: True guidance IF_ARGS [] 10 20 discounted size = 0 ANSWER = NO SimplBind foo Considering inlining: x_a1HR arg infos [] uf arity 0 interesting continuation BoringCtxt some_benefit False is exp: True is work-free: True guidance IF_ARGS [] 10 20 discounted size = 0 ANSWER = NO Rule fired Rule: tagToEnum# Before: GHC.Prim.tagToEnum# (TYPE GHC.Types.Bool) 1 After: GHC.Types.True Cont: Select ok wild_Xc [] [(GHC.Types.False, [], n_a1HU), (GHC.Types.True, [], c_a1HT x_a1HR r_a1HS)] Stop[BoringCtxt] c_a1Zb Considering inlining: x_a1HR arg infos [] uf arity 0 interesting continuation BoringCtxt some_benefit False is exp: True is work-free: True guidance IF_ARGS [] 10 20 discounted size = 0 ANSWER = NO Rule fired Rule: tagToEnum# Before: GHC.Prim.tagToEnum# (TYPE GHC.Types.Bool) 0 After: GHC.Types.False Cont: Select ok wild_Xc [] [(GHC.Types.False, [], n_a1HU), (GHC.Types.True, [], c_a1HT x_a1HR r_a1HS)] Stop[BoringCtxt] c_a1Zb Considering inlining: GHC.Base.$ arg infos [ValueArg, TrivArg] uf arity 2 interesting continuation BoringCtxt some_benefit True is exp: True is work-free: True guidance ALWAYS_IF(unsat_ok=False,boring_ok=True) ANSWER = YES Inlining done: GHC.Base.$ Inlined fn: \ (@ a) (@ (b :: OpenKind)) (tpl_B1 [Occ=Once!] 
:: a -> b) (tpl_B2 [Occ=Once] :: a) -> tpl_B1 tpl_B2 Cont: ApplyTo nodup (TYPE [GHC.Types.Int]) ApplyTo nodup (TYPE c) ApplyTo nodup (\ (x [Occ=Once] :: [GHC.Types.Int]) -> GHC.Base.foldr @ GHC.Types.Int @ c lvl_s224 n x) ApplyTo nodup lvl_s229 Stop[BoringCtxt] c Inactive unfolding: foldr Considering inlining: lvl_s224 arg infos [] uf arity 2 interesting continuation RuleArgCtxt some_benefit False is exp: True is work-free: True guidance IF_ARGS [20 0] 60 0 discounted size = 50 ANSWER = NO Considering inlining: lvl_s229 arg infos [] uf arity 0 interesting continuation RuleArgCtxt some_benefit False is exp: False is work-free: False guidance IF_ARGS [] 242 40 discounted size = 172 ANSWER = NO Result size of Simplifier iteration=1 = {terms: 64, types: 58, coercions: 0} ==================== Occurrence analysis ==================== b'_s227 [Occ=Once] :: GHC.Types.Int [LclId, Str=DmdType, Unf=Unf{Src=, TopLvl=True, Arity=0, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [] 10 20}] b'_s227 = GHC.Types.I# (-9) lvl_s229 [Occ=OnceL] :: [GHC.Types.Int] [LclId, Str=DmdType, Unf=Unf{Src=, TopLvl=True, Arity=0, Value=False, ConLike=False, WorkFree=False, Expandable=False, Guidance=IF_ARGS [] 242 40}] lvl_s229 = GHC.Base.build @ GHC.Types.Int (\ (@ b1_a1ZU) (c_a1ZV [Occ=OnceL!, OS=OneShot] :: GHC.Types.Int -> b1_a1ZU -> b1_a1ZU) (n_a1ZW [Occ=OnceL, OS=OneShot] :: b1_a1ZU) -> letrec { go_a1ZX [Occ=LoopBreaker] :: GHC.Types.Int -> b1_a1ZU [LclId, Arity=1, Str=DmdType, Unf=Unf{Src=, TopLvl=False, Arity=1, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [20] 182 0}] go_a1ZX = \ (b2_a1ZY [Occ=Once!] 
:: GHC.Types.Int) -> case b2_a1ZY of wild_a216 { GHC.Types.I# x_a218 [Occ=Once] -> let { b2_a1ZY :: GHC.Types.Int [LclId, Str=DmdType] b2_a1ZY = wild_a216 } in case GHC.Prim.tagToEnum# @ GHC.Types.Bool (GHC.Prim.<=# x_a218 10) of _ [Occ=Dead] { GHC.Types.False -> n_a1ZW; GHC.Types.True -> c_a1ZV b2_a1ZY (go_a1ZX b2_a1ZY) } }; } in go_a1ZX b'_s227) Foo.foo :: forall t_a1Z7 c_a1Z8. (GHC.Types.Int -> c_a1Z8 -> c_a1Z8) -> c_a1Z8 -> t_a1Z7 -> c_a1Z8 [LclIdX, Arity=3, Str=DmdType, Unf=Unf{Src=, TopLvl=True, Arity=3, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [60 0 0] 130 0}] Foo.foo = \ (@ t_a1Za) (@ c_a1Zb) (c_a1HT [Occ=OnceL!] :: GHC.Types.Int -> c_a1Zb -> c_a1Zb) (n_a1HU :: c_a1Zb) _ [Occ=Dead] -> let { lvl_s224 [Occ=Once] :: GHC.Types.Int -> c_a1Zb -> c_a1Zb [LclId, Arity=2, Str=DmdType, Unf=Unf{Src=, TopLvl=False, Arity=2, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [20 0] 60 0}] lvl_s224 = \ (x_a1HR [Occ=Once!] :: GHC.Types.Int) (r_a1HS [Occ=Once, OS=OneShot] :: c_a1Zb) -> case x_a1HR of wild_a20P { GHC.Types.I# x_a20R [Occ=Once!] 
-> let { x_a1HR [Occ=Once] :: GHC.Types.Int [LclId, Str=DmdType] x_a1HR = wild_a20P } in case x_a20R of _ [Occ=Dead] { __DEFAULT -> c_a1HT x_a1HR r_a1HS; 1 -> n_a1HU } } } in GHC.Base.foldr @ GHC.Types.Int @ c_a1Zb lvl_s224 n_a1HU lvl_s229 SimplBind b' SimplBind lvl_s229 Inactive unfolding: build SimplBind go Considering inlining: wild_a216 arg infos [] uf arity 0 interesting continuation RhsCtxt some_benefit False is exp: True is work-free: True guidance IF_ARGS [] 10 20 discounted size = -30 ANSWER = NO Considering inlining: wild_a216 arg infos [] uf arity 0 interesting continuation BoringCtxt some_benefit False is exp: True is work-free: True guidance IF_ARGS [] 10 20 discounted size = 0 ANSWER = NO Considering inlining: wild_a216 arg infos [] uf arity 0 interesting continuation BoringCtxt some_benefit False is exp: True is work-free: True guidance IF_ARGS [] 10 20 discounted size = 0 ANSWER = NO SimplBind foo Inactive unfolding: foldr Considering inlining: wild_a20P arg infos [] uf arity 0 interesting continuation BoringCtxt some_benefit False is exp: True is work-free: True guidance IF_ARGS [] 10 20 discounted size = 0 ANSWER = NO Considering inlining: lvl_s229 arg infos [] uf arity 0 interesting continuation RuleArgCtxt some_benefit False is exp: False is work-free: False guidance IF_ARGS [] 152 40 discounted size = 82 ANSWER = NO Result size of Simplifier iteration=2 = {terms: 47, types: 37, coercions: 0} ==================== Occurrence analysis ==================== lvl_s229 [Occ=OnceL] :: [GHC.Types.Int] [LclId, Str=DmdType, Unf=Unf{Src=, TopLvl=True, Arity=0, Value=False, ConLike=False, WorkFree=False, Expandable=False, Guidance=IF_ARGS [] 152 40}] lvl_s229 = GHC.Base.build @ GHC.Types.Int (\ (@ b1_a1ZU) (c_a1ZV [Occ=OnceL!, OS=OneShot] :: GHC.Types.Int -> b1_a1ZU -> b1_a1ZU) (n_a1ZW [Occ=OnceL, OS=OneShot] :: b1_a1ZU) -> letrec { go_a1ZX [Occ=LoopBreaker] :: GHC.Types.Int -> b1_a1ZU [LclId, Arity=1, Str=DmdType, Unf=Unf{Src=, TopLvl=False, Arity=1, 
Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [20] 82 0}] go_a1ZX = \ (b2_a1ZY [Occ=Once!] :: GHC.Types.Int) -> case b2_a1ZY of wild_a216 { GHC.Types.I# x_a218 [Occ=Once] -> case GHC.Prim.tagToEnum# @ GHC.Types.Bool (GHC.Prim.<=# x_a218 10) of _ [Occ=Dead] { GHC.Types.False -> n_a1ZW; GHC.Types.True -> c_a1ZV wild_a216 (go_a1ZX wild_a216) } }; } in go_a1ZX (GHC.Types.I# (-9))) Foo.foo :: forall t_a1Z7 c_a1Z8. (GHC.Types.Int -> c_a1Z8 -> c_a1Z8) -> c_a1Z8 -> t_a1Z7 -> c_a1Z8 [LclIdX, Arity=3, Str=DmdType, Unf=Unf{Src=, TopLvl=True, Arity=3, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [60 0 0] 120 0}] Foo.foo = \ (@ t_a1Za) (@ c_a1Zb) (c_a1HT [Occ=OnceL!] :: GHC.Types.Int -> c_a1Zb -> c_a1Zb) (n_a1HU :: c_a1Zb) _ [Occ=Dead] -> GHC.Base.foldr @ GHC.Types.Int @ c_a1Zb (\ (x_a1HR [Occ=Once!] :: GHC.Types.Int) (r_a1HS [Occ=Once, OS=OneShot] :: c_a1Zb) -> case x_a1HR of wild_a20P { GHC.Types.I# x_a20R [Occ=Once!] -> case x_a20R of _ [Occ=Dead] { __DEFAULT -> c_a1HT wild_a20P r_a1HS; 1 -> n_a1HU } }) n_a1HU lvl_s229 SimplBind lvl_s229 Inactive unfolding: build SimplBind go Considering inlining: wild_a216 arg infos [] uf arity 0 interesting continuation BoringCtxt some_benefit False is exp: True is work-free: True guidance IF_ARGS [] 10 20 discounted size = 0 ANSWER = NO Considering inlining: wild_a216 arg infos [] uf arity 0 interesting continuation BoringCtxt some_benefit False is exp: True is work-free: True guidance IF_ARGS [] 10 20 discounted size = 0 ANSWER = NO SimplBind foo Inactive unfolding: foldr Considering inlining: wild_a20P arg infos [] uf arity 0 interesting continuation BoringCtxt some_benefit False is exp: True is work-free: True guidance IF_ARGS [] 10 20 discounted size = 0 ANSWER = NO Considering inlining: lvl_s229 arg infos [] uf arity 0 interesting continuation RuleArgCtxt some_benefit False is exp: False is work-free: False guidance IF_ARGS [] 152 40 discounted size = 82 ANSWER = NO 
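For anyone following along without the test files: a source module of roughly this shape would produce the Core above. This is a reconstruction from the dump (the original Foo.hs is not quoted in this thread), so the surface syntax, module name, and pragmas are guesses. Note in particular that the `unfoldr` step function returns `Just (m, m)`, so the seed never advances, which is why the residual `go_a1ZX` keeps recursing on the same `wild_a216` binder.

```haskell
-- Reconstructed from the -dverbose-core2core output above; the real
-- Foo.hs is not shown in this thread, so treat the details as guesses.
module Foo where

import Prelude hiding (takeWhile)
import Data.List (unfoldr)
import GHC.Exts (build)

-- A fusion-friendly takeWhile, written in build/foldr form so the
-- fold/build rule can fire; this matches the INLINE unfolding shown
-- for Foo.takeWhile in the dump.
{-# INLINE takeWhile #-}
takeWhile :: (a -> Bool) -> [a] -> [a]
takeWhile p xs =
  build (\c n -> foldr (\x r -> if p x then c x r else n) n xs)

-- Matches Foo.foo's inferred type (Int -> c -> c) -> c -> t -> c.
-- The step function returns Just (m, m): the seed never changes,
-- mirroring go_a1ZX calling itself with wild_a216 unchanged.
foo :: (Int -> c -> c) -> c -> t -> c
foo c n _ =
  (foldr c n . takeWhile (/= 1))
    (unfoldr (\m -> if m <= 10 then Just (m, m) else Nothing)
             (negate 9))
```

With this source, the trace's rule firings line up: `/=#` comes from inlining `(/= 1)` at `Int`, `negateInt#` from `negate 9`, and `fold/build` from `foldr c n` meeting the `build` inside `takeWhile`.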
==================== Simplifier ====================
Max iterations = 4
SimplMode {Phase = 2 [main], inline, rules, eta-expand, case-of-case}
Result size of Simplifier = {terms: 47, types: 37, coercions: 0}

lvl_s229 :: [GHC.Types.Int]
[LclId, Str=DmdType, Unf=Unf{Src=, TopLvl=True, Arity=0, Value=False, ConLike=False, WorkFree=False, Expandable=False, Guidance=IF_ARGS [] 152 40}]
lvl_s229 = GHC.Base.build @ GHC.Types.Int (\ (@ b1_a1ZU) (c_a1ZV [OS=OneShot] :: GHC.Types.Int -> b1_a1ZU -> b1_a1ZU) (n_a1ZW [OS=OneShot] :: b1_a1ZU) -> letrec { go_a1ZX [Occ=LoopBreaker] :: GHC.Types.Int -> b1_a1ZU [LclId, Arity=1, Str=DmdType, Unf=Unf{Src=, TopLvl=False, Arity=1, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [20] 82 0}] go_a1ZX = \ (b2_a1ZY :: GHC.Types.Int) -> case b2_a1ZY of wild_a216 { GHC.Types.I# x_a218 -> case GHC.Prim.tagToEnum# @ GHC.Types.Bool (GHC.Prim.<=# x_a218 10) of _ [Occ=Dead] { GHC.Types.False -> n_a1ZW; GHC.Types.True -> c_a1ZV wild_a216 (go_a1ZX wild_a216) } }; } in go_a1ZX (GHC.Types.I# (-9)))

Foo.foo :: forall t_a1Z7 c_a1Z8.
(GHC.Types.Int -> c_a1Z8 -> c_a1Z8) -> c_a1Z8 -> t_a1Z7 -> c_a1Z8
[LclIdX, Arity=3, Str=DmdType, Unf=Unf{Src=, TopLvl=True, Arity=3, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [60 0 0] 120 0}]
Foo.foo = \ (@ t_a1Za) (@ c_a1Zb) (c_a1HT :: GHC.Types.Int -> c_a1Zb -> c_a1Zb) (n_a1HU :: c_a1Zb) _ [Occ=Dead] -> GHC.Base.foldr @ GHC.Types.Int @ c_a1Zb (\ (x_a1HR :: GHC.Types.Int) (r_a1HS [OS=OneShot] :: c_a1Zb) -> case x_a1HR of wild_a20P { GHC.Types.I# x_a20R -> case x_a20R of _ [Occ=Dead] { __DEFAULT -> c_a1HT wild_a20P r_a1HS; 1 -> n_a1HU } }) n_a1HU lvl_s229

*** Simplifier:

==================== Occurrence analysis ====================
lvl_s229 [Occ=OnceL] :: [GHC.Types.Int]
[LclId, Str=DmdType, Unf=Unf{Src=, TopLvl=True, Arity=0, Value=False, ConLike=False, WorkFree=False, Expandable=False, Guidance=IF_ARGS [] 152 40}]
lvl_s229 = GHC.Base.build @ GHC.Types.Int (\ (@ b1_a1ZU) (c_a1ZV [Occ=OnceL!, OS=OneShot] :: GHC.Types.Int -> b1_a1ZU -> b1_a1ZU) (n_a1ZW [Occ=OnceL, OS=OneShot] :: b1_a1ZU) -> letrec { go_a1ZX [Occ=LoopBreaker] :: GHC.Types.Int -> b1_a1ZU [LclId, Arity=1, Str=DmdType, Unf=Unf{Src=, TopLvl=False, Arity=1, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [20] 82 0}] go_a1ZX = \ (b2_a1ZY [Occ=Once!] :: GHC.Types.Int) -> case b2_a1ZY of wild_a216 { GHC.Types.I# x_a218 [Occ=Once] -> case GHC.Prim.tagToEnum# @ GHC.Types.Bool (GHC.Prim.<=# x_a218 10) of _ [Occ=Dead] { GHC.Types.False -> n_a1ZW; GHC.Types.True -> c_a1ZV wild_a216 (go_a1ZX wild_a216) } }; } in go_a1ZX (GHC.Types.I# (-9)))

Foo.foo :: forall t_a1Z7 c_a1Z8. (GHC.Types.Int -> c_a1Z8 -> c_a1Z8) -> c_a1Z8 -> t_a1Z7 -> c_a1Z8
[LclIdX, Arity=3, Str=DmdType, Unf=Unf{Src=, TopLvl=True, Arity=3, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [60 0 0] 120 0}]
Foo.foo = \ (@ t_a1Za) (@ c_a1Zb) (c_a1HT [Occ=OnceL!]
:: GHC.Types.Int -> c_a1Zb -> c_a1Zb) (n_a1HU :: c_a1Zb) _ [Occ=Dead] -> GHC.Base.foldr @ GHC.Types.Int @ c_a1Zb (\ (x_a1HR [Occ=Once!] :: GHC.Types.Int) (r_a1HS [Occ=Once, OS=OneShot] :: c_a1Zb) -> case x_a1HR of wild_a20P { GHC.Types.I# x_a20R [Occ=Once!] -> case x_a20R of _ [Occ=Dead] { __DEFAULT -> c_a1HT wild_a20P r_a1HS; 1 -> n_a1HU } }) n_a1HU lvl_s229

SimplBind lvl_s229
Considering inlining: GHC.Base.build  arg infos [ValueArg]  uf arity 1  interesting continuation RhsCtxt  some_benefit True  is exp: True  is work-free: True  guidance ALWAYS_IF(unsat_ok=False,boring_ok=False)  ANSWER = YES
Inlining done: GHC.Base.build
Inlined fn: \ (@ a) (g [Occ=Once!] :: forall b. (a -> b -> b) -> b -> b) -> g @ [a] (GHC.Types.: @ a) (GHC.Types.[] @ a)
Cont: ApplyTo nodup (TYPE GHC.Types.Int)
      ApplyTo nodup (\ (@ b1) (c [Occ=OnceL!, OS=OneShot] :: GHC.Types.Int -> b1 -> b1) (n [Occ=OnceL, OS=OneShot] :: b1) -> letrec { go [Occ=LoopBreaker] :: GHC.Types.Int -> b1 [LclId, Arity=1, Str=DmdType, Unf=Unf{Src=, TopLvl=False, Arity=1, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [20] 82 0}] go = \ (b2 [Occ=Once!]
:: GHC.Types.Int) -> case b2 of wild { GHC.Types.I# x [Occ=Once] -> case GHC.Prim.tagToEnum# @ GHC.Types.Bool (GHC.Prim.<=# x 10) of _ [Occ=Dead] { GHC.Types.False -> n; GHC.Types.True -> c wild (go wild) } }; } in go (GHC.Types.I# (-9)))
      Stop[RhsCtxt] [GHC.Types.Int]
SimplBind go
Considering inlining: wild_a216  arg infos []  uf arity 0  interesting continuation BoringCtxt  some_benefit False  is exp: True  is work-free: True  guidance IF_ARGS [] 10 20  discounted size = 0  ANSWER = NO
Considering inlining: wild_a216  arg infos []  uf arity 0  interesting continuation BoringCtxt  some_benefit False  is exp: True  is work-free: True  guidance IF_ARGS [] 10 20  discounted size = 0  ANSWER = NO
SimplBind foo
Inactive unfolding: foldr
Considering inlining: wild_a20P  arg infos []  uf arity 0  interesting continuation BoringCtxt  some_benefit False  is exp: True  is work-free: True  guidance IF_ARGS [] 10 20  discounted size = 0  ANSWER = NO
Considering inlining: lvl_s229  arg infos []  uf arity 0  interesting continuation RuleArgCtxt  some_benefit False  is exp: False  is work-free: False  guidance IF_ARGS [] 30 0  discounted size = 20  ANSWER = NO
Result size of Simplifier iteration=1 = {terms: 43, types: 34, coercions: 0}

==================== Occurrence analysis ====================
Rec {
go_a1ZX [Occ=LoopBreaker] :: GHC.Types.Int -> [GHC.Types.Int]
[LclId, Arity=1, Str=DmdType, Unf=Unf{Src=, TopLvl=False, Arity=1, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [20] 62 40}]
go_a1ZX = \ (b2_a1ZY [Occ=Once!]
:: GHC.Types.Int) -> case b2_a1ZY of wild_a216 { GHC.Types.I# x_a218 [Occ=Once] -> case GHC.Prim.tagToEnum# @ GHC.Types.Bool (GHC.Prim.<=# x_a218 10) of _ [Occ=Dead] { GHC.Types.False -> GHC.Types.[] @ GHC.Types.Int; GHC.Types.True -> GHC.Types.: @ GHC.Types.Int wild_a216 (go_a1ZX wild_a216) } }
end Rec }

lvl_s229 [Occ=OnceL] :: [GHC.Types.Int]
[LclId, Str=DmdType, Unf=Unf{Src=, TopLvl=True, Arity=0, Value=False, ConLike=False, WorkFree=False, Expandable=False, Guidance=IF_ARGS [] 30 0}]
lvl_s229 = go_a1ZX (GHC.Types.I# (-9))

Foo.foo :: forall t_a1Z7 c_a1Z8. (GHC.Types.Int -> c_a1Z8 -> c_a1Z8) -> c_a1Z8 -> t_a1Z7 -> c_a1Z8
[LclIdX, Arity=3, Str=DmdType, Unf=Unf{Src=, TopLvl=True, Arity=3, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [60 0 0] 120 0}]
Foo.foo = \ (@ t_a1Za) (@ c_a1Zb) (c_a1HT [Occ=OnceL!] :: GHC.Types.Int -> c_a1Zb -> c_a1Zb) (n_a1HU :: c_a1Zb) _ [Occ=Dead] -> GHC.Base.foldr @ GHC.Types.Int @ c_a1Zb (\ (x_a1HR [Occ=Once!] :: GHC.Types.Int) (r_a1HS [Occ=Once, OS=OneShot] :: c_a1Zb) -> case x_a1HR of wild_a20P { GHC.Types.I# x_a20R [Occ=Once!]
-> case x_a20R of _ [Occ=Dead] { __DEFAULT -> c_a1HT wild_a20P r_a1HS; 1 -> n_a1HU } }) n_a1HU lvl_s229

SimplBind go
Considering inlining: wild_a216  arg infos []  uf arity 0  interesting continuation BoringCtxt  some_benefit False  is exp: True  is work-free: True  guidance IF_ARGS [] 10 20  discounted size = 0  ANSWER = NO
Considering inlining: wild_a216  arg infos []  uf arity 0  interesting continuation BoringCtxt  some_benefit False  is exp: True  is work-free: True  guidance IF_ARGS [] 10 20  discounted size = 0  ANSWER = NO
SimplBind lvl_s229
SimplBind foo
Inactive unfolding: foldr
Considering inlining: wild_a20P  arg infos []  uf arity 0  interesting continuation BoringCtxt  some_benefit False  is exp: True  is work-free: True  guidance IF_ARGS [] 10 20  discounted size = 0  ANSWER = NO
Considering inlining: lvl_s229  arg infos []  uf arity 0  interesting continuation RuleArgCtxt  some_benefit False  is exp: False  is work-free: False  guidance IF_ARGS [] 30 0  discounted size = 20  ANSWER = NO

==================== Simplifier ====================
Max iterations = 4
SimplMode {Phase = 1 [main], inline, rules, eta-expand, case-of-case}
Result size of Simplifier = {terms: 43, types: 34, coercions: 0}

Rec {
go_a1ZX [Occ=LoopBreaker] :: GHC.Types.Int -> [GHC.Types.Int]
[LclId, Arity=1, Str=DmdType, Unf=Unf{Src=, TopLvl=True, Arity=1, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [20] 62 40}]
go_a1ZX = \ (b2_a1ZY :: GHC.Types.Int) -> case b2_a1ZY of wild_a216 { GHC.Types.I# x_a218 -> case GHC.Prim.tagToEnum# @ GHC.Types.Bool (GHC.Prim.<=# x_a218 10) of _ [Occ=Dead] { GHC.Types.False -> GHC.Types.[] @ GHC.Types.Int; GHC.Types.True -> GHC.Types.: @ GHC.Types.Int wild_a216 (go_a1ZX wild_a216) } }
end Rec }

lvl_s229 :: [GHC.Types.Int]
[LclId, Str=DmdType, Unf=Unf{Src=, TopLvl=True, Arity=0, Value=False, ConLike=False, WorkFree=False, Expandable=False, Guidance=IF_ARGS [] 30 0}]
lvl_s229 = go_a1ZX (GHC.Types.I# (-9))

Foo.foo :: forall t_a1Z7 c_a1Z8.
(GHC.Types.Int -> c_a1Z8 -> c_a1Z8) -> c_a1Z8 -> t_a1Z7 -> c_a1Z8
[LclIdX, Arity=3, Str=DmdType, Unf=Unf{Src=, TopLvl=True, Arity=3, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [60 0 0] 120 0}]
Foo.foo = \ (@ t_a1Za) (@ c_a1Zb) (c_a1HT :: GHC.Types.Int -> c_a1Zb -> c_a1Zb) (n_a1HU :: c_a1Zb) _ [Occ=Dead] -> GHC.Base.foldr @ GHC.Types.Int @ c_a1Zb (\ (x_a1HR :: GHC.Types.Int) (r_a1HS [OS=OneShot] :: c_a1Zb) -> case x_a1HR of wild_a20P { GHC.Types.I# x_a20R -> case x_a20R of _ [Occ=Dead] { __DEFAULT -> c_a1HT wild_a20P r_a1HS; 1 -> n_a1HU } }) n_a1HU lvl_s229

*** Simplifier:

==================== Occurrence analysis ====================
Rec {
go_a1ZX [Occ=LoopBreaker] :: GHC.Types.Int -> [GHC.Types.Int]
[LclId, Arity=1, Str=DmdType, Unf=Unf{Src=, TopLvl=True, Arity=1, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [20] 62 40}]
go_a1ZX = \ (b2_a1ZY [Occ=Once!] :: GHC.Types.Int) -> case b2_a1ZY of wild_a216 { GHC.Types.I# x_a218 [Occ=Once] -> case GHC.Prim.tagToEnum# @ GHC.Types.Bool (GHC.Prim.<=# x_a218 10) of _ [Occ=Dead] { GHC.Types.False -> GHC.Types.[] @ GHC.Types.Int; GHC.Types.True -> GHC.Types.: @ GHC.Types.Int wild_a216 (go_a1ZX wild_a216) } }
end Rec }

lvl_s229 [Occ=OnceL] :: [GHC.Types.Int]
[LclId, Str=DmdType, Unf=Unf{Src=, TopLvl=True, Arity=0, Value=False, ConLike=False, WorkFree=False, Expandable=False, Guidance=IF_ARGS [] 30 0}]
lvl_s229 = go_a1ZX (GHC.Types.I# (-9))

Foo.foo :: forall t_a1Z7 c_a1Z8. (GHC.Types.Int -> c_a1Z8 -> c_a1Z8) -> c_a1Z8 -> t_a1Z7 -> c_a1Z8
[LclIdX, Arity=3, Str=DmdType, Unf=Unf{Src=, TopLvl=True, Arity=3, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [60 0 0] 120 0}]
Foo.foo = \ (@ t_a1Za) (@ c_a1Zb) (c_a1HT [Occ=OnceL!] :: GHC.Types.Int -> c_a1Zb -> c_a1Zb) (n_a1HU :: c_a1Zb) _ [Occ=Dead] -> GHC.Base.foldr @ GHC.Types.Int @ c_a1Zb (\ (x_a1HR [Occ=Once!]
:: GHC.Types.Int) (r_a1HS [Occ=Once, OS=OneShot] :: c_a1Zb) -> case x_a1HR of wild_a20P { GHC.Types.I# x_a20R [Occ=Once!] -> case x_a20R of _ [Occ=Dead] { __DEFAULT -> c_a1HT wild_a20P r_a1HS; 1 -> n_a1HU } }) n_a1HU lvl_s229

SimplBind go
Considering inlining: wild_a216  arg infos []  uf arity 0  interesting continuation BoringCtxt  some_benefit False  is exp: True  is work-free: True  guidance IF_ARGS [] 10 20  discounted size = 0  ANSWER = NO
Considering inlining: wild_a216  arg infos []  uf arity 0  interesting continuation BoringCtxt  some_benefit False  is exp: True  is work-free: True  guidance IF_ARGS [] 10 20  discounted size = 0  ANSWER = NO
SimplBind lvl_s229
SimplBind foo
Considering inlining: GHC.Base.foldr  arg infos [ValueArg, TrivArg, TrivArg]  uf arity 2  interesting continuation BoringCtxt  some_benefit True  is exp: True  is work-free: True  guidance ALWAYS_IF(unsat_ok=False,boring_ok=False)  ANSWER = YES
Inlining done: GHC.Base.foldr
Inlined fn: \ (@ a) (@ b) (k [Occ=OnceL!] :: a -> b -> b) (z [Occ=OnceL] :: b) (eta [Occ=Once] :: [a]) -> letrec { go [Occ=LoopBreaker] :: [a] -> b [LclId, Arity=1, Str=DmdType] go = \ (ds [Occ=Once!] :: [a]) -> case ds of _ [Occ=Dead] { [] -> z; : y [Occ=Once] ys [Occ=Once] -> k y (go ys) }; } in go eta
Cont: ApplyTo nodup (TYPE GHC.Types.Int)
      ApplyTo nodup (TYPE c)
      ApplyTo nodup (\ (x [Occ=Once!] :: GHC.Types.Int) (r [Occ=Once, OS=OneShot] :: c) -> case x of wild { GHC.Types.I# x [Occ=Once!]
-> case x of _ [Occ=Dead] { __DEFAULT -> c wild r; 1 -> n } })
      ApplyTo nodup n
      ApplyTo nodup lvl_s229
      Stop[BoringCtxt] c
SimplBind go
Considering inlining: wild_a20P  arg infos []  uf arity 0  interesting continuation BoringCtxt  some_benefit False  is exp: True  is work-free: True  guidance IF_ARGS [] 10 20  discounted size = 0  ANSWER = NO
Considering inlining: lvl_s229  arg infos []  uf arity 0  interesting continuation BoringCtxt  some_benefit False  is exp: False  is work-free: False  guidance IF_ARGS [] 30 0  discounted size = 20  ANSWER = NO
Result size of Simplifier iteration=1 = {terms: 48, types: 40, coercions: 0}

==================== Occurrence analysis ====================
Rec {
go_a1ZX [Occ=LoopBreaker] :: GHC.Types.Int -> [GHC.Types.Int]
[LclId, Arity=1, Str=DmdType, Unf=Unf{Src=, TopLvl=True, Arity=1, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [20] 62 40}]
go_a1ZX = \ (b2_a1ZY [Occ=Once!] :: GHC.Types.Int) -> case b2_a1ZY of wild_a216 { GHC.Types.I# x_a218 [Occ=Once] -> case GHC.Prim.tagToEnum# @ GHC.Types.Bool (GHC.Prim.<=# x_a218 10) of _ [Occ=Dead] { GHC.Types.False -> GHC.Types.[] @ GHC.Types.Int; GHC.Types.True -> GHC.Types.: @ GHC.Types.Int wild_a216 (go_a1ZX wild_a216) } }
end Rec }

lvl_s229 [Occ=OnceL] :: [GHC.Types.Int]
[LclId, Str=DmdType, Unf=Unf{Src=, TopLvl=True, Arity=0, Value=False, ConLike=False, WorkFree=False, Expandable=False, Guidance=IF_ARGS [] 30 0}]
lvl_s229 = go_a1ZX (GHC.Types.I# (-9))

Foo.foo :: forall t_a1Z7 c_a1Z8. (GHC.Types.Int -> c_a1Z8 -> c_a1Z8) -> c_a1Z8 -> t_a1Z7 -> c_a1Z8
[LclIdX, Arity=3, Str=DmdType, Unf=Unf{Src=, TopLvl=True, Arity=3, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [60 0 0] 140 0}]
Foo.foo = \ (@ t_a1Za) (@ c_a1Zb) (c_a1HT [Occ=OnceL!]
:: GHC.Types.Int -> c_a1Zb -> c_a1Zb) (n_a1HU [Occ=OnceL*] :: c_a1Zb) _ [Occ=Dead] -> letrec { go_a1ZD [Occ=LoopBreaker] :: [GHC.Types.Int] -> c_a1Zb [LclId, Arity=1, Str=DmdType, Unf=Unf{Src=, TopLvl=False, Arity=1, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [30] 100 0}] go_a1ZD = \ (ds_a1ZE [Occ=Once!] :: [GHC.Types.Int]) -> case ds_a1ZE of _ [Occ=Dead] { [] -> n_a1HU; : y_a1ZJ [Occ=Once!] ys_a1ZK [Occ=Once] -> case y_a1ZJ of wild_a20P { GHC.Types.I# x_a20R [Occ=Once!] -> case x_a20R of _ [Occ=Dead] { __DEFAULT -> c_a1HT wild_a20P (go_a1ZD ys_a1ZK); 1 -> n_a1HU } } }; } in go_a1ZD lvl_s229

SimplBind go
Considering inlining: wild_a216  arg infos []  uf arity 0  interesting continuation BoringCtxt  some_benefit False  is exp: True  is work-free: True  guidance IF_ARGS [] 10 20  discounted size = 0  ANSWER = NO
Considering inlining: wild_a216  arg infos []  uf arity 0  interesting continuation BoringCtxt  some_benefit False  is exp: True  is work-free: True  guidance IF_ARGS [] 10 20  discounted size = 0  ANSWER = NO
SimplBind lvl_s229
SimplBind foo
SimplBind go
Considering inlining: wild_a20P  arg infos []  uf arity 0  interesting continuation BoringCtxt  some_benefit False  is exp: True  is work-free: True  guidance IF_ARGS [] 10 20  discounted size = 0  ANSWER = NO
Considering inlining: lvl_s229  arg infos []  uf arity 0  interesting continuation BoringCtxt  some_benefit False  is exp: False  is work-free: False  guidance IF_ARGS [] 30 0  discounted size = 20  ANSWER = NO

==================== Simplifier ====================
Max iterations = 4
SimplMode {Phase = 0 [main], inline, rules, eta-expand, case-of-case}
Result size of Simplifier = {terms: 48, types: 40, coercions: 0}

Rec {
go_a1ZX [Occ=LoopBreaker] :: GHC.Types.Int -> [GHC.Types.Int]
[LclId, Arity=1, Str=DmdType, Unf=Unf{Src=, TopLvl=True, Arity=1, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [20] 62 40}]
go_a1ZX = \ (b2_a1ZY :: GHC.Types.Int) -> case b2_a1ZY of wild_a216 {
GHC.Types.I# x_a218 -> case GHC.Prim.tagToEnum# @ GHC.Types.Bool (GHC.Prim.<=# x_a218 10) of _ [Occ=Dead] { GHC.Types.False -> GHC.Types.[] @ GHC.Types.Int; GHC.Types.True -> GHC.Types.: @ GHC.Types.Int wild_a216 (go_a1ZX wild_a216) } }
end Rec }

lvl_s229 :: [GHC.Types.Int]
[LclId, Str=DmdType, Unf=Unf{Src=, TopLvl=True, Arity=0, Value=False, ConLike=False, WorkFree=False, Expandable=False, Guidance=IF_ARGS [] 30 0}]
lvl_s229 = go_a1ZX (GHC.Types.I# (-9))

Foo.foo :: forall t_a1Z7 c_a1Z8. (GHC.Types.Int -> c_a1Z8 -> c_a1Z8) -> c_a1Z8 -> t_a1Z7 -> c_a1Z8
[LclIdX, Arity=3, Str=DmdType, Unf=Unf{Src=, TopLvl=True, Arity=3, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [60 0 0] 140 0}]
Foo.foo = \ (@ t_a1Za) (@ c_a1Zb) (c_a1HT :: GHC.Types.Int -> c_a1Zb -> c_a1Zb) (n_a1HU :: c_a1Zb) _ [Occ=Dead] -> letrec { go_a1ZD [Occ=LoopBreaker] :: [GHC.Types.Int] -> c_a1Zb [LclId, Arity=1, Str=DmdType, Unf=Unf{Src=, TopLvl=False, Arity=1, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [30] 100 0}] go_a1ZD = \ (ds_a1ZE :: [GHC.Types.Int]) -> case ds_a1ZE of _ [Occ=Dead] { [] -> n_a1HU; : y_a1ZJ ys_a1ZK -> case y_a1ZJ of wild_a20P { GHC.Types.I# x_a20R -> case x_a20R of _ [Occ=Dead] { __DEFAULT -> c_a1HT wild_a20P (go_a1ZD ys_a1ZK); 1 -> n_a1HU } } }; } in go_a1ZD lvl_s229

*** Called arity analysis:

==================== Called arity analysis ====================
Result size of Called arity analysis = {terms: 48, types: 40, coercions: 0}

Rec {
go_a1ZX [Occ=LoopBreaker] :: GHC.Types.Int -> [GHC.Types.Int]
[LclId, Arity=1, CallArity=1, Str=DmdType, Unf=Unf{Src=, TopLvl=True, Arity=1, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [20] 62 40}]
go_a1ZX = \ (b2_a1ZY :: GHC.Types.Int) -> case b2_a1ZY of wild_a216 { GHC.Types.I# x_a218 -> case GHC.Prim.tagToEnum# @ GHC.Types.Bool (GHC.Prim.<=# x_a218 10) of _ [Occ=Dead] { GHC.Types.False -> GHC.Types.[] @ GHC.Types.Int; GHC.Types.True -> GHC.Types.: @
GHC.Types.Int wild_a216 (go_a1ZX wild_a216) } }
end Rec }

lvl_s229 :: [GHC.Types.Int]
[LclId, Str=DmdType, Unf=Unf{Src=, TopLvl=True, Arity=0, Value=False, ConLike=False, WorkFree=False, Expandable=False, Guidance=IF_ARGS [] 30 0}]
lvl_s229 = go_a1ZX (GHC.Types.I# (-9))

Foo.foo :: forall t_a1Z7 c_a1Z8. (GHC.Types.Int -> c_a1Z8 -> c_a1Z8) -> c_a1Z8 -> t_a1Z7 -> c_a1Z8
[LclIdX, Arity=3, Str=DmdType, Unf=Unf{Src=, TopLvl=True, Arity=3, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [60 0 0] 140 0}]
Foo.foo = \ (@ t_a1Za) (@ c_a1Zb) (c_a1HT :: GHC.Types.Int -> c_a1Zb -> c_a1Zb) (n_a1HU :: c_a1Zb) _ [Occ=Dead] -> letrec { go_a1ZD [Occ=LoopBreaker] :: [GHC.Types.Int] -> c_a1Zb [LclId, Arity=1, CallArity=1, Str=DmdType, Unf=Unf{Src=, TopLvl=False, Arity=1, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [30] 100 0}] go_a1ZD = \ (ds_a1ZE :: [GHC.Types.Int]) -> case ds_a1ZE of _ [Occ=Dead] { [] -> n_a1HU; : y_a1ZJ ys_a1ZK -> case y_a1ZJ of wild_a20P { GHC.Types.I# x_a20R -> case x_a20R of _ [Occ=Dead] { __DEFAULT -> c_a1HT wild_a20P (go_a1ZD ys_a1ZK); 1 -> n_a1HU } } }; } in go_a1ZD lvl_s229

*** Simplifier:

==================== Occurrence analysis ====================
Rec {
go_a1ZX [Occ=LoopBreaker] :: GHC.Types.Int -> [GHC.Types.Int]
[LclId, Arity=1, CallArity=1, Str=DmdType, Unf=Unf{Src=, TopLvl=True, Arity=1, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [20] 62 40}]
go_a1ZX = \ (b2_a1ZY [Occ=Once!]
:: GHC.Types.Int) -> case b2_a1ZY of wild_a216 { GHC.Types.I# x_a218 [Occ=Once] -> case GHC.Prim.tagToEnum# @ GHC.Types.Bool (GHC.Prim.<=# x_a218 10) of _ [Occ=Dead] { GHC.Types.False -> GHC.Types.[] @ GHC.Types.Int; GHC.Types.True -> GHC.Types.: @ GHC.Types.Int wild_a216 (go_a1ZX wild_a216) } }
end Rec }

lvl_s229 [Occ=OnceL] :: [GHC.Types.Int]
[LclId, Str=DmdType, Unf=Unf{Src=, TopLvl=True, Arity=0, Value=False, ConLike=False, WorkFree=False, Expandable=False, Guidance=IF_ARGS [] 30 0}]
lvl_s229 = go_a1ZX (GHC.Types.I# (-9))

Foo.foo :: forall t_a1Z7 c_a1Z8. (GHC.Types.Int -> c_a1Z8 -> c_a1Z8) -> c_a1Z8 -> t_a1Z7 -> c_a1Z8
[LclIdX, Arity=3, Str=DmdType, Unf=Unf{Src=, TopLvl=True, Arity=3, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [60 0 0] 140 0}]
Foo.foo = \ (@ t_a1Za) (@ c_a1Zb) (c_a1HT [Occ=OnceL!] :: GHC.Types.Int -> c_a1Zb -> c_a1Zb) (n_a1HU [Occ=OnceL*] :: c_a1Zb) _ [Occ=Dead] -> letrec { go_a1ZD [Occ=LoopBreaker] :: [GHC.Types.Int] -> c_a1Zb [LclId, Arity=1, CallArity=1, Str=DmdType, Unf=Unf{Src=, TopLvl=False, Arity=1, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [30] 100 0}] go_a1ZD = \ (ds_a1ZE [Occ=Once!] :: [GHC.Types.Int]) -> case ds_a1ZE of _ [Occ=Dead] { [] -> n_a1HU; : y_a1ZJ [Occ=Once!] ys_a1ZK [Occ=Once] -> case y_a1ZJ of wild_a20P { GHC.Types.I# x_a20R [Occ=Once!]
-> case x_a20R of _ [Occ=Dead] { __DEFAULT -> c_a1HT wild_a20P (go_a1ZD ys_a1ZK); 1 -> n_a1HU } } }; } in go_a1ZD lvl_s229

SimplBind go
Considering inlining: wild_a216  arg infos []  uf arity 0  interesting continuation BoringCtxt  some_benefit False  is exp: True  is work-free: True  guidance IF_ARGS [] 10 20  discounted size = 0  ANSWER = NO
Considering inlining: wild_a216  arg infos []  uf arity 0  interesting continuation BoringCtxt  some_benefit False  is exp: True  is work-free: True  guidance IF_ARGS [] 10 20  discounted size = 0  ANSWER = NO
SimplBind lvl_s229
SimplBind foo
SimplBind go
Considering inlining: wild_a20P  arg infos []  uf arity 0  interesting continuation BoringCtxt  some_benefit False  is exp: True  is work-free: True  guidance IF_ARGS [] 10 20  discounted size = 0  ANSWER = NO
Considering inlining: lvl_s229  arg infos []  uf arity 0  interesting continuation BoringCtxt  some_benefit False  is exp: False  is work-free: False  guidance IF_ARGS [] 30 0  discounted size = 20  ANSWER = NO

==================== Simplifier ====================
Max iterations = 4
SimplMode {Phase = 0 [post-call-arity], inline, rules, eta-expand, case-of-case}
Result size of Simplifier = {terms: 48, types: 40, coercions: 0}

Rec {
go_a1ZX [Occ=LoopBreaker] :: GHC.Types.Int -> [GHC.Types.Int]
[LclId, Arity=1, CallArity=1, Str=DmdType, Unf=Unf{Src=, TopLvl=True, Arity=1, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [20] 62 40}]
go_a1ZX = \ (b2_a1ZY :: GHC.Types.Int) -> case b2_a1ZY of wild_a216 { GHC.Types.I# x_a218 -> case GHC.Prim.tagToEnum# @ GHC.Types.Bool (GHC.Prim.<=# x_a218 10) of _ [Occ=Dead] { GHC.Types.False -> GHC.Types.[] @ GHC.Types.Int; GHC.Types.True -> GHC.Types.: @ GHC.Types.Int wild_a216 (go_a1ZX wild_a216) } }
end Rec }

lvl_s229 :: [GHC.Types.Int]
[LclId, Str=DmdType, Unf=Unf{Src=, TopLvl=True, Arity=0, Value=False, ConLike=False, WorkFree=False, Expandable=False, Guidance=IF_ARGS [] 30 0}]
lvl_s229 = go_a1ZX (GHC.Types.I# (-9))

Foo.foo :: forall t_a1Z7
c_a1Z8. (GHC.Types.Int -> c_a1Z8 -> c_a1Z8) -> c_a1Z8 -> t_a1Z7 -> c_a1Z8
[LclIdX, Arity=3, Str=DmdType, Unf=Unf{Src=, TopLvl=True, Arity=3, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [60 0 0] 140 0}]
Foo.foo = \ (@ t_a1Za) (@ c_a1Zb) (c_a1HT :: GHC.Types.Int -> c_a1Zb -> c_a1Zb) (n_a1HU :: c_a1Zb) _ [Occ=Dead] -> letrec { go_a1ZD [Occ=LoopBreaker] :: [GHC.Types.Int] -> c_a1Zb [LclId, Arity=1, CallArity=1, Str=DmdType, Unf=Unf{Src=, TopLvl=False, Arity=1, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [30] 100 0}] go_a1ZD = \ (ds_a1ZE :: [GHC.Types.Int]) -> case ds_a1ZE of _ [Occ=Dead] { [] -> n_a1HU; : y_a1ZJ ys_a1ZK -> case y_a1ZJ of wild_a20P { GHC.Types.I# x_a20R -> case x_a20R of _ [Occ=Dead] { __DEFAULT -> c_a1HT wild_a20P (go_a1ZD ys_a1ZK); 1 -> n_a1HU } } }; } in go_a1ZD lvl_s229

*** Demand analysis:

==================== Demand analysis ====================
Result size of Demand analysis = {terms: 48, types: 40, coercions: 0}

Rec {
go_a1ZX [Occ=LoopBreaker] :: GHC.Types.Int -> [GHC.Types.Int]
[LclId, Arity=1, CallArity=1, Str=DmdType , Unf=Unf{Src=, TopLvl=True, Arity=1, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [20] 62 40}]
go_a1ZX = \ (b2_a1ZY [Dmd=] :: GHC.Types.Int) -> case b2_a1ZY of wild_a216 [Dmd=] { GHC.Types.I# x_a218 -> case GHC.Prim.tagToEnum# @ GHC.Types.Bool (GHC.Prim.<=# x_a218 10) of _ [Occ=Dead, Dmd=] { GHC.Types.False -> GHC.Types.[] @ GHC.Types.Int; GHC.Types.True -> GHC.Types.: @ GHC.Types.Int wild_a216 (go_a1ZX wild_a216) } }
end Rec }

lvl_s229 :: [GHC.Types.Int]
[LclId, Str=DmdType, Unf=Unf{Src=, TopLvl=True, Arity=0, Value=False, ConLike=False, WorkFree=False, Expandable=False, Guidance=IF_ARGS [] 30 0}]
lvl_s229 = go_a1ZX (GHC.Types.I# (-9))

Foo.foo :: forall t_a1Z7 c_a1Z8.
(GHC.Types.Int -> c_a1Z8 -> c_a1Z8) -> c_a1Z8 -> t_a1Z7 -> c_a1Z8
[LclIdX, Arity=3, Str=DmdType , Unf=Unf{Src=, TopLvl=True, Arity=3, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [60 0 0] 140 0}]
Foo.foo = \ (@ t_a1Za) (@ c_a1Zb) (c_a1HT [Dmd=] :: GHC.Types.Int -> c_a1Zb -> c_a1Zb) (n_a1HU :: c_a1Zb) _ [Occ=Dead, Dmd=] -> letrec { go_a1ZD [Occ=LoopBreaker] :: [GHC.Types.Int] -> c_a1Zb [LclId, Arity=1, CallArity=1, Str=DmdType , Unf=Unf{Src=, TopLvl=False, Arity=1, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [30] 100 0}] go_a1ZD = \ (ds_a1ZE [Dmd=] :: [GHC.Types.Int]) -> case ds_a1ZE of _ [Occ=Dead, Dmd=] { [] -> n_a1HU; : y_a1ZJ [Dmd=] ys_a1ZK [Dmd=] -> case y_a1ZJ of wild_a20P { GHC.Types.I# x_a20R [Dmd=] -> case x_a20R of _ [Occ=Dead, Dmd=] { __DEFAULT -> c_a1HT wild_a20P (go_a1ZD ys_a1ZK); 1 -> n_a1HU } } }; } in go_a1ZD lvl_s229

*** Worker Wrapper binds:

==================== Worker Wrapper binds ====================
Result size of Worker Wrapper binds = {terms: 79, types: 74, coercions: 0}

Rec {
$wgo_s23r [Occ=LoopBreaker] :: GHC.Prim.Int# -> [GHC.Types.Int]
[LclId, Arity=1, Str=DmdType ]
$wgo_s23r = \ (ww_s23p :: GHC.Prim.Int#) -> let { w_s23m [Dmd=] :: GHC.Types.Int [LclId, Str=DmdType] w_s23m = GHC.Types.I# ww_s23p } in (\ (b2_a1ZY [Dmd=] :: GHC.Types.Int) -> case b2_a1ZY of wild_a216 [Dmd=] { GHC.Types.I# x_a218 -> case GHC.Prim.tagToEnum# @ GHC.Types.Bool (GHC.Prim.<=# x_a218 10) of _ [Occ=Dead, Dmd=] { GHC.Types.False -> GHC.Types.[] @ GHC.Types.Int; GHC.Types.True -> GHC.Types.: @ GHC.Types.Int wild_a216 (go_a1ZX wild_a216) } }) w_s23m

go_a1ZX [InlPrag=INLINE[0]] :: GHC.Types.Int -> [GHC.Types.Int]
[LclId, Arity=1, CallArity=1, Str=DmdType , Unf=Unf{Src=InlineStable, TopLvl=True, Arity=1, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=ALWAYS_IF(unsat_ok=True,boring_ok=False) Tmpl= \ (w_s23m [Occ=Once!, Dmd=] :: GHC.Types.Int) -> case w_s23m of _ [Occ=Dead] { GHC.Types.I#
ww_s23p [Occ=Once] -> $wgo_s23r ww_s23p }}]
go_a1ZX = \ (w_s23m [Dmd=] :: GHC.Types.Int) -> case w_s23m of ww_s23o { GHC.Types.I# ww_s23p -> $wgo_s23r ww_s23p }
end Rec }

lvl_s229 :: [GHC.Types.Int]
[LclId, Str=DmdType, Unf=Unf{Src=, TopLvl=True, Arity=0, Value=False, ConLike=False, WorkFree=False, Expandable=False, Guidance=IF_ARGS [] 30 0}]
lvl_s229 = go_a1ZX (GHC.Types.I# (-9))

$wfoo_s23w :: forall t_a1Z7 c_a1Z8. (GHC.Types.Int -> c_a1Z8 -> c_a1Z8) -> c_a1Z8 -> c_a1Z8
[LclId, Arity=2, Str=DmdType ]
$wfoo_s23w = \ (@ t_a1Z7) (@ c_a1Z8) (w_s23t [Dmd=] :: GHC.Types.Int -> c_a1Z8 -> c_a1Z8) (w_s23u :: c_a1Z8) -> let { w_s23v [Dmd=] :: t_a1Z7 [LclId, Str=DmdType] w_s23v = Control.Exception.Base.absentError @ t_a1Z7 "w_s23v t"# } in (\ (@ t_a1Za) (@ c_a1Zb) (c_a1HT [Dmd=] :: GHC.Types.Int -> c_a1Zb -> c_a1Zb) (n_a1HU :: c_a1Zb) _ [Occ=Dead, Dmd=] -> letrec { go_a1ZD [Occ=LoopBreaker] :: [GHC.Types.Int] -> c_a1Zb [LclId, Arity=1, CallArity=1, Str=DmdType , Unf=Unf{Src=, TopLvl=False, Arity=1, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [30] 100 0}] go_a1ZD = \ (ds_a1ZE [Dmd=] :: [GHC.Types.Int]) -> case ds_a1ZE of _ [Occ=Dead, Dmd=] { [] -> n_a1HU; : y_a1ZJ [Dmd=] ys_a1ZK [Dmd=] -> case y_a1ZJ of wild_a20P { GHC.Types.I# x_a20R [Dmd=] -> case x_a20R of _ [Occ=Dead, Dmd=] { __DEFAULT -> c_a1HT wild_a20P (go_a1ZD ys_a1ZK); 1 -> n_a1HU } } }; } in go_a1ZD lvl_s229) @ t_a1Z7 @ c_a1Z8 w_s23t w_s23u w_s23v

Foo.foo [InlPrag=INLINE[0]] :: forall t_a1Z7 c_a1Z8.
(GHC.Types.Int -> c_a1Z8 -> c_a1Z8) -> c_a1Z8 -> t_a1Z7 -> c_a1Z8
[LclIdX, Arity=3, Str=DmdType , Unf=Unf{Src=InlineStable, TopLvl=True, Arity=3, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=ALWAYS_IF(unsat_ok=True,boring_ok=False) Tmpl= \ (@ t_a1Z7) (@ c_a1Z8) (w_s23t [Occ=Once, Dmd=] :: GHC.Types.Int -> c_a1Z8 -> c_a1Z8) (w_s23u [Occ=Once] :: c_a1Z8) _ [Occ=Dead, Dmd=] -> $wfoo_s23w @ t_a1Z7 @ c_a1Z8 w_s23t w_s23u}]
Foo.foo = \ (@ t_a1Z7) (@ c_a1Z8) (w_s23t [Dmd=] :: GHC.Types.Int -> c_a1Z8 -> c_a1Z8) (w_s23u :: c_a1Z8) (w_s23v [Dmd=] :: t_a1Z7) -> $wfoo_s23w @ t_a1Z7 @ c_a1Z8 w_s23t w_s23u

*** Simplifier:

==================== Occurrence analysis ====================
Rec {
go_a1ZX [InlPrag=INLINE[0]] :: GHC.Types.Int -> [GHC.Types.Int]
[LclId, Arity=1, CallArity=1, Str=DmdType , Unf=Unf{Src=InlineStable, TopLvl=True, Arity=1, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=ALWAYS_IF(unsat_ok=True,boring_ok=False) Tmpl= \ (w_s23m [Occ=Once!, Dmd=] :: GHC.Types.Int) -> case w_s23m of _ [Occ=Dead] { GHC.Types.I# ww_s23p [Occ=Once] -> $wgo_s23r ww_s23p }}]
go_a1ZX = \ (w_s23m [Occ=Once!, Dmd=] :: GHC.Types.Int) -> case w_s23m of _ [Occ=Dead] { GHC.Types.I# ww_s23p [Occ=Once] -> $wgo_s23r ww_s23p }

$wgo_s23r [Occ=LoopBreaker] :: GHC.Prim.Int# -> [GHC.Types.Int]
[LclId, Arity=1, Str=DmdType ]
$wgo_s23r = \ (ww_s23p [Occ=Once] :: GHC.Prim.Int#) -> let { w_s23m [Occ=Once, Dmd=] :: GHC.Types.Int [LclId, Str=DmdType] w_s23m = GHC.Types.I# ww_s23p } in (\ (b2_a1ZY [Occ=Once!, Dmd=, OS=OneShot] :: GHC.Types.Int) -> case b2_a1ZY of wild_a216 [Dmd=] { GHC.Types.I# x_a218 [Occ=Once] -> case GHC.Prim.tagToEnum# @ GHC.Types.Bool (GHC.Prim.<=# x_a218 10) of _ [Occ=Dead, Dmd=] { GHC.Types.False -> GHC.Types.[] @ GHC.Types.Int; GHC.Types.True -> GHC.Types.: @ GHC.Types.Int wild_a216 (go_a1ZX wild_a216) } }) w_s23m
end Rec }

lvl_s229 [Occ=OnceL] :: [GHC.Types.Int]
[LclId, Str=DmdType, Unf=Unf{Src=, TopLvl=True, Arity=0, Value=False,
ConLike=False, WorkFree=False, Expandable=False, Guidance=IF_ARGS [] 30 0}]
lvl_s229 = go_a1ZX (GHC.Types.I# (-9))

$wfoo_s23w :: forall t_a1Z7 c_a1Z8. (GHC.Types.Int -> c_a1Z8 -> c_a1Z8) -> c_a1Z8 -> c_a1Z8
[LclId, Arity=2, Str=DmdType ]
$wfoo_s23w = \ (@ t_a1Z7) (@ c_a1Z8) (w_s23t [Occ=Once, Dmd=] :: GHC.Types.Int -> c_a1Z8 -> c_a1Z8) (w_s23u [Occ=Once] :: c_a1Z8) -> let { w_s23v [Occ=Once, Dmd=] :: t_a1Z7 [LclId, Str=DmdType] w_s23v = Control.Exception.Base.absentError @ t_a1Z7 "w_s23v t"# } in (\ (@ t_a1Za) (@ c_a1Zb) (c_a1HT [Occ=OnceL!, Dmd=, OS=OneShot] :: GHC.Types.Int -> c_a1Zb -> c_a1Zb) (n_a1HU [Occ=OnceL*, OS=OneShot] :: c_a1Zb) _ [Occ=Dead, Dmd=, OS=OneShot] -> letrec { go_a1ZD [Occ=LoopBreaker] :: [GHC.Types.Int] -> c_a1Zb [LclId, Arity=1, CallArity=1, Str=DmdType , Unf=Unf{Src=, TopLvl=False, Arity=1, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [30] 100 0}] go_a1ZD = \ (ds_a1ZE [Occ=Once!, Dmd=] :: [GHC.Types.Int]) -> case ds_a1ZE of _ [Occ=Dead, Dmd=] { [] -> n_a1HU; : y_a1ZJ [Occ=Once!, Dmd=] ys_a1ZK [Occ=Once, Dmd=] -> case y_a1ZJ of wild_a20P { GHC.Types.I# x_a20R [Occ=Once!, Dmd=] -> case x_a20R of _ [Occ=Dead, Dmd=] { __DEFAULT -> c_a1HT wild_a20P (go_a1ZD ys_a1ZK); 1 -> n_a1HU } } }; } in go_a1ZD lvl_s229) @ t_a1Z7 @ c_a1Z8 w_s23t w_s23u w_s23v

Foo.foo [InlPrag=INLINE[0]] :: forall t_a1Z7 c_a1Z8.
(GHC.Types.Int -> c_a1Z8 -> c_a1Z8) -> c_a1Z8 -> t_a1Z7 -> c_a1Z8
[LclIdX, Arity=3, Str=DmdType , Unf=Unf{Src=InlineStable, TopLvl=True, Arity=3, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=ALWAYS_IF(unsat_ok=True,boring_ok=False) Tmpl= \ (@ t_a1Z7) (@ c_a1Z8) (w_s23t [Occ=Once, Dmd=] :: GHC.Types.Int -> c_a1Z8 -> c_a1Z8) (w_s23u [Occ=Once] :: c_a1Z8) _ [Occ=Dead, Dmd=] -> $wfoo_s23w @ t_a1Z7 @ c_a1Z8 w_s23t w_s23u}]
Foo.foo = \ (@ t_a1Z7) (@ c_a1Z8) (w_s23t [Occ=Once, Dmd=] :: GHC.Types.Int -> c_a1Z8 -> c_a1Z8) (w_s23u [Occ=Once] :: c_a1Z8) _ [Occ=Dead, Dmd=] -> $wfoo_s23w @ t_a1Z7 @ c_a1Z8 w_s23t w_s23u

SimplBind go
SimplBind $wgo
Considering inlining: wild_a216  arg infos []  uf arity 0  interesting continuation BoringCtxt  some_benefit False  is exp: True  is work-free: True  guidance IF_ARGS [] 10 20  discounted size = 0  ANSWER = NO
Considering inlining: go_a1ZX  arg infos [ValueArg]  uf arity 1  interesting continuation BoringCtxt  some_benefit True  is exp: True  is work-free: True  guidance ALWAYS_IF(unsat_ok=True,boring_ok=False)  ANSWER = YES
Inlining done: go
Inlined fn: \ (w_s23m [Occ=Once!] :: GHC.Types.Int) -> case w_s23m of _ [Occ=Dead] { GHC.Types.I# ww_s23p [Occ=Once] -> $wgo ww_s23p }
Cont: ApplyTo nodup wild
      Stop[BoringCtxt] [GHC.Types.Int]
Considering inlining: wild_a216  arg infos []  uf arity 0  interesting continuation CaseCtxt  some_benefit False  is exp: True  is work-free: True  guidance IF_ARGS [] 10 20  discounted size = -30  ANSWER = NO
SimplBind lvl_s229
Considering inlining: go_a1ZX  arg infos [ValueArg]  uf arity 1  interesting continuation RhsCtxt  some_benefit True  is exp: True  is work-free: True  guidance ALWAYS_IF(unsat_ok=True,boring_ok=False)  ANSWER = YES
Inlining done: go
Inlined fn: \ (w_s23m [Occ=Once!]
:: GHC.Types.Int) -> case w_s23m of _ [Occ=Dead] { GHC.Types.I# ww_s23p [Occ=Once] -> $wgo ww_s23p } Cont: ApplyTo nodup (GHC.Types.I# (-9)) Stop[RhsCtxt] [GHC.Types.Int] SimplBind $wfoo SimplBind go Considering inlining: wild_a20P arg infos [] uf arity 0 interesting continuation BoringCtxt some_benefit False is exp: True is work-free: True guidance IF_ARGS [] 10 20 discounted size = 0 ANSWER = NO Considering inlining: lvl_s229 arg infos [] uf arity 0 interesting continuation BoringCtxt some_benefit False is exp: False is work-free: False guidance IF_ARGS [] 20 0 discounted size = 10 ANSWER = NO SimplBind foo Considering inlining: $wfoo_s23w arg infos [TrivArg, TrivArg] uf arity 2 interesting continuation BoringCtxt some_benefit False is exp: True is work-free: True guidance IF_ARGS [60 0] 140 0 discounted size = 110 ANSWER = NO Considering inlining: $wfoo_s23w arg infos [TrivArg, TrivArg] uf arity 2 interesting continuation BoringCtxt some_benefit False is exp: True is work-free: True guidance IF_ARGS [60 0] 140 0 discounted size = 110 ANSWER = NO Result size of Simplifier iteration=1 = {terms: 62, types: 60, coercions: 0} ==================== Occurrence analysis ==================== Rec { $wgo_s23r [Occ=LoopBreaker] :: GHC.Prim.Int# -> [GHC.Types.Int] [LclId, Arity=1, Str=DmdType , Unf=Unf{Src=, TopLvl=True, Arity=1, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [0] 72 40}] $wgo_s23r = \ (ww_s23p :: GHC.Prim.Int#) -> let { wild_a216 [Occ=Once] :: GHC.Types.Int [LclId, Str=DmdType, Unf=Unf{Src=, TopLvl=False, Arity=0, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [] 10 20}] wild_a216 = GHC.Types.I# ww_s23p } in case GHC.Prim.tagToEnum# @ GHC.Types.Bool (GHC.Prim.<=# ww_s23p 10) of _ [Occ=Dead, Dmd=] { GHC.Types.False -> GHC.Types.[] @ GHC.Types.Int; GHC.Types.True -> GHC.Types.: @ GHC.Types.Int wild_a216 ($wgo_s23r ww_s23p) } end Rec } lvl_s229 [Occ=OnceL] :: [GHC.Types.Int] [LclId, Str=DmdType, 
Unf=Unf{Src=, TopLvl=True, Arity=0, Value=False, ConLike=False, WorkFree=False, Expandable=False, Guidance=IF_ARGS [] 20 0}] lvl_s229 = $wgo_s23r (-9) $wfoo_s23w :: forall t_a1Z7 c_a1Z8. (GHC.Types.Int -> c_a1Z8 -> c_a1Z8) -> c_a1Z8 -> c_a1Z8 [LclId, Arity=2, Str=DmdType , Unf=Unf{Src=, TopLvl=True, Arity=2, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [60 0] 140 0}] $wfoo_s23w = \ (@ t_a1Z7) (@ c_a1Z8) (w_s23t [Occ=OnceL!] :: GHC.Types.Int -> c_a1Z8 -> c_a1Z8) (w_s23u [Occ=OnceL*] :: c_a1Z8) -> letrec { go_a1ZD [Occ=LoopBreaker] :: [GHC.Types.Int] -> c_a1Z8 [LclId, Arity=1, CallArity=1, Str=DmdType , Unf=Unf{Src=, TopLvl=False, Arity=1, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [30] 100 0}] go_a1ZD = \ (ds_a1ZE [Occ=Once!] :: [GHC.Types.Int]) -> case ds_a1ZE of _ [Occ=Dead, Dmd=] { [] -> w_s23u; : y_a1ZJ [Occ=Once!, Dmd=] ys_a1ZK [Occ=Once, Dmd=] -> case y_a1ZJ of wild_a20P { GHC.Types.I# x_a20R [Occ=Once!, Dmd=] -> case x_a20R of _ [Occ=Dead, Dmd=] { __DEFAULT -> w_s23t wild_a20P (go_a1ZD ys_a1ZK); 1 -> w_s23u } } }; } in go_a1ZD lvl_s229 Foo.foo [InlPrag=INLINE[0]] :: forall t_a1Z7 c_a1Z8. 
(GHC.Types.Int -> c_a1Z8 -> c_a1Z8) -> c_a1Z8 -> t_a1Z7 -> c_a1Z8 [LclIdX, Arity=3, Str=DmdType , Unf=Unf{Src=InlineStable, TopLvl=True, Arity=3, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=ALWAYS_IF(unsat_ok=True,boring_ok=True) Tmpl= \ (@ t_a1Z7) (@ c_a1Z8) (w_s23t [Occ=Once] :: GHC.Types.Int -> c_a1Z8 -> c_a1Z8) (w_s23u [Occ=Once] :: c_a1Z8) _ [Occ=Dead, Dmd=] -> $wfoo_s23w @ t_a1Z7 @ c_a1Z8 w_s23t w_s23u}] Foo.foo = \ (@ t_a1Z7) (@ c_a1Z8) (w_s23t [Occ=Once] :: GHC.Types.Int -> c_a1Z8 -> c_a1Z8) (w_s23u [Occ=Once] :: c_a1Z8) _ [Occ=Dead, Dmd=] -> $wfoo_s23w @ t_a1Z7 @ c_a1Z8 w_s23t w_s23u SimplBind $wgo SimplBind lvl_s229 SimplBind $wfoo SimplBind go Considering inlining: wild_a20P arg infos [] uf arity 0 interesting continuation BoringCtxt some_benefit False is exp: True is work-free: True guidance IF_ARGS [] 10 20 discounted size = 0 ANSWER = NO Considering inlining: lvl_s229 arg infos [] uf arity 0 interesting continuation BoringCtxt some_benefit False is exp: False is work-free: False guidance IF_ARGS [] 20 0 discounted size = 10 ANSWER = NO SimplBind foo Considering inlining: $wfoo_s23w arg infos [TrivArg, TrivArg] uf arity 2 interesting continuation BoringCtxt some_benefit False is exp: True is work-free: True guidance IF_ARGS [60 0] 140 0 discounted size = 110 ANSWER = NO Considering inlining: $wfoo_s23w arg infos [TrivArg, TrivArg] uf arity 2 interesting continuation BoringCtxt some_benefit False is exp: True is work-free: True guidance IF_ARGS [60 0] 140 0 discounted size = 110 ANSWER = NO Result size of Simplifier iteration=2 = {terms: 53, types: 53, coercions: 0} ==================== Occurrence analysis ==================== Rec { $wgo_s23r [Occ=LoopBreaker] :: GHC.Prim.Int# -> [GHC.Types.Int] [LclId, Arity=1, Str=DmdType , Unf=Unf{Src=, TopLvl=True, Arity=1, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [0] 62 40}] $wgo_s23r = \ (ww_s23p :: GHC.Prim.Int#) -> case GHC.Prim.tagToEnum# @ GHC.Types.Bool 
(GHC.Prim.<=# ww_s23p 10) of _ [Occ=Dead, Dmd=] { GHC.Types.False -> GHC.Types.[] @ GHC.Types.Int; GHC.Types.True -> GHC.Types.: @ GHC.Types.Int (GHC.Types.I# ww_s23p) ($wgo_s23r ww_s23p) } end Rec } lvl_s229 [Occ=OnceL] :: [GHC.Types.Int] [LclId, Str=DmdType, Unf=Unf{Src=, TopLvl=True, Arity=0, Value=False, ConLike=False, WorkFree=False, Expandable=False, Guidance=IF_ARGS [] 20 0}] lvl_s229 = $wgo_s23r (-9) $wfoo_s23w :: forall t_a1Z7 c_a1Z8. (GHC.Types.Int -> c_a1Z8 -> c_a1Z8) -> c_a1Z8 -> c_a1Z8 [LclId, Arity=2, Str=DmdType , Unf=Unf{Src=, TopLvl=True, Arity=2, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [60 0] 140 0}] $wfoo_s23w = \ (@ t_a1Z7) (@ c_a1Z8) (w_s23t [Occ=OnceL!] :: GHC.Types.Int -> c_a1Z8 -> c_a1Z8) (w_s23u [Occ=OnceL*] :: c_a1Z8) -> letrec { go_a1ZD [Occ=LoopBreaker] :: [GHC.Types.Int] -> c_a1Z8 [LclId, Arity=1, CallArity=1, Str=DmdType , Unf=Unf{Src=, TopLvl=False, Arity=1, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [30] 100 0}] go_a1ZD = \ (ds_a1ZE [Occ=Once!] :: [GHC.Types.Int]) -> case ds_a1ZE of _ [Occ=Dead, Dmd=] { [] -> w_s23u; : y_a1ZJ [Occ=Once!, Dmd=] ys_a1ZK [Occ=Once, Dmd=] -> case y_a1ZJ of wild_a20P { GHC.Types.I# x_a20R [Occ=Once!, Dmd=] -> case x_a20R of _ [Occ=Dead, Dmd=] { __DEFAULT -> w_s23t wild_a20P (go_a1ZD ys_a1ZK); 1 -> w_s23u } } }; } in go_a1ZD lvl_s229 Foo.foo [InlPrag=INLINE[0]] :: forall t_a1Z7 c_a1Z8. 
(GHC.Types.Int -> c_a1Z8 -> c_a1Z8) -> c_a1Z8 -> t_a1Z7 -> c_a1Z8 [LclIdX, Arity=3, Str=DmdType , Unf=Unf{Src=InlineStable, TopLvl=True, Arity=3, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=ALWAYS_IF(unsat_ok=True,boring_ok=True) Tmpl= \ (@ t_a1Z7) (@ c_a1Z8) (w_s23t [Occ=Once] :: GHC.Types.Int -> c_a1Z8 -> c_a1Z8) (w_s23u [Occ=Once] :: c_a1Z8) _ [Occ=Dead, Dmd=] -> $wfoo_s23w @ t_a1Z7 @ c_a1Z8 w_s23t w_s23u}] Foo.foo = \ (@ t_a1Z7) (@ c_a1Z8) (w_s23t [Occ=Once] :: GHC.Types.Int -> c_a1Z8 -> c_a1Z8) (w_s23u [Occ=Once] :: c_a1Z8) _ [Occ=Dead, Dmd=] -> $wfoo_s23w @ t_a1Z7 @ c_a1Z8 w_s23t w_s23u SimplBind $wgo SimplBind lvl_s229 SimplBind $wfoo SimplBind go Considering inlining: wild_a20P arg infos [] uf arity 0 interesting continuation BoringCtxt some_benefit False is exp: True is work-free: True guidance IF_ARGS [] 10 20 discounted size = 0 ANSWER = NO Considering inlining: lvl_s229 arg infos [] uf arity 0 interesting continuation BoringCtxt some_benefit False is exp: False is work-free: False guidance IF_ARGS [] 20 0 discounted size = 10 ANSWER = NO SimplBind foo Considering inlining: $wfoo_s23w arg infos [TrivArg, TrivArg] uf arity 2 interesting continuation BoringCtxt some_benefit False is exp: True is work-free: True guidance IF_ARGS [60 0] 140 0 discounted size = 110 ANSWER = NO Considering inlining: $wfoo_s23w arg infos [TrivArg, TrivArg] uf arity 2 interesting continuation BoringCtxt some_benefit False is exp: True is work-free: True guidance IF_ARGS [60 0] 140 0 discounted size = 110 ANSWER = NO ==================== Simplifier ==================== Max iterations = 4 SimplMode {Phase = 0 [post-worker-wrapper], inline, rules, eta-expand, case-of-case} Result size of Simplifier = {terms: 53, types: 53, coercions: 0} Rec { $wgo_s23r [Occ=LoopBreaker] :: GHC.Prim.Int# -> [GHC.Types.Int] [LclId, Arity=1, Str=DmdType , Unf=Unf{Src=, TopLvl=True, Arity=1, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [0] 62 40}] 
$wgo_s23r = \ (ww_s23p :: GHC.Prim.Int#) -> case GHC.Prim.tagToEnum# @ GHC.Types.Bool (GHC.Prim.<=# ww_s23p 10) of _ [Occ=Dead, Dmd=] { GHC.Types.False -> GHC.Types.[] @ GHC.Types.Int; GHC.Types.True -> GHC.Types.: @ GHC.Types.Int (GHC.Types.I# ww_s23p) ($wgo_s23r ww_s23p) } end Rec } lvl_s229 :: [GHC.Types.Int] [LclId, Str=DmdType, Unf=Unf{Src=, TopLvl=True, Arity=0, Value=False, ConLike=False, WorkFree=False, Expandable=False, Guidance=IF_ARGS [] 20 0}] lvl_s229 = $wgo_s23r (-9) $wfoo_s23w :: forall t_a1Z7 c_a1Z8. (GHC.Types.Int -> c_a1Z8 -> c_a1Z8) -> c_a1Z8 -> c_a1Z8 [LclId, Arity=2, Str=DmdType , Unf=Unf{Src=, TopLvl=True, Arity=2, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [60 0] 140 0}] $wfoo_s23w = \ (@ t_a1Z7) (@ c_a1Z8) (w_s23t :: GHC.Types.Int -> c_a1Z8 -> c_a1Z8) (w_s23u :: c_a1Z8) -> letrec { go_a1ZD [Occ=LoopBreaker] :: [GHC.Types.Int] -> c_a1Z8 [LclId, Arity=1, CallArity=1, Str=DmdType , Unf=Unf{Src=, TopLvl=False, Arity=1, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [30] 100 0}] go_a1ZD = \ (ds_a1ZE :: [GHC.Types.Int]) -> case ds_a1ZE of _ [Occ=Dead, Dmd=] { [] -> w_s23u; : y_a1ZJ [Dmd=] ys_a1ZK [Dmd=] -> case y_a1ZJ of wild_a20P { GHC.Types.I# x_a20R [Dmd=] -> case x_a20R of _ [Occ=Dead, Dmd=] { __DEFAULT -> w_s23t wild_a20P (go_a1ZD ys_a1ZK); 1 -> w_s23u } } }; } in go_a1ZD lvl_s229 Foo.foo [InlPrag=INLINE[0]] :: forall t_a1Z7 c_a1Z8. 
    (GHC.Types.Int -> c_a1Z8 -> c_a1Z8) -> c_a1Z8 -> t_a1Z7 -> c_a1Z8
[LclIdX,
 Arity=3,
 Str=DmdType ,
 Unf=Unf{Src=InlineStable, TopLvl=True, Arity=3, Value=True,
         ConLike=True, WorkFree=True, Expandable=True,
         Guidance=ALWAYS_IF(unsat_ok=True,boring_ok=True)
         Tmpl= \ (@ t_a1Z7)
                 (@ c_a1Z8)
                 (w_s23t [Occ=Once] :: GHC.Types.Int -> c_a1Z8 -> c_a1Z8)
                 (w_s23u [Occ=Once] :: c_a1Z8)
                 _ [Occ=Dead, Dmd=] ->
                 $wfoo_s23w @ t_a1Z7 @ c_a1Z8 w_s23t w_s23u}]
Foo.foo =
  \ (@ t_a1Z7)
    (@ c_a1Z8)
    (w_s23t :: GHC.Types.Int -> c_a1Z8 -> c_a1Z8)
    (w_s23u :: c_a1Z8)
    _ [Occ=Dead, Dmd=] ->
    $wfoo_s23w @ t_a1Z7 @ c_a1Z8 w_s23t w_s23u

*** Float out(FOS {Lam = Just 0, Consts = True, PAPs = True}):

==================== Levels added: ====================
<$wgo_s23r,<0,0>>
<$wgo_s23r,<0,0>> =
  \ > ->
    case GHC.Prim.tagToEnum# @ GHC.Types.Bool (GHC.Prim.<=# ww_s23p 10)
    of > {
      GHC.Types.False -> GHC.Types.[] @ GHC.Types.Int;
      GHC.Types.True ->
        GHC.Types.: @ GHC.Types.Int (GHC.Types.I# ww_s23p) ($wgo_s23r ww_s23p)
    };
> > = $wgo_s23r (-9)
<$wfoo_s23w,<0,0>>
<$wfoo_s23w,<0,0>> =
  \ > > > > ->
    letrec {
      > > =
        \ > ->
          case ds_a1ZE of > {
            [] -> w_s23u;
            : > > ->
              case y_a1ZJ of > {
                GHC.Types.I# > ->
                  case x_a20R of > {
                    __DEFAULT -> w_s23t y_a1ZJ (go_a1ZD ys_a1ZK);
                    1 -> w_s23u
                  }
              }
          };
    } in go_a1ZD lvl_s229
> > = \ > > > > > -> $wfoo_s23w @ t_a1Z7 @ c_a1Z8 w_s23t w_s23u

==================== Float out(FOS {Lam = Just 0, Consts = True, PAPs = True}) ====================
Result size of Float out(FOS {Lam = Just 0, Consts = True, PAPs = True})
  = {terms: 53, types: 53, coercions: 0}

Rec {
$wgo_s23r [Occ=LoopBreaker] :: GHC.Prim.Int# -> [GHC.Types.Int]
[LclId, Arity=1, Str=DmdType ]
$wgo_s23r =
  \ (ww_s23p :: GHC.Prim.Int#) ->
    case GHC.Prim.tagToEnum# @ GHC.Types.Bool (GHC.Prim.<=# ww_s23p 10)
    of _ [Occ=Dead, Dmd=] {
      GHC.Types.False -> GHC.Types.[] @ GHC.Types.Int;
      GHC.Types.True ->
        GHC.Types.: @ GHC.Types.Int (GHC.Types.I# ww_s23p) ($wgo_s23r ww_s23p)
    }
end Rec }

lvl_s229 :: [GHC.Types.Int]
[LclId, Str=DmdType]
lvl_s229 = $wgo_s23r
(-9) $wfoo_s23w :: forall t_a1Z7 c_a1Z8. (GHC.Types.Int -> c_a1Z8 -> c_a1Z8) -> c_a1Z8 -> c_a1Z8 [LclId, Arity=2, Str=DmdType ] $wfoo_s23w = \ (@ t_a1Z7) (@ c_a1Z8) (w_s23t :: GHC.Types.Int -> c_a1Z8 -> c_a1Z8) (w_s23u :: c_a1Z8) -> letrec { go_a1ZD [Occ=LoopBreaker] :: [GHC.Types.Int] -> c_a1Z8 [LclId, Arity=1, CallArity=1, Str=DmdType ] go_a1ZD = \ (ds_a1ZE :: [GHC.Types.Int]) -> case ds_a1ZE of _ [Occ=Dead, Dmd=] { [] -> w_s23u; : y_a1ZJ [Dmd=] ys_a1ZK [Dmd=] -> case y_a1ZJ of wild_a20P { GHC.Types.I# x_a20R [Dmd=] -> case x_a20R of _ [Occ=Dead, Dmd=] { __DEFAULT -> w_s23t y_a1ZJ (go_a1ZD ys_a1ZK); 1 -> w_s23u } } }; } in go_a1ZD lvl_s229 Foo.foo [InlPrag=INLINE[0]] :: forall t_a1Z7 c_a1Z8. (GHC.Types.Int -> c_a1Z8 -> c_a1Z8) -> c_a1Z8 -> t_a1Z7 -> c_a1Z8 [LclIdX, Arity=3, Str=DmdType , Unf=Unf{Src=InlineStable, TopLvl=True, Arity=3, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=ALWAYS_IF(unsat_ok=True,boring_ok=True) Tmpl= \ (@ t_a1Z7) (@ c_a1Z8) (w_s23t [Occ=Once] :: GHC.Types.Int -> c_a1Z8 -> c_a1Z8) (w_s23u [Occ=Once] :: c_a1Z8) _ [Occ=Dead, Dmd=] -> $wfoo_s23w @ t_a1Z7 @ c_a1Z8 w_s23t w_s23u}] Foo.foo = \ (@ t_a1Z7) (@ c_a1Z8) (w_s23t :: GHC.Types.Int -> c_a1Z8 -> c_a1Z8) (w_s23u :: c_a1Z8) _ [Occ=Dead, Dmd=] -> $wfoo_s23w @ t_a1Z7 @ c_a1Z8 w_s23t w_s23u *** Common sub-expression: ==================== Common sub-expression ==================== Result size of Common sub-expression = {terms: 53, types: 53, coercions: 0} Rec { $wgo_s23r [Occ=LoopBreaker] :: GHC.Prim.Int# -> [GHC.Types.Int] [LclId, Arity=1, Str=DmdType ] $wgo_s23r = \ (ww_s23p :: GHC.Prim.Int#) -> case GHC.Prim.tagToEnum# @ GHC.Types.Bool (GHC.Prim.<=# ww_s23p 10) of wild_Xd [Dmd=] { GHC.Types.False -> GHC.Types.[] @ GHC.Types.Int; GHC.Types.True -> GHC.Types.: @ GHC.Types.Int (GHC.Types.I# ww_s23p) ($wgo_s23r ww_s23p) } end Rec } lvl_s229 :: [GHC.Types.Int] [LclId, Str=DmdType] lvl_s229 = $wgo_s23r (-9) $wfoo_s23w :: forall t_a1Z7 c_a1Z8. 
(GHC.Types.Int -> c_a1Z8 -> c_a1Z8) -> c_a1Z8 -> c_a1Z8 [LclId, Arity=2, Str=DmdType ] $wfoo_s23w = \ (@ t_a1Z7) (@ c_a1Z8) (w_s23t :: GHC.Types.Int -> c_a1Z8 -> c_a1Z8) (w_s23u :: c_a1Z8) -> letrec { go_a1ZD [Occ=LoopBreaker] :: [GHC.Types.Int] -> c_a1Z8 [LclId, Arity=1, CallArity=1, Str=DmdType ] go_a1ZD = \ (ds_a1ZE :: [GHC.Types.Int]) -> case ds_a1ZE of wild_a1ZF [Dmd=] { [] -> w_s23u; : y_a1ZJ [Dmd=] ys_a1ZK [Dmd=] -> case y_a1ZJ of wild_a20P { GHC.Types.I# x_a20R [Dmd=] -> case x_a20R of wild_Xj [Dmd=] { __DEFAULT -> w_s23t y_a1ZJ (go_a1ZD ys_a1ZK); 1 -> w_s23u } } }; } in go_a1ZD lvl_s229 Foo.foo [InlPrag=INLINE[0]] :: forall t_a1Z7 c_a1Z8. (GHC.Types.Int -> c_a1Z8 -> c_a1Z8) -> c_a1Z8 -> t_a1Z7 -> c_a1Z8 [LclIdX, Arity=3, Str=DmdType , Unf=Unf{Src=InlineStable, TopLvl=True, Arity=3, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=ALWAYS_IF(unsat_ok=True,boring_ok=True) Tmpl= \ (@ t_a1Z7) (@ c_a1Z8) (w_s23t [Occ=Once] :: GHC.Types.Int -> c_a1Z8 -> c_a1Z8) (w_s23u [Occ=Once] :: c_a1Z8) _ [Occ=Dead, Dmd=] -> $wfoo_s23w @ t_a1Z7 @ c_a1Z8 w_s23t w_s23u}] Foo.foo = \ (@ t_a1Z7) (@ c_a1Z8) (w_s23t :: GHC.Types.Int -> c_a1Z8 -> c_a1Z8) (w_s23u :: c_a1Z8) _ [Occ=Dead, Dmd=] -> $wfoo_s23w @ t_a1Z7 @ c_a1Z8 w_s23t w_s23u *** Float inwards: ==================== Float inwards ==================== Result size of Float inwards = {terms: 53, types: 53, coercions: 0} Rec { $wgo_s23r [Occ=LoopBreaker] :: GHC.Prim.Int# -> [GHC.Types.Int] [LclId, Arity=1, Str=DmdType ] $wgo_s23r = \ (ww_s23p :: GHC.Prim.Int#) -> case GHC.Prim.tagToEnum# @ GHC.Types.Bool (GHC.Prim.<=# ww_s23p 10) of wild_Xd [Dmd=] { GHC.Types.False -> GHC.Types.[] @ GHC.Types.Int; GHC.Types.True -> GHC.Types.: @ GHC.Types.Int (GHC.Types.I# ww_s23p) ($wgo_s23r ww_s23p) } end Rec } lvl_s229 :: [GHC.Types.Int] [LclId, Str=DmdType] lvl_s229 = $wgo_s23r (-9) $wfoo_s23w :: forall t_a1Z7 c_a1Z8. 
(GHC.Types.Int -> c_a1Z8 -> c_a1Z8) -> c_a1Z8 -> c_a1Z8 [LclId, Arity=2, Str=DmdType ] $wfoo_s23w = \ (@ t_a1Z7) (@ c_a1Z8) (w_s23t :: GHC.Types.Int -> c_a1Z8 -> c_a1Z8) (w_s23u :: c_a1Z8) -> (letrec { go_a1ZD [Occ=LoopBreaker] :: [GHC.Types.Int] -> c_a1Z8 [LclId, Arity=1, CallArity=1, Str=DmdType ] go_a1ZD = \ (ds_a1ZE :: [GHC.Types.Int]) -> case ds_a1ZE of wild_a1ZF [Dmd=] { [] -> w_s23u; : y_a1ZJ [Dmd=] ys_a1ZK [Dmd=] -> case y_a1ZJ of wild_a20P { GHC.Types.I# x_a20R [Dmd=] -> case x_a20R of wild_Xj [Dmd=] { __DEFAULT -> w_s23t y_a1ZJ (go_a1ZD ys_a1ZK); 1 -> w_s23u } } }; } in go_a1ZD) lvl_s229 Foo.foo [InlPrag=INLINE[0]] :: forall t_a1Z7 c_a1Z8. (GHC.Types.Int -> c_a1Z8 -> c_a1Z8) -> c_a1Z8 -> t_a1Z7 -> c_a1Z8 [LclIdX, Arity=3, Str=DmdType , Unf=Unf{Src=InlineStable, TopLvl=True, Arity=3, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=ALWAYS_IF(unsat_ok=True,boring_ok=True) Tmpl= \ (@ t_a1Z7) (@ c_a1Z8) (w_s23t [Occ=Once] :: GHC.Types.Int -> c_a1Z8 -> c_a1Z8) (w_s23u [Occ=Once] :: c_a1Z8) _ [Occ=Dead, Dmd=] -> $wfoo_s23w @ t_a1Z7 @ c_a1Z8 w_s23t w_s23u}] Foo.foo = \ (@ t_a1Z7) (@ c_a1Z8) (w_s23t :: GHC.Types.Int -> c_a1Z8 -> c_a1Z8) (w_s23u :: c_a1Z8) _ [Occ=Dead, Dmd=] -> $wfoo_s23w @ t_a1Z7 @ c_a1Z8 w_s23t w_s23u *** Liberate case: ==================== Liberate case ==================== Result size of Liberate case = {terms: 53, types: 53, coercions: 0} Rec { $wgo_s23r [Occ=LoopBreaker] :: GHC.Prim.Int# -> [GHC.Types.Int] [LclId, Arity=1, Str=DmdType ] $wgo_s23r = \ (ww_s23p :: GHC.Prim.Int#) -> case GHC.Prim.tagToEnum# @ GHC.Types.Bool (GHC.Prim.<=# ww_s23p 10) of wild_Xd [Dmd=] { GHC.Types.False -> GHC.Types.[] @ GHC.Types.Int; GHC.Types.True -> GHC.Types.: @ GHC.Types.Int (GHC.Types.I# ww_s23p) ($wgo_s23r ww_s23p) } end Rec } lvl_s229 :: [GHC.Types.Int] [LclId, Str=DmdType] lvl_s229 = $wgo_s23r (-9) $wfoo_s23w :: forall t_a1Z7 c_a1Z8. 
(GHC.Types.Int -> c_a1Z8 -> c_a1Z8) -> c_a1Z8 -> c_a1Z8 [LclId, Arity=2, Str=DmdType ] $wfoo_s23w = \ (@ t_a1Z7) (@ c_a1Z8) (w_s23t :: GHC.Types.Int -> c_a1Z8 -> c_a1Z8) (w_s23u :: c_a1Z8) -> (letrec { go_a1ZD [Occ=LoopBreaker] :: [GHC.Types.Int] -> c_a1Z8 [LclId, Arity=1, CallArity=1, Str=DmdType ] go_a1ZD = \ (ds_a1ZE :: [GHC.Types.Int]) -> case ds_a1ZE of wild_a1ZF [Dmd=] { [] -> w_s23u; : y_a1ZJ [Dmd=] ys_a1ZK [Dmd=] -> case y_a1ZJ of wild_a20P { GHC.Types.I# x_a20R [Dmd=] -> case x_a20R of wild_Xj [Dmd=] { __DEFAULT -> w_s23t y_a1ZJ (go_a1ZD ys_a1ZK); 1 -> w_s23u } } }; } in go_a1ZD) lvl_s229 Foo.foo [InlPrag=INLINE[0]] :: forall t_a1Z7 c_a1Z8. (GHC.Types.Int -> c_a1Z8 -> c_a1Z8) -> c_a1Z8 -> t_a1Z7 -> c_a1Z8 [LclIdX, Arity=3, Str=DmdType , Unf=Unf{Src=InlineStable, TopLvl=True, Arity=3, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=ALWAYS_IF(unsat_ok=True,boring_ok=True) Tmpl= \ (@ t_a1Z7) (@ c_a1Z8) (w_s23t [Occ=Once] :: GHC.Types.Int -> c_a1Z8 -> c_a1Z8) (w_s23u [Occ=Once] :: c_a1Z8) _ [Occ=Dead, Dmd=] -> $wfoo_s23w @ t_a1Z7 @ c_a1Z8 w_s23t w_s23u}] Foo.foo = \ (@ t_a1Z7) (@ c_a1Z8) (w_s23t :: GHC.Types.Int -> c_a1Z8 -> c_a1Z8) (w_s23u :: c_a1Z8) _ [Occ=Dead, Dmd=] -> $wfoo_s23w @ t_a1Z7 @ c_a1Z8 w_s23t w_s23u *** Simplifier: ==================== Occurrence analysis ==================== Rec { $wgo_s23r [Occ=LoopBreaker] :: GHC.Prim.Int# -> [GHC.Types.Int] [LclId, Arity=1, Str=DmdType ] $wgo_s23r = \ (ww_s23p :: GHC.Prim.Int#) -> case GHC.Prim.tagToEnum# @ GHC.Types.Bool (GHC.Prim.<=# ww_s23p 10) of _ [Occ=Dead, Dmd=] { GHC.Types.False -> GHC.Types.[] @ GHC.Types.Int; GHC.Types.True -> GHC.Types.: @ GHC.Types.Int (GHC.Types.I# ww_s23p) ($wgo_s23r ww_s23p) } end Rec } lvl_s229 [Occ=OnceL] :: [GHC.Types.Int] [LclId, Str=DmdType] lvl_s229 = $wgo_s23r (-9) $wfoo_s23w :: forall t_a1Z7 c_a1Z8. (GHC.Types.Int -> c_a1Z8 -> c_a1Z8) -> c_a1Z8 -> c_a1Z8 [LclId, Arity=2, Str=DmdType ] $wfoo_s23w = \ (@ t_a1Z7) (@ c_a1Z8) (w_s23t [Occ=OnceL!] 
:: GHC.Types.Int -> c_a1Z8 -> c_a1Z8) (w_s23u [Occ=OnceL*] :: c_a1Z8) -> (letrec { go_a1ZD [Occ=LoopBreaker] :: [GHC.Types.Int] -> c_a1Z8 [LclId, Arity=1, CallArity=1, Str=DmdType ] go_a1ZD = \ (ds_a1ZE [Occ=Once!] :: [GHC.Types.Int]) -> case ds_a1ZE of _ [Occ=Dead, Dmd=] { [] -> w_s23u; : y_a1ZJ [Occ=Once!, Dmd=] ys_a1ZK [Occ=Once, Dmd=] -> case y_a1ZJ of wild_a20P { GHC.Types.I# x_a20R [Occ=Once!, Dmd=] -> let { y_a1ZJ [Occ=Once] :: GHC.Types.Int [LclId, Str=DmdType] y_a1ZJ = wild_a20P } in case x_a20R of _ [Occ=Dead, Dmd=] { __DEFAULT -> w_s23t y_a1ZJ (go_a1ZD ys_a1ZK); 1 -> w_s23u } } }; } in go_a1ZD) lvl_s229 Foo.foo [InlPrag=INLINE[0]] :: forall t_a1Z7 c_a1Z8. (GHC.Types.Int -> c_a1Z8 -> c_a1Z8) -> c_a1Z8 -> t_a1Z7 -> c_a1Z8 [LclIdX, Arity=3, Str=DmdType , Unf=Unf{Src=InlineStable, TopLvl=True, Arity=3, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=ALWAYS_IF(unsat_ok=True,boring_ok=True) Tmpl= \ (@ t_a1Z7) (@ c_a1Z8) (w_s23t [Occ=Once] :: GHC.Types.Int -> c_a1Z8 -> c_a1Z8) (w_s23u [Occ=Once] :: c_a1Z8) _ [Occ=Dead, Dmd=] -> $wfoo_s23w @ t_a1Z7 @ c_a1Z8 w_s23t w_s23u}] Foo.foo = \ (@ t_a1Z7) (@ c_a1Z8) (w_s23t [Occ=Once] :: GHC.Types.Int -> c_a1Z8 -> c_a1Z8) (w_s23u [Occ=Once] :: c_a1Z8) _ [Occ=Dead, Dmd=] -> $wfoo_s23w @ t_a1Z7 @ c_a1Z8 w_s23t w_s23u SimplBind $wgo SimplBind lvl_s229 SimplBind $wfoo SimplBind go Considering inlining: wild_a20P arg infos [] uf arity 0 interesting continuation BoringCtxt some_benefit False is exp: True is work-free: True guidance IF_ARGS [] 10 20 discounted size = 0 ANSWER = NO Considering inlining: lvl_s229 arg infos [] uf arity 0 interesting continuation BoringCtxt some_benefit False is exp: False is work-free: False guidance IF_ARGS [] 20 0 discounted size = 10 ANSWER = NO SimplBind foo Considering inlining: $wfoo_s23w arg infos [TrivArg, TrivArg] uf arity 2 interesting continuation BoringCtxt some_benefit False is exp: True is work-free: True guidance IF_ARGS [60 0] 140 0 discounted size = 110 ANSWER = 
NO Considering inlining: $wfoo_s23w arg infos [TrivArg, TrivArg] uf arity 2 interesting continuation BoringCtxt some_benefit False is exp: True is work-free: True guidance IF_ARGS [60 0] 140 0 discounted size = 110 ANSWER = NO Result size of Simplifier iteration=1 = {terms: 53, types: 53, coercions: 0} ==================== Occurrence analysis ==================== Rec { $wgo_s23r [Occ=LoopBreaker] :: GHC.Prim.Int# -> [GHC.Types.Int] [LclId, Arity=1, Str=DmdType , Unf=Unf{Src=, TopLvl=True, Arity=1, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [0] 62 40}] $wgo_s23r = \ (ww_s23p :: GHC.Prim.Int#) -> case GHC.Prim.tagToEnum# @ GHC.Types.Bool (GHC.Prim.<=# ww_s23p 10) of _ [Occ=Dead, Dmd=] { GHC.Types.False -> GHC.Types.[] @ GHC.Types.Int; GHC.Types.True -> GHC.Types.: @ GHC.Types.Int (GHC.Types.I# ww_s23p) ($wgo_s23r ww_s23p) } end Rec } lvl_s229 [Occ=OnceL] :: [GHC.Types.Int] [LclId, Str=DmdType, Unf=Unf{Src=, TopLvl=True, Arity=0, Value=False, ConLike=False, WorkFree=False, Expandable=False, Guidance=IF_ARGS [] 20 0}] lvl_s229 = $wgo_s23r (-9) $wfoo_s23w :: forall t_a1Z7 c_a1Z8. (GHC.Types.Int -> c_a1Z8 -> c_a1Z8) -> c_a1Z8 -> c_a1Z8 [LclId, Arity=2, Str=DmdType , Unf=Unf{Src=, TopLvl=True, Arity=2, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [60 0] 140 0}] $wfoo_s23w = \ (@ t_a1Z7) (@ c_a1Z8) (w_s23t [Occ=OnceL!] :: GHC.Types.Int -> c_a1Z8 -> c_a1Z8) (w_s23u [Occ=OnceL*] :: c_a1Z8) -> letrec { go_a1ZD [Occ=LoopBreaker] :: [GHC.Types.Int] -> c_a1Z8 [LclId, Arity=1, CallArity=1, Str=DmdType , Unf=Unf{Src=, TopLvl=False, Arity=1, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [30] 100 0}] go_a1ZD = \ (ds_a1ZE [Occ=Once!] 
:: [GHC.Types.Int]) -> case ds_a1ZE of _ [Occ=Dead, Dmd=] { [] -> w_s23u; : y_a1ZJ [Occ=Once!, Dmd=] ys_a1ZK [Occ=Once, Dmd=] -> case y_a1ZJ of wild_a20P { GHC.Types.I# x_a20R [Occ=Once!, Dmd=] -> case x_a20R of _ [Occ=Dead, Dmd=] { __DEFAULT -> w_s23t wild_a20P (go_a1ZD ys_a1ZK); 1 -> w_s23u } } }; } in go_a1ZD lvl_s229 Foo.foo [InlPrag=INLINE[0]] :: forall t_a1Z7 c_a1Z8. (GHC.Types.Int -> c_a1Z8 -> c_a1Z8) -> c_a1Z8 -> t_a1Z7 -> c_a1Z8 [LclIdX, Arity=3, Str=DmdType , Unf=Unf{Src=InlineStable, TopLvl=True, Arity=3, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=ALWAYS_IF(unsat_ok=True,boring_ok=True) Tmpl= \ (@ t_a1Z7) (@ c_a1Z8) (w_s23t [Occ=Once] :: GHC.Types.Int -> c_a1Z8 -> c_a1Z8) (w_s23u [Occ=Once] :: c_a1Z8) _ [Occ=Dead, Dmd=] -> $wfoo_s23w @ t_a1Z7 @ c_a1Z8 w_s23t w_s23u}] Foo.foo = \ (@ t_a1Z7) (@ c_a1Z8) (w_s23t [Occ=Once] :: GHC.Types.Int -> c_a1Z8 -> c_a1Z8) (w_s23u [Occ=Once] :: c_a1Z8) _ [Occ=Dead, Dmd=] -> $wfoo_s23w @ t_a1Z7 @ c_a1Z8 w_s23t w_s23u SimplBind $wgo SimplBind lvl_s229 SimplBind $wfoo SimplBind go Considering inlining: wild_a20P arg infos [] uf arity 0 interesting continuation BoringCtxt some_benefit False is exp: True is work-free: True guidance IF_ARGS [] 10 20 discounted size = 0 ANSWER = NO Considering inlining: lvl_s229 arg infos [] uf arity 0 interesting continuation BoringCtxt some_benefit False is exp: False is work-free: False guidance IF_ARGS [] 20 0 discounted size = 10 ANSWER = NO SimplBind foo Considering inlining: $wfoo_s23w arg infos [TrivArg, TrivArg] uf arity 2 interesting continuation BoringCtxt some_benefit False is exp: True is work-free: True guidance IF_ARGS [60 0] 140 0 discounted size = 110 ANSWER = NO Considering inlining: $wfoo_s23w arg infos [TrivArg, TrivArg] uf arity 2 interesting continuation BoringCtxt some_benefit False is exp: True is work-free: True guidance IF_ARGS [60 0] 140 0 discounted size = 110 ANSWER = NO ==================== Simplifier ==================== Max iterations = 4 
SimplMode {Phase = 0 [post-liberate-case], inline, rules, eta-expand, case-of-case} Result size of Simplifier = {terms: 53, types: 53, coercions: 0} Rec { $wgo_s23r [Occ=LoopBreaker] :: GHC.Prim.Int# -> [GHC.Types.Int] [LclId, Arity=1, Str=DmdType , Unf=Unf{Src=, TopLvl=True, Arity=1, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [0] 62 40}] $wgo_s23r = \ (ww_s23p :: GHC.Prim.Int#) -> case GHC.Prim.tagToEnum# @ GHC.Types.Bool (GHC.Prim.<=# ww_s23p 10) of _ [Occ=Dead, Dmd=] { GHC.Types.False -> GHC.Types.[] @ GHC.Types.Int; GHC.Types.True -> GHC.Types.: @ GHC.Types.Int (GHC.Types.I# ww_s23p) ($wgo_s23r ww_s23p) } end Rec } lvl_s229 :: [GHC.Types.Int] [LclId, Str=DmdType, Unf=Unf{Src=, TopLvl=True, Arity=0, Value=False, ConLike=False, WorkFree=False, Expandable=False, Guidance=IF_ARGS [] 20 0}] lvl_s229 = $wgo_s23r (-9) $wfoo_s23w :: forall t_a1Z7 c_a1Z8. (GHC.Types.Int -> c_a1Z8 -> c_a1Z8) -> c_a1Z8 -> c_a1Z8 [LclId, Arity=2, Str=DmdType , Unf=Unf{Src=, TopLvl=True, Arity=2, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [60 0] 140 0}] $wfoo_s23w = \ (@ t_a1Z7) (@ c_a1Z8) (w_s23t :: GHC.Types.Int -> c_a1Z8 -> c_a1Z8) (w_s23u :: c_a1Z8) -> letrec { go_a1ZD [Occ=LoopBreaker] :: [GHC.Types.Int] -> c_a1Z8 [LclId, Arity=1, CallArity=1, Str=DmdType , Unf=Unf{Src=, TopLvl=False, Arity=1, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [30] 100 0}] go_a1ZD = \ (ds_a1ZE :: [GHC.Types.Int]) -> case ds_a1ZE of _ [Occ=Dead, Dmd=] { [] -> w_s23u; : y_a1ZJ [Dmd=] ys_a1ZK [Dmd=] -> case y_a1ZJ of wild_a20P { GHC.Types.I# x_a20R [Dmd=] -> case x_a20R of _ [Occ=Dead, Dmd=] { __DEFAULT -> w_s23t wild_a20P (go_a1ZD ys_a1ZK); 1 -> w_s23u } } }; } in go_a1ZD lvl_s229 Foo.foo [InlPrag=INLINE[0]] :: forall t_a1Z7 c_a1Z8. 
    (GHC.Types.Int -> c_a1Z8 -> c_a1Z8) -> c_a1Z8 -> t_a1Z7 -> c_a1Z8
[LclIdX,
 Arity=3,
 Str=DmdType ,
 Unf=Unf{Src=InlineStable, TopLvl=True, Arity=3, Value=True,
         ConLike=True, WorkFree=True, Expandable=True,
         Guidance=ALWAYS_IF(unsat_ok=True,boring_ok=True)
         Tmpl= \ (@ t_a1Z7)
                 (@ c_a1Z8)
                 (w_s23t [Occ=Once] :: GHC.Types.Int -> c_a1Z8 -> c_a1Z8)
                 (w_s23u [Occ=Once] :: c_a1Z8)
                 _ [Occ=Dead, Dmd=] ->
                 $wfoo_s23w @ t_a1Z7 @ c_a1Z8 w_s23t w_s23u}]
Foo.foo =
  \ (@ t_a1Z7)
    (@ c_a1Z8)
    (w_s23t :: GHC.Types.Int -> c_a1Z8 -> c_a1Z8)
    (w_s23u :: c_a1Z8)
    _ [Occ=Dead, Dmd=] ->
    $wfoo_s23w @ t_a1Z7 @ c_a1Z8 w_s23t w_s23u

*** SpecConstr:

==================== SpecConstr ====================
Result size of SpecConstr = {terms: 53, types: 53, coercions: 0}

Rec {
$wgo_s23r [Occ=LoopBreaker] :: GHC.Prim.Int# -> [GHC.Types.Int]
[LclId, Arity=1, Str=DmdType ]
$wgo_s23r =
  \ (ww_s23p :: GHC.Prim.Int#) ->
    case GHC.Prim.tagToEnum# @ GHC.Types.Bool (GHC.Prim.<=# ww_s23p 10)
    of _ [Occ=Dead, Dmd=] {
      GHC.Types.False -> GHC.Types.[] @ GHC.Types.Int;
      GHC.Types.True ->
        GHC.Types.: @ GHC.Types.Int (GHC.Types.I# ww_s23p) ($wgo_s23r ww_s23p)
    }
end Rec }

lvl_s229 :: [GHC.Types.Int]
[LclId, Str=DmdType]
lvl_s229 = $wgo_s23r (-9)

$wfoo_s23w :: forall t_a1Z7 c_a1Z8.
    (GHC.Types.Int -> c_a1Z8 -> c_a1Z8) -> c_a1Z8 -> c_a1Z8
[LclId, Arity=2, Str=DmdType ]
$wfoo_s23w =
  \ (@ t_a1Z7)
    (@ c_a1Z8)
    (w_s23t :: GHC.Types.Int -> c_a1Z8 -> c_a1Z8)
    (w_s23u :: c_a1Z8) ->
    letrec {
      go_a1ZD [Occ=LoopBreaker] :: [GHC.Types.Int] -> c_a1Z8
      [LclId, Arity=1, CallArity=1, Str=DmdType ]
      go_a1ZD =
        \ (ds_a1ZE :: [GHC.Types.Int]) ->
          case ds_a1ZE of _ [Occ=Dead, Dmd=] {
            [] -> w_s23u;
            : y_a1ZJ [Dmd=] ys_a1ZK [Dmd=] ->
              case y_a1ZJ of wild_a20P {
                GHC.Types.I# x_a20R [Dmd=] ->
                  case x_a20R of _ [Occ=Dead, Dmd=] {
                    __DEFAULT -> w_s23t wild_a20P (go_a1ZD ys_a1ZK);
                    1 -> w_s23u
                  }
              }
          };
    } in go_a1ZD lvl_s229

Foo.foo [InlPrag=INLINE[0]] :: forall t_a1Z7 c_a1Z8.
(GHC.Types.Int -> c_a1Z8 -> c_a1Z8) -> c_a1Z8 -> t_a1Z7 -> c_a1Z8 [LclIdX, Arity=3, Str=DmdType , Unf=Unf{Src=InlineStable, TopLvl=True, Arity=3, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=ALWAYS_IF(unsat_ok=True,boring_ok=True) Tmpl= \ (@ t_a1Z7) (@ c_a1Z8) (w_s23t [Occ=Once] :: GHC.Types.Int -> c_a1Z8 -> c_a1Z8) (w_s23u [Occ=Once] :: c_a1Z8) _ [Occ=Dead, Dmd=] -> $wfoo_s23w @ t_a1Z7 @ c_a1Z8 w_s23t w_s23u}] Foo.foo = \ (@ t_a1Z7) (@ c_a1Z8) (w_s23t :: GHC.Types.Int -> c_a1Z8 -> c_a1Z8) (w_s23u :: c_a1Z8) _ [Occ=Dead, Dmd=] -> $wfoo_s23w @ t_a1Z7 @ c_a1Z8 w_s23t w_s23u *** Simplifier: ==================== Occurrence analysis ==================== Rec { $wgo_s23r [Occ=LoopBreaker] :: GHC.Prim.Int# -> [GHC.Types.Int] [LclId, Arity=1, Str=DmdType ] $wgo_s23r = \ (ww_s23p :: GHC.Prim.Int#) -> case GHC.Prim.tagToEnum# @ GHC.Types.Bool (GHC.Prim.<=# ww_s23p 10) of _ [Occ=Dead, Dmd=] { GHC.Types.False -> GHC.Types.[] @ GHC.Types.Int; GHC.Types.True -> GHC.Types.: @ GHC.Types.Int (GHC.Types.I# ww_s23p) ($wgo_s23r ww_s23p) } end Rec } lvl_s229 [Occ=OnceL] :: [GHC.Types.Int] [LclId, Str=DmdType] lvl_s229 = $wgo_s23r (-9) $wfoo_s23w :: forall t_a1Z7 c_a1Z8. (GHC.Types.Int -> c_a1Z8 -> c_a1Z8) -> c_a1Z8 -> c_a1Z8 [LclId, Arity=2, Str=DmdType ] $wfoo_s23w = \ (@ t_a1Z7) (@ c_a1Z8) (w_s23t [Occ=OnceL!] :: GHC.Types.Int -> c_a1Z8 -> c_a1Z8) (w_s23u [Occ=OnceL*] :: c_a1Z8) -> letrec { go_a1ZD [Occ=LoopBreaker] :: [GHC.Types.Int] -> c_a1Z8 [LclId, Arity=1, CallArity=1, Str=DmdType ] go_a1ZD = \ (ds_a1ZE [Occ=Once!] :: [GHC.Types.Int]) -> case ds_a1ZE of _ [Occ=Dead, Dmd=] { [] -> w_s23u; : y_a1ZJ [Occ=Once!, Dmd=] ys_a1ZK [Occ=Once, Dmd=] -> case y_a1ZJ of wild_a20P { GHC.Types.I# x_a20R [Occ=Once!, Dmd=] -> case x_a20R of _ [Occ=Dead, Dmd=] { __DEFAULT -> w_s23t wild_a20P (go_a1ZD ys_a1ZK); 1 -> w_s23u } } }; } in go_a1ZD lvl_s229 Foo.foo [InlPrag=INLINE[0]] :: forall t_a1Z7 c_a1Z8. 
(GHC.Types.Int -> c_a1Z8 -> c_a1Z8) -> c_a1Z8 -> t_a1Z7 -> c_a1Z8 [LclIdX, Arity=3, Str=DmdType , Unf=Unf{Src=InlineStable, TopLvl=True, Arity=3, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=ALWAYS_IF(unsat_ok=True,boring_ok=True) Tmpl= \ (@ t_a1Z7) (@ c_a1Z8) (w_s23t [Occ=Once] :: GHC.Types.Int -> c_a1Z8 -> c_a1Z8) (w_s23u [Occ=Once] :: c_a1Z8) _ [Occ=Dead, Dmd=] -> $wfoo_s23w @ t_a1Z7 @ c_a1Z8 w_s23t w_s23u}] Foo.foo = \ (@ t_a1Z7) (@ c_a1Z8) (w_s23t [Occ=Once] :: GHC.Types.Int -> c_a1Z8 -> c_a1Z8) (w_s23u [Occ=Once] :: c_a1Z8) _ [Occ=Dead, Dmd=] -> $wfoo_s23w @ t_a1Z7 @ c_a1Z8 w_s23t w_s23u SimplBind $wgo SimplBind lvl_s229 SimplBind $wfoo SimplBind go Considering inlining: wild_a20P arg infos [] uf arity 0 interesting continuation BoringCtxt some_benefit False is exp: True is work-free: True guidance IF_ARGS [] 10 20 discounted size = 0 ANSWER = NO Considering inlining: lvl_s229 arg infos [] uf arity 0 interesting continuation BoringCtxt some_benefit False is exp: False is work-free: False guidance IF_ARGS [] 20 0 discounted size = 10 ANSWER = NO SimplBind foo Considering inlining: $wfoo_s23w arg infos [TrivArg, TrivArg] uf arity 2 interesting continuation BoringCtxt some_benefit False is exp: True is work-free: True guidance IF_ARGS [60 0] 140 0 discounted size = 110 ANSWER = NO Considering inlining: $wfoo_s23w arg infos [TrivArg, TrivArg] uf arity 2 interesting continuation BoringCtxt some_benefit False is exp: True is work-free: True guidance IF_ARGS [60 0] 140 0 discounted size = 110 ANSWER = NO ==================== Simplifier ==================== Max iterations = 4 SimplMode {Phase = 0 [final], inline, rules, eta-expand, case-of-case} Result size of Simplifier = {terms: 53, types: 53, coercions: 0} Rec { $wgo_s23r [Occ=LoopBreaker] :: GHC.Prim.Int# -> [GHC.Types.Int] [LclId, Arity=1, Str=DmdType , Unf=Unf{Src=, TopLvl=True, Arity=1, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [0] 62 40}] $wgo_s23r = \ 
(ww_s23p :: GHC.Prim.Int#) -> case GHC.Prim.tagToEnum# @ GHC.Types.Bool (GHC.Prim.<=# ww_s23p 10) of _ [Occ=Dead, Dmd=] { GHC.Types.False -> GHC.Types.[] @ GHC.Types.Int; GHC.Types.True -> GHC.Types.: @ GHC.Types.Int (GHC.Types.I# ww_s23p) ($wgo_s23r ww_s23p) } end Rec } lvl_s229 :: [GHC.Types.Int] [LclId, Str=DmdType, Unf=Unf{Src=, TopLvl=True, Arity=0, Value=False, ConLike=False, WorkFree=False, Expandable=False, Guidance=IF_ARGS [] 20 0}] lvl_s229 = $wgo_s23r (-9) $wfoo_s23w :: forall t_a1Z7 c_a1Z8. (GHC.Types.Int -> c_a1Z8 -> c_a1Z8) -> c_a1Z8 -> c_a1Z8 [LclId, Arity=2, Str=DmdType , Unf=Unf{Src=, TopLvl=True, Arity=2, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [60 0] 140 0}] $wfoo_s23w = \ (@ t_a1Z7) (@ c_a1Z8) (w_s23t :: GHC.Types.Int -> c_a1Z8 -> c_a1Z8) (w_s23u :: c_a1Z8) -> letrec { go_a1ZD [Occ=LoopBreaker] :: [GHC.Types.Int] -> c_a1Z8 [LclId, Arity=1, CallArity=1, Str=DmdType , Unf=Unf{Src=, TopLvl=False, Arity=1, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [30] 100 0}] go_a1ZD = \ (ds_a1ZE :: [GHC.Types.Int]) -> case ds_a1ZE of _ [Occ=Dead, Dmd=] { [] -> w_s23u; : y_a1ZJ [Dmd=] ys_a1ZK [Dmd=] -> case y_a1ZJ of wild_a20P { GHC.Types.I# x_a20R [Dmd=] -> case x_a20R of _ [Occ=Dead, Dmd=] { __DEFAULT -> w_s23t wild_a20P (go_a1ZD ys_a1ZK); 1 -> w_s23u } } }; } in go_a1ZD lvl_s229 Foo.foo [InlPrag=INLINE[0]] :: forall t_a1Z7 c_a1Z8. 
(GHC.Types.Int -> c_a1Z8 -> c_a1Z8) -> c_a1Z8 -> t_a1Z7 -> c_a1Z8 [LclIdX, Arity=3, Str=DmdType , Unf=Unf{Src=InlineStable, TopLvl=True, Arity=3, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=ALWAYS_IF(unsat_ok=True,boring_ok=True) Tmpl= \ (@ t_a1Z7) (@ c_a1Z8) (w_s23t [Occ=Once] :: GHC.Types.Int -> c_a1Z8 -> c_a1Z8) (w_s23u [Occ=Once] :: c_a1Z8) _ [Occ=Dead, Dmd=] -> $wfoo_s23w @ t_a1Z7 @ c_a1Z8 w_s23t w_s23u}] Foo.foo = \ (@ t_a1Z7) (@ c_a1Z8) (w_s23t :: GHC.Types.Int -> c_a1Z8 -> c_a1Z8) (w_s23u :: c_a1Z8) _ [Occ=Dead, Dmd=] -> $wfoo_s23w @ t_a1Z7 @ c_a1Z8 w_s23t w_s23u *** Tidy Core: ==================== Tidy Core ==================== Result size of Tidy Core = {terms: 53, types: 53, coercions: 0} Rec { Foo.$wgo [Occ=LoopBreaker] :: GHC.Prim.Int# -> [GHC.Types.Int] [GblId, Arity=1, Caf=NoCafRefs, Str=DmdType ] Foo.$wgo = \ (ww_s23p :: GHC.Prim.Int#) -> case GHC.Prim.tagToEnum# @ GHC.Types.Bool (GHC.Prim.<=# ww_s23p 10) of _ [Occ=Dead] { GHC.Types.False -> GHC.Types.[] @ GHC.Types.Int; GHC.Types.True -> GHC.Types.: @ GHC.Types.Int (GHC.Types.I# ww_s23p) (Foo.$wgo ww_s23p) } end Rec } Foo.foo1 :: [GHC.Types.Int] [GblId, Str=DmdType, Unf=Unf{Src=, TopLvl=True, Arity=0, Value=False, ConLike=False, WorkFree=False, Expandable=False, Guidance=IF_ARGS [] 20 0}] Foo.foo1 = Foo.$wgo (-9) Foo.$wfoo :: forall t_a1Z7 c_a1Z8. 
(GHC.Types.Int -> c_a1Z8 -> c_a1Z8) -> c_a1Z8 -> c_a1Z8 [GblId, Arity=2, Str=DmdType , Unf=Unf{Src=, TopLvl=True, Arity=2, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=IF_ARGS [60 0] 140 0}] Foo.$wfoo = \ (@ t_a1Z7) (@ c_a1Z8) (w_s23t :: GHC.Types.Int -> c_a1Z8 -> c_a1Z8) (w1_s23u :: c_a1Z8) -> letrec { go_a1ZD [Occ=LoopBreaker] :: [GHC.Types.Int] -> c_a1Z8 [LclId, Arity=1, Str=DmdType ] go_a1ZD = \ (ds_a1ZE :: [GHC.Types.Int]) -> case ds_a1ZE of _ [Occ=Dead] { [] -> w1_s23u; : y_a1ZJ ys_a1ZK -> case y_a1ZJ of wild1_a20P { GHC.Types.I# x_a20R -> case x_a20R of _ [Occ=Dead] { __DEFAULT -> w_s23t wild1_a20P (go_a1ZD ys_a1ZK); 1 -> w1_s23u } } }; } in go_a1ZD Foo.foo1 Foo.foo [InlPrag=INLINE[0]] :: forall t_a1Z7 c_a1Z8. (GHC.Types.Int -> c_a1Z8 -> c_a1Z8) -> c_a1Z8 -> t_a1Z7 -> c_a1Z8 [GblId, Arity=3, Str=DmdType , Unf=Unf{Src=InlineStable, TopLvl=True, Arity=3, Value=True, ConLike=True, WorkFree=True, Expandable=True, Guidance=ALWAYS_IF(unsat_ok=True,boring_ok=True) Tmpl= \ (@ t_a1Z7) (@ c_a1Z8) (w_s23t [Occ=Once] :: GHC.Types.Int -> c_a1Z8 -> c_a1Z8) (w1_s23u [Occ=Once] :: c_a1Z8) _ [Occ=Dead] -> Foo.$wfoo @ t_a1Z7 @ c_a1Z8 w_s23t w1_s23u}] Foo.foo = \ (@ t_a1Z7) (@ c_a1Z8) (w_s23t :: GHC.Types.Int -> c_a1Z8 -> c_a1Z8) (w1_s23u :: c_a1Z8) _ [Occ=Dead] -> Foo.$wfoo @ t_a1Z7 @ c_a1Z8 w_s23t w1_s23u *** CorePrep: ==================== CorePrep ==================== Result size of CorePrep = {terms: 62, types: 58, coercions: 0} Rec { Foo.$wgo [Occ=LoopBreaker] :: GHC.Prim.Int# -> [GHC.Types.Int] [GblId, Arity=1, Caf=NoCafRefs, Str=DmdType , Unf=OtherCon []] Foo.$wgo = \ (ww_s24e :: GHC.Prim.Int#) -> case GHC.Prim.<=# ww_s24e 10 of sat_s24f { __DEFAULT -> case GHC.Prim.tagToEnum# @ GHC.Types.Bool sat_s24f of _ [Occ=Dead] { GHC.Types.False -> GHC.Types.[] @ GHC.Types.Int; GHC.Types.True -> let { sat_s24i [Occ=Once] :: [GHC.Types.Int] [LclId, Str=DmdType] sat_s24i = Foo.$wgo ww_s24e } in let { sat_s24h [Occ=Once] :: GHC.Types.Int [LclId, Str=DmdType] 
sat_s24h = GHC.Types.I# ww_s24e } in GHC.Types.: @ GHC.Types.Int sat_s24h sat_s24i } } end Rec } Foo.foo1 :: [GHC.Types.Int] [GblId, Str=DmdType] Foo.foo1 = Foo.$wgo (-9) Foo.$wfoo :: forall t_a1Z7 c_a1Z8. (GHC.Types.Int -> c_a1Z8 -> c_a1Z8) -> c_a1Z8 -> c_a1Z8 [GblId, Arity=2, Str=DmdType , Unf=OtherCon []] Foo.$wfoo = \ (@ t_a1Z7) (@ c_a1Z8) (w_s24j [Occ=OnceL!] :: GHC.Types.Int -> c_a1Z8 -> c_a1Z8) (w1_s24k [Occ=OnceL*] :: c_a1Z8) -> letrec { go_s24l [Occ=LoopBreaker] :: [GHC.Types.Int] -> c_a1Z8 [LclId, Arity=1, Str=DmdType , Unf=OtherCon []] go_s24l = \ (ds_s24m [Occ=Once!] :: [GHC.Types.Int]) -> case ds_s24m of _ [Occ=Dead] { [] -> w1_s24k; : y_s24o [Occ=Once!] ys_s24p [Occ=Once] -> case y_s24o of wild1_s24q { GHC.Types.I# x_s24r [Occ=Once!] -> case x_s24r of _ [Occ=Dead] { __DEFAULT -> let { sat_s24t [Occ=Once] :: c_a1Z8 [LclId, Str=DmdType] sat_s24t = go_s24l ys_s24p } in w_s24j wild1_s24q sat_s24t; 1 -> w1_s24k } } }; } in go_s24l Foo.foo1 Foo.foo [InlPrag=INLINE[0]] :: forall t_a1Z7 c_a1Z8. (GHC.Types.Int -> c_a1Z8 -> c_a1Z8) -> c_a1Z8 -> t_a1Z7 -> c_a1Z8 [GblId, Arity=3, Str=DmdType , Unf=OtherCon []] Foo.foo = \ (@ t_a1Z7) (@ c_a1Z8) (w_s24u [Occ=Once] :: GHC.Types.Int -> c_a1Z8 -> c_a1Z8) (w1_s24v [Occ=Once] :: c_a1Z8) _ [Occ=Dead] -> Foo.$wfoo @ t_a1Z7 @ c_a1Z8 w_s24u w1_s24v *** Stg2Stg: *** CodeGen: *** Assembler: Upsweep completely successful. *** Deleting temp files: Warning: deleting non-existent /tmp/ghc27658_0/ghc27658_3.c Warning: deleting non-existent /tmp/ghc27658_0/ghc27658_1.s *** Deleting temp files: *** Deleting temp dirs: From david.feuer at gmail.com Wed Aug 27 16:39:46 2014 From: david.feuer at gmail.com (David Feuer) Date: Wed, 27 Aug 2014 12:39:46 -0400 Subject: Why isn't ($) inlining when I want? In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF221F29B7@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: Another data point: if I add this rule, it fires successfully and inlines ($) for me: "$" forall f x . 
f $ x = f x Side note: I wonder why the Report specified an arity of 2 for ($) instead of an arity of 1, but I guess there's nothing to be done about that now, since ($) undefined `seq` 1 = 1 but id undefined `seq` 1 = undefined On Wed, Aug 27, 2014 at 12:21 PM, David Feuer wrote: > I just ran that (results attached), and as far as I can tell, it > doesn't even *consider* inlining ($) until phase 2. > > On Wed, Aug 27, 2014 at 4:03 AM, Simon Peyton Jones > wrote: >> It's hard to tell since you are using a modified compiler. Try running with -ddump-occur-anal -dverbose-core2core -ddump-inlinings. That will show you every inlining, whether failed or successful. You can see the attempt to inline ($) and there is some info with the output that may help to explain why it did or did not work. >> >> Try that >> >> Simon >> >> | -----Original Message----- >> | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of David >> | Feuer >> | Sent: 27 August 2014 04:50 >> | To: ghc-devs; Carter Schonwald >> | Subject: Why isn't ($) inlining when I want? >> | >> | tl;dr I added a simplifier run with inlining enabled between >> | specialization and floating out. It seems incapable of inlining >> | saturated applications of ($), and I can't figure out why. These are >> | inlined later, when phase 2 runs. Am I running the simplifier wrong or >> | something? >> | >> | >> | I'm working on this simple little fusion pipeline: >> | >> | {-# INLINE takeWhile #-} >> | takeWhile p xs = build builder >> | where >> | builder c n = foldr go n xs >> | where >> | go x r = if p x then x `c` r else n >> | >> | foo c n x = foldr c n . takeWhile (/= (1::Int)) $ [-9..10] >> | >> | There are some issues with the enumFrom definition that break things. >> | If I use a fusible unfoldr that produces some numbers instead, that >> | issue goes away. 
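[Editorial aside: the arity point in David's side note can be checked directly. The following is a small self-contained sketch, not code from the thread: because ($) has arity 2, the partial application `($) undefined` is already a value (a PAP), so seq-ing it does not force `undefined`, whereas `id undefined` reduces straight to bottom.]

```haskell
import Control.Exception (SomeException, evaluate, try)

main :: IO ()
main = do
  -- ($) has arity 2: ($) undefined is a partial application, already
  -- in WHNF, so seq returns its second argument.
  print (($) undefined `seq` (1 :: Int))
  -- id has arity 1: id undefined reduces to undefined, so seq-ing it
  -- raises the ErrorCall that try catches here.
  r <- try (evaluate (id undefined `seq` (1 :: Int)))
         :: IO (Either SomeException Int)
  putStrLn (either (const "bottom") show r)
```

Run as a program this prints `1` and then `bottom`, matching the two equations in the side note.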
Part of that problem (but not all of it) is that the >> | simplifier doesn't run to apply rules between specialization and full >> | laziness, so there's no opportunity for the specialization of >> | enumFromTo to Int to trigger the rewrite to a build form and fusion >> | with foldr before full laziness tears things apart. The other problem >> | is that inlining doesn't happen at all before full laziness, so things >> | defined using foldr and/or build aren't actually exposed as such until >> | afterwards. Therefore I decided to try adding a simplifier run with >> | inlining between specialization and floating out: >> | >> | runWhen do_specialise CoreDoSpecialising, >> | >> | runWhen full_laziness $ CoreDoSimplify max_iter >> | (base_mode { sm_phase = InitialPhase >> | , sm_names = ["PostGentle"] >> | , sm_rules = rules_on >> | , sm_inline = True >> | , sm_case_case = False }), >> | >> | runWhen full_laziness $ >> | CoreDoFloatOutwards FloatOutSwitches { >> | floatOutLambdas = Just 0, >> | floatOutConstants = True, >> | floatOutPartialApplications = False }, >> | >> | The weird thing is that for some reason this doesn't inline ($), even >> | though it appears to be saturated. Using the modified thing with (my >> | version of) unfoldr: >> | >> | foo c n x = (foldr c n . takeWhile (/= (1::Int))) $ unfoldr (potato 10) >> | (-9) >> | >> | potato :: Int -> Int -> Maybe (Int, Int) >> | potato n m | m <= n = Just (m, m) >> | | otherwise = Nothing >> | >> | >> | I get this out of the specializer: >> | >> | foo >> | foo = >> | \ @ t_a1Za @ c_a1Zb c_a1HT n_a1HU _ -> >> | $ (. 
(foldr c_a1HT n_a1HU) >> | (takeWhile >> | (let { >> | ds_s21z >> | ds_s21z = I# 1 } in >> | \ ds_d1Zw -> neInt ds_d1Zw ds_s21z))) >> | (let { >> | n_s21x >> | n_s21x = I# 10 } in >> | unfoldr >> | (\ m_a1U7 -> >> | case leInt m_a1U7 n_s21x of _ { >> | False -> Nothing; >> | True -> Just (m_a1U7, m_a1U7) >> | }) >> | ($fNumInt_$cnegate (I# 9))) >> | >> | >> | and then I get this out of my extra simplifier run: >> | >> | foo >> | foo = >> | \ @ t_a1Za @ c_a1Zb c_a1HT n_a1HU _ -> >> | $ (\ x_a20f -> >> | foldr >> | (\ x_a1HR r_a1HS -> >> | case case x_a1HR of _ { I# x_a20R -> >> | tagToEnum# >> | (case x_a20R of _ { >> | __DEFAULT -> 1; >> | 1 -> 0 >> | }) >> | } >> | of _ { >> | False -> n_a1HU; >> | True -> c_a1HT x_a1HR r_a1HS >> | }) >> | n_a1HU >> | x_a20f) >> | (let { >> | b'_a1ZS >> | b'_a1ZS = $fNumInt_$cnegate (I# 9) } in >> | $ (build) >> | (\ @ b1_a1ZU c_a1ZV n_a1ZW -> >> | letrec { >> | go_a1ZX >> | go_a1ZX = >> | \ b2_a1ZY -> >> | case case case b2_a1ZY of _ { I# x_a218 -> >> | tagToEnum# (<=# x_a218 10) >> | } >> | of _ { >> | False -> Nothing; >> | True -> Just (b2_a1ZY, b2_a1ZY) >> | } >> | of _ { >> | Nothing -> n_a1ZW; >> | Just ds_a203 -> >> | case ds_a203 of _ { (a1_a207, new_b_a208) -> >> | c_a1ZV a1_a207 (go_a1ZX new_b_a208) >> | } >> | }; } in >> | go_a1ZX b'_a1ZS)) >> | >> | >> | That is, neither the $ in the code nor the $ that was inserted when >> | inlining unfoldr got inlined themselves, even though both appear to be >> | saturated. As a result, foldr/build doesn't fire, and full laziness >> | tears things apart. Later on, in simplifier phase 2, $ gets inlined. >> | What's preventing this from happening in the PostGentle phase I added? 
>> | >> | David Feuer >> | _______________________________________________ >> | ghc-devs mailing list >> | ghc-devs at haskell.org >> | http://www.haskell.org/mailman/listinfo/ghc-devs From simonpj at microsoft.com Wed Aug 27 19:38:03 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 27 Aug 2014 19:38:03 +0000 Subject: Why isn't ($) inlining when I want? In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF221F29B7@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <618BE556AADD624C9C918AA5D5911BEF221F34E7@DB3PRD3001MB020.064d.mgd.msft.net> You'll have to do more detective work! In your dump I see "Inactive unfolding $". So that's why it's not being inlined. That message comes from CoreUnfold, line 941 or so. The Boolean active_unfolding is passed in to callSiteInline from Simplify, line 1408 or so. It is generated by the function activeUnfolding, defined in SimplUtils. But you have probably change the "CompilerPhase" data type, so I can't guess what is happening. But if you just follow it through I'm sure you'll find it. Simon | -----Original Message----- | From: David Feuer [mailto:david.feuer at gmail.com] | Sent: 27 August 2014 17:22 | To: Simon Peyton Jones | Cc: ghc-devs | Subject: Re: Why isn't ($) inlining when I want? | | I just ran that (results attached), and as far as I can tell, it | doesn't even *consider* inlining ($) until phase 2. | | On Wed, Aug 27, 2014 at 4:03 AM, Simon Peyton Jones | wrote: | > It's hard to tell since you are using a modified compiler. Try running | with -ddump-occur-anal -dverbose-core2core -ddump-inlinings. That will | show you every inlining, whether failed or successful. You can see the | attempt to inline ($) and there is some info with the output that may | help to explain why it did or did not work. 
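[Editorial aside: Simon's pointer about "Inactive unfolding $" can be made concrete with a simplified model of how the simplifier gates unfoldings by phase. This is an illustrative sketch, not GHC's actual code (the real logic lives around `isActive` in BasicTypes and `activeUnfolding` in SimplUtils): simplifier phases count down 2, 1, 0, preceded by a gentle "initial" phase, and an `INLINE [n]` pragma is modelled as `ActiveAfter n`, which never fires in the initial phase — one reason a custom `InitialPhase` run can refuse to inline.]

```haskell
-- Sketch of phase-gated activation of unfoldings (names simplified).
data Phase = InitialPhase | Phase Int deriving Show

data Activation
  = AlwaysActive      -- plain INLINE / ordinary unfoldings
  | ActiveAfter Int   -- INLINE [n]: active in phase n and later phases
  | NeverActive       -- NOINLINE
  deriving Show

-- An unfolding may be inlined only when its activation has "switched
-- on" for the current phase; phase numbers count DOWN as compilation
-- proceeds, so phase 0 is the last.
isActive :: Phase -> Activation -> Bool
isActive _            AlwaysActive    = True
isActive _            NeverActive     = False
isActive InitialPhase (ActiveAfter _) = False  -- gated pragmas are off
isActive (Phase p)    (ActiveAfter n) = p <= n
```

Under this model an `InitialPhase` simplifier run (like the "PostGentle" one added in this thread) reports phase-gated unfoldings as inactive even with `sm_inline = True`.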
| > | > Try that | > | > Simon | > | > | -----Original Message----- | > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of | David | > | Feuer | > | Sent: 27 August 2014 04:50 | > | To: ghc-devs; Carter Schonwald | > | Subject: Why isn't ($) inlining when I want? | > | | > | tl;dr I added a simplifier run with inlining enabled between | > | specialization and floating out. It seems incapable of inlining | > | saturated applications of ($), and I can't figure out why. These are | > | inlined later, when phase 2 runs. Am I running the simplifier wrong | or | > | something? | > | | > | | > | I'm working on this simple little fusion pipeline: | > | | > | {-# INLINE takeWhile #-} | > | takeWhile p xs = build builder | > | where | > | builder c n = foldr go n xs | > | where | > | go x r = if p x then x `c` r else n | > | | > | foo c n x = foldr c n . takeWhile (/= (1::Int)) $ [-9..10] | > | | > | There are some issues with the enumFrom definition that break things. | > | If I use a fusible unfoldr that produces some numbers instead, that | > | issue goes away. Part of that problem (but not all of it) is that the | > | simplifier doesn't run to apply rules between specialization and full | > | laziness, so there's no opportunity for the specialization of | > | enumFromTo to Int to trigger the rewrite to a build form and fusion | > | with foldr before full laziness tears things apart. The other problem | > | is that inlining doesn't happen at all before full laziness, so | things | > | defined using foldr and/or build aren't actually exposed as such | until | > | afterwards. 
Therefore I decided to try adding a simplifier run with | > | inlining between specialization and floating out: | > | | > | runWhen do_specialise CoreDoSpecialising, | > | | > | runWhen full_laziness $ CoreDoSimplify max_iter | > | (base_mode { sm_phase = InitialPhase | > | , sm_names = ["PostGentle"] | > | , sm_rules = rules_on | > | , sm_inline = True | > | , sm_case_case = False }), | > | | > | runWhen full_laziness $ | > | CoreDoFloatOutwards FloatOutSwitches { | > | floatOutLambdas = Just 0, | > | floatOutConstants = True, | > | floatOutPartialApplications = False | }, | > | | > | The weird thing is that for some reason this doesn't inline ($), even | > | though it appears to be saturated. Using the modified thing with (my | > | version of) unfoldr: | > | | > | foo c n x = (foldr c n . takeWhile (/= (1::Int))) $ unfoldr (potato | 10) | > | (-9) | > | | > | potato :: Int -> Int -> Maybe (Int, Int) | > | potato n m | m <= n = Just (m, m) | > | | otherwise = Nothing | > | | > | | > | I get this out of the specializer: | > | | > | foo | > | foo = | > | \ @ t_a1Za @ c_a1Zb c_a1HT n_a1HU _ -> | > | $ (. 
(foldr c_a1HT n_a1HU) | > | (takeWhile | > | (let { | > | ds_s21z | > | ds_s21z = I# 1 } in | > | \ ds_d1Zw -> neInt ds_d1Zw ds_s21z))) | > | (let { | > | n_s21x | > | n_s21x = I# 10 } in | > | unfoldr | > | (\ m_a1U7 -> | > | case leInt m_a1U7 n_s21x of _ { | > | False -> Nothing; | > | True -> Just (m_a1U7, m_a1U7) | > | }) | > | ($fNumInt_$cnegate (I# 9))) | > | | > | | > | and then I get this out of my extra simplifier run: | > | | > | foo | > | foo = | > | \ @ t_a1Za @ c_a1Zb c_a1HT n_a1HU _ -> | > | $ (\ x_a20f -> | > | foldr | > | (\ x_a1HR r_a1HS -> | > | case case x_a1HR of _ { I# x_a20R -> | > | tagToEnum# | > | (case x_a20R of _ { | > | __DEFAULT -> 1; | > | 1 -> 0 | > | }) | > | } | > | of _ { | > | False -> n_a1HU; | > | True -> c_a1HT x_a1HR r_a1HS | > | }) | > | n_a1HU | > | x_a20f) | > | (let { | > | b'_a1ZS | > | b'_a1ZS = $fNumInt_$cnegate (I# 9) } in | > | $ (build) | > | (\ @ b1_a1ZU c_a1ZV n_a1ZW -> | > | letrec { | > | go_a1ZX | > | go_a1ZX = | > | \ b2_a1ZY -> | > | case case case b2_a1ZY of _ { I# x_a218 -> | > | tagToEnum# (<=# x_a218 10) | > | } | > | of _ { | > | False -> Nothing; | > | True -> Just (b2_a1ZY, b2_a1ZY) | > | } | > | of _ { | > | Nothing -> n_a1ZW; | > | Just ds_a203 -> | > | case ds_a203 of _ { (a1_a207, new_b_a208) -> | > | c_a1ZV a1_a207 (go_a1ZX new_b_a208) | > | } | > | }; } in | > | go_a1ZX b'_a1ZS)) | > | | > | | > | That is, neither the $ in the code nor the $ that was inserted when | > | inlining unfoldr got inlined themselves, even though both appear to | be | > | saturated. As a result, foldr/build doesn't fire, and full laziness | > | tears things apart. Later on, in simplifier phase 2, $ gets inlined. | > | What's preventing this from happening in the PostGentle phase I | added? 
| > | | > | David Feuer | > | _______________________________________________ | > | ghc-devs mailing list | > | ghc-devs at haskell.org | > | http://www.haskell.org/mailman/listinfo/ghc-devs From dan.doel at gmail.com Wed Aug 27 21:56:51 2014 From: dan.doel at gmail.com (Dan Doel) Date: Wed, 27 Aug 2014 17:56:51 -0400 Subject: Why isn't ($) inlining when I want? In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF221F34E7@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF221F29B7@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF221F34E7@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: I think talking about inlining of $ may not be addressing the crux of the problem here. The issue seems to be about functions like the one in the first message. For instance: loop :: (Int -> Int) -> Int loop g = sum . map g $ [1..1000000] Suppose for argument that we have a fusion framework that would handle this. The problem is that this does not actually turn into a loop over integers, because the constant [1..1000000] gets floated out. It instead builds a list/vector/whatever. By contrast, if we write: loop' :: Int loop' = sum . map (+1) $ [1..1000000] this does turn into a loop over integers, with no intermediate list. Presumably this is due to there being no work to be saved ever by floating the list out. These are the examples people usually test fusion with. And if loop is small enough to inline, it turns out that the actual code that gets run will be the same as loop', because everything will get inlined and fused. But it is also possible to make loop big enough to not inline, and then the floating will pessimize the overall code. So the core issue is that constant floating blocks some fusion opportunities. It is trying to save the work of building the structure more than once, but fusion can cause the structure to not be built at all. And the floating happens before fusion can reasonably be expected to work. 
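[Editorial aside: Dan's point can be seen in a small program. The two functions below compute the same value; the pragmas are only there to force each behaviour for demonstration. With the pipeline kept opaque (`NOINLINE`), full laziness is free to float the constant `[1..1000000]` to top level, so the list is really built (and retained); with `INLINE`, the producer and consumer are exposed together at the call site and foldr/build fusion can remove the list before float-out ever sees it. Whether floating actually happens depends on the compiler and flags, so treat this as a sketch of the two scenarios, not a guaranteed outcome.]

```haskell
-- Opaque at call sites: the constant list is a candidate for
-- float-out, so an intermediate list may be built.
{-# NOINLINE loopFloated #-}
loopFloated :: (Int -> Int) -> Int
loopFloated g = sum (map g [1 .. 1000000])

-- Exposed at call sites: fusion can eliminate the list entirely.
{-# INLINE loopFused #-}
loopFused :: (Int -> Int) -> Int
loopFused g = sum (map g [1 .. 1000000])

main :: IO ()
main = do
  print (loopFloated (+ 1))  -- same result either way;
  print (loopFused   (+ 1))  -- only allocation behaviour differs
```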
Can anything be done about this? I've verified that this kind of situation also affects vector. And it seems to be an issue even if loop is written: loop g = sum (map g [1..1000000]) -- Dan On Wed, Aug 27, 2014 at 3:38 PM, Simon Peyton Jones wrote: > You'll have to do more detective work! In your dump I see "Inactive > unfolding $". So that's why it's not being inlined. That message comes > from CoreUnfold, line 941 or so. The Boolean active_unfolding is passed in > to callSiteInline from Simplify, line 1408 or so. It is generated by the > function activeUnfolding, defined in SimplUtils. > > But you have probably change the "CompilerPhase" data type, so I can't > guess what is happening. But if you just follow it through I'm sure you'll > find it. > > Simon > > | -----Original Message----- > | From: David Feuer [mailto:david.feuer at gmail.com] > | Sent: 27 August 2014 17:22 > | To: Simon Peyton Jones > | Cc: ghc-devs > | Subject: Re: Why isn't ($) inlining when I want? > | > | I just ran that (results attached), and as far as I can tell, it > | doesn't even *consider* inlining ($) until phase 2. > | > | On Wed, Aug 27, 2014 at 4:03 AM, Simon Peyton Jones > | wrote: > | > It's hard to tell since you are using a modified compiler. Try running > | with -ddump-occur-anal -dverbose-core2core -ddump-inlinings. That will > | show you every inlining, whether failed or successful. You can see the > | attempt to inline ($) and there is some info with the output that may > | help to explain why it did or did not work. > | > > | > Try that > | > > | > Simon > | > > | > | -----Original Message----- > | > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of > | David > | > | Feuer > | > | Sent: 27 August 2014 04:50 > | > | To: ghc-devs; Carter Schonwald > | > | Subject: Why isn't ($) inlining when I want? > | > | > | > | tl;dr I added a simplifier run with inlining enabled between > | > | specialization and floating out. 
It seems incapable of inlining > | > | saturated applications of ($), and I can't figure out why. These are > | > | inlined later, when phase 2 runs. Am I running the simplifier wrong > | or > | > | something? > | > | > | > | > | > | I'm working on this simple little fusion pipeline: > | > | > | > | {-# INLINE takeWhile #-} > | > | takeWhile p xs = build builder > | > | where > | > | builder c n = foldr go n xs > | > | where > | > | go x r = if p x then x `c` r else n > | > | > | > | foo c n x = foldr c n . takeWhile (/= (1::Int)) $ [-9..10] > | > | > | > | There are some issues with the enumFrom definition that break things. > | > | If I use a fusible unfoldr that produces some numbers instead, that > | > | issue goes away. Part of that problem (but not all of it) is that the > | > | simplifier doesn't run to apply rules between specialization and full > | > | laziness, so there's no opportunity for the specialization of > | > | enumFromTo to Int to trigger the rewrite to a build form and fusion > | > | with foldr before full laziness tears things apart. The other problem > | > | is that inlining doesn't happen at all before full laziness, so > | things > | > | defined using foldr and/or build aren't actually exposed as such > | until > | > | afterwards. 
Therefore I decided to try adding a simplifier run with > | > | inlining between specialization and floating out: > | > | > | > | runWhen do_specialise CoreDoSpecialising, > | > | > | > | runWhen full_laziness $ CoreDoSimplify max_iter > | > | (base_mode { sm_phase = InitialPhase > | > | , sm_names = ["PostGentle"] > | > | , sm_rules = rules_on > | > | , sm_inline = True > | > | , sm_case_case = False }), > | > | > | > | runWhen full_laziness $ > | > | CoreDoFloatOutwards FloatOutSwitches { > | > | floatOutLambdas = Just 0, > | > | floatOutConstants = True, > | > | floatOutPartialApplications = False > | }, > | > | > | > | The weird thing is that for some reason this doesn't inline ($), even > | > | though it appears to be saturated. Using the modified thing with (my > | > | version of) unfoldr: > | > | > | > | foo c n x = (foldr c n . takeWhile (/= (1::Int))) $ unfoldr (potato > | 10) > | > | (-9) > | > | > | > | potato :: Int -> Int -> Maybe (Int, Int) > | > | potato n m | m <= n = Just (m, m) > | > | | otherwise = Nothing > | > | > | > | > | > | I get this out of the specializer: > | > | > | > | foo > | > | foo = > | > | \ @ t_a1Za @ c_a1Zb c_a1HT n_a1HU _ -> > | > | $ (. 
(foldr c_a1HT n_a1HU) > | > | (takeWhile > | > | (let { > | > | ds_s21z > | > | ds_s21z = I# 1 } in > | > | \ ds_d1Zw -> neInt ds_d1Zw ds_s21z))) > | > | (let { > | > | n_s21x > | > | n_s21x = I# 10 } in > | > | unfoldr > | > | (\ m_a1U7 -> > | > | case leInt m_a1U7 n_s21x of _ { > | > | False -> Nothing; > | > | True -> Just (m_a1U7, m_a1U7) > | > | }) > | > | ($fNumInt_$cnegate (I# 9))) > | > | > | > | > | > | and then I get this out of my extra simplifier run: > | > | > | > | foo > | > | foo = > | > | \ @ t_a1Za @ c_a1Zb c_a1HT n_a1HU _ -> > | > | $ (\ x_a20f -> > | > | foldr > | > | (\ x_a1HR r_a1HS -> > | > | case case x_a1HR of _ { I# x_a20R -> > | > | tagToEnum# > | > | (case x_a20R of _ { > | > | __DEFAULT -> 1; > | > | 1 -> 0 > | > | }) > | > | } > | > | of _ { > | > | False -> n_a1HU; > | > | True -> c_a1HT x_a1HR r_a1HS > | > | }) > | > | n_a1HU > | > | x_a20f) > | > | (let { > | > | b'_a1ZS > | > | b'_a1ZS = $fNumInt_$cnegate (I# 9) } in > | > | $ (build) > | > | (\ @ b1_a1ZU c_a1ZV n_a1ZW -> > | > | letrec { > | > | go_a1ZX > | > | go_a1ZX = > | > | \ b2_a1ZY -> > | > | case case case b2_a1ZY of _ { I# x_a218 -> > | > | tagToEnum# (<=# x_a218 10) > | > | } > | > | of _ { > | > | False -> Nothing; > | > | True -> Just (b2_a1ZY, b2_a1ZY) > | > | } > | > | of _ { > | > | Nothing -> n_a1ZW; > | > | Just ds_a203 -> > | > | case ds_a203 of _ { (a1_a207, new_b_a208) -> > | > | c_a1ZV a1_a207 (go_a1ZX new_b_a208) > | > | } > | > | }; } in > | > | go_a1ZX b'_a1ZS)) > | > | > | > | > | > | That is, neither the $ in the code nor the $ that was inserted when > | > | inlining unfoldr got inlined themselves, even though both appear to > | be > | > | saturated. As a result, foldr/build doesn't fire, and full laziness > | > | tears things apart. Later on, in simplifier phase 2, $ gets inlined. > | > | What's preventing this from happening in the PostGentle phase I > | added? 
> | > | > | > | David Feuer > | > | _______________________________________________ > | > | ghc-devs mailing list > | > | ghc-devs at haskell.org > | > | http://www.haskell.org/mailman/listinfo/ghc-devs > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jwlato at gmail.com Wed Aug 27 23:16:33 2014 From: jwlato at gmail.com (John Lato) Date: Thu, 28 Aug 2014 07:16:33 +0800 Subject: Why isn't ($) inlining when I want? In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF221F29B7@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF221F34E7@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: I sometimes think the solution is to make let-floating apply in fewer cases. I'm not sure we ever want to float out intermediate lists, the cost of creating them is very small relative to the memory consumption if they do happen to get shared. My approach is typically to mark loop INLINE. This very often results in the code I want (with vector, which I use more than lists), but it is a big hammer to apply. John On Thu, Aug 28, 2014 at 5:56 AM, Dan Doel wrote: > I think talking about inlining of $ may not be addressing the crux of the > problem here. > > The issue seems to be about functions like the one in the first message. > For instance: > > loop :: (Int -> Int) -> Int > loop g = sum . map g $ [1..1000000] > > Suppose for argument that we have a fusion framework that would handle > this. The problem is that this does not actually turn into a loop over > integers, because the constant [1..1000000] gets floated out. It instead > builds a list/vector/whatever. > > By contrast, if we write: > > loop' :: Int > loop' = sum . map (+1) $ [1..1000000] > > this does turn into a loop over integers, with no intermediate list. 
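[Editorial aside: for reference, here is the build-based `takeWhile` from the start of the thread written out as a compilable program (a sketch; `build` comes from GHC.Exts). Producing via `build` and consuming via `foldr` is exactly what lets the foldr/build rule, `foldr c n (build g) = g c n`, eliminate the intermediate list — provided, as this thread explores, that inlining exposes both sides to the rule in time.]

```haskell
import GHC.Exts (build)

-- David's takeWhile on foldr/build: `go` stops the fold by returning
-- the nil continuation `n` as soon as the predicate fails.
{-# INLINE takeWhileFB #-}
takeWhileFB :: (a -> Bool) -> [a] -> [a]
takeWhileFB p xs = build builder
  where
    builder c n = foldr go n xs
      where
        go x r = if p x then x `c` r else n

main :: IO ()
main = print (sum (takeWhileFB (/= (1 :: Int)) [-9 .. 10]))
```

On `[-9 .. 10]` the predicate first fails at `1`, so the surviving prefix is `[-9 .. 0]` and the program prints `-45`.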
> Presumably this is due to there being no work to be saved ever by floating > the list out. These are the examples people usually test fusion with. > > And if loop is small enough to inline, it turns out that the actual code > that gets run will be the same as loop', because everything will get > inlined and fused. But it is also possible to make loop big enough to not > inline, and then the floating will pessimize the overall code. > > So the core issue is that constant floating blocks some fusion > opportunities. It is trying to save the work of building the structure more > than once, but fusion can cause the structure to not be built at all. And > the floating happens before fusion can reasonably be expected to work. > > Can anything be done about this? > > I've verified that this kind of situation also affects vector. And it > seems to be an issue even if loop is written: > > loop g = sum (map g [1..1000000]) > > -- Dan > > > On Wed, Aug 27, 2014 at 3:38 PM, Simon Peyton Jones > wrote: > >> You'll have to do more detective work! In your dump I see "Inactive >> unfolding $". So that's why it's not being inlined. That message comes >> from CoreUnfold, line 941 or so. The Boolean active_unfolding is passed in >> to callSiteInline from Simplify, line 1408 or so. It is generated by the >> function activeUnfolding, defined in SimplUtils. >> >> But you have probably change the "CompilerPhase" data type, so I can't >> guess what is happening. But if you just follow it through I'm sure you'll >> find it. >> >> Simon >> >> | -----Original Message----- >> | From: David Feuer [mailto:david.feuer at gmail.com] >> | Sent: 27 August 2014 17:22 >> | To: Simon Peyton Jones >> | Cc: ghc-devs >> | Subject: Re: Why isn't ($) inlining when I want? >> | >> | I just ran that (results attached), and as far as I can tell, it >> | doesn't even *consider* inlining ($) until phase 2. 
>> | >> | On Wed, Aug 27, 2014 at 4:03 AM, Simon Peyton Jones >> | wrote: >> | > It's hard to tell since you are using a modified compiler. Try >> running >> | with -ddump-occur-anal -dverbose-core2core -ddump-inlinings. That will >> | show you every inlining, whether failed or successful. You can see the >> | attempt to inline ($) and there is some info with the output that may >> | help to explain why it did or did not work. >> | > >> | > Try that >> | > >> | > Simon >> | > >> | > | -----Original Message----- >> | > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of >> | David >> | > | Feuer >> | > | Sent: 27 August 2014 04:50 >> | > | To: ghc-devs; Carter Schonwald >> | > | Subject: Why isn't ($) inlining when I want? >> | > | >> | > | tl;dr I added a simplifier run with inlining enabled between >> | > | specialization and floating out. It seems incapable of inlining >> | > | saturated applications of ($), and I can't figure out why. These are >> | > | inlined later, when phase 2 runs. Am I running the simplifier wrong >> | or >> | > | something? >> | > | >> | > | >> | > | I'm working on this simple little fusion pipeline: >> | > | >> | > | {-# INLINE takeWhile #-} >> | > | takeWhile p xs = build builder >> | > | where >> | > | builder c n = foldr go n xs >> | > | where >> | > | go x r = if p x then x `c` r else n >> | > | >> | > | foo c n x = foldr c n . takeWhile (/= (1::Int)) $ [-9..10] >> | > | >> | > | There are some issues with the enumFrom definition that break >> things. >> | > | If I use a fusible unfoldr that produces some numbers instead, that >> | > | issue goes away. Part of that problem (but not all of it) is that >> the >> | > | simplifier doesn't run to apply rules between specialization and >> full >> | > | laziness, so there's no opportunity for the specialization of >> | > | enumFromTo to Int to trigger the rewrite to a build form and fusion >> | > | with foldr before full laziness tears things apart. 
The other >> problem >> | > | is that inlining doesn't happen at all before full laziness, so >> | things >> | > | defined using foldr and/or build aren't actually exposed as such >> | until >> | > | afterwards. Therefore I decided to try adding a simplifier run with >> | > | inlining between specialization and floating out: >> | > | >> | > | runWhen do_specialise CoreDoSpecialising, >> | > | >> | > | runWhen full_laziness $ CoreDoSimplify max_iter >> | > | (base_mode { sm_phase = InitialPhase >> | > | , sm_names = ["PostGentle"] >> | > | , sm_rules = rules_on >> | > | , sm_inline = True >> | > | , sm_case_case = False }), >> | > | >> | > | runWhen full_laziness $ >> | > | CoreDoFloatOutwards FloatOutSwitches { >> | > | floatOutLambdas = Just 0, >> | > | floatOutConstants = True, >> | > | floatOutPartialApplications = False >> | }, >> | > | >> | > | The weird thing is that for some reason this doesn't inline ($), >> even >> | > | though it appears to be saturated. Using the modified thing with (my >> | > | version of) unfoldr: >> | > | >> | > | foo c n x = (foldr c n . takeWhile (/= (1::Int))) $ unfoldr (potato >> | 10) >> | > | (-9) >> | > | >> | > | potato :: Int -> Int -> Maybe (Int, Int) >> | > | potato n m | m <= n = Just (m, m) >> | > | | otherwise = Nothing >> | > | >> | > | >> | > | I get this out of the specializer: >> | > | >> | > | foo >> | > | foo = >> | > | \ @ t_a1Za @ c_a1Zb c_a1HT n_a1HU _ -> >> | > | $ (. 
(foldr c_a1HT n_a1HU) >> | > | (takeWhile >> | > | (let { >> | > | ds_s21z >> | > | ds_s21z = I# 1 } in >> | > | \ ds_d1Zw -> neInt ds_d1Zw ds_s21z))) >> | > | (let { >> | > | n_s21x >> | > | n_s21x = I# 10 } in >> | > | unfoldr >> | > | (\ m_a1U7 -> >> | > | case leInt m_a1U7 n_s21x of _ { >> | > | False -> Nothing; >> | > | True -> Just (m_a1U7, m_a1U7) >> | > | }) >> | > | ($fNumInt_$cnegate (I# 9))) >> | > | >> | > | >> | > | and then I get this out of my extra simplifier run: >> | > | >> | > | foo >> | > | foo = >> | > | \ @ t_a1Za @ c_a1Zb c_a1HT n_a1HU _ -> >> | > | $ (\ x_a20f -> >> | > | foldr >> | > | (\ x_a1HR r_a1HS -> >> | > | case case x_a1HR of _ { I# x_a20R -> >> | > | tagToEnum# >> | > | (case x_a20R of _ { >> | > | __DEFAULT -> 1; >> | > | 1 -> 0 >> | > | }) >> | > | } >> | > | of _ { >> | > | False -> n_a1HU; >> | > | True -> c_a1HT x_a1HR r_a1HS >> | > | }) >> | > | n_a1HU >> | > | x_a20f) >> | > | (let { >> | > | b'_a1ZS >> | > | b'_a1ZS = $fNumInt_$cnegate (I# 9) } in >> | > | $ (build) >> | > | (\ @ b1_a1ZU c_a1ZV n_a1ZW -> >> | > | letrec { >> | > | go_a1ZX >> | > | go_a1ZX = >> | > | \ b2_a1ZY -> >> | > | case case case b2_a1ZY of _ { I# x_a218 -> >> | > | tagToEnum# (<=# x_a218 10) >> | > | } >> | > | of _ { >> | > | False -> Nothing; >> | > | True -> Just (b2_a1ZY, b2_a1ZY) >> | > | } >> | > | of _ { >> | > | Nothing -> n_a1ZW; >> | > | Just ds_a203 -> >> | > | case ds_a203 of _ { (a1_a207, new_b_a208) -> >> | > | c_a1ZV a1_a207 (go_a1ZX new_b_a208) >> | > | } >> | > | }; } in >> | > | go_a1ZX b'_a1ZS)) >> | > | >> | > | >> | > | That is, neither the $ in the code nor the $ that was inserted when >> | > | inlining unfoldr got inlined themselves, even though both appear to >> | be >> | > | saturated. As a result, foldr/build doesn't fire, and full laziness >> | > | tears things apart. Later on, in simplifier phase 2, $ gets inlined. >> | > | What's preventing this from happening in the PostGentle phase I >> | added? 
>> | > | >> | > | David Feuer >> | > | _______________________________________________ >> | > | ghc-devs mailing list >> | > | ghc-devs at haskell.org >> | > | http://www.haskell.org/mailman/listinfo/ghc-devs >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Thu Aug 28 08:14:17 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 28 Aug 2014 08:14:17 +0000 Subject: Why isn't ($) inlining when I want? In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF221F29B7@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF221F34E7@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <618BE556AADD624C9C918AA5D5911BEF221F3D8D@DB3PRD3001MB020.064d.mgd.msft.net> I remember doing some work on the "floating of constant lists" question. First, [1..n] turns into (enumFromTo 1 n), and if enumFromTo was expensive, then sharing it might be a good plan. So GHC would have to know that it was cheap. I did experiment with "cheapBuild", see https://ghc.haskell.org/trac/ghc/ticket/7206, but as you'll see there, the results were equivocal. By duplicating the [1..n] we were allocating two copies of (I# 4), (I# 5) etc, and that increased allocation and GC time. So it's unclear, in general, whether in these examples it is better to share the [1..n] between all calls of 'loop', or to duplicate it. All that said, Dan's question of why X fuses and very-similar Y doesn't was a surprise to me; I'll look into that. Simon From: John Lato [mailto:jwlato at gmail.com] Sent: 28 August 2014 00:17 To: Dan Doel Cc: Simon Peyton Jones; David Feuer; ghc-devs Subject: Re: Why isn't ($) inlining when I want?
I sometimes think the solution is to make let-floating apply in fewer cases. I'm not sure we ever want to float out intermediate lists, the cost of creating them is very small relative to the memory consumption if they do happen to get shared. My approach is typically to mark loop INLINE. This very often results in the code I want (with vector, which I use more than lists), but it is a big hammer to apply. John On Thu, Aug 28, 2014 at 5:56 AM, Dan Doel > wrote: I think talking about inlining of $ may not be addressing the crux of the problem here. The issue seems to be about functions like the one in the first message. For instance: loop :: (Int -> Int) -> Int loop g = sum . map g $ [1..1000000] Suppose for argument that we have a fusion framework that would handle this. The problem is that this does not actually turn into a loop over integers, because the constant [1..1000000] gets floated out. It instead builds a list/vector/whatever. By contrast, if we write: loop' :: Int loop' = sum . map (+1) $ [1..1000000] this does turn into a loop over integers, with no intermediate list. Presumably this is due to there being no work to be saved ever by floating the list out. These are the examples people usually test fusion with. And if loop is small enough to inline, it turns out that the actual code that gets run will be the same as loop', because everything will get inlined and fused. But it is also possible to make loop big enough to not inline, and then the floating will pessimize the overall code. So the core issue is that constant floating blocks some fusion opportunities. It is trying to save the work of building the structure more than once, but fusion can cause the structure to not be built at all. And the floating happens before fusion can reasonably be expected to work. Can anything be done about this? I've verified that this kind of situation also affects vector. 
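The contrast Dan draws can be condensed into a two-definition sketch (the names are mine, not from the thread); compiling with -O and inspecting -ddump-simpl shows the list shared in the first case and fused away in the second:

```haskell
-- Parameterised pipeline: g is free in the body, so only the constant
-- list [1..1000000] can be floated to the top level and shared across
-- calls; that sharing blocks foldr/build fusion.
loopP :: (Int -> Int) -> Int
loopP g = sum (map g [1 .. 1000000])

-- Constant pipeline: the whole right-hand side is a CAF, so there is
-- nothing left to float and sum/map/[1..n] can fuse into a tight loop.
loopC :: Int
loopC = sum (map (+ 1) [1 .. 1000000])
```

Both compute the same value; the difference only shows up in the generated Core.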
And it seems to be an issue even if loop is written: loop g = sum (map g [1..1000000]) -- Dan On Wed, Aug 27, 2014 at 3:38 PM, Simon Peyton Jones > wrote: You'll have to do more detective work! In your dump I see "Inactive unfolding $". So that's why it's not being inlined. That message comes from CoreUnfold, line 941 or so. The Boolean active_unfolding is passed in to callSiteInline from Simplify, line 1408 or so. It is generated by the function activeUnfolding, defined in SimplUtils. But you have probably change the "CompilerPhase" data type, so I can't guess what is happening. But if you just follow it through I'm sure you'll find it. Simon | -----Original Message----- | From: David Feuer [mailto:david.feuer at gmail.com] | Sent: 27 August 2014 17:22 | To: Simon Peyton Jones | Cc: ghc-devs | Subject: Re: Why isn't ($) inlining when I want? | | I just ran that (results attached), and as far as I can tell, it | doesn't even *consider* inlining ($) until phase 2. | | On Wed, Aug 27, 2014 at 4:03 AM, Simon Peyton Jones | > wrote: | > It's hard to tell since you are using a modified compiler. Try running | with -ddump-occur-anal -dverbose-core2core -ddump-inlinings. That will | show you every inlining, whether failed or successful. You can see the | attempt to inline ($) and there is some info with the output that may | help to explain why it did or did not work. | > | > Try that | > | > Simon | > | > | -----Original Message----- | > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of | David | > | Feuer | > | Sent: 27 August 2014 04:50 | > | To: ghc-devs; Carter Schonwald | > | Subject: Why isn't ($) inlining when I want? | > | | > | tl;dr I added a simplifier run with inlining enabled between | > | specialization and floating out. It seems incapable of inlining | > | saturated applications of ($), and I can't figure out why. These are | > | inlined later, when phase 2 runs. Am I running the simplifier wrong | or | > | something? 
| > | | > | | > | I'm working on this simple little fusion pipeline: | > | | > | {-# INLINE takeWhile #-} | > | takeWhile p xs = build builder | > | where | > | builder c n = foldr go n xs | > | where | > | go x r = if p x then x `c` r else n | > | | > | foo c n x = foldr c n . takeWhile (/= (1::Int)) $ [-9..10] | > | | > | There are some issues with the enumFrom definition that break things. | > | If I use a fusible unfoldr that produces some numbers instead, that | > | issue goes away. Part of that problem (but not all of it) is that the | > | simplifier doesn't run to apply rules between specialization and full | > | laziness, so there's no opportunity for the specialization of | > | enumFromTo to Int to trigger the rewrite to a build form and fusion | > | with foldr before full laziness tears things apart. The other problem | > | is that inlining doesn't happen at all before full laziness, so | things | > | defined using foldr and/or build aren't actually exposed as such | until | > | afterwards. Therefore I decided to try adding a simplifier run with | > | inlining between specialization and floating out: | > | | > | runWhen do_specialise CoreDoSpecialising, | > | | > | runWhen full_laziness $ CoreDoSimplify max_iter | > | (base_mode { sm_phase = InitialPhase | > | , sm_names = ["PostGentle"] | > | , sm_rules = rules_on | > | , sm_inline = True | > | , sm_case_case = False }), | > | | > | runWhen full_laziness $ | > | CoreDoFloatOutwards FloatOutSwitches { | > | floatOutLambdas = Just 0, | > | floatOutConstants = True, | > | floatOutPartialApplications = False | }, | > | | > | The weird thing is that for some reason this doesn't inline ($), even | > | though it appears to be saturated. Using the modified thing with (my | > | version of) unfoldr: | > | | > | foo c n x = (foldr c n . 
takeWhile (/= (1::Int))) $ unfoldr (potato | 10) | > | (-9) | > | | > | potato :: Int -> Int -> Maybe (Int, Int) | > | potato n m | m <= n = Just (m, m) | > | | otherwise = Nothing | > | | > | | > | I get this out of the specializer: | > | | > | foo | > | foo = | > | \ @ t_a1Za @ c_a1Zb c_a1HT n_a1HU _ -> | > | $ (. (foldr c_a1HT n_a1HU) | > | (takeWhile | > | (let { | > | ds_s21z | > | ds_s21z = I# 1 } in | > | \ ds_d1Zw -> neInt ds_d1Zw ds_s21z))) | > | (let { | > | n_s21x | > | n_s21x = I# 10 } in | > | unfoldr | > | (\ m_a1U7 -> | > | case leInt m_a1U7 n_s21x of _ { | > | False -> Nothing; | > | True -> Just (m_a1U7, m_a1U7) | > | }) | > | ($fNumInt_$cnegate (I# 9))) | > | | > | | > | and then I get this out of my extra simplifier run: | > | | > | foo | > | foo = | > | \ @ t_a1Za @ c_a1Zb c_a1HT n_a1HU _ -> | > | $ (\ x_a20f -> | > | foldr | > | (\ x_a1HR r_a1HS -> | > | case case x_a1HR of _ { I# x_a20R -> | > | tagToEnum# | > | (case x_a20R of _ { | > | __DEFAULT -> 1; | > | 1 -> 0 | > | }) | > | } | > | of _ { | > | False -> n_a1HU; | > | True -> c_a1HT x_a1HR r_a1HS | > | }) | > | n_a1HU | > | x_a20f) | > | (let { | > | b'_a1ZS | > | b'_a1ZS = $fNumInt_$cnegate (I# 9) } in | > | $ (build) | > | (\ @ b1_a1ZU c_a1ZV n_a1ZW -> | > | letrec { | > | go_a1ZX | > | go_a1ZX = | > | \ b2_a1ZY -> | > | case case case b2_a1ZY of _ { I# x_a218 -> | > | tagToEnum# (<=# x_a218 10) | > | } | > | of _ { | > | False -> Nothing; | > | True -> Just (b2_a1ZY, b2_a1ZY) | > | } | > | of _ { | > | Nothing -> n_a1ZW; | > | Just ds_a203 -> | > | case ds_a203 of _ { (a1_a207, new_b_a208) -> | > | c_a1ZV a1_a207 (go_a1ZX new_b_a208) | > | } | > | }; } in | > | go_a1ZX b'_a1ZS)) | > | | > | | > | That is, neither the $ in the code nor the $ that was inserted when | > | inlining unfoldr got inlined themselves, even though both appear to | be | > | saturated. As a result, foldr/build doesn't fire, and full laziness | > | tears things apart. 
Later on, in simplifier phase 2, $ gets inlined. | > | What's preventing this from happening in the PostGentle phase I | added? | > | | > | David Feuer | > | _______________________________________________ | > | ghc-devs mailing list | > | ghc-devs at haskell.org | > | http://www.haskell.org/mailman/listinfo/ghc-devs _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://www.haskell.org/mailman/listinfo/ghc-devs _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://www.haskell.org/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Thu Aug 28 10:22:50 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 28 Aug 2014 10:22:50 +0000 Subject: Why isn't ($) inlining when I want? In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF221F29B7@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF221F34E7@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <618BE556AADD624C9C918AA5D5911BEF221F4768@DB3PRD3001MB020.064d.mgd.msft.net> Oh, now I understand. In loop g = sum . map g $ [1..1000000] GHC can share [1..1000000] across all calls to loop, although that nixes fusion. Because each call of loop may have a different g. But in loop' = sum . map (+1) $ [1..1000000] GHC can share (sum . map (+1) $ [1..1000000]) across all calls to loop', so it can readily fuse the sum, map, and [1..n]. I hope that explains it. Simon From: Dan Doel [mailto:dan.doel at gmail.com] Sent: 27 August 2014 22:57 To: Simon Peyton Jones Cc: David Feuer; ghc-devs Subject: Re: Why isn't ($) inlining when I want? I think talking about inlining of $ may not be addressing the crux of the problem here. The issue seems to be about functions like the one in the first message. For instance: loop :: (Int -> Int) -> Int loop g = sum .
map g $ [1..1000000] Suppose for argument that we have a fusion framework that would handle this. The problem is that this does not actually turn into a loop over integers, because the constant [1..1000000] gets floated out. It instead builds a list/vector/whatever. By contrast, if we write: loop' :: Int loop' = sum . map (+1) $ [1..1000000] this does turn into a loop over integers, with no intermediate list. Presumably this is due to there being no work to be saved ever by floating the list out. These are the examples people usually test fusion with. And if loop is small enough to inline, it turns out that the actual code that gets run will be the same as loop', because everything will get inlined and fused. But it is also possible to make loop big enough to not inline, and then the floating will pessimize the overall code. So the core issue is that constant floating blocks some fusion opportunities. It is trying to save the work of building the structure more than once, but fusion can cause the structure to not be built at all. And the floating happens before fusion can reasonably be expected to work. Can anything be done about this? I've verified that this kind of situation also affects vector. And it seems to be an issue even if loop is written: loop g = sum (map g [1..1000000]) -- Dan On Wed, Aug 27, 2014 at 3:38 PM, Simon Peyton Jones > wrote: You'll have to do more detective work! In your dump I see "Inactive unfolding $". So that's why it's not being inlined. That message comes from CoreUnfold, line 941 or so. The Boolean active_unfolding is passed in to callSiteInline from Simplify, line 1408 or so. It is generated by the function activeUnfolding, defined in SimplUtils. But you have probably change the "CompilerPhase" data type, so I can't guess what is happening. But if you just follow it through I'm sure you'll find it. 
Simon | -----Original Message----- | From: David Feuer [mailto:david.feuer at gmail.com] | Sent: 27 August 2014 17:22 | To: Simon Peyton Jones | Cc: ghc-devs | Subject: Re: Why isn't ($) inlining when I want? | | I just ran that (results attached), and as far as I can tell, it | doesn't even *consider* inlining ($) until phase 2. | | On Wed, Aug 27, 2014 at 4:03 AM, Simon Peyton Jones | > wrote: | > It's hard to tell since you are using a modified compiler. Try running | with -ddump-occur-anal -dverbose-core2core -ddump-inlinings. That will | show you every inlining, whether failed or successful. You can see the | attempt to inline ($) and there is some info with the output that may | help to explain why it did or did not work. | > | > Try that | > | > Simon | > | > | -----Original Message----- | > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of | David | > | Feuer | > | Sent: 27 August 2014 04:50 | > | To: ghc-devs; Carter Schonwald | > | Subject: Why isn't ($) inlining when I want? | > | | > | tl;dr I added a simplifier run with inlining enabled between | > | specialization and floating out. It seems incapable of inlining | > | saturated applications of ($), and I can't figure out why. These are | > | inlined later, when phase 2 runs. Am I running the simplifier wrong | or | > | something? | > | | > | | > | I'm working on this simple little fusion pipeline: | > | | > | {-# INLINE takeWhile #-} | > | takeWhile p xs = build builder | > | where | > | builder c n = foldr go n xs | > | where | > | go x r = if p x then x `c` r else n | > | | > | foo c n x = foldr c n . takeWhile (/= (1::Int)) $ [-9..10] | > | | > | There are some issues with the enumFrom definition that break things. | > | If I use a fusible unfoldr that produces some numbers instead, that | > | issue goes away. 
Part of that problem (but not all of it) is that the | > | simplifier doesn't run to apply rules between specialization and full | > | laziness, so there's no opportunity for the specialization of | > | enumFromTo to Int to trigger the rewrite to a build form and fusion | > | with foldr before full laziness tears things apart. The other problem | > | is that inlining doesn't happen at all before full laziness, so | things | > | defined using foldr and/or build aren't actually exposed as such | until | > | afterwards. Therefore I decided to try adding a simplifier run with | > | inlining between specialization and floating out: | > | | > | runWhen do_specialise CoreDoSpecialising, | > | | > | runWhen full_laziness $ CoreDoSimplify max_iter | > | (base_mode { sm_phase = InitialPhase | > | , sm_names = ["PostGentle"] | > | , sm_rules = rules_on | > | , sm_inline = True | > | , sm_case_case = False }), | > | | > | runWhen full_laziness $ | > | CoreDoFloatOutwards FloatOutSwitches { | > | floatOutLambdas = Just 0, | > | floatOutConstants = True, | > | floatOutPartialApplications = False | }, | > | | > | The weird thing is that for some reason this doesn't inline ($), even | > | though it appears to be saturated. Using the modified thing with (my | > | version of) unfoldr: | > | | > | foo c n x = (foldr c n . takeWhile (/= (1::Int))) $ unfoldr (potato | 10) | > | (-9) | > | | > | potato :: Int -> Int -> Maybe (Int, Int) | > | potato n m | m <= n = Just (m, m) | > | | otherwise = Nothing | > | | > | | > | I get this out of the specializer: | > | | > | foo | > | foo = | > | \ @ t_a1Za @ c_a1Zb c_a1HT n_a1HU _ -> | > | $ (. 
(foldr c_a1HT n_a1HU) | > | (takeWhile | > | (let { | > | ds_s21z | > | ds_s21z = I# 1 } in | > | \ ds_d1Zw -> neInt ds_d1Zw ds_s21z))) | > | (let { | > | n_s21x | > | n_s21x = I# 10 } in | > | unfoldr | > | (\ m_a1U7 -> | > | case leInt m_a1U7 n_s21x of _ { | > | False -> Nothing; | > | True -> Just (m_a1U7, m_a1U7) | > | }) | > | ($fNumInt_$cnegate (I# 9))) | > | | > | | > | and then I get this out of my extra simplifier run: | > | | > | foo | > | foo = | > | \ @ t_a1Za @ c_a1Zb c_a1HT n_a1HU _ -> | > | $ (\ x_a20f -> | > | foldr | > | (\ x_a1HR r_a1HS -> | > | case case x_a1HR of _ { I# x_a20R -> | > | tagToEnum# | > | (case x_a20R of _ { | > | __DEFAULT -> 1; | > | 1 -> 0 | > | }) | > | } | > | of _ { | > | False -> n_a1HU; | > | True -> c_a1HT x_a1HR r_a1HS | > | }) | > | n_a1HU | > | x_a20f) | > | (let { | > | b'_a1ZS | > | b'_a1ZS = $fNumInt_$cnegate (I# 9) } in | > | $ (build) | > | (\ @ b1_a1ZU c_a1ZV n_a1ZW -> | > | letrec { | > | go_a1ZX | > | go_a1ZX = | > | \ b2_a1ZY -> | > | case case case b2_a1ZY of _ { I# x_a218 -> | > | tagToEnum# (<=# x_a218 10) | > | } | > | of _ { | > | False -> Nothing; | > | True -> Just (b2_a1ZY, b2_a1ZY) | > | } | > | of _ { | > | Nothing -> n_a1ZW; | > | Just ds_a203 -> | > | case ds_a203 of _ { (a1_a207, new_b_a208) -> | > | c_a1ZV a1_a207 (go_a1ZX new_b_a208) | > | } | > | }; } in | > | go_a1ZX b'_a1ZS)) | > | | > | | > | That is, neither the $ in the code nor the $ that was inserted when | > | inlining unfoldr got inlined themselves, even though both appear to | be | > | saturated. As a result, foldr/build doesn't fire, and full laziness | > | tears things apart. Later on, in simplifier phase 2, $ gets inlined. | > | What's preventing this from happening in the PostGentle phase I | added? 
| > | | > | David Feuer | > | _______________________________________________ | > | ghc-devs mailing list | > | ghc-devs at haskell.org | > | http://www.haskell.org/mailman/listinfo/ghc-devs _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://www.haskell.org/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Thu Aug 28 11:16:03 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 28 Aug 2014 11:16:03 +0000 Subject: Raft of optimiser changes Message-ID: <618BE556AADD624C9C918AA5D5911BEF221F48AA@DB3PRD3001MB020.064d.mgd.msft.net> I've just pushed a bunch of Core-to-Core optimisation changes that have been sitting in my tree for ages. The aggregate effect on nofib is very modest, but they are mostly aimed at corner cases, and consolidation. Program Size Allocs Runtime Elapsed TotalMem Min -7.2% -3.1% -7.8% -7.8% -14.8% Max +5.6% +1.3% +20.0% +19.7% +50.0% Geometric Mean -0.3% -0.1% +1.7% +1.7% +0.2% The runtime increases are spurious - I checked. A couple of perf/compiler tests (i.e. GHC's own performance) improve significantly, which is a good sign. I have a few more to come but wanted to get this lot out of my hair. Simon a1a400ed * Testsuite wibbles 39ccdf91 * White space only 6c6b001e * Remove dead lookup_dfun_id (merge-o) a0b2897e * Simple refactor of the case-of-case transform bb877266 * Performance changes 082e41b4 * Testsuite wibbles 1122857e * Run float-inwards immediately before the strictness analyser. 86a2ebf8 * Comments only 6d48ce29 * Make tidyProgram discard speculative specialisation rules fa582cc4 * Fix an egregious bug in the NonRec case of bindFreeVars b9e49d3e * Add -fspecialise-aggressively dce70957 * Compiler performance increases -- yay! 
a3e207f6 * More SPEC rules fire baa3c9a3 * Wibbles to "...plus N others" error message about instances in scope 99178c1f * Specialise monad functions, and make them INLINEABLE 2ef997b8 * Slightly improve fusion rules for 'take' 949ad67e * Don't float out (classop dict e1 e2) 34363330 * Move the Enum Word instance into GHC.Enum 4c03791f * Specialise Eq, Ord, Read, Show at Int, Char, String 9cf5906b * Make worker/wrapper work on INLINEABLE things 8f099374 * Make maybeUnfoldingTemplate respond to DFunUnfoldings 3af1adf9 * Kill unused setUnfoldingTemplate 6e0f6ede * Refactor unfoldings e9cd1d5e * Less voluminous output when printing continuations -------------- next part -------------- An HTML attachment was scrubbed... URL: From alan.zimm at gmail.com Thu Aug 28 14:00:13 2014 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Thu, 28 Aug 2014 16:00:13 +0200 Subject: GHC AST Annotations Message-ID: Now that the landmines have hopefully been cleared from the AST via [1] I would like to propose changing the location information in the AST. Right now the locations of syntactic markers such as do/let/where/in/of in the source are discarded from the AST, although they are retained in the rich token stream. The haskell-src-exts package deals with this by means of using the SrcSpanInfo data type [2] which contains the SrcSpan as per the current GHC Located type but also has a list of SrcSpan s for the syntactic markers, depending on the particular AST fragment being annotated. In addition, the annotation type is provided as a parameter to the AST, so that it can be changed as required, see [3]. The motivation for this change is then 1. Simplify the roundtripping and modification of source by explicitly capturing the missing location information for the syntactic markers. 2. Allow the annotation to be a parameter so that it can be replaced with a different one in tools, for example HaRe would include the tokens for the AST fragment leaves. 3. 
Aim for some level of compatibility with haskell-src-exts so that tools developed for it could be easily ported to GHC, for example exactprint [4]. I would like feedback as to whether this would be acceptable, or if the same goals should be achieved in a different way. Regards Alan [1] https://phabricator.haskell.org/D157 [2] http://hackage.haskell.org/package/haskell-src-exts-1.15.0.1/docs/Language-Haskell-Exts-SrcLoc.html#t:SrcSpanInfo [3] http://hackage.haskell.org/package/haskell-src-exts-1.15.0.1/docs/Language-Haskell-Exts-Annotated-Syntax.html#t:Annotated [4] http://hackage.haskell.org/package/haskell-src-exts-1.15.0.1/docs/Language-Haskell-Exts-Annotated-ExactPrint.html#v:exactPrint -------------- next part -------------- An HTML attachment was scrubbed... URL: From dan.doel at gmail.com Thu Aug 28 15:48:07 2014 From: dan.doel at gmail.com (Dan Doel) Date: Thu, 28 Aug 2014 11:48:07 -0400 Subject: Why isn't ($) inlining when I want? In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF221F3D8D@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF221F29B7@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF221F34E7@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF221F3D8D@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: Okay, so marking things as conlike will make GHC avoid floating them? I'm pretty sure that in most vector cases, this is a straight pessimization. There is no way to avoid the extra allocation of integers, because most intermediate vector types are unboxed, so the integer allocation will be performed regardless. Only boxed vectors might be an exception. On Thu, Aug 28, 2014 at 4:14 AM, Simon Peyton Jones wrote: > I remember doing some work on the "floating of constant lists" question. > > > > First, [1..n] turns into (enumFromTo 1 n), and if enumFromTo was > expensive, then sharing it might be a good plan. So GHC would have to know > that it was cheap. > > > > I did experiment with "cheapBuild"
see > https://ghc.haskell.org/trac/ghc/ticket/7206, but as you'll see there, > the results were equivocal. By duplicating the [1..n] we were allocating > two copies of (I# 4), (I# 5) etc, and that increased allocation and GC time. > > So it's unclear, in general, whether in these examples it is better to > share the [1..n] between all calls of 'loop', or to duplicate it. > > All that said, Dan's question of why X fuses and very-similar Y doesn't > was a surprise to me; I'll look into that. > > > Simon > > > > *From:* John Lato [mailto:jwlato at gmail.com] > *Sent:* 28 August 2014 00:17 > *To:* Dan Doel > *Cc:* Simon Peyton Jones; David Feuer; ghc-devs > > *Subject:* Re: Why isn't ($) inlining when I want? > > > > I sometimes think the solution is to make let-floating apply in fewer > cases. I'm not sure we ever want to float out intermediate lists, the cost > of creating them is very small relative to the memory consumption if they > do happen to get shared. > > > > My approach is typically to mark loop INLINE. This very often results in > the code I want (with vector, which I use more than lists), but it is a big > hammer to apply. > > > > John > > > > On Thu, Aug 28, 2014 at 5:56 AM, Dan Doel wrote: > > I think talking about inlining of $ may not be addressing the crux of > the problem here. > > The issue seems to be about functions like the one in the first message. > For instance: > > loop :: (Int -> Int) -> Int > > loop g = sum . map g $ [1..1000000] > > Suppose for argument that we have a fusion framework that would handle > this. The problem is that this does not actually turn into a loop over > integers, because the constant [1..1000000] gets floated out. It instead > builds a list/vector/whatever. > > By contrast, if we write: > > loop' :: Int > > loop' = sum . map (+1) $ [1..1000000] > > this does turn into a loop over integers, with no intermediate list. > Presumably this is due to there being no work to be saved ever by floating > the list out.
These are the examples people usually test fusion with. > > And if loop is small enough to inline, it turns out that the actual code > that gets run will be the same as loop', because everything will get > inlined and fused. But it is also possible to make loop big enough to not > inline, and then the floating will pessimize the overall code. > > So the core issue is that constant floating blocks some fusion > opportunities. It is trying to save the work of building the structure more > than once, but fusion can cause the structure to not be built at all. And > the floating happens before fusion can reasonably be expected to work. > > Can anything be done about this? > > I've verified that this kind of situation also affects vector. And it > seems to be an issue even if loop is written: > > loop g = sum (map g [1..1000000]) > > -- Dan > > > > On Wed, Aug 27, 2014 at 3:38 PM, Simon Peyton Jones > wrote: > > You'll have to do more detective work! In your dump I see "Inactive > unfolding $". So that's why it's not being inlined. That message comes > from CoreUnfold, line 941 or so. The Boolean active_unfolding is passed in > to callSiteInline from Simplify, line 1408 or so. It is generated by the > function activeUnfolding, defined in SimplUtils. > > But you have probably change the "CompilerPhase" data type, so I can't > guess what is happening. But if you just follow it through I'm sure you'll > find it. > > Simon > > > | -----Original Message----- > | From: David Feuer [mailto:david.feuer at gmail.com] > | Sent: 27 August 2014 17:22 > | To: Simon Peyton Jones > | Cc: ghc-devs > | Subject: Re: Why isn't ($) inlining when I want? > | > | I just ran that (results attached), and as far as I can tell, it > | doesn't even *consider* inlining ($) until phase 2. > | > | On Wed, Aug 27, 2014 at 4:03 AM, Simon Peyton Jones > | wrote: > | > It's hard to tell since you are using a modified compiler. Try running > | with -ddump-occur-anal -dverbose-core2core -ddump-inlinings. 
That will > | show you every inlining, whether failed or successful. You can see the > | attempt to inline ($) and there is some info with the output that may > | help to explain why it did or did not work. > | > > | > Try that > | > > | > Simon > | > > | > | -----Original Message----- > | > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of > | David > | > | Feuer > | > | Sent: 27 August 2014 04:50 > | > | To: ghc-devs; Carter Schonwald > | > | Subject: Why isn't ($) inlining when I want? > | > | > | > | tl;dr I added a simplifier run with inlining enabled between > | > | specialization and floating out. It seems incapable of inlining > | > | saturated applications of ($), and I can't figure out why. These are > | > | inlined later, when phase 2 runs. Am I running the simplifier wrong > | or > | > | something? > | > | > | > | > | > | I'm working on this simple little fusion pipeline: > | > | > | > | {-# INLINE takeWhile #-} > | > | takeWhile p xs = build builder > | > | where > | > | builder c n = foldr go n xs > | > | where > | > | go x r = if p x then x `c` r else n > | > | > | > | foo c n x = foldr c n . takeWhile (/= (1::Int)) $ [-9..10] > | > | > | > | There are some issues with the enumFrom definition that break things. > | > | If I use a fusible unfoldr that produces some numbers instead, that > | > | issue goes away. Part of that problem (but not all of it) is that the > | > | simplifier doesn't run to apply rules between specialization and full > | > | laziness, so there's no opportunity for the specialization of > | > | enumFromTo to Int to trigger the rewrite to a build form and fusion > | > | with foldr before full laziness tears things apart. The other problem > | > | is that inlining doesn't happen at all before full laziness, so > | things > | > | defined using foldr and/or build aren't actually exposed as such > | until > | > | afterwards. 
Therefore I decided to try adding a simplifier run with > | > | inlining between specialization and floating out: > | > | > | > | runWhen do_specialise CoreDoSpecialising, > | > | > | > | runWhen full_laziness $ CoreDoSimplify max_iter > | > | (base_mode { sm_phase = InitialPhase > | > | , sm_names = ["PostGentle"] > | > | , sm_rules = rules_on > | > | , sm_inline = True > | > | , sm_case_case = False }), > | > | > | > | runWhen full_laziness $ > | > | CoreDoFloatOutwards FloatOutSwitches { > | > | floatOutLambdas = Just 0, > | > | floatOutConstants = True, > | > | floatOutPartialApplications = False > | }, > | > | > | > | The weird thing is that for some reason this doesn't inline ($), even > | > | though it appears to be saturated. Using the modified thing with (my > | > | version of) unfoldr: > | > | > | > | foo c n x = (foldr c n . takeWhile (/= (1::Int))) $ unfoldr (potato > | 10) > | > | (-9) > | > | > | > | potato :: Int -> Int -> Maybe (Int, Int) > | > | potato n m | m <= n = Just (m, m) > | > | | otherwise = Nothing > | > | > | > | > | > | I get this out of the specializer: > | > | > | > | foo > | > | foo = > | > | \ @ t_a1Za @ c_a1Zb c_a1HT n_a1HU _ -> > | > | $ (. 
(foldr c_a1HT n_a1HU) > | > | (takeWhile > | > | (let { > | > | ds_s21z > | > | ds_s21z = I# 1 } in > | > | \ ds_d1Zw -> neInt ds_d1Zw ds_s21z))) > | > | (let { > | > | n_s21x > | > | n_s21x = I# 10 } in > | > | unfoldr > | > | (\ m_a1U7 -> > | > | case leInt m_a1U7 n_s21x of _ { > | > | False -> Nothing; > | > | True -> Just (m_a1U7, m_a1U7) > | > | }) > | > | ($fNumInt_$cnegate (I# 9))) > | > | > | > | > | > | and then I get this out of my extra simplifier run: > | > | > | > | foo > | > | foo = > | > | \ @ t_a1Za @ c_a1Zb c_a1HT n_a1HU _ -> > | > | $ (\ x_a20f -> > | > | foldr > | > | (\ x_a1HR r_a1HS -> > | > | case case x_a1HR of _ { I# x_a20R -> > | > | tagToEnum# > | > | (case x_a20R of _ { > | > | __DEFAULT -> 1; > | > | 1 -> 0 > | > | }) > | > | } > | > | of _ { > | > | False -> n_a1HU; > | > | True -> c_a1HT x_a1HR r_a1HS > | > | }) > | > | n_a1HU > | > | x_a20f) > | > | (let { > | > | b'_a1ZS > | > | b'_a1ZS = $fNumInt_$cnegate (I# 9) } in > | > | $ (build) > | > | (\ @ b1_a1ZU c_a1ZV n_a1ZW -> > | > | letrec { > | > | go_a1ZX > | > | go_a1ZX = > | > | \ b2_a1ZY -> > | > | case case case b2_a1ZY of _ { I# x_a218 -> > | > | tagToEnum# (<=# x_a218 10) > | > | } > | > | of _ { > | > | False -> Nothing; > | > | True -> Just (b2_a1ZY, b2_a1ZY) > | > | } > | > | of _ { > | > | Nothing -> n_a1ZW; > | > | Just ds_a203 -> > | > | case ds_a203 of _ { (a1_a207, new_b_a208) -> > | > | c_a1ZV a1_a207 (go_a1ZX new_b_a208) > | > | } > | > | }; } in > | > | go_a1ZX b'_a1ZS)) > | > | > | > | > | > | That is, neither the $ in the code nor the $ that was inserted when > | > | inlining unfoldr got inlined themselves, even though both appear to > | be > | > | saturated. As a result, foldr/build doesn't fire, and full laziness > | > | tears things apart. Later on, in simplifier phase 2, $ gets inlined. > | > | What's preventing this from happening in the PostGentle phase I > | added? 
> | > | > | > | David Feuer > | > | _______________________________________________ > | > | ghc-devs mailing list > | > | ghc-devs at haskell.org > | > | http://www.haskell.org/mailman/listinfo/ghc-devs > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Thu Aug 28 15:54:28 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 28 Aug 2014 15:54:28 +0000 Subject: GHC AST Annotations In-Reply-To: References: Message-ID: <618BE556AADD624C9C918AA5D5911BEF221F4F10@DB3PRD3001MB020.064d.mgd.msft.net> In general I'm fine with this direction of travel. Some specifics: * You'd have to be careful to document, for every data constructor in HsSyn, what the association is between the [SrcSpan] in the SrcSpanInfo and the "sub-entities" * Many of the sub-entities will have their own SrcSpanInfo wrapped around them, so there's some unhelpful duplication. Maybe you only want the SrcSpanInfo to list the [SrcSpan]s for the sub-entities (like the syntactic keywords) that do not show up as children in the syntax tree? Anyway do by all means create a GHC Trac wiki page to describe your proposed design, concretely. Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Alan & Kim Zimmerman Sent: 28 August 2014 15:00 To: ghc-devs at haskell.org Subject: GHC AST Annotations Now that the landmines have hopefully been cleared from the AST via [1], I would like to propose changing the location information in the AST. Right now the locations of syntactic markers such as do/let/where/in/of in the source are discarded from the AST, although they are retained in the rich token stream.
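For readers less familiar with the current setup, the point can be illustrated with a simplified stand-in for GHC's location wrapper (these are toy definitions for illustration only, not the real GHC types):

```haskell
-- Toy stand-ins for GHC's SrcSpan and Located (simplified; the real
-- definitions carry file names, unhelpful-span variants, etc.).
data SrcSpan = SrcSpan { startLine, startCol, endLine, endCol :: Int }
  deriving Show

data Located e = L SrcSpan e
  deriving Show

-- A node such as HsDo is wrapped as "L span node": one span covers the
-- whole expression, but nothing in the tree records where the keyword
-- "do" itself (or the layout braces and semicolons) appeared.
example :: Located String
example = L (SrcSpan 3 5 3 23) "HsDo ..."  -- span of the whole do-block
```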
The haskell-src-exts package deals with this by means of using the SrcSpanInfo data type [2], which contains the SrcSpan as per the current GHC Located type but also has a list of SrcSpans for the syntactic markers, depending on the particular AST fragment being annotated. In addition, the annotation type is provided as a parameter to the AST, so that it can be changed as required, see [3]. The motivation for this change is then: 1. Simplify the roundtripping and modification of source by explicitly capturing the missing location information for the syntactic markers. 2. Allow the annotation to be a parameter so that it can be replaced with a different one in tools, for example HaRe would include the tokens for the AST fragment leaves. 3. Aim for some level of compatibility with haskell-src-exts so that tools developed for it could be easily ported to GHC, for example exactprint [4]. I would like feedback as to whether this would be acceptable, or if the same goals should be achieved a different way. Regards Alan [1] https://phabricator.haskell.org/D157 [2] http://hackage.haskell.org/package/haskell-src-exts-1.15.0.1/docs/Language-Haskell-Exts-SrcLoc.html#t:SrcSpanInfo [3] http://hackage.haskell.org/package/haskell-src-exts-1.15.0.1/docs/Language-Haskell-Exts-Annotated-Syntax.html#t:Annotated [4] http://hackage.haskell.org/package/haskell-src-exts-1.15.0.1/docs/Language-Haskell-Exts-Annotated-ExactPrint.html#v:exactPrint -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Thu Aug 28 15:56:03 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 28 Aug 2014 15:56:03 +0000 Subject: Why isn't ($) inlining when I want?
In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF221F29B7@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF221F34E7@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF221F3D8D@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <618BE556AADD624C9C918AA5D5911BEF221F4F25@DB3PRD3001MB020.064d.mgd.msft.net> Actually the CONLIKE thing still allows them to float, but makes RULES continue to work even though they've been floated. See the user manual. From: Dan Doel [mailto:dan.doel at gmail.com] Sent: 28 August 2014 16:48 To: Simon Peyton Jones Cc: John Lato; David Feuer; ghc-devs Subject: Re: Why isn't ($) inlining when I want? Okay, so marking things as conlike will make GHC avoid floating them? I'm pretty sure that in most vector cases, this is a straight pessimization. There is no way to avoid the extra allocation of integers, because most intermediate vector types are unboxed, so the integer allocation will be performed regardless. Only boxed vectors might be an exception. On Thu, Aug 28, 2014 at 4:14 AM, Simon Peyton Jones > wrote: I remember doing some work on the "floating of constant lists" question. First, [1..n] turns into (enumFromTo 1 n), and if enumFromTo was expensive, then sharing it might be a good plan. So GHC would have to know that it was cheap. I did experiment with "cheapBuild" (see https://ghc.haskell.org/trac/ghc/ticket/7206), but as you'll see there, the results were equivocal. By duplicating the [1..n] we were allocating two copies of (I# 4), (I# 5) etc, and that increased allocation and GC time. So it's unclear, in general, whether in these examples it is better to share the [1..n] between all calls of 'loop', or to duplicate it. All that said, Dan's question of why X fuses and very-similar Y doesn't was a surprise to me; I'll look into that.
Simon From: John Lato [mailto:jwlato at gmail.com] Sent: 28 August 2014 00:17 To: Dan Doel Cc: Simon Peyton Jones; David Feuer; ghc-devs Subject: Re: Why isn't ($) inlining when I want? I sometimes think the solution is to make let-floating apply in fewer cases. I'm not sure we ever want to float out intermediate lists, the cost of creating them is very small relative to the memory consumption if they do happen to get shared. My approach is typically to mark loop INLINE. This very often results in the code I want (with vector, which I use more than lists), but it is a big hammer to apply. John On Thu, Aug 28, 2014 at 5:56 AM, Dan Doel > wrote: I think talking about inlining of $ may not be addressing the crux of the problem here. The issue seems to be about functions like the one in the first message. For instance: loop :: (Int -> Int) -> Int loop g = sum . map g $ [1..1000000] Suppose for argument that we have a fusion framework that would handle this. The problem is that this does not actually turn into a loop over integers, because the constant [1..1000000] gets floated out. It instead builds a list/vector/whatever. By contrast, if we write: loop' :: Int loop' = sum . map (+1) $ [1..1000000] this does turn into a loop over integers, with no intermediate list. Presumably this is due to there being no work to be saved ever by floating the list out. These are the examples people usually test fusion with. And if loop is small enough to inline, it turns out that the actual code that gets run will be the same as loop', because everything will get inlined and fused. But it is also possible to make loop big enough to not inline, and then the floating will pessimize the overall code. So the core issue is that constant floating blocks some fusion opportunities. It is trying to save the work of building the structure more than once, but fusion can cause the structure to not be built at all. And the floating happens before fusion can reasonably be expected to work. 
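The floating Dan describes can be sketched roughly as follows (`lvl` and `loopFloated` are invented names standing in for what full laziness produces; real Core output differs in detail):

```haskell
-- Before full laziness: the enumeration sits syntactically next to its
-- consumer, so a foldr/build-style fusion rule could still eliminate it.
loop :: (Int -> Int) -> Int
loop g = sum (map g [1 .. 1000000])

-- After full laziness (sketch): the constant list is floated to a
-- top-level binding so it is shared across calls of loop -- but it is
-- no longer adjacent to its consumer, so fusion cannot remove it, and
-- the million-element list really is built (and retained as a CAF).
lvl :: [Int]
lvl = [1 .. 1000000]

loopFloated :: (Int -> Int) -> Int
loopFloated g = sum (map g lvl)
```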
Can anything be done about this? I've verified that this kind of situation also affects vector. And it seems to be an issue even if loop is written: loop g = sum (map g [1..1000000]) -- Dan On Wed, Aug 27, 2014 at 3:38 PM, Simon Peyton Jones > wrote: You'll have to do more detective work! In your dump I see "Inactive unfolding $". So that's why it's not being inlined. That message comes from CoreUnfold, line 941 or so. The Boolean active_unfolding is passed in to callSiteInline from Simplify, line 1408 or so. It is generated by the function activeUnfolding, defined in SimplUtils. But you have probably change the "CompilerPhase" data type, so I can't guess what is happening. But if you just follow it through I'm sure you'll find it. Simon | -----Original Message----- | From: David Feuer [mailto:david.feuer at gmail.com] | Sent: 27 August 2014 17:22 | To: Simon Peyton Jones | Cc: ghc-devs | Subject: Re: Why isn't ($) inlining when I want? | | I just ran that (results attached), and as far as I can tell, it | doesn't even *consider* inlining ($) until phase 2. | | On Wed, Aug 27, 2014 at 4:03 AM, Simon Peyton Jones | > wrote: | > It's hard to tell since you are using a modified compiler. Try running | with -ddump-occur-anal -dverbose-core2core -ddump-inlinings. That will | show you every inlining, whether failed or successful. You can see the | attempt to inline ($) and there is some info with the output that may | help to explain why it did or did not work. | > | > Try that | > | > Simon | > | > | -----Original Message----- | > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of | David | > | Feuer | > | Sent: 27 August 2014 04:50 | > | To: ghc-devs; Carter Schonwald | > | Subject: Why isn't ($) inlining when I want? | > | | > | tl;dr I added a simplifier run with inlining enabled between | > | specialization and floating out. It seems incapable of inlining | > | saturated applications of ($), and I can't figure out why. 
These are | > | inlined later, when phase 2 runs. Am I running the simplifier wrong | or | > | something? | > | | > | | > | I'm working on this simple little fusion pipeline: | > | | > | {-# INLINE takeWhile #-} | > | takeWhile p xs = build builder | > | where | > | builder c n = foldr go n xs | > | where | > | go x r = if p x then x `c` r else n | > | | > | foo c n x = foldr c n . takeWhile (/= (1::Int)) $ [-9..10] | > | | > | There are some issues with the enumFrom definition that break things. | > | If I use a fusible unfoldr that produces some numbers instead, that | > | issue goes away. Part of that problem (but not all of it) is that the | > | simplifier doesn't run to apply rules between specialization and full | > | laziness, so there's no opportunity for the specialization of | > | enumFromTo to Int to trigger the rewrite to a build form and fusion | > | with foldr before full laziness tears things apart. The other problem | > | is that inlining doesn't happen at all before full laziness, so | things | > | defined using foldr and/or build aren't actually exposed as such | until | > | afterwards. Therefore I decided to try adding a simplifier run with | > | inlining between specialization and floating out: | > | | > | runWhen do_specialise CoreDoSpecialising, | > | | > | runWhen full_laziness $ CoreDoSimplify max_iter | > | (base_mode { sm_phase = InitialPhase | > | , sm_names = ["PostGentle"] | > | , sm_rules = rules_on | > | , sm_inline = True | > | , sm_case_case = False }), | > | | > | runWhen full_laziness $ | > | CoreDoFloatOutwards FloatOutSwitches { | > | floatOutLambdas = Just 0, | > | floatOutConstants = True, | > | floatOutPartialApplications = False | }, | > | | > | The weird thing is that for some reason this doesn't inline ($), even | > | though it appears to be saturated. Using the modified thing with (my | > | version of) unfoldr: | > | | > | foo c n x = (foldr c n . 
takeWhile (/= (1::Int))) $ unfoldr (potato | 10) | > | (-9) | > | | > | potato :: Int -> Int -> Maybe (Int, Int) | > | potato n m | m <= n = Just (m, m) | > | | otherwise = Nothing | > | | > | | > | I get this out of the specializer: | > | | > | foo | > | foo = | > | \ @ t_a1Za @ c_a1Zb c_a1HT n_a1HU _ -> | > | $ (. (foldr c_a1HT n_a1HU) | > | (takeWhile | > | (let { | > | ds_s21z | > | ds_s21z = I# 1 } in | > | \ ds_d1Zw -> neInt ds_d1Zw ds_s21z))) | > | (let { | > | n_s21x | > | n_s21x = I# 10 } in | > | unfoldr | > | (\ m_a1U7 -> | > | case leInt m_a1U7 n_s21x of _ { | > | False -> Nothing; | > | True -> Just (m_a1U7, m_a1U7) | > | }) | > | ($fNumInt_$cnegate (I# 9))) | > | | > | | > | and then I get this out of my extra simplifier run: | > | | > | foo | > | foo = | > | \ @ t_a1Za @ c_a1Zb c_a1HT n_a1HU _ -> | > | $ (\ x_a20f -> | > | foldr | > | (\ x_a1HR r_a1HS -> | > | case case x_a1HR of _ { I# x_a20R -> | > | tagToEnum# | > | (case x_a20R of _ { | > | __DEFAULT -> 1; | > | 1 -> 0 | > | }) | > | } | > | of _ { | > | False -> n_a1HU; | > | True -> c_a1HT x_a1HR r_a1HS | > | }) | > | n_a1HU | > | x_a20f) | > | (let { | > | b'_a1ZS | > | b'_a1ZS = $fNumInt_$cnegate (I# 9) } in | > | $ (build) | > | (\ @ b1_a1ZU c_a1ZV n_a1ZW -> | > | letrec { | > | go_a1ZX | > | go_a1ZX = | > | \ b2_a1ZY -> | > | case case case b2_a1ZY of _ { I# x_a218 -> | > | tagToEnum# (<=# x_a218 10) | > | } | > | of _ { | > | False -> Nothing; | > | True -> Just (b2_a1ZY, b2_a1ZY) | > | } | > | of _ { | > | Nothing -> n_a1ZW; | > | Just ds_a203 -> | > | case ds_a203 of _ { (a1_a207, new_b_a208) -> | > | c_a1ZV a1_a207 (go_a1ZX new_b_a208) | > | } | > | }; } in | > | go_a1ZX b'_a1ZS)) | > | | > | | > | That is, neither the $ in the code nor the $ that was inserted when | > | inlining unfoldr got inlined themselves, even though both appear to | be | > | saturated. As a result, foldr/build doesn't fire, and full laziness | > | tears things apart. 
Later on, in simplifier phase 2, $ gets inlined. | > | What's preventing this from happening in the PostGentle phase I | added? | > | | > | David Feuer | > | _______________________________________________ | > | ghc-devs mailing list | > | ghc-devs at haskell.org | > | http://www.haskell.org/mailman/listinfo/ghc-devs _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://www.haskell.org/mailman/listinfo/ghc-devs _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://www.haskell.org/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Thu Aug 28 16:34:51 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 28 Aug 2014 16:34:51 +0000 Subject: Haddock build fails Message-ID: <618BE556AADD624C9C918AA5D5911BEF221F504E@DB3PRD3001MB020.064d.mgd.msft.net> Phab tells me that I may have committed something that makes haddock fail to build (will teach me, again, to do a completely clean validate!). I'll look into this. Sorry Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.feuer at gmail.com Thu Aug 28 16:50:57 2014 From: david.feuer at gmail.com (David Feuer) Date: Thu, 28 Aug 2014 12:50:57 -0400 Subject: Why isn't (.) CONLIKE? Message-ID: Speaking of CONLIKE, I'd have expected (.) to be CONLIKE, since it looks much like a constructor. Would that be bad for some reason? Or is it already treated well enough not to need that? On Aug 28, 2014 11:56 AM, "Simon Peyton Jones" wrote: > Actually the CONLIKE thing still allows them to float, but makes RULES > continue to work even though they've been floated. See the user manual. > > > > *From:* Dan Doel [mailto:dan.doel at gmail.com] > *Sent:* 28 August 2014 16:48 > *To:* Simon Peyton Jones > *Cc:* John Lato; David Feuer; ghc-devs > *Subject:* Re: Why isn't ($) inlining when I want?
> > > > Okay, so marking things as conlike will make GHC avoid floating them? > > I'm pretty sure that in most vector cases, this is a straight > pessimization. There is no way to avoid the extra allocation of integers, > because most intermediate vector types are unboxed, so the integer > allocation will be performed regardless. Only boxed vectors might be an > exception. > > > > On Thu, Aug 28, 2014 at 4:14 AM, Simon Peyton Jones > wrote: > > I remember doing some work on the ?floating of constant lists? question. > > > > First, [1..n] turns into (enumFromTo 1 n), and if enumFromTo was > expensive, then sharing it might be a good plan. So GHC would have to know > that it was cheap. > > > > I did experiment with ?cheapBuild? see > https://ghc.haskell.org/trac/ghc/ticket/7206, but as you?ll see there, > the results were equivocal. By duplicating the [1..n] we were allocating > two copies of (I# 4), (I# 5) etc, and that increased allocation and GC time. > > > > So it?s unclear, in general, whether in these examples it is better to > share the [1..n] between all calls of ?loop?, or to duplicate it. > > > > All that said, Dan?s question of why X fuses and very-similar Y doesn?t > was a surprise to me; I?ll look into that. > > > Simon > > > > *From:* John Lato [mailto:jwlato at gmail.com] > *Sent:* 28 August 2014 00:17 > *To:* Dan Doel > *Cc:* Simon Peyton Jones; David Feuer; ghc-devs > > > *Subject:* Re: Why isn't ($) inlining when I want? > > > > I sometimes think the solution is to make let-floating apply in fewer > cases. I'm not sure we ever want to float out intermediate lists, the cost > of creating them is very small relative to the memory consumption if they > do happen to get shared. > > > > My approach is typically to mark loop INLINE. This very often results in > the code I want (with vector, which I use more than lists), but it is a big > hammer to apply. 
> > > > John > > > > On Thu, Aug 28, 2014 at 5:56 AM, Dan Doel wrote: > > I think talking about inlining of $ may not be addressing the crux of > the problem here. > > The issue seems to be about functions like the one in the first message. > For instance: > > loop :: (Int -> Int) -> Int > > loop g = sum . map g $ [1..1000000] > > Suppose for argument that we have a fusion framework that would handle > this. The problem is that this does not actually turn into a loop over > integers, because the constant [1..1000000] gets floated out. It instead > builds a list/vector/whatever. > > By contrast, if we write: > > loop' :: Int > > loop' = sum . map (+1) $ [1..1000000] > > this does turn into a loop over integers, with no intermediate list. > Presumably this is due to there being no work to be saved ever by floating > the list out. These are the examples people usually test fusion with. > > And if loop is small enough to inline, it turns out that the actual code > that gets run will be the same as loop', because everything will get > inlined and fused. But it is also possible to make loop big enough to not > inline, and then the floating will pessimize the overall code. > > So the core issue is that constant floating blocks some fusion > opportunities. It is trying to save the work of building the structure more > than once, but fusion can cause the structure to not be built at all. And > the floating happens before fusion can reasonably be expected to work. > > Can anything be done about this? > > I've verified that this kind of situation also affects vector. And it > seems to be an issue even if loop is written: > > loop g = sum (map g [1..1000000]) > > -- Dan > > > > On Wed, Aug 27, 2014 at 3:38 PM, Simon Peyton Jones > wrote: > > You'll have to do more detective work! In your dump I see "Inactive > unfolding $". So that's why it's not being inlined. That message comes > from CoreUnfold, line 941 or so. 
The Boolean active_unfolding is passed in > to callSiteInline from Simplify, line 1408 or so. It is generated by the > function activeUnfolding, defined in SimplUtils. > > But you have probably change the "CompilerPhase" data type, so I can't > guess what is happening. But if you just follow it through I'm sure you'll > find it. > > Simon > > > | -----Original Message----- > | From: David Feuer [mailto:david.feuer at gmail.com] > | Sent: 27 August 2014 17:22 > | To: Simon Peyton Jones > | Cc: ghc-devs > | Subject: Re: Why isn't ($) inlining when I want? > | > | I just ran that (results attached), and as far as I can tell, it > | doesn't even *consider* inlining ($) until phase 2. > | > | On Wed, Aug 27, 2014 at 4:03 AM, Simon Peyton Jones > | wrote: > | > It's hard to tell since you are using a modified compiler. Try running > | with -ddump-occur-anal -dverbose-core2core -ddump-inlinings. That will > | show you every inlining, whether failed or successful. You can see the > | attempt to inline ($) and there is some info with the output that may > | help to explain why it did or did not work. > | > > | > Try that > | > > | > Simon > | > > | > | -----Original Message----- > | > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of > | David > | > | Feuer > | > | Sent: 27 August 2014 04:50 > | > | To: ghc-devs; Carter Schonwald > | > | Subject: Why isn't ($) inlining when I want? > | > | > | > | tl;dr I added a simplifier run with inlining enabled between > | > | specialization and floating out. It seems incapable of inlining > | > | saturated applications of ($), and I can't figure out why. These are > | > | inlined later, when phase 2 runs. Am I running the simplifier wrong > | or > | > | something? 
> | > | > | > | > | > | I'm working on this simple little fusion pipeline: > | > | > | > | {-# INLINE takeWhile #-} > | > | takeWhile p xs = build builder > | > | where > | > | builder c n = foldr go n xs > | > | where > | > | go x r = if p x then x `c` r else n > | > | > | > | foo c n x = foldr c n . takeWhile (/= (1::Int)) $ [-9..10] > | > | > | > | There are some issues with the enumFrom definition that break things. > | > | If I use a fusible unfoldr that produces some numbers instead, that > | > | issue goes away. Part of that problem (but not all of it) is that the > | > | simplifier doesn't run to apply rules between specialization and full > | > | laziness, so there's no opportunity for the specialization of > | > | enumFromTo to Int to trigger the rewrite to a build form and fusion > | > | with foldr before full laziness tears things apart. The other problem > | > | is that inlining doesn't happen at all before full laziness, so > | things > | > | defined using foldr and/or build aren't actually exposed as such > | until > | > | afterwards. Therefore I decided to try adding a simplifier run with > | > | inlining between specialization and floating out: > | > | > | > | runWhen do_specialise CoreDoSpecialising, > | > | > | > | runWhen full_laziness $ CoreDoSimplify max_iter > | > | (base_mode { sm_phase = InitialPhase > | > | , sm_names = ["PostGentle"] > | > | , sm_rules = rules_on > | > | , sm_inline = True > | > | , sm_case_case = False }), > | > | > | > | runWhen full_laziness $ > | > | CoreDoFloatOutwards FloatOutSwitches { > | > | floatOutLambdas = Just 0, > | > | floatOutConstants = True, > | > | floatOutPartialApplications = False > | }, > | > | > | > | The weird thing is that for some reason this doesn't inline ($), even > | > | though it appears to be saturated. Using the modified thing with (my > | > | version of) unfoldr: > | > | > | > | foo c n x = (foldr c n . 
takeWhile (/= (1::Int))) $ unfoldr (potato > | 10) > | > | (-9) > | > | > | > | potato :: Int -> Int -> Maybe (Int, Int) > | > | potato n m | m <= n = Just (m, m) > | > | | otherwise = Nothing > | > | > | > | > | > | I get this out of the specializer: > | > | > | > | foo > | > | foo = > | > | \ @ t_a1Za @ c_a1Zb c_a1HT n_a1HU _ -> > | > | $ (. (foldr c_a1HT n_a1HU) > | > | (takeWhile > | > | (let { > | > | ds_s21z > | > | ds_s21z = I# 1 } in > | > | \ ds_d1Zw -> neInt ds_d1Zw ds_s21z))) > | > | (let { > | > | n_s21x > | > | n_s21x = I# 10 } in > | > | unfoldr > | > | (\ m_a1U7 -> > | > | case leInt m_a1U7 n_s21x of _ { > | > | False -> Nothing; > | > | True -> Just (m_a1U7, m_a1U7) > | > | }) > | > | ($fNumInt_$cnegate (I# 9))) > | > | > | > | > | > | and then I get this out of my extra simplifier run: > | > | > | > | foo > | > | foo = > | > | \ @ t_a1Za @ c_a1Zb c_a1HT n_a1HU _ -> > | > | $ (\ x_a20f -> > | > | foldr > | > | (\ x_a1HR r_a1HS -> > | > | case case x_a1HR of _ { I# x_a20R -> > | > | tagToEnum# > | > | (case x_a20R of _ { > | > | __DEFAULT -> 1; > | > | 1 -> 0 > | > | }) > | > | } > | > | of _ { > | > | False -> n_a1HU; > | > | True -> c_a1HT x_a1HR r_a1HS > | > | }) > | > | n_a1HU > | > | x_a20f) > | > | (let { > | > | b'_a1ZS > | > | b'_a1ZS = $fNumInt_$cnegate (I# 9) } in > | > | $ (build) > | > | (\ @ b1_a1ZU c_a1ZV n_a1ZW -> > | > | letrec { > | > | go_a1ZX > | > | go_a1ZX = > | > | \ b2_a1ZY -> > | > | case case case b2_a1ZY of _ { I# x_a218 -> > | > | tagToEnum# (<=# x_a218 10) > | > | } > | > | of _ { > | > | False -> Nothing; > | > | True -> Just (b2_a1ZY, b2_a1ZY) > | > | } > | > | of _ { > | > | Nothing -> n_a1ZW; > | > | Just ds_a203 -> > | > | case ds_a203 of _ { (a1_a207, new_b_a208) -> > | > | c_a1ZV a1_a207 (go_a1ZX new_b_a208) > | > | } > | > | }; } in > | > | go_a1ZX b'_a1ZS)) > | > | > | > | > | > | That is, neither the $ in the code nor the $ that was inserted when > | > | inlining unfoldr got inlined themselves, even though both 
appear to > | be > | > | saturated. As a result, foldr/build doesn't fire, and full laziness > | > | tears things apart. Later on, in simplifier phase 2, $ gets inlined. > | > | What's preventing this from happening in the PostGentle phase I > | added? > | > | > | > | David Feuer > | > | _______________________________________________ > | > | ghc-devs mailing list > | > | ghc-devs at haskell.org > | > | http://www.haskell.org/mailman/listinfo/ghc-devs > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Thu Aug 28 16:51:57 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 28 Aug 2014 16:51:57 +0000 Subject: Haddock build fails In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF221F504E@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF221F504E@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <618BE556AADD624C9C918AA5D5911BEF221F508C@DB3PRD3001MB020.064d.mgd.msft.net> I've pushed a temporary fix. From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Simon Peyton Jones Sent: 28 August 2014 17:35 To: ghc-devs Subject: Haddock build fails Phab tells me that I may have committed something that makes haddock fail to build (will teach me, again, to do a completely clean validate!). I'll look into this. Sorry Simon -------------- next part -------------- An HTML attachment was scrubbed...
URL: From eir at cis.upenn.edu Thu Aug 28 17:11:41 2014 From: eir at cis.upenn.edu (Richard Eisenberg) Date: Thu, 28 Aug 2014 13:11:41 -0400 Subject: GHC AST Annotations In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF221F4F10@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF221F4F10@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: For what it's worth, my thought is not to use SrcSpanInfo (which, to me, is the wrong way to slice the abstraction) but instead to add SrcSpan fields to the relevant nodes. For example: | HsDo SrcSpan -- of the word "do" BlockSrcSpans (HsStmtContext Name) -- The parameterisation is unimportant -- because in this context we never use -- the PatGuard or ParStmt variant [ExprLStmt id] -- "do":one or more stmts PostTcType -- Type of the whole expression ... data BlockSrcSpans = LayoutBlock Int -- the parameter is the indentation level ... -- stuff to track the appearance of any semicolons | BracesBlock ... -- stuff to track the braces and semicolons The way I understand it, the SrcSpanInfo proposal means that we would have lots of empty SrcSpanInfos, no? Most interior nodes don't need one, I think. Popping up a level, I do support the idea of including this info in the AST. Richard On Aug 28, 2014, at 11:54 AM, Simon Peyton Jones wrote: > In general I?m fine with this direction of travel. Some specifics: > > ? You?d have to be careful to document, for every data constructor in HsSyn, what the association between the [SrcSpan] in the SrcSpanInfo and the ?sub-entities? > ? Many of the sub-entities will have their own SrcSpanInfo wrapped around them, so there?s some unhelpful duplication. Maybe you only want the SrcSpanInfo to list the [SrcSpan]s for the sub-entities (like the syntactic keywords) that do not show up as children in the syntax tree? > Anyway do by all means create a GHC Trac wiki page to describe your proposed design, concretely. 
> > Simon > > From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Alan & Kim Zimmerman > Sent: 28 August 2014 15:00 > To: ghc-devs at haskell.org > Subject: GHC AST Annotations > > Now that the landmines have hopefully been cleared from the AST via [1] I would like to propose changing the location information in the AST. > > Right now the locations of syntactic markers such as do/let/where/in/of in the source are discarded from the AST, although they are retained in the rich token stream. > > The haskell-src-exts package deals with this by means of using the SrcSpanInfo data type [2] which contains the SrcSpan as per the current GHC Located type but also has a list of SrcSpan s for the syntactic markers, depending on the particular AST fragment being annotated. > > In addition, the annotation type is provided as a parameter to the AST, so that it can be changed as required, see [3]. > > The motivation for this change is then > > 1. Simplify the roundtripping and modification of source by explicitly capturing the missing location information for the syntactic markers. > > 2. Allow the annotation to be a parameter so that it can be replaced with a different one in tools, for example HaRe would include the tokens for the AST fragment leaves. > > 3. Aim for some level compatibility with haskell-src-exts so that tools developed for it could be easily ported to GHC, for example exactprint [4]. > > > > I would like feedback as to whether this would be acceptable, or if the same goals should be achieved a different way. 
>
> Regards
>
> Alan
>
> [1] https://phabricator.haskell.org/D157
> [2] http://hackage.haskell.org/package/haskell-src-exts-1.15.0.1/docs/Language-Haskell-Exts-SrcLoc.html#t:SrcSpanInfo
> [3] http://hackage.haskell.org/package/haskell-src-exts-1.15.0.1/docs/Language-Haskell-Exts-Annotated-Syntax.html#t:Annotated
> [4] http://hackage.haskell.org/package/haskell-src-exts-1.15.0.1/docs/Language-Haskell-Exts-Annotated-ExactPrint.html#v:exactPrint
>
> _______________________________________________
> ghc-devs mailing list
> ghc-devs at haskell.org
> http://www.haskell.org/mailman/listinfo/ghc-devs

From alan.zimm at gmail.com Thu Aug 28 18:34:34 2014
From: alan.zimm at gmail.com (Alan & Kim Zimmerman)
Date: Thu, 28 Aug 2014 20:34:34 +0200
Subject: GHC AST Annotations
In-Reply-To: 
References: <618BE556AADD624C9C918AA5D5911BEF221F4F10@DB3PRD3001MB020.064d.mgd.msft.net>
Message-ID: 

This does have the advantage of being explicit. I modelled the initial proposal on HSE as a proven solution, and I think that they were trying to keep it non-invasive, to allow both an annotated and a non-annotated AST.

I think the key question is whether it is acceptable to sprinkle this kind of information throughout the AST. For someone interested in source-to-source conversions (like me) this is great; others may find it intrusive.

The other question, which is probably orthogonal to this, is whether we want the annotation to be a parameter to the AST, which allows it to be overridden by various tools for various purposes, or fixed as in Richard's suggestion.

A parameterised annotation allows the annotations to be manipulated via something like the following, from HSE:

-- |AST nodes are annotated, and this class allows manipulation of the annotations.
class Functor ast => Annotated ast where

  -- |Retrieve the annotation of an AST node.
  ann :: ast l -> l

  -- |Change the annotation of an AST node. 
Note that only the annotation of the node itself is affected, and not -- the annotations of any child nodes. if all nodes in the AST tree are to be affected, use fmap. amap :: (l -> l) -> ast l -> ast l Alan On Thu, Aug 28, 2014 at 7:11 PM, Richard Eisenberg wrote: > For what it's worth, my thought is not to use SrcSpanInfo (which, to me, > is the wrong way to slice the abstraction) but instead to add SrcSpan > fields to the relevant nodes. For example: > > | HsDo SrcSpan -- of the word "do" > BlockSrcSpans > (HsStmtContext Name) -- The parameterisation is unimportant > -- because in this context we never > use > -- the PatGuard or ParStmt variant > [ExprLStmt id] -- "do":one or more stmts > PostTcType -- Type of the whole expression > > ... > > data BlockSrcSpans = LayoutBlock Int -- the parameter is the indentation > level > ... -- stuff to track the appearance of > any semicolons > | BracesBlock ... -- stuff to track the braces and > semicolons > > > The way I understand it, the SrcSpanInfo proposal means that we would have > lots of empty SrcSpanInfos, no? Most interior nodes don't need one, I think. > > Popping up a level, I do support the idea of including this info in the > AST. > > Richard > > On Aug 28, 2014, at 11:54 AM, Simon Peyton Jones > wrote: > > > In general I?m fine with this direction of travel. Some specifics: > > > > ? You?d have to be careful to document, for every data > constructor in HsSyn, what the association between the [SrcSpan] in the > SrcSpanInfo and the ?sub-entities? > > ? Many of the sub-entities will have their own SrcSpanInfo > wrapped around them, so there?s some unhelpful duplication. Maybe you only > want the SrcSpanInfo to list the [SrcSpan]s for the sub-entities (like the > syntactic keywords) that do not show up as children in the syntax tree? > > Anyway do by all means create a GHC Trac wiki page to describe your > proposed design, concretely. 
> > > > Simon > > > > From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Alan > & Kim Zimmerman > > Sent: 28 August 2014 15:00 > > To: ghc-devs at haskell.org > > Subject: GHC AST Annotations > > > > Now that the landmines have hopefully been cleared from the AST via [1] > I would like to propose changing the location information in the AST. > > > > Right now the locations of syntactic markers such as do/let/where/in/of > in the source are discarded from the AST, although they are retained in the > rich token stream. > > > > The haskell-src-exts package deals with this by means of using the > SrcSpanInfo data type [2] which contains the SrcSpan as per the current GHC > Located type but also has a list of SrcSpan s for the syntactic markers, > depending on the particular AST fragment being annotated. > > > > In addition, the annotation type is provided as a parameter to the AST, > so that it can be changed as required, see [3]. > > > > The motivation for this change is then > > > > 1. Simplify the roundtripping and modification of source by explicitly > capturing the missing location information for the syntactic markers. > > > > 2. Allow the annotation to be a parameter so that it can be replaced > with a different one in tools, for example HaRe would include the tokens > for the AST fragment leaves. > > > > 3. Aim for some level compatibility with haskell-src-exts so that tools > developed for it could be easily ported to GHC, for example exactprint [4]. > > > > > > > > I would like feedback as to whether this would be acceptable, or if the > same goals should be achieved a different way. 
> >
> > Regards
> >
> > Alan
> >
> > [1] https://phabricator.haskell.org/D157
> > [2] http://hackage.haskell.org/package/haskell-src-exts-1.15.0.1/docs/Language-Haskell-Exts-SrcLoc.html#t:SrcSpanInfo
> > [3] http://hackage.haskell.org/package/haskell-src-exts-1.15.0.1/docs/Language-Haskell-Exts-Annotated-Syntax.html#t:Annotated
> > [4] http://hackage.haskell.org/package/haskell-src-exts-1.15.0.1/docs/Language-Haskell-Exts-Annotated-ExactPrint.html#v:exactPrint
> >
> > _______________________________________________
> > ghc-devs mailing list
> > ghc-devs at haskell.org
> > http://www.haskell.org/mailman/listinfo/ghc-devs
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mail at joachim-breitner.de Thu Aug 28 19:15:32 2014
From: mail at joachim-breitner.de (Joachim Breitner)
Date: Thu, 28 Aug 2014 12:15:32 -0700
Subject: Contributing To Haskell talk
Message-ID: <1409253332.7761.1.camel@joachim-breitner.de>

Hi list,

you might know that I was talked into^W^Wvolunteered to hold the "Contributing to GHC"¹ talk at HIW in 9 days. I'll do it, but for some of the intended topics, I don't feel like the best person to do it on my own.

In particular, I was wondering if any of the active Phabricator proponents or users would be available to help me there, either during the preparation (which will happen at Göteborg; I'm currently distracted by the Debian Conference in Portland), or by performing a duet.

Thanks,
Joachim

¹ http://www.haskell.org/haskellwiki/HaskellImplementorsWorkshop/2014#Contributing_to_GHC

-- 
Joachim "nomeata" Breitner
  mail at joachim-breitner.de • http://www.joachim-breitner.de/
  Jabber: nomeata at joachim-breitner.de • GPG-Key: 0xF0FBF51F
  Debian Developer: nomeata at debian.org

-------------- next part --------------
A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From alan.zimm at gmail.com Thu Aug 28 19:32:18 2014 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Thu, 28 Aug 2014 21:32:18 +0200 Subject: GHC AST Annotations In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF221F4F10@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: I have started capturing the discussion here https://ghc.haskell.org/trac/ghc/wiki/GhcAstAnnotations. On Thu, Aug 28, 2014 at 8:34 PM, Alan & Kim Zimmerman wrote: > This does have the advantage of being explicit. I modelled the initial > proposal on HSE as a proven solution, and I think that they were trying to > keep it non-invasive, to allow both an annotated and non-annoted AST. > > I thiink the key question is whether it is acceptable to sprinkle this > kind of information throughout the AST. For someone interested in > source-to-source conversions (like me) this is great, others may find it > intrusive. > > The other question, which is probably orthogonal to this, is whether we > want the annotation to be a parameter to the AST, which allows it to be > overridden by various tools for various purposes, or fixed as in Richard's > suggestion. > > A parameterised annotation allows the annotations to be manipulated via > something like for HSE: > > -- |AST nodes are annotated, and this class allows manipulation of the > annotations. > class Functor ast => Annotated ast where > > -- |Retrieve the annotation of an AST node. > ann :: ast l -> l > > -- |Change the annotation of an AST node. Note that only the annotation > of the node itself is affected, and not > -- the annotations of any child nodes. if all nodes in the AST tree are > to be affected, use fmap. 
> amap :: (l -> l) -> ast l -> ast l > > Alan > > > On Thu, Aug 28, 2014 at 7:11 PM, Richard Eisenberg > wrote: > >> For what it's worth, my thought is not to use SrcSpanInfo (which, to me, >> is the wrong way to slice the abstraction) but instead to add SrcSpan >> fields to the relevant nodes. For example: >> >> | HsDo SrcSpan -- of the word "do" >> BlockSrcSpans >> (HsStmtContext Name) -- The parameterisation is >> unimportant >> -- because in this context we never >> use >> -- the PatGuard or ParStmt variant >> [ExprLStmt id] -- "do":one or more stmts >> PostTcType -- Type of the whole expression >> >> ... >> >> data BlockSrcSpans = LayoutBlock Int -- the parameter is the indentation >> level >> ... -- stuff to track the appearance of >> any semicolons >> | BracesBlock ... -- stuff to track the braces and >> semicolons >> >> >> The way I understand it, the SrcSpanInfo proposal means that we would >> have lots of empty SrcSpanInfos, no? Most interior nodes don't need one, I >> think. >> >> Popping up a level, I do support the idea of including this info in the >> AST. >> >> Richard >> >> On Aug 28, 2014, at 11:54 AM, Simon Peyton Jones >> wrote: >> >> > In general I?m fine with this direction of travel. Some specifics: >> > >> > ? You?d have to be careful to document, for every data >> constructor in HsSyn, what the association between the [SrcSpan] in the >> SrcSpanInfo and the ?sub-entities? >> > ? Many of the sub-entities will have their own SrcSpanInfo >> wrapped around them, so there?s some unhelpful duplication. Maybe you only >> want the SrcSpanInfo to list the [SrcSpan]s for the sub-entities (like the >> syntactic keywords) that do not show up as children in the syntax tree? >> > Anyway do by all means create a GHC Trac wiki page to describe your >> proposed design, concretely. 
>> > >> > Simon >> > >> > From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Alan >> & Kim Zimmerman >> > Sent: 28 August 2014 15:00 >> > To: ghc-devs at haskell.org >> > Subject: GHC AST Annotations >> > >> > Now that the landmines have hopefully been cleared from the AST via [1] >> I would like to propose changing the location information in the AST. >> > >> > Right now the locations of syntactic markers such as do/let/where/in/of >> in the source are discarded from the AST, although they are retained in the >> rich token stream. >> > >> > The haskell-src-exts package deals with this by means of using the >> SrcSpanInfo data type [2] which contains the SrcSpan as per the current GHC >> Located type but also has a list of SrcSpan s for the syntactic markers, >> depending on the particular AST fragment being annotated. >> > >> > In addition, the annotation type is provided as a parameter to the AST, >> so that it can be changed as required, see [3]. >> > >> > The motivation for this change is then >> > >> > 1. Simplify the roundtripping and modification of source by explicitly >> capturing the missing location information for the syntactic markers. >> > >> > 2. Allow the annotation to be a parameter so that it can be replaced >> with a different one in tools, for example HaRe would include the tokens >> for the AST fragment leaves. >> > >> > 3. Aim for some level compatibility with haskell-src-exts so that tools >> developed for it could be easily ported to GHC, for example exactprint [4]. >> > >> > >> > >> > I would like feedback as to whether this would be acceptable, or if the >> same goals should be achieved a different way. 
>> > >> > >> > >> > Regards >> > >> > Alan >> > >> > >> > >> > >> > [1] https://phabricator.haskell.org/D157 >> > >> > [2] >> http://hackage.haskell.org/package/haskell-src-exts-1.15.0.1/docs/Language-Haskell-Exts-SrcLoc.html#t:SrcSpanInfo >> > >> > [3] >> http://hackage.haskell.org/package/haskell-src-exts-1.15.0.1/docs/Language-Haskell-Exts-Annotated-Syntax.html#t:Annotated >> > >> > [4] >> http://hackage.haskell.org/package/haskell-src-exts-1.15.0.1/docs/Language-Haskell-Exts-Annotated-ExactPrint.html#v:exactPrint >> > >> > _______________________________________________ >> > ghc-devs mailing list >> > ghc-devs at haskell.org >> > http://www.haskell.org/mailman/listinfo/ghc-devs >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Thu Aug 28 20:38:45 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 28 Aug 2014 20:38:45 +0000 Subject: GHC AST Annotations In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF221F4F10@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <618BE556AADD624C9C918AA5D5911BEF221F53EE@DB3PRD3001MB020.064d.mgd.msft.net> I thiink the key question is whether it is acceptable to sprinkle this kind of information throughout the AST. For someone interested in source-to-source conversions (like me) this is great, others may find it intrusive. It?s probably not too bad if you use record syntax; thus | HsDo { hsdo_do_loc :: SrcSpan -- of the word "do" , hsdo_blocks :: BlockSrcSpans , hsdo_ctxt :: HsStmtContext Name , hsdo_stmts :: [ExprLStmt id] , hsdo_type :: PostTcType } Simon From: Alan & Kim Zimmerman [mailto:alan.zimm at gmail.com] Sent: 28 August 2014 19:35 To: Richard Eisenberg Cc: Simon Peyton Jones; ghc-devs at haskell.org Subject: Re: GHC AST Annotations This does have the advantage of being explicit. 
I modelled the initial proposal on HSE as a proven solution, and I think that they were trying to keep it non-invasive, to allow both an annotated and non-annoted AST. I thiink the key question is whether it is acceptable to sprinkle this kind of information throughout the AST. For someone interested in source-to-source conversions (like me) this is great, others may find it intrusive. The other question, which is probably orthogonal to this, is whether we want the annotation to be a parameter to the AST, which allows it to be overridden by various tools for various purposes, or fixed as in Richard's suggestion. A parameterised annotation allows the annotations to be manipulated via something like for HSE: -- |AST nodes are annotated, and this class allows manipulation of the annotations. class Functor ast => Annotated ast where -- |Retrieve the annotation of an AST node. ann :: ast l -> l -- |Change the annotation of an AST node. Note that only the annotation of the node itself is affected, and not -- the annotations of any child nodes. if all nodes in the AST tree are to be affected, use fmap. amap :: (l -> l) -> ast l -> ast l Alan On Thu, Aug 28, 2014 at 7:11 PM, Richard Eisenberg > wrote: For what it's worth, my thought is not to use SrcSpanInfo (which, to me, is the wrong way to slice the abstraction) but instead to add SrcSpan fields to the relevant nodes. For example: | HsDo SrcSpan -- of the word "do" BlockSrcSpans (HsStmtContext Name) -- The parameterisation is unimportant -- because in this context we never use -- the PatGuard or ParStmt variant [ExprLStmt id] -- "do":one or more stmts PostTcType -- Type of the whole expression ... data BlockSrcSpans = LayoutBlock Int -- the parameter is the indentation level ... -- stuff to track the appearance of any semicolons | BracesBlock ... -- stuff to track the braces and semicolons The way I understand it, the SrcSpanInfo proposal means that we would have lots of empty SrcSpanInfos, no? 
Most interior nodes don't need one, I think. Popping up a level, I do support the idea of including this info in the AST. Richard On Aug 28, 2014, at 11:54 AM, Simon Peyton Jones > wrote: > In general I?m fine with this direction of travel. Some specifics: > > ? You?d have to be careful to document, for every data constructor in HsSyn, what the association between the [SrcSpan] in the SrcSpanInfo and the ?sub-entities? > ? Many of the sub-entities will have their own SrcSpanInfo wrapped around them, so there?s some unhelpful duplication. Maybe you only want the SrcSpanInfo to list the [SrcSpan]s for the sub-entities (like the syntactic keywords) that do not show up as children in the syntax tree? > Anyway do by all means create a GHC Trac wiki page to describe your proposed design, concretely. > > Simon > > From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Alan & Kim Zimmerman > Sent: 28 August 2014 15:00 > To: ghc-devs at haskell.org > Subject: GHC AST Annotations > > Now that the landmines have hopefully been cleared from the AST via [1] I would like to propose changing the location information in the AST. > > Right now the locations of syntactic markers such as do/let/where/in/of in the source are discarded from the AST, although they are retained in the rich token stream. > > The haskell-src-exts package deals with this by means of using the SrcSpanInfo data type [2] which contains the SrcSpan as per the current GHC Located type but also has a list of SrcSpan s for the syntactic markers, depending on the particular AST fragment being annotated. > > In addition, the annotation type is provided as a parameter to the AST, so that it can be changed as required, see [3]. > > The motivation for this change is then > > 1. Simplify the roundtripping and modification of source by explicitly capturing the missing location information for the syntactic markers. > > 2. 
Allow the annotation to be a parameter so that it can be replaced with a different one in tools, for example HaRe would include the tokens for the AST fragment leaves. > > 3. Aim for some level compatibility with haskell-src-exts so that tools developed for it could be easily ported to GHC, for example exactprint [4]. > > > > I would like feedback as to whether this would be acceptable, or if the same goals should be achieved a different way. > > > > Regards > > Alan > > > > > [1] https://phabricator.haskell.org/D157 > > [2] http://hackage.haskell.org/package/haskell-src-exts-1.15.0.1/docs/Language-Haskell-Exts-SrcLoc.html#t:SrcSpanInfo > > [3] http://hackage.haskell.org/package/haskell-src-exts-1.15.0.1/docs/Language-Haskell-Exts-Annotated-Syntax.html#t:Annotated > > [4] http://hackage.haskell.org/package/haskell-src-exts-1.15.0.1/docs/Language-Haskell-Exts-Annotated-ExactPrint.html#v:exactPrint > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Thu Aug 28 20:42:14 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 28 Aug 2014 20:42:14 +0000 Subject: Why isn't (.) CONLIKE? In-Reply-To: References: Message-ID: <618BE556AADD624C9C918AA5D5911BEF221F5429@DB3PRD3001MB020.064d.mgd.msft.net> Maybe. But to use on the LHS of a rule (which would be the motivation, I assume) you?d also need to make sure it was not inlined in phase 2. Perhaps do-able, but you?d need some compelling examples to motivate Simon From: David Feuer [mailto:david.feuer at gmail.com] Sent: 28 August 2014 17:51 To: Simon Peyton Jones Cc: ghc-devs Subject: Why isn't (.) CONLIKE? Speaking of CONLIKE, I'd have expected (.) to be CONLIKE, since it looks much like a constructor. Would that be bad for some reason? Or is it already treated well enough not to need that? 
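[The interplay of CONLIKE, NOINLINE, and RULES that Simon alludes to can be sketched with a small, self-contained example. The function `wrap` and the rule below are hypothetical illustrations, not code from GHC's libraries: CONLIKE tells GHC it may duplicate a saturated call (e.g. back into a use site that let-floating pulled it out of) so the rule can still match, and NOINLINE keeps the call visible to the rule matcher, per Simon's point about not inlining too early.]

```haskell
module Main where

-- A cheap, constructor-like function. The CONLIKE modifier tells GHC
-- that duplicating a saturated call to 'wrap' is acceptable, so RULES
-- matching on 'wrap' can still fire after the call has been let-bound
-- or floated. NOINLINE keeps 'wrap' itself from disappearing before
-- the rule has had a chance to match.
{-# NOINLINE CONLIKE wrap #-}
wrap :: Int -> [Int]
wrap x = [x, x]

-- With -O, GHC may rewrite 'sum (wrap x)' to 'x + x', skipping the
-- intermediate list entirely; without -O the list is built as usual,
-- and either way the result is the same.
{-# RULES "sum/wrap" forall x. sum (wrap x) = x + x #-}

main :: IO ()
main = print (sum (wrap 21))
```

Compiled with or without -O, this prints the same result; the rule only changes how it is computed.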
On Aug 28, 2014 11:56 AM, "Simon Peyton Jones" > wrote: Actually the CONLIKE thing still allows them to float, but makes RULES continue to work even though they?ve been floated. See the user manual. From: Dan Doel [mailto:dan.doel at gmail.com] Sent: 28 August 2014 16:48 To: Simon Peyton Jones Cc: John Lato; David Feuer; ghc-devs Subject: Re: Why isn't ($) inlining when I want? Okay, so marking things as conlike will make GHC avoid floating them? I'm pretty sure that in most vector cases, this is a straight pessimization. There is no way to avoid the extra allocation of integers, because most intermediate vector types are unboxed, so the integer allocation will be performed regardless. Only boxed vectors might be an exception. On Thu, Aug 28, 2014 at 4:14 AM, Simon Peyton Jones > wrote: I remember doing some work on the ?floating of constant lists? question. First, [1..n] turns into (enumFromTo 1 n), and if enumFromTo was expensive, then sharing it might be a good plan. So GHC would have to know that it was cheap. I did experiment with ?cheapBuild? see https://ghc.haskell.org/trac/ghc/ticket/7206, but as you?ll see there, the results were equivocal. By duplicating the [1..n] we were allocating two copies of (I# 4), (I# 5) etc, and that increased allocation and GC time. So it?s unclear, in general, whether in these examples it is better to share the [1..n] between all calls of ?loop?, or to duplicate it. All that said, Dan?s question of why X fuses and very-similar Y doesn?t was a surprise to me; I?ll look into that. Simon From: John Lato [mailto:jwlato at gmail.com] Sent: 28 August 2014 00:17 To: Dan Doel Cc: Simon Peyton Jones; David Feuer; ghc-devs Subject: Re: Why isn't ($) inlining when I want? I sometimes think the solution is to make let-floating apply in fewer cases. I'm not sure we ever want to float out intermediate lists, the cost of creating them is very small relative to the memory consumption if they do happen to get shared. 
My approach is typically to mark loop INLINE. This very often results in the code I want (with vector, which I use more than lists), but it is a big hammer to apply. John On Thu, Aug 28, 2014 at 5:56 AM, Dan Doel > wrote: I think talking about inlining of $ may not be addressing the crux of the problem here. The issue seems to be about functions like the one in the first message. For instance: loop :: (Int -> Int) -> Int loop g = sum . map g $ [1..1000000] Suppose for argument that we have a fusion framework that would handle this. The problem is that this does not actually turn into a loop over integers, because the constant [1..1000000] gets floated out. It instead builds a list/vector/whatever. By contrast, if we write: loop' :: Int loop' = sum . map (+1) $ [1..1000000] this does turn into a loop over integers, with no intermediate list. Presumably this is due to there being no work to be saved ever by floating the list out. These are the examples people usually test fusion with. And if loop is small enough to inline, it turns out that the actual code that gets run will be the same as loop', because everything will get inlined and fused. But it is also possible to make loop big enough to not inline, and then the floating will pessimize the overall code. So the core issue is that constant floating blocks some fusion opportunities. It is trying to save the work of building the structure more than once, but fusion can cause the structure to not be built at all. And the floating happens before fusion can reasonably be expected to work. Can anything be done about this? I've verified that this kind of situation also affects vector. And it seems to be an issue even if loop is written: loop g = sum (map g [1..1000000]) -- Dan On Wed, Aug 27, 2014 at 3:38 PM, Simon Peyton Jones > wrote: You'll have to do more detective work! In your dump I see "Inactive unfolding $". So that's why it's not being inlined. That message comes from CoreUnfold, line 941 or so. 
The Boolean active_unfolding is passed in to callSiteInline from Simplify, line 1408 or so. It is generated by the function activeUnfolding, defined in SimplUtils. But you have probably change the "CompilerPhase" data type, so I can't guess what is happening. But if you just follow it through I'm sure you'll find it. Simon | -----Original Message----- | From: David Feuer [mailto:david.feuer at gmail.com] | Sent: 27 August 2014 17:22 | To: Simon Peyton Jones | Cc: ghc-devs | Subject: Re: Why isn't ($) inlining when I want? | | I just ran that (results attached), and as far as I can tell, it | doesn't even *consider* inlining ($) until phase 2. | | On Wed, Aug 27, 2014 at 4:03 AM, Simon Peyton Jones | > wrote: | > It's hard to tell since you are using a modified compiler. Try running | with -ddump-occur-anal -dverbose-core2core -ddump-inlinings. That will | show you every inlining, whether failed or successful. You can see the | attempt to inline ($) and there is some info with the output that may | help to explain why it did or did not work. | > | > Try that | > | > Simon | > | > | -----Original Message----- | > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of | David | > | Feuer | > | Sent: 27 August 2014 04:50 | > | To: ghc-devs; Carter Schonwald | > | Subject: Why isn't ($) inlining when I want? | > | | > | tl;dr I added a simplifier run with inlining enabled between | > | specialization and floating out. It seems incapable of inlining | > | saturated applications of ($), and I can't figure out why. These are | > | inlined later, when phase 2 runs. Am I running the simplifier wrong | or | > | something? | > | | > | | > | I'm working on this simple little fusion pipeline: | > | | > | {-# INLINE takeWhile #-} | > | takeWhile p xs = build builder | > | where | > | builder c n = foldr go n xs | > | where | > | go x r = if p x then x `c` r else n | > | | > | foo c n x = foldr c n . 
takeWhile (/= (1::Int)) $ [-9..10] | > | | > | There are some issues with the enumFrom definition that break things. | > | If I use a fusible unfoldr that produces some numbers instead, that | > | issue goes away. Part of that problem (but not all of it) is that the | > | simplifier doesn't run to apply rules between specialization and full | > | laziness, so there's no opportunity for the specialization of | > | enumFromTo to Int to trigger the rewrite to a build form and fusion | > | with foldr before full laziness tears things apart. The other problem | > | is that inlining doesn't happen at all before full laziness, so | things | > | defined using foldr and/or build aren't actually exposed as such | until | > | afterwards. Therefore I decided to try adding a simplifier run with | > | inlining between specialization and floating out: | > | | > | runWhen do_specialise CoreDoSpecialising, | > | | > | runWhen full_laziness $ CoreDoSimplify max_iter | > | (base_mode { sm_phase = InitialPhase | > | , sm_names = ["PostGentle"] | > | , sm_rules = rules_on | > | , sm_inline = True | > | , sm_case_case = False }), | > | | > | runWhen full_laziness $ | > | CoreDoFloatOutwards FloatOutSwitches { | > | floatOutLambdas = Just 0, | > | floatOutConstants = True, | > | floatOutPartialApplications = False | }, | > | | > | The weird thing is that for some reason this doesn't inline ($), even | > | though it appears to be saturated. Using the modified thing with (my | > | version of) unfoldr: | > | | > | foo c n x = (foldr c n . takeWhile (/= (1::Int))) $ unfoldr (potato | 10) | > | (-9) | > | | > | potato :: Int -> Int -> Maybe (Int, Int) | > | potato n m | m <= n = Just (m, m) | > | | otherwise = Nothing | > | | > | | > | I get this out of the specializer: | > | | > | foo | > | foo = | > | \ @ t_a1Za @ c_a1Zb c_a1HT n_a1HU _ -> | > | $ (. 
(foldr c_a1HT n_a1HU) | > | (takeWhile | > | (let { | > | ds_s21z | > | ds_s21z = I# 1 } in | > | \ ds_d1Zw -> neInt ds_d1Zw ds_s21z))) | > | (let { | > | n_s21x | > | n_s21x = I# 10 } in | > | unfoldr | > | (\ m_a1U7 -> | > | case leInt m_a1U7 n_s21x of _ { | > | False -> Nothing; | > | True -> Just (m_a1U7, m_a1U7) | > | }) | > | ($fNumInt_$cnegate (I# 9))) | > | | > | | > | and then I get this out of my extra simplifier run: | > | | > | foo | > | foo = | > | \ @ t_a1Za @ c_a1Zb c_a1HT n_a1HU _ -> | > | $ (\ x_a20f -> | > | foldr | > | (\ x_a1HR r_a1HS -> | > | case case x_a1HR of _ { I# x_a20R -> | > | tagToEnum# | > | (case x_a20R of _ { | > | __DEFAULT -> 1; | > | 1 -> 0 | > | }) | > | } | > | of _ { | > | False -> n_a1HU; | > | True -> c_a1HT x_a1HR r_a1HS | > | }) | > | n_a1HU | > | x_a20f) | > | (let { | > | b'_a1ZS | > | b'_a1ZS = $fNumInt_$cnegate (I# 9) } in | > | $ (build) | > | (\ @ b1_a1ZU c_a1ZV n_a1ZW -> | > | letrec { | > | go_a1ZX | > | go_a1ZX = | > | \ b2_a1ZY -> | > | case case case b2_a1ZY of _ { I# x_a218 -> | > | tagToEnum# (<=# x_a218 10) | > | } | > | of _ { | > | False -> Nothing; | > | True -> Just (b2_a1ZY, b2_a1ZY) | > | } | > | of _ { | > | Nothing -> n_a1ZW; | > | Just ds_a203 -> | > | case ds_a203 of _ { (a1_a207, new_b_a208) -> | > | c_a1ZV a1_a207 (go_a1ZX new_b_a208) | > | } | > | }; } in | > | go_a1ZX b'_a1ZS)) | > | | > | | > | That is, neither the $ in the code nor the $ that was inserted when | > | inlining unfoldr got inlined themselves, even though both appear to | be | > | saturated. As a result, foldr/build doesn't fire, and full laziness | > | tears things apart. Later on, in simplifier phase 2, $ gets inlined. | > | What's preventing this from happening in the PostGentle phase I | added? 
| > | | > | David Feuer | > | _______________________________________________ | > | ghc-devs mailing list | > | ghc-devs at haskell.org | > | http://www.haskell.org/mailman/listinfo/ghc-devs _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://www.haskell.org/mailman/listinfo/ghc-devs _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://www.haskell.org/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From ggreif at gmail.com Fri Aug 29 00:56:01 2014 From: ggreif at gmail.com (Gabor Greif) Date: Fri, 29 Aug 2014 02:56:01 +0200 Subject: clang warnings with unregisterised Message-ID: Devs, I have built an UNREGISTERISED GHC, and the C-compiler used behind the scenes is clang. Now I get literally millions of warnings of the below kind: +/var/folders/k9/fj_1d5h17m7c4gbyp2srqrhm0000gq/T/ghc11601_0/ghc11601_4.hc:688:1: + warning: attribute declaration must precede definition [-Wignored-attributes] +II_(s4Vv_closure); +^ + +/Users/ggreif/ghc-head/includes/Stg.h:213:63: + note: expanded from macro 'II_' +#define II_(X) static StgWordArray (X) GNU_ATTRIBUTE(aligned (8)) + ^ + +/Users/ggreif/ghc-head/includes/Stg.h:175:42: + note: expanded from macro 'GNU_ATTRIBUTE' +#define GNU_ATTRIBUTE(at) __attribute__((at)) + ^ + +/var/folders/k9/fj_1d5h17m7c4gbyp2srqrhm0000gq/T/ghc11601_0/ghc11601_4.hc:588:16: + note: previous definition is here +static StgWord s4Vv_closure[] = { + ^ It seems like the "II_" and "EI_" prototypes *follow* the real thing, and because clang is more picky with attribute placement, we get all those warnings. compiler/cmm/PprC.hs:pprExternDecl is the function that puts together the "II_(...)" and "EI_(...)", but where does the "static StgWord s4Vv_closure[] = {" come from? I just want to flip the order of their occurrence. 
Thanks, Gabor From david.feuer at gmail.com Fri Aug 29 05:10:37 2014 From: david.feuer at gmail.com (David Feuer) Date: Fri, 29 Aug 2014 01:10:37 -0400 Subject: Raft of optimizer changes Message-ID: On Thu, Aug 28, 2014 at 8:00 AM, simonpj wrote > I've just pushed a bunch of Core-to-Core optimisation changes that have been sitting in my tree for ages. The aggregate effect on nofib is very modest, but they are mostly aimed at corner cases, and consolidation. Thanks for trying to do that. Unfortunately, this seems to have introduced some other sort of corner case. Making reverse (and unfoldr, but I'm pretty sure that's unused and hence irrelevant) fusible now makes n-body allocate 1100% more. I haven't looked into why yet. David From slyich at gmail.com Fri Aug 29 05:45:07 2014 From: slyich at gmail.com (Sergei Trofimovich) Date: Fri, 29 Aug 2014 08:45:07 +0300 Subject: clang warnings with unregisterised In-Reply-To: References: Message-ID: <20140829084507.1c9936a5@sf> On Fri, 29 Aug 2014 02:56:01 +0200 Gabor Greif wrote: > Devs, > > I have built an UNREGISTERISED GHC, and the C-compiler used behind the > scenes is clang. 
Now I get literally millions of warnings of the below > kind: > > > +/var/folders/k9/fj_1d5h17m7c4gbyp2srqrhm0000gq/T/ghc11601_0/ghc11601_4.hc:688:1: > + warning: attribute declaration must precede definition > [-Wignored-attributes] > +II_(s4Vv_closure); > +^ > + > +/Users/ggreif/ghc-head/includes/Stg.h:213:63: > + note: expanded from macro 'II_' > +#define II_(X) static StgWordArray (X) GNU_ATTRIBUTE(aligned (8)) > + ^ > + > +/Users/ggreif/ghc-head/includes/Stg.h:175:42: > + note: expanded from macro 'GNU_ATTRIBUTE' > +#define GNU_ATTRIBUTE(at) __attribute__((at)) > + ^ > + > +/var/folders/k9/fj_1d5h17m7c4gbyp2srqrhm0000gq/T/ghc11601_0/ghc11601_4.hc:588:16: > + note: previous definition is here > +static StgWord s4Vv_closure[] = { > + ^ > > It seems like the "II_" and "EI_" prototypes *follow* the real thing, > and because clang is more picky with attribute placement, we get all > those warnings. They just occur many times in the source, thus not only before but also after definition. > compiler/cmm/PprC.hs:pprExternDecl is the function that puts together > the "II_(...)" and "EI_(...)", but where does the "static StgWord > s4Vv_closure[] = {" come from? pprWordArray :: CLabel -> [CmmStatic] -> SDoc > I just want to flip the order of their occurrence. I think it would be a good thing to split .hc file lifting all external and local declarations up (and print only unique ones). It should shrink .hc file size a bit and make it nicer to read. -- Sergei -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 181 bytes Desc: not available URL: From simonpj at microsoft.com Fri Aug 29 09:49:27 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 29 Aug 2014 09:49:27 +0000 Subject: Fusion In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF221CD20C@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF221E1B78@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <618BE556AADD624C9C918AA5D5911BEF221F5D02@DB3PRD3001MB020.064d.mgd.msft.net> I have added a section "Ticky-ticky quick start" to our ticky-ticky profiling page, to explain how I go about dealing with the problem you describe https://ghc.haskell.org/trac/ghc/wiki/Debugging/TickyTicky Simon | -----Original Message----- | From: David Feuer [mailto:david.feuer at gmail.com] | Sent: 20 August 2014 09:33 | To: Simon Peyton Jones | Subject: Re: Fusion | | I'll be happy to try to expand it with some examples. I'm wondering if | you could help me figure something out: the (simple) cons/build rule | we discussed, along with the similar cons/augment rule, | | "cons/build" forall (x::a) (g::forall b . (a->b->b)->b->b) . x : | build g = build (\c n -> c x (g c n)) | "cons/augment" forall (x::a) (g::forall b . (a->b->b)->b->b) | (xs::[a]) . x : augment g xs = augment (\c n -> c x (g c n)) xs | | somehow *increase* allocation substantially (11.7%) in the "event" | NoFib test, and also significantly (3.6%) in constraints, somewhat | (2.4%) in nucleic2 [remember this] and 1.4% in ansi. I am having a | heck of a time trying to figure out how to track these down, and | burning loads of time recompiling GHC over and over again. On the flip | side of things, the wang test reduces allocation by 45.8% if I use | these rules (both of which fire), but only when I also use -fsimple- | list-literals for nofib/spectral/hartel/nucleic2/Main.hs. | nucleic2 still performs a little more allocation. 
| | On Wed, Aug 20, 2014 at 2:56 AM, Simon Peyton Jones | wrote: | > Great start, thank you. Can I suggest that for each question you | give | > a concrete example? Otherwise only experts, who already know a lot | about | > rules, will understand the question, let alone the answer. I'd be | happy to | > take another look then. | > | > | > Simon | > | > | > | > From: David Feuer [mailto:david.feuer at gmail.com] | > Sent: 19 August 2014 23:30 | > To: Simon Peyton Jones | > Cc: Haskell Libraries | > Subject: Re: Fusion | > | > | > | > I've started a page at | > https://ghc.haskell.org/trac/ghc/wiki/FoldrBuildNotes | > Please feel free to add, correct, etc. | > | > On Aug 19, 2014 3:10 AM, "Simon Peyton Jones" | wrote: | > | > David | > | > You've been doing all this work on improving fusion, and you | probably | > have a very good idea now about how it works, and how GHC's | libraries | > use phases and RULES to achieve it. A kind of design pattern, if you | > like; tips and tricks. | > | > I wonder if you'd feel able to write a GHC wiki page describing what | > you have learned, with examples and explanation about why it is done | that way. | > If you did this, someone who follows in your footsteps wouldn't need | > to re-learn everything. And maybe someone will say "oh, there's one | > pattern you have omitted, here it is". | > | > Thanks | > | > Simon From simonpj at microsoft.com Fri Aug 29 10:04:14 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 29 Aug 2014 10:04:14 +0000 Subject: build fixes Message-ID: <618BE556AADD624C9C918AA5D5911BEF221F5D54@DB3PRD3001MB020.064d.mgd.msft.net> I've pushed patches that should finally fix the build. (including improved performance in the compiler itself!) Sorry about the breakage yesterday Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From gergo at erdi.hu Fri Aug 29 11:44:07 2014 From: gergo at erdi.hu (Dr.
ERDI Gergo) Date: Fri, 29 Aug 2014 19:44:07 +0800 (SGT) Subject: Does the 'stage=2' setting not work anymore in build.mk? Message-ID: Hi, I tried setting 'stage=2' in my mk/build.mk file, but the stage 1 compiler is still getting rebuilt (and of course this causes the stage 2 compiler to be rebuilt from scratch...). I am using BuildFlavour=devel2. What am I missing? Thanks, Gergo From ezyang at mit.edu Fri Aug 29 11:53:05 2014 From: ezyang at mit.edu (Edward Z. Yang) Date: Fri, 29 Aug 2014 12:53:05 +0100 Subject: Does the 'stage=2' setting not work anymore in build.mk? In-Reply-To: References: Message-ID: <1409313152-sup-5973@sabre> I don't see any relevant change in the last week. I'll give it a try and see if I can reproduce. Edward Excerpts from Dr. ERDI Gergo's message of 2014-08-29 12:44:07 +0100: > Hi, > > I tried setting 'stage=2' in my mk/build.mk file, but the stage 1 compiler > is still getting rebuilt (and of course this causes the stage 2 compiler > to be rebuilt from scratch...). I am using BuildFlavour=devel2. What am I > missing? > > Thanks, > Gergo From gergo at erdi.hu Fri Aug 29 12:01:44 2014 From: gergo at erdi.hu (Dr. ERDI Gergo) Date: Fri, 29 Aug 2014 20:01:44 +0800 (SGT) Subject: Does the 'stage=2' setting not work anymore in build.mk? In-Reply-To: <1409313152-sup-5973@sabre> References: <1409313152-sup-5973@sabre> Message-ID: On Fri, 29 Aug 2014, Edward Z. Yang wrote: > I don't see any relevant change in the last week. I'll give it a > try and see if I can reproduce. I don't think it's been working properly for me for months now. From ezyang at mit.edu Fri Aug 29 12:04:31 2014 From: ezyang at mit.edu (Edward Z. Yang) Date: Fri, 29 Aug 2014 13:04:31 +0100 Subject: Does the 'stage=2' setting not work anymore in build.mk? In-Reply-To: References: <1409313152-sup-5973@sabre> Message-ID: <1409313831-sup-1347@sabre> OK, it's definitely worked for me in that time. Do you have mk/are-validating.mk in your tree? Also do try on a fresh working copy. 
Edward Excerpts from Dr. ERDI Gergo's message of 2014-08-29 13:01:44 +0100: > On Fri, 29 Aug 2014, Edward Z. Yang wrote: > > > I don't see any relevant change in the last week. I'll give it a > > try and see if I can reproduce. > > I don't think it's been working properly for me for months now. From gergo at erdi.hu Fri Aug 29 12:14:04 2014 From: gergo at erdi.hu (Dr. ERDI Gergo) Date: Fri, 29 Aug 2014 20:14:04 +0800 (SGT) Subject: Does the 'stage=2' setting not work anymore in build.mk? In-Reply-To: <1409313831-sup-1347@sabre> References: <1409313152-sup-5973@sabre> <1409313831-sup-1347@sabre> Message-ID: On Fri, 29 Aug 2014, Edward Z. Yang wrote: > OK, it's definitely worked for me in that time. Do you have > mk/are-validating.mk in your tree? Also do try on a fresh working copy. Yes I do! Is that the remnant of some failed validation process? Should I remove it? From johan.tibell at gmail.com Fri Aug 29 12:15:26 2014 From: johan.tibell at gmail.com (Johan Tibell) Date: Fri, 29 Aug 2014 14:15:26 +0200 Subject: Does the 'stage=2' setting not work anymore in build.mk? In-Reply-To: References: <1409313152-sup-5973@sabre> <1409313831-sup-1347@sabre> Message-ID: I think make maintainer-clean removes it. I find these hidden pieces of state annoying. They have definitely tripped me up in the past. Perhaps we can have make clean remove it? On Fri, Aug 29, 2014 at 2:14 PM, Dr. ERDI Gergo wrote: > On Fri, 29 Aug 2014, Edward Z. Yang wrote: > > OK, it's definitely worked for me in that time. Do you have >> mk/are-validating.mk in your tree? Also do try on a fresh working copy. >> > > Yes I do! Is that the remnant of some failed validation process? Should I > remove it? > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ezyang at mit.edu Fri Aug 29 12:20:58 2014 From: ezyang at mit.edu (Edward Z. Yang) Date: Fri, 29 Aug 2014 13:20:58 +0100 Subject: Does the 'stage=2' setting not work anymore in build.mk? In-Reply-To: References: <1409313152-sup-5973@sabre> <1409313831-sup-1347@sabre> Message-ID: <1409314833-sup-2572@sabre> are-validating.mk flips GHC tree into "validating mode", which makes it ignore mk/build.mk. Remember to delete it when you are done validating. Edward Excerpts from Dr. ERDI Gergo's message of 2014-08-29 13:14:04 +0100: > On Fri, 29 Aug 2014, Edward Z. Yang wrote: > > > OK, it's definitely worked for me in that time. Do you have > > mk/are-validating.mk in your tree? Also do try on a fresh working copy. > > Yes I do! Is that the remnant of some failed validation process? Should I > remove it? From gergo at erdi.hu Fri Aug 29 12:23:38 2014 From: gergo at erdi.hu (Dr. ERDI Gergo) Date: Fri, 29 Aug 2014 20:23:38 +0800 (SGT) Subject: Does the 'stage=2' setting not work anymore in build.mk? In-Reply-To: References: <1409313152-sup-5973@sabre> <1409313831-sup-1347@sabre> Message-ID: On Fri, 29 Aug 2014, Johan Tibell wrote: > I think make maintainer-clean removes it. I find this hidden pieces of state annoying. > It has definitely tripped me up in the past. Perhaps we can have make clean remove it? I've removed it manually and now 'stage=2' works as it used to. Thanks both of you! From ezyang at mit.edu Fri Aug 29 12:56:39 2014 From: ezyang at mit.edu (Edward Z. 
Yang) Date: Fri, 29 Aug 2014 13:56:39 +0100 Subject: Moving Haddock *development* out of GHC tree In-Reply-To: <87ha1ca1nt.fsf@gmail.com> References: <53E45F2D.9000806@fuuzetsu.co.uk> <53EBE224.1060103@fuuzetsu.co.uk> <618BE556AADD624C9C918AA5D5911BEF221AE385@DB3PRD3001MB020.064d.mgd.msft.net> <53EF71E7.5090804@fuuzetsu.co.uk> <87ha1ca1nt.fsf@gmail.com> Message-ID: <1409316962-sup-4405@sabre> Hello Herbert, I think the pre-commit hook needs to be adjusted; I used to have push rights on master, but I cannot seem to push to ghc-head. Thanks, Edward Excerpts from Herbert Valerio Riedel's message of 2014-08-16 16:34:46 +0100: > On 2014-08-16 at 16:59:51 +0200, Mateusz Kowalczyk wrote: > > [...] > > > Herbert kindly updated the sync-all script that > > defaults to the new branch so I think we're covered. > > Minor correction: I did not touch the sync-all script at all. I merely > declared a default branch in the .gitmodules file: > > http://git.haskell.org/ghc.git/commitdiff/03a8003e5d3aec97b3a14b2d3c774aad43e0456e > From marek.28.93 at gmail.com Fri Aug 29 13:50:04 2014 From: marek.28.93 at gmail.com (Marek Wawrzos) Date: Fri, 29 Aug 2014 15:50:04 +0200 Subject: Problems with building GHC 7.8.3 on Windows Message-ID: Hello, I am trying to compile GHC 7.8.3 on Windows. I was following the instructions from the GHC wiki, but I have encountered errors during the make process. I have filed a bug report describing my issue: https://ghc.haskell.org/trac/ghc/ticket/9513 Does anyone have a working setup for building GHC and would be willing to share information on how to achieve it? -- Best regards, Marek Wawrzos -------------- next part -------------- An HTML attachment was scrubbed... URL: From ezyang at mit.edu Fri Aug 29 14:11:52 2014 From: ezyang at mit.edu (Edward Z.
Yang) Date: Fri, 29 Aug 2014 15:11:52 +0100 Subject: HEADS UP: full rebuild necessary Message-ID: <1409321491-sup-2316@sabre> Duncan has landed his changes to remove GHC's dep on Cabal, so you'll need to do a clean and full rebuild once you pull from master. Cheers, Edward From kyrab at mail.ru Fri Aug 29 15:59:08 2014 From: kyrab at mail.ru (kyra) Date: Fri, 29 Aug 2014 19:59:08 +0400 Subject: Problems with building GHC 7.8.3 on Windows In-Reply-To: References: Message-ID: <5400A34C.5030401@mail.ru> 1. I see you've set ticket's 'Architecture' field to be 'x86_64' while make output suggests you try to build 32-bit ghc. 2. You are using 'old' MSys which is known to be problematic when building GHC. It's much better to use MSYS2 now: https://ghc.haskell.org/trac/ghc/wiki/Building/Preparation/Windows/MSYS2. Also, remember, MSYS2 is only a *build environment*, so you can use 64-bit MSYS2 to build 32-bit GHC on 64-bit Windows. My experience is that 64-bit MSYS2 is more solid and stable than 32-bit MSYS2. And it's extremely important to remember you must *not* use msys2_shell.bat to start MSYS2 shell, only mingwXX_shell.bat (XX stands for 32 or 64) shall be used to start MSYS2 shell -- otherwise GHC make system would not recognize build triplet. Cheers, Kyra On 8/29/2014 17:50, Marek Wawrzos wrote: > Hello, > > I am trying to compile GHC 7.8.3 on Windows. I was following the > instructions from the GHC wiki, but I have encountered errors during > the make process. > > I have filed a bug report describing my issue: > https://ghc.haskell.org/trac/ghc/ticket/9513 > > Does anyone had working setup for building GHC and would be willing to > share information on how to achieve it? 
> > -- > Best regards, > Marek Wawrzos > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs From david.feuer at gmail.com Fri Aug 29 16:11:07 2014 From: david.feuer at gmail.com (David Feuer) Date: Fri, 29 Aug 2014 12:11:07 -0400 Subject: Fusion In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF221F5D02@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF221CD20C@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF221E1B78@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF221F5D02@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: Thank you! I will dig into using that ASAP now that I can understand the directions. On Aug 29, 2014 5:50 AM, "Simon Peyton Jones" wrote: > I have added a section "Ticky-ticky quick start" to our ticky-ticky > profiling page, to explain how I go about dealing with the problem you > describe > https://ghc.haskell.org/trac/ghc/wiki/Debugging/TickyTicky > > Simon > > | -----Original Message----- > | From: David Feuer [mailto:david.feuer at gmail.com] > | Sent: 20 August 2014 09:33 > | To: Simon Peyton Jones > | Subject: Re: Fusion > | > | I'll be happy to try to expand it with some examples. I'm wondering if > | you could help me figure something out: the (simple) cons/build rule > | we discussed, along with the similar cons/augment rule, > | > | "cons/build" forall (x::a) (g::forall b . (a->b->b)->b->b) . x : > | build g = build (\c n -> c x (g c n)) > | "cons/augment" forall (x::a) (g::forall b . (a->b->b)->b->b) > | (xs::[a]) . x : augment g xs = augment (\c n -> c x (g c n)) xs > | > | somehow *increase* allocation substantially (11.7%) in the "event" > | NoFib test, and also significantly (3.6%) in constraints, somewhat > | (2.4%) in nucleic2 [remember this] and 1.4% in ansi. 
I am having a > | heck of a time trying to figure out how to track these down, and > | burning loads of time recompiling GHC over and over again. On the flip > | side of things, the wang test reduces allocation by 45.8% if I use > | these rules (both of which fire), but only when I also use -fsimple- > | list-literals for nofib/spectral/hartel/nucleic2/Main.hs. > | nucleic2 still performs a little more allocation. > | > | On Wed, Aug 20, 2014 at 2:56 AM, Simon Peyton Jones > | wrote: > | > Great start, thank you. Can I suggest that for each question you > | give > | > a concrete example? Otherwise only experts, who already know a lot > | about > | > rules, will understand the question, let alone the answer. I'd be > | happy to > | > take another look then. > | > > | > > | > Simon > | > > | > > | > > | > From: David Feuer [mailto:david.feuer at gmail.com] > | > Sent: 19 August 2014 23:30 > | > To: Simon Peyton Jones > | > Cc: Haskell Libraries > | > Subject: Re: Fusion > | > > | > > | > > | > I've started a page at > | > https://ghc.haskell.org/trac/ghc/wiki/FoldrBuildNotes > | > Please feel free to add, correct, etc. > | > > | > On Aug 19, 2014 3:10 AM, "Simon Peyton Jones" > | wrote: > | > > | > David > | > > | > You've been doing all this work on improving fusion, and you > | probably > | > have a very good idea now about how it works, and how GHC's > | libraries > | > use phases and RULES to achieve it. A kind of design pattern, if you > | > like; tips and tricks. > | > > | > I wonder if you'd feel able to write a GHC wiki page describing what > | > you have learned, with examples and explanation about why it is done > | that way. > | > If you did this, someone who follows in your footsteps wouldn't need > | > to re-learn everything. And maybe someone will say "oh, there's one > | > pattern you have omitted, here it is". > | > > | > Thanks > | > > | > Simon > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From pali.gabor at gmail.com Fri Aug 29 16:17:47 2014 From: pali.gabor at gmail.com (=?UTF-8?B?UMOhbGkgR8OhYm9yIErDoW5vcw==?=) Date: Fri, 29 Aug 2014 18:17:47 +0200 Subject: HEADS UP: full rebuild necessary In-Reply-To: <1409321491-sup-2316@sabre> References: <1409321491-sup-2316@sabre> Message-ID: Hi Edward, 2014-08-29 16:11 GMT+02:00 Edward Z. Yang : > Duncan has landed his changes to remove GHC's dep on Cabal, so you'll > need to do a clean and full rebuild once you pull from master. Do you know if this is related: [..] libraries\bin-package-db\GHC\PackageDb.hs:264:11: Not in scope: `catchIO' Perhaps you meant one of these: `catch' (imported from Control.Exception), `catches' (imported from Control.Exception) libraries\bin-package-db\GHC\PackageDb.hs:269:34: Not in scope: `newFile' libraries\bin-package-db\GHC\PackageDb.hs:272:20: Not in scope: `throwIOIO' Perhaps you meant `throwIO' (imported from Control.Exception) libraries/bin-package-db/ghc.mk:3: recipe for target 'libraries/bin-package-db/dist-boot/build/GHC/PackageDb.o' failed make[1]: *** [libraries/bin-package-db/dist-boot/build/GHC/PackageDb.o] Error 1 Makefile:71: recipe for target 'all' failed For more information, please see my recent build attempt on Windows/x86_64 [1]. [1] http://haskell.inf.elte.hu/builders/windows-x86_64-head/9/10.html From pali.gabor at gmail.com Fri Aug 29 16:22:18 2014 From: pali.gabor at gmail.com (=?UTF-8?B?UMOhbGkgR8OhYm9yIErDoW5vcw==?=) Date: Fri, 29 Aug 2014 18:22:18 +0200 Subject: Problems with building GHC 7.8.3 on Windows In-Reply-To: <5400A34C.5030401@mail.ru> References: <5400A34C.5030401@mail.ru> Message-ID: 2014-08-29 17:59 GMT+02:00 kyra : > it's extremely > important to remember you must *not* use msys2_shell.bat to start MSYS2 > shell, only mingwXX_shell.bat (XX stands for 32 or 64) shall be used to > start MSYS2 shell -- otherwise GHC make system would not recognize build > triplet. 
For what it is worth -- to my experience --, one could build GHC successfully on Windows without using mingwXX_shell.bat. Only the MSYSTEM environment variable has to be set properly (to either MINGW32 or MINGW64). That is what mingwXX_shell.bat also does. From ezyang at mit.edu Fri Aug 29 16:23:20 2014 From: ezyang at mit.edu (Edward Z. Yang) Date: Fri, 29 Aug 2014 17:23:20 +0100 Subject: HEADS UP: full rebuild necessary In-Reply-To: References: <1409321491-sup-2316@sabre> Message-ID: <1409329362-sup-4628@sabre> Yes, this is a bug, it looks like the Windows code bitrotted. Could you go ahead and fix them (looks straightforward) and post your patch? Thanks, Edward Excerpts from P?li G?bor J?nos's message of 2014-08-29 17:17:47 +0100: > Hi Edward, > > 2014-08-29 16:11 GMT+02:00 Edward Z. Yang : > > Duncan has landed his changes to remove GHC's dep on Cabal, so you'll > > need to do a clean and full rebuild once you pull from master. > > Do you know if this is related: > > [..] > libraries\bin-package-db\GHC\PackageDb.hs:264:11: > Not in scope: `catchIO' > Perhaps you meant one of these: > `catch' (imported from Control.Exception), > `catches' (imported from Control.Exception) > libraries\bin-package-db\GHC\PackageDb.hs:269:34: > Not in scope: `newFile' > libraries\bin-package-db\GHC\PackageDb.hs:272:20: > Not in scope: `throwIOIO' > Perhaps you meant `throwIO' (imported from Control.Exception) > libraries/bin-package-db/ghc.mk:3: recipe for target > 'libraries/bin-package-db/dist-boot/build/GHC/PackageDb.o' failed > make[1]: *** [libraries/bin-package-db/dist-boot/build/GHC/PackageDb.o] Error 1 > Makefile:71: recipe for target 'all' failed > > For more information, please see my recent build attempt on Windows/x86_64 [1]. 
> > > [1] http://haskell.inf.elte.hu/builders/windows-x86_64-head/9/10.html From mail at joachim-breitner.de Fri Aug 29 16:53:17 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Fri, 29 Aug 2014 09:53:17 -0700 Subject: build fixes In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF221F5D54@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF221F5D54@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <1409331197.2288.3.camel@joachim-breitner.de> Hi, On Friday, 29.08.2014, at 10:04 +0000, Simon Peyton Jones wrote: > I've pushed patches that should finally fix the build. (including > improved performance in the compiler itself!) oh wow, what a huge number of patches. It'll take a few days for http://ghcspeed-nomeata.rhcloud.com/ to catch up. (And unfortunately the graphs will be less useful because of a bug† in codespeed, which orders commits by CommitDate, which is often wrong after rebasing. Maybe I should re-invent the wheel and write my own tool after all...) Greetings, Joachim † https://github.com/tobami/codespeed -- Joachim “nomeata” Breitner mail at joachim-breitner.de • http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de • GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From mail at joachim-breitner.de Fri Aug 29 16:54:51 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Fri, 29 Aug 2014 09:54:51 -0700 Subject: build fixes In-Reply-To: <1409331197.2288.3.camel@joachim-breitner.de> References: <618BE556AADD624C9C918AA5D5911BEF221F5D54@DB3PRD3001MB020.064d.mgd.msft.net> <1409331197.2288.3.camel@joachim-breitner.de> Message-ID: <1409331291.2288.4.camel@joachim-breitner.de> Sorry, On Friday, 29.08.2014, at 09:53 -0700, Joachim Breitner wrote: > †
https://github.com/tobami/codespeed should have been https://github.com/tobami/codespeed/issues/173 Greetings, Joachim -- Joachim “nomeata” Breitner mail at joachim-breitner.de • http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de • GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From pali.gabor at gmail.com Fri Aug 29 17:28:40 2014 From: pali.gabor at gmail.com (=?UTF-8?B?UMOhbGkgR8OhYm9yIErDoW5vcw==?=) Date: Fri, 29 Aug 2014 19:28:40 +0200 Subject: HEADS UP: full rebuild necessary In-Reply-To: <1409329362-sup-4628@sabre> References: <1409321491-sup-2316@sabre> <1409329362-sup-4628@sabre> Message-ID: 2014-08-29 18:23 GMT+02:00 Edward Z. Yang : > Could you go ahead and fix them [..] and post your patch? Sure. Please find it attached. -------------- next part -------------- diff --git a/libraries/bin-package-db/GHC/PackageDb.hs b/libraries/bin-package-db/GHC/PackageDb.hs index 5039a01..76fa697 100644 --- a/libraries/bin-package-db/GHC/PackageDb.hs +++ b/libraries/bin-package-db/GHC/PackageDb.hs @@ -261,15 +261,15 @@ writeFileAtomic targetPath content = do #if mingw32_HOST_OS || mingw32_TARGET_OS renameFile tmpPath targetPath -- If the targetPath exists then renameFile will fail - `catchIO` \err -> do + `catch` \err -> do exists <- doesFileExist targetPath if exists then do removeFile targetPath -- Big fat hairy race condition - renameFile newFile targetPath + renameFile tmpPath targetPath -- If the removeFile succeeds and the renameFile fails -- then we've lost the atomic property.
- else throwIOIO err + else throwIO (err :: IOException) #else renameFile tmpPath targetPath #endif From mainland at apeiron.net Fri Aug 29 17:29:44 2014 From: mainland at apeiron.net (Geoffrey Mainland) Date: Fri, 29 Aug 2014 13:29:44 -0400 Subject: I'm going to disable DPH until someone starts maintaining it In-Reply-To: References: <2F09BA4C-DADD-490F-9C29-D74BBD449A85@ouroborus.net> <53DF8F72.7080105@apeiron.net> Message-ID: <5400B888.4000508@apeiron.net> Hi Austin, I've pushed wip/dph-fix branches to the dph and ghc repos. dph is not in Phabricator, so I didn't submit anything for review. I think this is small enough that we can probably just merge it directly, but it would be nice to have DPH in Phabricator eventually. I only validated on Linux x64. Is there an easy way for me to validate on other platforms? Thanks, Geoff On 08/04/2014 10:07 AM, Austin Seipp wrote: > On Mon, Aug 4, 2014 at 8:49 AM, Geoffrey Mainland wrote: >> I have patches for DPH that let it work with vector 0.11 as of a few >> months ago. I would be happy to submit them via phabricator if that is >> agreeable (we have to coordinate with the import of vector 0.11 >> though...I can instead leave them in a wip branch for Austin to merge as >> he sees fit). I am also willing to commit some time to keep DPH at least >> working in its current state. > That would be quite nice if you could submit patches to get it to > work! Thanks so much. > > As we've moved to submodules, having our own forks is becoming less > palatable; we'd like to start tracking upstream closely, and having > people submit changes there first and foremost. This creates a bit of > a lag time between changes, but I think this is acceptable (and most > of our maintainers are quite responsive to GHC needs!) > > It's also great you're willing to help maintain DPH a bit - but based > on what Ben said, it seems like a significant rewrite will happen > eventually. 
> > Geoff, here's my proposal: > > 1) I'll disable DPH for right now, so it won't pop up during > ./validate. This will probably happen today. > 2) We can coordinate the update of vector to 0.11, making it track > the official master. (Perhaps an email thread or even Skype would > work) > 3) We can fix DPH at the same time. > 4) Afterwords, we can re-enable it for ./validate > > If you submit Phabricator patches, that would be fantastic - we can > add the DPH repository to Phabricator with little issue. > > In the long run, I think we should sync up with Ben and perhaps Simon > & Co to see what will happen long-term for the DPH libraries. > >> Geoff >> >> On 8/4/14 8:18 AM, Ben Lippmeier wrote: >>> On 4 Aug 2014, at 21:47 , Austin Seipp wrote: >>> >>>> Why? Because I'm afraid I just don't have any more patience for DPH, >>>> I'm tired of fixing it, and it takes up a lot of extra time to build, >>>> and time to maintain. >>> I'm not going to argue against cutting it lose. >>> >>> >>>> So - why are we still building it, exactly? >>> It can be a good stress test for the simplifier, especially the SpecConstr transform. The fact that it takes so long to build is part of the reason it's a good stress test. >>> >>> >>>> [1] And by 'speak up', I mean I'd like to see someone actively step >>>> forward address my concerns above in a decisive manner. With patches. >>> I thought that in the original conversation we agreed that if the DPH code became too much of a burden it was fine to switch it off and let it become unmaintained. I don't have time to maintain it anymore myself. >>> >>> The original DPH project has fractured into a few different research streams, none of which work directly with the implementation in GHC, or with the DPH libraries that are bundled with the GHC build. >>> >>> The short of it is that the array fusion mechanism implemented in DPH (based on stream fusion) is inadequate for the task. 
A few people are working on replacement fusion systems that aim to solve this problem, but merging this work back into DPH will entail an almost complete rewrite of the backend libraries. If the existing code has become a maintenance burden then it's fine to switch it off. >>> Sorry for the trouble. >>> Ben. >>> > > From kyrab at mail.ru Fri Aug 29 18:15:17 2014 From: kyrab at mail.ru (kyra) Date: Fri, 29 Aug 2014 22:15:17 +0400 Subject: Problems with building GHC 7.8.3 on Windows In-Reply-To: References: <5400A34C.5030401@mail.ru> Message-ID: <5400C335.2040704@mail.ru> Yup. This is the MSYSTEM env. variable which determines 'uname' output for client, see: http://www.haskell.org/pipermail/ghc-devs/2013-October/002920.html. Cheers, Kyra On 8/29/2014 20:22, Páli Gábor János wrote: > 2014-08-29 17:59 GMT+02:00 kyra : >> it's extremely >> important to remember you must *not* use msys2_shell.bat to start MSYS2 >> shell, only mingwXX_shell.bat (XX stands for 32 or 64) shall be used to >> start MSYS2 shell -- otherwise GHC make system would not recognize build >> triplet. > For what it is worth -- in my experience --, one could build GHC > successfully on Windows without using mingwXX_shell.bat. Only the > MSYSTEM environment variable has to be set properly (to either MINGW32 > or MINGW64). That is what mingwXX_shell.bat also does.
> _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > From ggreif at gmail.com Fri Aug 29 18:40:58 2014 From: ggreif at gmail.com (Gabor Greif) Date: Fri, 29 Aug 2014 20:40:58 +0200 Subject: build fixes In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF221F5D54@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF221F5D54@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: Phabricator complains about these two: Unexpected failures: ghc-api ghcApi [exit code non-0] (normal) simplCore/should_compile T6056 [stderr mismatch] (optasm) I doubt they are from me, as I only changed a (different) test and fixed some comments. Cheers, Gabor On 8/29/14, Simon Peyton Jones wrote: > I've pushed patches that should finally fix the build. (including improved > performance in the compiler itself!) > Sorry about the breakage yesterday > Simon > From simonpj at microsoft.com Fri Aug 29 20:29:48 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 29 Aug 2014 20:29:48 +0000 Subject: build fixes In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF221F5D54@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <618BE556AADD624C9C918AA5D5911BEF221F6C97@DB3PRD3001MB020.064d.mgd.msft.net> Hmm. Did not fail for me when I validated but I'll try again. On my laptop. S | -----Original Message----- | From: Gabor Greif [mailto:ggreif at gmail.com] | Sent: 29 August 2014 19:41 | To: Simon Peyton Jones | Cc: ghc-devs | Subject: Re: build fixes | | Phabricator complains about these two: | | Unexpected failures: | ghc-api ghcApi [exit code non-0] (normal) | simplCore/should_compile T6056 [stderr mismatch] (optasm) | | I doubt they are from me, as I only changed a (different) test and | fixed some comments. | | Cheers, | | Gabor | | On 8/29/14, Simon Peyton Jones wrote: | > I've pushed patches that should finally fix the build. 
(including | improved | > performance in the compiler itself!) | > Sorry about the breakage yesterday | > Simon | > From simonpj at microsoft.com Fri Aug 29 21:08:42 2014 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 29 Aug 2014 21:08:42 +0000 Subject: HEADS UP: full rebuild necessary In-Reply-To: References: <1409321491-sup-2316@sabre> <1409329362-sup-4628@sabre> Message-ID: <618BE556AADD624C9C918AA5D5911BEF221F6D20@DB3PRD3001MB020.064d.mgd.msft.net> Thank you! I've pushed it. Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Páli | Gábor János | Sent: 29 August 2014 18:29 | To: Edward Z. Yang | Cc: ghc-devs | Subject: Re: HEADS UP: full rebuild necessary | | 2014-08-29 18:23 GMT+02:00 Edward Z. Yang : | > Could you go ahead and fix them [..] and post your patch? | | Sure. Please find it attached. From ezyang at mit.edu Fri Aug 29 22:48:29 2014 From: ezyang at mit.edu (Edward Z. Yang) Date: Fri, 29 Aug 2014 23:48:29 +0100 Subject: build fixes In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF221F5D54@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <1409352482-sup-5883@sabre> The ghc-api one is a known issue, I've been meaning to fix it but haven't gotten around to it (the ghc-api test doesn't like being run in parallel.) Edward Excerpts from Gabor Greif's message of 2014-08-29 19:40:58 +0100: > Phabricator complains about these two: > > Unexpected failures: > ghc-api ghcApi [exit code non-0] (normal) > simplCore/should_compile T6056 [stderr mismatch] (optasm) > > I doubt they are from me, as I only changed a (different) test and > fixed some comments. > > Cheers, > > Gabor > > On 8/29/14, Simon Peyton Jones wrote: > > I've pushed patches that should finally fix the build. (including improved > > performance in the compiler itself!)
> > Sorry about the breakage yesterday > > Simon > > From david.feuer at gmail.com Sat Aug 30 01:44:03 2014 From: david.feuer at gmail.com (David Feuer) Date: Fri, 29 Aug 2014 21:44:03 -0400 Subject: Why isn't ($) inlining when I want? In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF221F4768@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF221F29B7@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF221F34E7@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF221F4768@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: On Thu, Aug 28, 2014 at 6:22 AM, Simon Peyton Jones wrote: > Oh, now I understand. In > > loop g = sum . map g $ [1..1000000] > > GHC can share [1..1000000] across all calls to loop, although that nixes > fusion. Because each call of loop may have a different g. > > But in > > loop' = sum . map (+1) $ [1..1000000] > > GHC can share (sum . map (+1) $ [1..1000000]) across all calls to loop', so it > can readily fuse the sum, map, and [1..n]. > > I hope that explains it. > > Simon To my mind, that's a great argument against full laziness. If I wanted to share [1..100000] across all calls to loop, I would surely write either giantList = [1..100000] loop g = sum . map g $ giantList or loop = \g -> sum . map g $ giantList where giantList = [1..100000] If we bump that list up to a few hundred megabytes, the floated version probably just destroyed our cache performance. If we bump it to a few gigabytes -- oops, we just ran out of memory.
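[Editorial aside: the floating being discussed can be sketched in plain Haskell. This is illustrative only -- GHC performs the transformation at the Core level, and the name lvl below is a stand-in for the compiler-generated binding, not actual GHC output.]

```haskell
module FullLazinessSketch where

-- What the programmer wrote: the list literal sits under the lambda,
-- so naively it would be rebuilt (and fused away) on each call.
loop :: (Int -> Int) -> Int
loop g = sum . map g $ [1..1000000]

-- Roughly what full laziness turns it into: the list does not mention
-- g, so it is floated to the top level and shared by every call of
-- loopFloated -- which also keeps the whole list alive and blocks
-- fusing it with the sum/map consumer.
lvl :: [Int]
lvl = [1..1000000]

loopFloated :: (Int -> Int) -> Int
loopFloated g = sum (map g lvl)
```

In the loop' case there is no free variable left at all, so the entire expression sum . map (+1) $ [1..1000000] is a constant: sharing and fusion are then compatible rather than in tension.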
David From mikolaj+haskell-lists at well-typed.com Sat Aug 30 07:17:58 2014 From: mikolaj+haskell-lists at well-typed.com (Mikolaj Konarski) Date: Sat, 30 Aug 2014 08:17:58 +0100 Subject: Can't install 32-bit ghc-7.8.1 on 64-bit xubuntu 14.04 In-Reply-To: References: <5347BA18.3040309@mail.ru> <5347BC83.7090807@centrum.cz> <5347C4EB.2020209@mail.ru> <07B85024-5F9F-41EC-8E8F-4C2393CAE94F@gmail.com> <5347CB38.10508@mail.ru> <5347E4F9.7080108@mail.ru> <5347EB51.6020208@mail.ru> Message-ID: <20140830071758.GA27513@mail.well-typed.com> The installation (and using the 32bit GHC on a 64bit Linux, to an extent) should now be possible, as described at https://ghc.haskell.org/trac/ghc/wiki/Building/Compiling32on64 On Fri, Apr 11, 2014 at 08:58:51AM -0500, Austin Seipp wrote: > Kyrill, > > I think that at the moment, you can't really install a 32-bit GHC on a > 64-bit platform. I've actually had a few reports 'in the wild' about > there being problems with this, but I'm not sure if there's actually > an official ticket regarding it. We should dig one up or file one if > there isn't. > > In theory I see no reason why this should not be doable, but I can't > imagine off the top of my head what might really be going wrong. > > Simon, do you perhaps have an idea? Or have you heard of this/tried it maybe? > > On Fri, Apr 11, 2014 at 8:17 AM, kyra wrote: > > Sorry for flood, but it turned out the problem remains. My previous message > > was a mistake. > > Now I've removed all GHC installations from paths but this does not help. > > Did anybody successfully install 32-bit ghc-7.8.1 on 64-bit linux? > > > > Regards, > > Kyra > > > > > > On 4/11/2014 16:50, kyra wrote: > >> > >> Don't bother. That was a usual 32/64-bit mess when installer picked up > >> something from 64-bit ghc, which was in a path. > >> > >> Cheers, > >> Kyra > >> > >> On 4/11/2014 15:00, kyra wrote: > >>> > >>> ia32-libs is absent on modern Ubuntus. 
> >>> > >>> But if anyone is interested installing lib32ncurses5 and libgmp10:i386 > >>> did the 'configure' trick. > >>> > >>> But now 'make install' fails with: > >>> "utils/ghc-cabal/dist-install/build/tmp/ghc-cabal-bindist" register > >>> libraries/ghc-prim dist-install > >>> "/home/awson/data/ghc-7.8.1-i386/lib/ghc-7.8.1/bin/ghc" > >>> "/home/awson/data/ghc-7.8.1-i386/lib/ghc-7.8.1/bin/ghc-pkg" > >>> "/home/awson/data/ghc-7.8.1-i386/lib/ghc-7.8.1" '' > >>> '/home/awson/data/ghc-7.8.1-i386' > >>> '/home/awson/data/ghc-7.8.1-i386/lib/ghc-7.8.1' > >>> '/home/awson/data/ghc-7.8.1-i386/share/doc/ghc/html/libraries' NO > >>> ghc-cabal: Bad interface file: dist-install/build/GHC/CString.hi > >>> magic number mismatch: old/corrupt interface file? (wanted 33214052, got > >>> 129742) > >>> > >>> Kyra > >> > >> > >> _______________________________________________ > >> ghc-devs mailing list > >> ghc-devs at haskell.org > >> http://www.haskell.org/mailman/listinfo/ghc-devs > >> > > > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://www.haskell.org/mailman/listinfo/ghc-devs > > > > > > -- > Regards, > > Austin Seipp, Haskell Consultant > Well-Typed LLP, http://www.well-typed.com/ > From slyich at gmail.com Sat Aug 30 10:38:32 2014 From: slyich at gmail.com (Sergei Trofimovich) Date: Sat, 30 Aug 2014 13:38:32 +0300 Subject: Raft of optimiser changes In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF221F48AA@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF221F48AA@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <20140830133832.3661ae70@sf> On Thu, 28 Aug 2014 11:16:03 +0000 Simon Peyton Jones wrote: > I've just pushed a bunch of Core-to-Core optimisation changes that have been sitting in my tree for ages. The aggregate effect on nofib is very modest, but they are mostly aimed at corner cases, and consolidation. 
> > Program          Size    Allocs   Runtime   Elapsed  TotalMem
> > Min             -7.2%    -3.1%     -7.8%     -7.8%    -14.8%
> > Max             +5.6%    +1.3%    +20.0%    +19.7%    +50.0%
> > Geometric Mean  -0.3%    -0.1%     +1.7%     +1.7%     +0.2%
> The runtime increases are spurious - I checked. > A couple of perf/compiler tests (i.e. GHC's own performance) improve significantly, which is a good sign. > I have a few more to come but wanted to get this lot out of my hair. Hello Simon! The compiler improvements look great! Although when running 'fulltest', one test caught a Core Lint error: > typecheck/should_compile T7891 [exit code non-0] (hpc,optasm,profasm,optllvm) It can be rerun as: $ make fulltest THREADS=12 TEST=T7891 The result of the optasm run: =====> T7891(optasm) 3365 of 4096 [0, 0, 0] cd ./typecheck/should_compile && '/home/slyfox/dev/git/ghc-validate/inplace/bin/ghc-stage2' -fforce-recomp -dcore-lint -dcmm-lint -dno-debug-output -no-user-package-db -rtsopts -fno-ghci-history -c T7891.hs -O -fasm -fno-warn-incomplete-patterns >T7891.comp.stderr 2>&1 Compile failed (status 256) errors were: *** Core Lint errors : in result of Simplifier *** : Warning: In the type 'a_12 -> t_aiE -> t_aiE' @ a_12 is out of scope (attached is its complete output) Thank you! -- Sergei -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: T7891-optasm-failure.txt URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 181 bytes Desc: not available URL: From alan.zimm at gmail.com Sat Aug 30 14:32:33 2014 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Sat, 30 Aug 2014 16:32:33 +0200 Subject: GHC AST Annotations In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF221F53EE@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF221F4F10@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF221F53EE@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: A further use case would be to be able to convert all the locations to be relative, or include a relative portion, so that as tools manipulate the AST by adding or removing parts the layout can be preserved. I think I may need to make a wip branch for this and experiment, it is always easier to comment on concrete things. Alan On Thu, Aug 28, 2014 at 10:38 PM, Simon Peyton Jones wrote: > I think the key question is whether it is acceptable to sprinkle this > kind of information throughout the AST. For someone interested in > source-to-source conversions (like me) this is great, others may find it > intrusive. > > It's probably not too bad if you use record syntax; thus > > | HsDo { hsdo_do_loc :: SrcSpan -- of the word "do" > > , hsdo_blocks :: BlockSrcSpans > > , hsdo_ctxt :: HsStmtContext Name > > , hsdo_stmts :: [ExprLStmt id] > > , hsdo_type :: PostTcType } > > > > Simon > > > > *From:* Alan & Kim Zimmerman [mailto:alan.zimm at gmail.com] > *Sent:* 28 August 2014 19:35 > *To:* Richard Eisenberg > *Cc:* Simon Peyton Jones; ghc-devs at haskell.org > *Subject:* Re: GHC AST Annotations > > > > This does have the advantage of being explicit. I modelled the initial > proposal on HSE as a proven solution, and I think that they were trying to > keep it non-invasive, to allow both an annotated and non-annotated AST. > > I think the key question is whether it is acceptable to sprinkle this > kind of information throughout the AST. For someone interested in
For someone interested in > source-to-source conversions (like me) this is great, others may find it > intrusive. > > The other question, which is probably orthogonal to this, is whether we > want the annotation to be a parameter to the AST, which allows it to be > overridden by various tools for various purposes, or fixed as in Richard's > suggestion. > > A parameterised annotation allows the annotations to be manipulated via > something like for HSE: > > -- |AST nodes are annotated, and this class allows manipulation of the > annotations. > class Functor ast => Annotated ast where > > -- |Retrieve the annotation of an AST node. > ann :: ast l -> l > > -- |Change the annotation of an AST node. Note that only the annotation > of the node itself is affected, and not > -- the annotations of any child nodes. if all nodes in the AST tree are > to be affected, use fmap. > > amap :: (l -> l) -> ast l -> ast l > > > > Alan > > > > On Thu, Aug 28, 2014 at 7:11 PM, Richard Eisenberg > wrote: > > For what it's worth, my thought is not to use SrcSpanInfo (which, to me, > is the wrong way to slice the abstraction) but instead to add SrcSpan > fields to the relevant nodes. For example: > > | HsDo SrcSpan -- of the word "do" > BlockSrcSpans > (HsStmtContext Name) -- The parameterisation is unimportant > -- because in this context we never > use > -- the PatGuard or ParStmt variant > [ExprLStmt id] -- "do":one or more stmts > PostTcType -- Type of the whole expression > > ... > > data BlockSrcSpans = LayoutBlock Int -- the parameter is the indentation > level > ... -- stuff to track the appearance of > any semicolons > | BracesBlock ... -- stuff to track the braces and > semicolons > > > The way I understand it, the SrcSpanInfo proposal means that we would have > lots of empty SrcSpanInfos, no? Most interior nodes don't need one, I think. > > Popping up a level, I do support the idea of including this info in the > AST. 
> > Richard > > > On Aug 28, 2014, at 11:54 AM, Simon Peyton Jones > wrote: > > > In general I?m fine with this direction of travel. Some specifics: > > > > ? You?d have to be careful to document, for every data > constructor in HsSyn, what the association between the [SrcSpan] in the > SrcSpanInfo and the ?sub-entities? > > ? Many of the sub-entities will have their own SrcSpanInfo > wrapped around them, so there?s some unhelpful duplication. Maybe you only > want the SrcSpanInfo to list the [SrcSpan]s for the sub-entities (like the > syntactic keywords) that do not show up as children in the syntax tree? > > Anyway do by all means create a GHC Trac wiki page to describe your > proposed design, concretely. > > > > Simon > > > > From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Alan > & Kim Zimmerman > > Sent: 28 August 2014 15:00 > > To: ghc-devs at haskell.org > > Subject: GHC AST Annotations > > > > Now that the landmines have hopefully been cleared from the AST via [1] > I would like to propose changing the location information in the AST. > > > > Right now the locations of syntactic markers such as do/let/where/in/of > in the source are discarded from the AST, although they are retained in the > rich token stream. > > > > The haskell-src-exts package deals with this by means of using the > SrcSpanInfo data type [2] which contains the SrcSpan as per the current GHC > Located type but also has a list of SrcSpan s for the syntactic markers, > depending on the particular AST fragment being annotated. > > > > In addition, the annotation type is provided as a parameter to the AST, > so that it can be changed as required, see [3]. > > > > The motivation for this change is then > > > > 1. Simplify the roundtripping and modification of source by explicitly > capturing the missing location information for the syntactic markers. > > > > 2. 
Allow the annotation to be a parameter so that it can be replaced > with a different one in tools, for example HaRe would include the tokens > for the AST fragment leaves. > > > > 3. Aim for some level of compatibility with haskell-src-exts so that tools > developed for it could be easily ported to GHC, for example exactprint [4]. > > > > > > > > I would like feedback as to whether this would be acceptable, or if the > same goals should be achieved a different way. > > > > > > > > Regards > > > > Alan > > > > > > > > > > [1] https://phabricator.haskell.org/D157 > > > > [2] > http://hackage.haskell.org/package/haskell-src-exts-1.15.0.1/docs/Language-Haskell-Exts-SrcLoc.html#t:SrcSpanInfo > > > > [3] > http://hackage.haskell.org/package/haskell-src-exts-1.15.0.1/docs/Language-Haskell-Exts-Annotated-Syntax.html#t:Annotated > > > > [4] > http://hackage.haskell.org/package/haskell-src-exts-1.15.0.1/docs/Language-Haskell-Exts-Annotated-ExactPrint.html#v:exactPrint > > > > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://www.haskell.org/mailman/listinfo/ghc-devs > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ndmitchell at gmail.com Sat Aug 30 21:18:03 2014 From: ndmitchell at gmail.com (Neil Mitchell) Date: Sat, 30 Aug 2014 22:18:03 +0100 Subject: GHC AST Annotations In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF221F4F10@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF221F53EE@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: Since Alan is trying to do something for HaRe that I want for HLint on top of haskell-src-exts, he asked me for my opinions on the proposal. There seem to be two approaches to take: * Add SrcSpan's throughout. The HSE approach of having a list of inner source spans is nasty - the details of which source span goes where are entirely undocumented and hard to discover.
Even worse, for things like instance, which may or may not have a where after, the number of inner SrcSpan's changes. Simon's idea of hsdo_do_loc is much cleaner, and easily extends to Maybe SrcSpan if the keyword is optional. * Having the annotation be a type parameter gives much greater flexibility. In particular, it would let you mark certain nodes as being added/deleted. However, since SrcSpan has an Int in it, you can always pass around a separate IntMap and make the SrcSpan really be an index into more detailed information. It's nasty, but only the people who use it pay for it. Both approaches have disadvantages. You could always combine both ideas, and have a SrcSpan and entirely separately an annotation (which defaults to (), rather than SrcSpanInfo), but maybe that's too much extra baggage on the AST. Thanks, Neil On Sat, Aug 30, 2014 at 3:32 PM, Alan & Kim Zimmerman wrote: > A further use case would be to be able to convert all the locations to be > relative, or include a relative portion, so that as tools manipulate the AST > by adding or removing parts the layout can be preserved. > > I think I may need to make a wip branch for this and experiment, it is > always easier to comment on concrete things. > > Alan > > > On Thu, Aug 28, 2014 at 10:38 PM, Simon Peyton Jones > wrote: >> >> I think the key question is whether it is acceptable to sprinkle this >> kind of information throughout the AST. For someone interested in >> source-to-source conversions (like me) this is great, others may find it >> intrusive.
>> >> It's probably not too bad if you use record syntax; thus >> >> | HsDo { hsdo_do_loc :: SrcSpan -- of the word "do" >> >> , hsdo_blocks :: BlockSrcSpans >> >> , hsdo_ctxt :: HsStmtContext Name >> >> , hsdo_stmts :: [ExprLStmt id] >> >> , hsdo_type :: PostTcType } >> >> >> >> Simon >> >> >> >> From: Alan & Kim Zimmerman [mailto:alan.zimm at gmail.com] >> Sent: 28 August 2014 19:35 >> To: Richard Eisenberg >> Cc: Simon Peyton Jones; ghc-devs at haskell.org >> Subject: Re: GHC AST Annotations >> >> >> >> This does have the advantage of being explicit. I modelled the initial >> proposal on HSE as a proven solution, and I think that they were trying to >> keep it non-invasive, to allow both an annotated and non-annotated AST. >> >> I think the key question is whether it is acceptable to sprinkle this >> kind of information throughout the AST. For someone interested in >> source-to-source conversions (like me) this is great, others may find it >> intrusive. >> >> The other question, which is probably orthogonal to this, is whether we >> want the annotation to be a parameter to the AST, which allows it to be >> overridden by various tools for various purposes, or fixed as in Richard's >> suggestion. >> >> A parameterised annotation allows the annotations to be manipulated via >> something like for HSE: >> >> -- |AST nodes are annotated, and this class allows manipulation of the >> annotations. >> class Functor ast => Annotated ast where >> >> -- |Retrieve the annotation of an AST node. >> ann :: ast l -> l >> >> -- |Change the annotation of an AST node. Note that only the annotation >> of the node itself is affected, and not >> -- the annotations of any child nodes. if all nodes in the AST tree are >> to be affected, use fmap.
>> >> amap :: (l -> l) -> ast l -> ast l >> >> >> >> Alan >> >> >> >> On Thu, Aug 28, 2014 at 7:11 PM, Richard Eisenberg >> wrote: >> >> For what it's worth, my thought is not to use SrcSpanInfo (which, to me, >> is the wrong way to slice the abstraction) but instead to add SrcSpan fields >> to the relevant nodes. For example: >> >> | HsDo SrcSpan -- of the word "do" >> BlockSrcSpans >> (HsStmtContext Name) -- The parameterisation is >> unimportant >> -- because in this context we never >> use >> -- the PatGuard or ParStmt variant >> [ExprLStmt id] -- "do":one or more stmts >> PostTcType -- Type of the whole expression >> >> ... >> >> data BlockSrcSpans = LayoutBlock Int -- the parameter is the indentation >> level >> ... -- stuff to track the appearance of >> any semicolons >> | BracesBlock ... -- stuff to track the braces and >> semicolons >> >> >> The way I understand it, the SrcSpanInfo proposal means that we would have >> lots of empty SrcSpanInfos, no? Most interior nodes don't need one, I think. >> >> Popping up a level, I do support the idea of including this info in the >> AST. >> >> Richard >> >> >> On Aug 28, 2014, at 11:54 AM, Simon Peyton Jones >> wrote: >> >> > In general I'm fine with this direction of travel. Some specifics: >> > >> > - You'd have to be careful to document, for every data >> > constructor in HsSyn, what the association between the [SrcSpan] in the >> > SrcSpanInfo and the 'sub-entities' >> > - Many of the sub-entities will have their own SrcSpanInfo >> > wrapped around them, so there's some unhelpful duplication. Maybe you only >> > want the SrcSpanInfo to list the [SrcSpan]s for the sub-entities (like the >> > syntactic keywords) that do not show up as children in the syntax tree? >> > Anyway do by all means create a GHC Trac wiki page to describe your >> > proposed design, concretely.
>> > >> > Simon >> > >> > From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Alan & >> > Kim Zimmerman >> > Sent: 28 August 2014 15:00 >> > To: ghc-devs at haskell.org >> > Subject: GHC AST Annotations >> > >> > Now that the landmines have hopefully been cleared from the AST via [1] >> > I would like to propose changing the location information in the AST. >> > >> > Right now the locations of syntactic markers such as do/let/where/in/of >> > in the source are discarded from the AST, although they are retained in the >> > rich token stream. >> > >> > The haskell-src-exts package deals with this by means of using the >> > SrcSpanInfo data type [2] which contains the SrcSpan as per the current GHC >> > Located type but also has a list of SrcSpan s for the syntactic markers, >> > depending on the particular AST fragment being annotated. >> > >> > In addition, the annotation type is provided as a parameter to the AST, >> > so that it can be changed as required, see [3]. >> > >> > The motivation for this change is then >> > >> > 1. Simplify the roundtripping and modification of source by explicitly >> > capturing the missing location information for the syntactic markers. >> > >> > 2. Allow the annotation to be a parameter so that it can be replaced >> > with a different one in tools, for example HaRe would include the tokens for >> > the AST fragment leaves. >> > >> > 3. Aim for some level of compatibility with haskell-src-exts so that tools >> > developed for it could be easily ported to GHC, for example exactprint [4]. >> > >> > >> > >> > I would like feedback as to whether this would be acceptable, or if the >> > same goals should be achieved a different way.
>> > >> > >> > Regards >> > >> > Alan >> > >> > >> > >> > >> > [1] https://phabricator.haskell.org/D157 >> > >> > [2] >> > http://hackage.haskell.org/package/haskell-src-exts-1.15.0.1/docs/Language-Haskell-Exts-SrcLoc.html#t:SrcSpanInfo >> > >> > [3] >> > http://hackage.haskell.org/package/haskell-src-exts-1.15.0.1/docs/Language-Haskell-Exts-Annotated-Syntax.html#t:Annotated >> > >> > [4] >> > http://hackage.haskell.org/package/haskell-src-exts-1.15.0.1/docs/Language-Haskell-Exts-Annotated-ExactPrint.html#v:exactPrint >> > >> >> > _______________________________________________ >> > ghc-devs mailing list >> > ghc-devs at haskell.org >> > http://www.haskell.org/mailman/listinfo/ghc-devs >> >> > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > From david.feuer at gmail.com Sat Aug 30 22:05:15 2014 From: david.feuer at gmail.com (David Feuer) Date: Sat, 30 Aug 2014 18:05:15 -0400 Subject: cons/build and making rules look boring Message-ID: I think I may have figured out at least part of the reason that cons/build gives bad results. I actually ran into a clue when working on scanl. It seems at least part of the problem is that a rule like x : build g = build (\c n -> c x (g c n)) makes (:) look "interesting" to the inliner. Unfortunately, as I discovered after much extreme puzzlement about why rules relating to scanl were affecting things that had nothing to do with scanl, it turns out that making (:) look interesting is really quite bad, and something that we probably never want to happen. As a result, the only ways I see to try to make rules like that work properly are 1. If constructors are *always* best treated as boring, and the inliner knows when it's looking at a constructor, make it treat them all as boring. 2. Offer a BORINGRULE annotation to indicate that the rule should not make its LHS "interesting", or 3.
(I don't like this option much) Make a special case forcing (:) in particular to be boring. David From mail at joachim-breitner.de Sun Aug 31 07:24:13 2014 From: mail at joachim-breitner.de (Joachim Breitner) Date: Sun, 31 Aug 2014 00:24:13 -0700 Subject: cons/build and making rules look boring In-Reply-To: References: Message-ID: <1409469853.3612.1.camel@joachim-breitner.de> Dear Sven, glad you are making progress! On Saturday, 30.08.2014, at 18:05 -0400, David Feuer wrote: > I think I may have figured out at least part of the reason that > cons/build gives bad results. I actually ran into a clue when working > on scanl. It seems at least part of the problem is that a rule like > > x : build g = build (\c n -> c x (g c n)) > > makes (:) look "interesting" to the inliner. I think that by now you know more about rules and the inliner than the average reader of ghc-devs, and not all of us know what it means if something is interesting to the inliner. So mostly out of curiosity: what happens with interesting things, and why is it bad for (:)? Greetings, Joachim -- Joachim 'nomeata' Breitner mail at joachim-breitner.de - http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de - GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From david.feuer at gmail.com Sun Aug 31 09:28:17 2014 From: david.feuer at gmail.com (David Feuer) Date: Sun, 31 Aug 2014 05:28:17 -0400 Subject: Trouble compiling fibon Message-ID: I'm trying to compile the fibon benchmark suite, but I'm getting a non-specific permission error. Can anyone give me a clue?
== make boot - --no-print-directory; in /home/dfeuer/src/ghc-slowmod/nofib/fibon/Hackage/Bzlib ------------------------------------------------------------------------ // Codec/Compression/BZip/Stream.hsc make[2]: execvp: //: Permission denied make[2]: *** [Codec/Compression/BZip/Stream.hs] Error 127 Failed making boot in Bzlib: 1
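[Editorial aside: the "execvp: //: Permission denied" pattern usually means make ended up with "//" as the program to run, for example when an unset tool variable leaves only path separators behind; whether that is the cause in this nofib build is only a guess. A minimal reproduction of the symptom:]

```shell
# "//" names the root directory, not an executable, so trying to run
# it fails in the same way the fibon build step does.
sh -c '// Codec/Compression/BZip/Stream.hsc' 2>/dev/null
if [ $? -ne 0 ]; then
  echo "running '//' fails as expected"
fi
```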