From asteen at gmail.com Thu Dec 1 11:21:36 2016 From: asteen at gmail.com (Adam Steen) Date: Thu, 1 Dec 2016 19:21:36 +0800 Subject: Compiling on OpenBSD-current Message-ID: Hi When compiling on OpenBSD-current I get the following error. What do I need to do to fix this? Cheers Adam ===--- building phase 0 gmake --no-print-directory -f ghc.mk phase=0 phase_0_builds gmake[1]: Nothing to be done for 'phase_0_builds'. ===--- building phase 1 gmake --no-print-directory -f ghc.mk phase=1 phase_1_builds "/usr/local/bin/ghc" -o utils/hsc2hs/dist/build/tmp/hsc2hs -hisuf hi -osuf o -hcsuf hc -static -O0 -H64m -Wall -package-db libraries/bootstrapping.conf -hide-all-packages -i -iutils/hsc2hs/. -iutils/hsc2hs/dist/build -Iutils/hsc2hs/dist/build -iutils/hsc2hs/dist/build/hsc2hs/autogen -Iutils/hsc2hs/dist/build/hsc2hs/autogen -optP-include -optPutils/hsc2hs/dist/build/hsc2hs/autogen/cabal_macros.h -package-id base-4.9.0.0 -package-id containers-0.5.7.1 -package-id directory-1.2.6.2 -package-id filepath-1.4.1.0 -package-id process-1.4.2.0 -XHaskell2010 -no-user-package-db -rtsopts -odir utils/hsc2hs/dist/build -hidir utils/hsc2hs/dist/build -stubdir utils/hsc2hs/dist/build -optl-z -optlwxneeded -static -O0 -H64m -Wall -package-db libraries/bootstrapping.conf -hide-all-packages -i -iutils/hsc2hs/. 
-iutils/hsc2hs/dist/build -Iutils/hsc2hs/dist/build -iutils/hsc2hs/dist/build/hsc2hs/autogen -Iutils/hsc2hs/dist/build/hsc2hs/autogen -optP-include -optPutils/hsc2hs/dist/build/hsc2hs/autogen/cabal_macros.h -package-id base-4.9.0.0 -package-id containers-0.5.7.1 -package-id directory-1.2.6.2 -package-id filepath-1.4.1.0 -package-id process-1.4.2.0 -XHaskell2010 -no-user-package-db -rtsopts utils/hsc2hs/dist/build/Main.o utils/hsc2hs/dist/build/C.o utils/hsc2hs/dist/build/Common.o utils/hsc2hs/dist/build/CrossCodegen.o utils/hsc2hs/dist/build/DirectCodegen.o utils/hsc2hs/dist/build/Flags.o utils/hsc2hs/dist/build/HSCParser.o utils/hsc2hs/dist/build/UtilsCodegen.o utils/hsc2hs/dist/build/Paths_hsc2hs.o : error: Warning: Couldn't figure out linker information! Make sure you're using GNU ld, GNU gold or the built in OS X linker, etc. cc: wxneeded: No such file or directory `cc' failed in phase `Linker'. (Exit code: 1) compiler/ghc.mk:580: compiler/stage1/build/.depend-v.haskell: No such file or directory gmake[1]: *** [utils/hsc2hs/ghc.mk:15: utils/hsc2hs/dist/build/tmp/hsc2hs] Error 1 gmake: *** [Makefile:125: all] Error 2 From karel.gardas at centrum.cz Thu Dec 1 11:58:28 2016 From: karel.gardas at centrum.cz (Karel Gardas) Date: Thu, 01 Dec 2016 12:58:28 +0100 Subject: Compiling on OpenBSD-current In-Reply-To: References: Message-ID: <58401064.50802@centrum.cz> I've been hit by this during 8.0.2 rc1 binary preparation, so if neither you nor anybody else finds time to fix it sooner, I'll hopefully find some time this weekend to have a look into it. I'm pretty sure this is fairly recent breakage on OpenBSD... Cheers, Karel On 12/ 1/16 12:21 PM, Adam Steen wrote: > Hi > > When compiling on OpenBSD-current I get the following error. What do I need > to do to fix this? 
> > Cheers > Adam > > ===--- building phase 0 > gmake --no-print-directory -f ghc.mk phase=0 phase_0_builds > gmake[1]: Nothing to be done for 'phase_0_builds'. > ===--- building phase 1 > gmake --no-print-directory -f ghc.mk phase=1 phase_1_builds > "/usr/local/bin/ghc" -o utils/hsc2hs/dist/build/tmp/hsc2hs -hisuf hi > -osuf o -hcsuf hc -static -O0 -H64m -Wall -package-db > libraries/bootstrapping.conf -hide-all-packages -i -iutils/hsc2hs/. > -iutils/hsc2hs/dist/build -Iutils/hsc2hs/dist/build > -iutils/hsc2hs/dist/build/hsc2hs/autogen > -Iutils/hsc2hs/dist/build/hsc2hs/autogen -optP-include > -optPutils/hsc2hs/dist/build/hsc2hs/autogen/cabal_macros.h -package-id > base-4.9.0.0 -package-id containers-0.5.7.1 -package-id > directory-1.2.6.2 -package-id filepath-1.4.1.0 -package-id > process-1.4.2.0 -XHaskell2010 -no-user-package-db -rtsopts -odir > utils/hsc2hs/dist/build -hidir utils/hsc2hs/dist/build -stubdir > utils/hsc2hs/dist/build -optl-z -optlwxneeded -static -O0 -H64m > -Wall -package-db libraries/bootstrapping.conf -hide-all-packages -i > -iutils/hsc2hs/. -iutils/hsc2hs/dist/build -Iutils/hsc2hs/dist/build > -iutils/hsc2hs/dist/build/hsc2hs/autogen > -Iutils/hsc2hs/dist/build/hsc2hs/autogen -optP-include > -optPutils/hsc2hs/dist/build/hsc2hs/autogen/cabal_macros.h -package-id > base-4.9.0.0 -package-id containers-0.5.7.1 -package-id > directory-1.2.6.2 -package-id filepath-1.4.1.0 -package-id > process-1.4.2.0 -XHaskell2010 -no-user-package-db -rtsopts > utils/hsc2hs/dist/build/Main.o utils/hsc2hs/dist/build/C.o > utils/hsc2hs/dist/build/Common.o utils/hsc2hs/dist/build/CrossCodegen.o > utils/hsc2hs/dist/build/DirectCodegen.o utils/hsc2hs/dist/build/Flags.o > utils/hsc2hs/dist/build/HSCParser.o > utils/hsc2hs/dist/build/UtilsCodegen.o > utils/hsc2hs/dist/build/Paths_hsc2hs.o > > : error: > Warning: Couldn't figure out linker information! > Make sure you're using GNU ld, GNU gold or the built in OS > X linker, etc. 
> cc: wxneeded: No such file or directory > `cc' failed in phase `Linker'. (Exit code: 1) > compiler/ghc.mk:580 : > compiler/stage1/build/.depend-v.haskell: No such file or directory > gmake[1]: *** [utils/hsc2hs/ghc.mk:15 : > utils/hsc2hs/dist/build/tmp/hsc2hs] Error 1 > gmake: *** [Makefile:125: all] Error 2 > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > From rwbarton at gmail.com Thu Dec 1 14:16:46 2016 From: rwbarton at gmail.com (Reid Barton) Date: Thu, 1 Dec 2016 09:16:46 -0500 Subject: Compiling on OpenBSD-current In-Reply-To: <58401064.50802@centrum.cz> References: <58401064.50802@centrum.cz> Message-ID: https://phabricator.haskell.org/D2673 is responsible. It adds CONF_LD_LINKER_OPTS_STAGE0 to $1_$2_$3_ALL_LD_OPTS, which is documented as "Options for passing to plain ld", which is okay. But just below that the same variable $1_$2_$3_ALL_LD_OPTS is added (with -optl prefixes attached) to $1_$2_$3_GHC_LD_OPTS ("Options for passing to GHC when we use it for linking"), which is wrong because GHC uses gcc to do the link, not ld. Regards, Reid Barton On Thu, Dec 1, 2016 at 6:58 AM, Karel Gardas wrote: > > I've been hit by this during 8.0.2 rc1 binary preparation so if nobody else > nor you find a time to fix that sooner I'll hopefully find some time during > this weekend to have a look into it. I'm pretty sure this is fairly recent > breakage on OpenBSD... > > Cheers, > Karel > > On 12/ 1/16 12:21 PM, Adam Steen wrote: >> >> Hi >> >> When Compiling on OpenBSD-Current I get the follow error, what do i need >> to do to fix this? >> >> Cheers >> Adam >> >> ===--- building phase 0 >> gmake --no-print-directory -f ghc.mk phase=0 >> phase_0_builds >> gmake[1]: Nothing to be done for 'phase_0_builds'. 
>> ===--- building phase 1 >> gmake --no-print-directory -f ghc.mk phase=1 >> phase_1_builds >> >> "/usr/local/bin/ghc" -o utils/hsc2hs/dist/build/tmp/hsc2hs -hisuf hi >> -osuf o -hcsuf hc -static -O0 -H64m -Wall -package-db >> libraries/bootstrapping.conf -hide-all-packages -i -iutils/hsc2hs/. >> -iutils/hsc2hs/dist/build -Iutils/hsc2hs/dist/build >> -iutils/hsc2hs/dist/build/hsc2hs/autogen >> -Iutils/hsc2hs/dist/build/hsc2hs/autogen -optP-include >> -optPutils/hsc2hs/dist/build/hsc2hs/autogen/cabal_macros.h -package-id >> base-4.9.0.0 -package-id containers-0.5.7.1 -package-id >> directory-1.2.6.2 -package-id filepath-1.4.1.0 -package-id >> process-1.4.2.0 -XHaskell2010 -no-user-package-db -rtsopts -odir >> utils/hsc2hs/dist/build -hidir utils/hsc2hs/dist/build -stubdir >> utils/hsc2hs/dist/build -optl-z -optlwxneeded -static -O0 -H64m >> -Wall -package-db libraries/bootstrapping.conf -hide-all-packages -i >> -iutils/hsc2hs/. -iutils/hsc2hs/dist/build -Iutils/hsc2hs/dist/build >> -iutils/hsc2hs/dist/build/hsc2hs/autogen >> -Iutils/hsc2hs/dist/build/hsc2hs/autogen -optP-include >> -optPutils/hsc2hs/dist/build/hsc2hs/autogen/cabal_macros.h -package-id >> base-4.9.0.0 -package-id containers-0.5.7.1 -package-id >> directory-1.2.6.2 -package-id filepath-1.4.1.0 -package-id >> process-1.4.2.0 -XHaskell2010 -no-user-package-db -rtsopts >> utils/hsc2hs/dist/build/Main.o utils/hsc2hs/dist/build/C.o >> utils/hsc2hs/dist/build/Common.o utils/hsc2hs/dist/build/CrossCodegen.o >> utils/hsc2hs/dist/build/DirectCodegen.o utils/hsc2hs/dist/build/Flags.o >> utils/hsc2hs/dist/build/HSCParser.o >> utils/hsc2hs/dist/build/UtilsCodegen.o >> utils/hsc2hs/dist/build/Paths_hsc2hs.o >> >> : error: >> Warning: Couldn't figure out linker information! >> Make sure you're using GNU ld, GNU gold or the built in OS >> X linker, etc. >> cc: wxneeded: No such file or directory >> `cc' failed in phase `Linker'. 
(Exit code: 1) >> compiler/ghc.mk:580 : >> compiler/stage1/build/.depend-v.haskell: No such file or directory >> gmake[1]: *** [utils/hsc2hs/ghc.mk:15 : >> utils/hsc2hs/dist/build/tmp/hsc2hs] Error 1 >> gmake: *** [Makefile:125: all] Error 2 >> >> >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From simonpj at microsoft.com Thu Dec 1 15:27:11 2016 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 1 Dec 2016 15:27:11 +0000 Subject: testsuite broken Message-ID: Yikes. I can’t run the testsuite on Linux (debian ? I think…). See below. I installed python3 by saying apt-get install python3 And indeed python3 --version Python 3.2.3 This is bad. Can anyone help? Simon PYTHON="python3" "python3" ../../driver/runtests.py -e ghc_compiler_always_flags="'-dcore-lint -dcmm-lint -no-user-package-db -rtsopts -fno-warn-missed-specialisations -fshow-warning-groups -dno-debug-output'" -e config.compiler_debugged=True -e ghc_with_native_codegen=1 -e config.have_vanilla=True -e config.have_dynamic=True -e config.have_profiling=False -e ghc_with_threaded_rts=1 -e ghc_with_dynamic_rts=1 -e config.have_interp=False -e config.unregisterised=False -e config.ghc_dynamic_by_default=False -e config.ghc_dynamic=False -e ghc_with_smp=1 -e ghc_with_llvm=0 -e windows=False -e darwin=False -e config.in_tree_compiler=True --threads=33 -e config.cleanup=True -e config.local=False --rootdir=. 
--configfile=../../config/ghc -e 'config.confdir="../../config"' -e 'config.platform="x86_64-unknown-linux"' -e 'config.os="linux"' -e 'config.arch="x86_64"' -e 'config.wordsize="64"' -e 'config.timeout=int() or config.timeout' -e 'config.exeext=""' -e 'config.top="/5playpen/simonpj/HEAD-4/testsuite"' --config 'compiler="/5playpen/simonpj/HEAD-4/inplace/test spaces/ghc-stage1"' --config 'ghc_pkg="/5playpen/simonpj/HEAD-4/inplace/test spaces/ghc-pkg"' --config 'haddock="/5playpen/simonpj/HEAD-4/inplace/test spaces/haddock"' --config 'hp2ps="/5playpen/simonpj/HEAD-4/inplace/test spaces/hp2ps"' --config 'hpc="/5playpen/simonpj/HEAD-4/inplace/test spaces/hpc"' --config 'gs="gs"' --config 'timeout_prog="../../timeout/install-inplace/bin/timeout"' -e "config.stage=1" --summary-file "../../../testsuite_summary_stage1.txt" --no-print-summary 1 \ \ \ \ \ \ -e config.speed="2" \ Traceback (most recent call last): File "../../driver/runtests.py", line 210, in from testlib import * File "/home/simonpj/code/HEAD-4/testsuite/driver/testlib.py", line 1286 f.write(u':set prog ' + name + u'\n') ^ SyntaxError: invalid syntax -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at smart-cactus.org Thu Dec 1 15:33:25 2016 From: ben at smart-cactus.org (Ben Gamari) Date: Thu, 01 Dec 2016 10:33:25 -0500 Subject: Compiling on OpenBSD-current In-Reply-To: References: <58401064.50802@centrum.cz> Message-ID: <87lgvzslyy.fsf@ben-laptop.smart-cactus.org> Reid Barton writes: > https://phabricator.haskell.org/D2673 is responsible. It adds > CONF_LD_LINKER_OPTS_STAGE0 to $1_$2_$3_ALL_LD_OPTS, which is > documented as "Options for passing to plain ld", which is okay. But > just below that the same variable $1_$2_$3_ALL_LD_OPTS is added (with > -optl prefixes attached) to $1_$2_$3_GHC_LD_OPTS ("Options for passing > to GHC when we use it for linking"), which is wrong because GHC uses > gcc to do the link, not ld. 
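Reid's distinction (options meant for plain ld versus options for a link that goes through the C compiler driver) is exactly what the error log shows: cc received the bare tokens `-z wxneeded` and tried to open `wxneeded` as an input file. The following small sketch illustrates the wrapping that ld-only options need before they reach a cc-driven link; the variable names are invented for illustration, not taken from GHC's build system:

```shell
# An option list written for plain ld (OpenBSD's W^X flag from the log above):
LD_OPTS="-z wxneeded"
# When the link is driven through cc/gcc, ld options must be wrapped in -Wl,
# (commas replace the internal spaces), or cc misparses the second token:
CC_LD_OPTS="-Wl,$(printf '%s' "$LD_OPTS" | tr ' ' ',')"
echo "$CC_LD_OPTS"   # -Wl,-z,wxneeded
```

In GHC's own flag vocabulary the same idea applies: `-optl` hands an option to the link step, and since GHC performs that link via cc/gcc, an ld-only spelling such as the `-optl-z -optlwxneeded` in the failing command would need the `-Wl,` wrapper (roughly `-optl-Wl,-z,wxneeded`) to survive the trip through the driver.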
> Great catch Reid and thanks for the report Adam! I have set up an OpenBSD VM and have reproduced the issue. Following Reid's logic I have proposed D2776 as a fix. Cheers, - Ben From lonetiger at gmail.com Thu Dec 1 15:49:22 2016 From: lonetiger at gmail.com (Phyx) Date: Thu, 01 Dec 2016 15:49:22 +0000 Subject: testsuite broken In-Reply-To: References: Message-ID: Bah, string handling in Python is a complete mess. In any case, we dropped support for Python 2, so we can remove the u prefixes. It seems that the u'' Unicode literal syntax was dropped in Python 3.0 and reintroduced in 3.3; we were all using 3.5 to test. To get you going again quickly, you can either revert the commit that made Python 3 the default and use Python 2, or open the Python files runtests.py, testlib.py, and testutil.py and do a global search-and-replace for u' and drop the u. I will fix it properly later tonight. On Thu, 1 Dec 2016, 15:27 Simon Peyton Jones via ghc-devs, < ghc-devs at haskell.org> wrote: > Yikes. I can’t run the testsuite on Linux (debian ? I think…). See below. > > I installed python3 by saying > > apt-get install python3 > > And indeed > > python3 --version > > Python 3.2.3 > > This is bad. Can anyone help? 
> > > > Simon > > > > PYTHON="python3" "python3" ../../driver/runtests.py -e > ghc_compiler_always_flags="'-dcore-lint -dcmm-lint -no-user-package-db > -rtsopts -fno-warn-missed-specialisations -fshow-warning-groups > -dno-debug-output'" -e config.compiler_debugged=True -e > ghc_with_native_codegen=1 -e config.have_vanilla=True -e > config.have_dynamic=True -e config.have_profiling=False -e > ghc_with_threaded_rts=1 -e ghc_with_dynamic_rts=1 -e > config.have_interp=False -e config.unregisterised=False -e > config.ghc_dynamic_by_default=False -e config.ghc_dynamic=False -e > ghc_with_smp=1 -e ghc_with_llvm=0 -e windows=False -e darwin=False -e > config.in_tree_compiler=True --threads=33 -e config.cleanup=True -e > config.local=False --rootdir=. --configfile=../../config/ghc -e > 'config.confdir="../../config"' -e 'config.platform="x86_64-unknown-linux"' > -e 'config.os="linux"' -e 'config.arch="x86_64"' -e 'config.wordsize="64"' > -e 'config.timeout=int() or config.timeout' -e 'config.exeext=""' -e > 'config.top="/5playpen/simonpj/HEAD-4/testsuite"' --config > 'compiler="/5playpen/simonpj/HEAD-4/inplace/test spaces/ghc-stage1"' > --config 'ghc_pkg="/5playpen/simonpj/HEAD-4/inplace/test spaces/ghc-pkg"' > --config 'haddock="/5playpen/simonpj/HEAD-4/inplace/test spaces/haddock"' > --config 'hp2ps="/5playpen/simonpj/HEAD-4/inplace/test spaces/hp2ps"' > --config 'hpc="/5playpen/simonpj/HEAD-4/inplace/test spaces/hpc"' > --config 'gs="gs"' --config > 'timeout_prog="../../timeout/install-inplace/bin/timeout"' -e > "config.stage=1" --summary-file "../../../testsuite_summary_stage1.txt" > --no-print-summary 1 \ > > \ > > \ > > \ > > \ > > \ > > -e config.speed="2" \ > > > > Traceback (most recent call last): > > File "../../driver/runtests.py", line 210, in > > from testlib import * > > File "/home/simonpj/code/HEAD-4/testsuite/driver/testlib.py", line 1286 > > f.write(u':set prog ' + name + u'\n') > > ^ > > SyntaxError: invalid syntax > 
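The SyntaxError in the traceback above is a parse-time failure, which matches Phyx's diagnosis: the u'' string-literal prefix from Python 2 was removed in Python 3.0 and only restored in Python 3.3 (PEP 414), so Python 3.2.3 rejects the whole file before any test runs. A minimal sketch of the portable rewrite; the helper name here is invented for illustration:

```python
def ghci_script_line(name):
    # A plain str literal is already Unicode on Python 3, so dropping the
    # u prefix from the failing testlib.py line changes nothing at runtime,
    # but it lets Python 3.0-3.2 parse the file at all.
    return ':set prog ' + name + '\n'

# On Python 3.3+ the u prefix is accepted again and is a pure no-op,
# which is why nobody testing on 3.5 noticed the breakage:
assert u':set prog hsc2hs\n' == ghci_script_line('hsc2hs')
```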
_______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Thu Dec 1 16:14:23 2016 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 1 Dec 2016 16:14:23 +0000 Subject: testsuite broken In-Reply-To: References: Message-ID: Ben is on it too: #12909 From: Phyx [mailto:lonetiger at gmail.com] Sent: 01 December 2016 15:49 To: Simon Peyton Jones ; ghc-devs at haskell.org Subject: Re: testsuite broken Bah, String handling in python is a complete mess. In any case, we dropped support for 2 so we can remove the u prefixes. It seems that the Unicode syntax in python 3 was dropped in python 3.0 and reintroduced on 3.3. We were all using 3.5 to test. To get you going again quickly, You can either revert the commit that made python 3 the default and use python 2, or open the python files runtests.py testlib.py and testutil.py and do a global search and replace for u' and drop the u. I will fix it properly later tonight. On Thu, 1 Dec 2016, 15:27 Simon Peyton Jones via ghc-devs, > wrote: Yikes. I can’t run the testsuite on Linux (debian ? I think…). See below. I installed python3 by saying apt-get install python3 And indeed python3 --version Python 3.2.3 This is bad. Can anyone help? 
Simon PYTHON="python3" "python3" ../../driver/runtests.py -e ghc_compiler_always_flags="'-dcore-lint -dcmm-lint -no-user-package-db -rtsopts -fno-warn-missed-specialisations -fshow-warning-groups -dno-debug-output'" -e config.compiler_debugged=True -e ghc_with_native_codegen=1 -e config.have_vanilla=True -e config.have_dynamic=True -e config.have_profiling=False -e ghc_with_threaded_rts=1 -e ghc_with_dynamic_rts=1 -e config.have_interp=False -e config.unregisterised=False -e config.ghc_dynamic_by_default=False -e config.ghc_dynamic=False -e ghc_with_smp=1 -e ghc_with_llvm=0 -e windows=False -e darwin=False -e config.in_tree_compiler=True --threads=33 -e config.cleanup=True -e config.local=False --rootdir=. --configfile=../../config/ghc -e 'config.confdir="../../config"' -e 'config.platform="x86_64-unknown-linux"' -e 'config.os="linux"' -e 'config.arch="x86_64"' -e 'config.wordsize="64"' -e 'config.timeout=int() or config.timeout' -e 'config.exeext=""' -e 'config.top="/5playpen/simonpj/HEAD-4/testsuite"' --config 'compiler="/5playpen/simonpj/HEAD-4/inplace/test spaces/ghc-stage1"' --config 'ghc_pkg="/5playpen/simonpj/HEAD-4/inplace/test spaces/ghc-pkg"' --config 'haddock="/5playpen/simonpj/HEAD-4/inplace/test spaces/haddock"' --config 'hp2ps="/5playpen/simonpj/HEAD-4/inplace/test spaces/hp2ps"' --config 'hpc="/5playpen/simonpj/HEAD-4/inplace/test spaces/hpc"' --config 'gs="gs"' --config 'timeout_prog="../../timeout/install-inplace/bin/timeout"' -e "config.stage=1" --summary-file "../../../testsuite_summary_stage1.txt" --no-print-summary 1 \ \ \ \ \ \ -e config.speed="2" \ Traceback (most recent call last): File "../../driver/runtests.py", line 210, in from testlib import * File "/home/simonpj/code/HEAD-4/testsuite/driver/testlib.py", line 1286 f.write(u':set prog ' + name + u'\n') ^ SyntaxError: invalid syntax _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at smart-cactus.org Thu Dec 1 16:29:58 2016 From: ben at smart-cactus.org (Ben Gamari) Date: Thu, 01 Dec 2016 11:29:58 -0500 Subject: testsuite broken In-Reply-To: References: Message-ID: <87inr3sjcp.fsf@ben-laptop.smart-cactus.org> Simon Peyton Jones via ghc-devs writes: > Yikes. I can’t run the testsuite on Linux (debian ? I think…). See below. > I installed python3 by saying > apt-get install python3 > And indeed > > python3 --version > > Python 3.2.3 For the record I've opened #12909 to track this. There is a fix in D2778 which I'll merge shortly. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From conal at conal.net Thu Dec 1 21:51:25 2016 From: conal at conal.net (Conal Elliott) Date: Thu, 1 Dec 2016 13:51:25 -0800 Subject: How to inline early in a GHC plugin? Message-ID: I'm implementing a GHC plugin that installs a `BuiltInRule` that does the work, and I'd like to learn how to inline more flexibly. Given an identifier `v`, I'm using `maybeUnfoldingTemplate (realIdUnfolding v)` to get a `Maybe CoreExpr`. Sometimes this recipe yields `Nothing` until a later compiler phase. Meanwhile, I guess my variable `v` has been replaced by one with inlining info. First, am I understanding this mechanism correctly? A GHC source pointer to how inlining is made available would help me. Second, can I access the inlining info before it's made available to the rest of the simplifier? Thanks, - Conal -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From george.colpitts at gmail.com Thu Dec 1 22:24:27 2016 From: george.colpitts at gmail.com (George Colpitts) Date: Thu, 01 Dec 2016 22:24:27 +0000 Subject: [GHC] #11744: Latest Xcode update violates POSIX compliance of `nm -P` In-Reply-To: <057.a9e853da6a98de67a38c7ff01f490381@haskell.org> References: <042.af01bf0c9281d3187ce47b8cda7a587e@haskell.org> <057.a9e853da6a98de67a38c7ff01f490381@haskell.org> Message-ID: I can confirm that what Ben says is true for me using XCode 8.1 On Thu, Dec 1, 2016 at 6:21 PM GHC wrote: > #11744: Latest Xcode update violates POSIX compliance of `nm -P` > ---------------------------------+-------------------------------------- > Reporter: hvr | Owner: > Type: bug | Status: new > Priority: highest | Milestone: > Component: Build System | Version: > Resolution: | Keywords: > Operating System: MacOS X | Architecture: x86_64 (amd64) > Type of failure: None/Unknown | Test Case: > Blocked By: | Blocking: > Related Tickets: | Differential Rev(s): phab:D2113 > Wiki Page: | > ---------------------------------+-------------------------------------- > > Comment (by bgamari): > > It actually appears that the latest XCode release (8.1) fixes this. It > shouldn't be necessary to use `nm-classic` with, > {{{ > $ nm --version > Apple LLVM version 8.0.0 (clang-800.0.38) > Optimized build. > Default target: x86_64-apple-darwin16.0.0 > Host CPU: haswell > }}} > > -- > Ticket URL: > GHC > The Glasgow Haskell Compiler > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jan.bracker at googlemail.com Fri Dec 2 15:57:38 2016 From: jan.bracker at googlemail.com (Jan Bracker) Date: Fri, 2 Dec 2016 15:57:38 +0000 Subject: Help needed: Restrictions of proc-notation with RebindableSyntax In-Reply-To: References: <84B44086-45A5-41D8-AAC9-DCB848C1CD39@cs.brynmawr.edu> Message-ID: Simon, Richard, thank you for your answer! 
I don't have time to look into the GHC sources right now, but I will set aside some time after the holidays and take a close look at what the exact restrictions on proc-notation are and document them. Since you suggested a rewrite of GHC's handling of proc-syntax, are there any opinions on integrating generalized arrows (Joseph 2014) in the process? I think they would greatly improve arrows! I don't know if I have the time to attempt this, but if I find the time I would give it a try. Why wasn't this integrated while it was still actively developed? Best, Jan [Joseph 2014] https://www2.eecs.berkeley.edu/Pubs/TechRpts/ 2014/EECS-2014-130.pdf 2016-11-29 12:41 GMT+00:00 Simon Peyton Jones : > Jan, > > > > Type checking and desugaring for arrow syntax has received Absolutely No > Love for several years. I do not understand how it works very well, and I > would not be at all surprised if it is broken in corner cases. > > > > It really needs someone to look at it carefully, document it better, and > perhaps refactor it – esp by using a different data type rather than > piggy-backing on HsExpr. > > > > In the light of that understanding, I think rebindable syntax will be > easier. > > > > I don’t know if you are up for that, but it’s a rather un-tended part of > GHC. > > > > Thanks > > > > Simon > > > > *From:* ghc-devs [mailto:ghc-devs-bounces at haskell.org] *On Behalf Of *Richard > Eisenberg > *Sent:* 28 November 2016 22:30 > *To:* Jan Bracker > *Cc:* ghc-devs at haskell.org > *Subject:* Help needed: Restrictions of proc-notation with > RebindableSyntax > > > > Jan’s question is a good one, but I don’t know enough about procs to be > able to answer. I do know that the answer can be found by looking for uses > of `tcSyntaxOp` in the TcArrows module.... but I just can’t translate it > all to source Haskell, having roughly 0 understanding of this end of the > language. > > > > Can anyone else help Jan here? 
> > > Richard > > On Nov 23, 2016, at 4:34 AM, Jan Bracker via ghc-devs < > ghc-devs at haskell.org> wrote: > > > > Hello, > > > > I want to use the proc-notation together with RebindableSyntax. So far > what I am trying to do is working fine, but I would like to know what the > exact restrictions on the supplied functions are. I am introducing > additional indices and constraints on the operations. The documentation [1] > says the details are in flux and that I should ask directly. > > > > Best, > > Jan > > > > [1] https://downloads.haskell.org/~ghc/latest/docs/html/ users_guide/glasgow_exts.html#rebindable-syntax-and-the- implicit-prelude-import > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > From simonpj at microsoft.com Fri Dec 2 16:58:06 2016 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 2 Dec 2016 16:58:06 +0000 Subject: Help needed: Restrictions of proc-notation with RebindableSyntax In-Reply-To: References: <84B44086-45A5-41D8-AAC9-DCB848C1CD39@cs.brynmawr.edu> Message-ID: Since you suggested a rewrite of GHC's handling of proc-syntax, are there any opinions on integrating generalized arrows (Joseph 2014) in the process? I think they would greatly improve arrows! I don't know if I have the time to attempt this, but if I find the time I would give it a try. Why wasn't this integrated while it was still actively developed? The arrow stuff was added to GHC years before this thesis (which I had not seen before – thanks). I don’t have opinions about · the desirability · the difficulty of integrating generalised arrows. You’re in the driving seat! By all means give it a go. 
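As background for readers following the thread, here is a minimal, self-contained example of the proc-notation under discussion, using only the standard Control.Arrow vocabulary (not the generalized arrows of Joseph's thesis). GHC desugars the proc block into the Arrow combinators (arr, first, >>> and friends), and that desugaring is precisely the plumbing that RebindableSyntax would let one rebind:

```haskell
{-# LANGUAGE Arrows #-}
module Main where

import Control.Arrow

-- Feed the same input to both argument arrows and add the results.
addA :: Arrow a => a b Int -> a b Int -> a b Int
addA f g = proc x -> do
  y <- f -< x
  z <- g -< x
  returnA -< y + z

main :: IO ()
main = print (addA (arr (+ 1)) (arr (* 2)) (10 :: Int))  -- prints 31
```

With the plain function instance of Arrow, addA (arr (+ 1)) (arr (* 2)) applied to 10 evaluates to 11 + 20 = 31.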
Simon From: Jan Bracker [mailto:jan.bracker at googlemail.com] Sent: 02 December 2016 15:58 To: Simon Peyton Jones Cc: Richard Eisenberg ; ghc-devs at haskell.org; Ross Paterson (ross at soi.city.ac.uk) ; Henrik Nilsson Subject: Re: Help needed: Restrictions of proc-notation with RebindableSyntax Simon, Richard, thank you for your answer! I don't have time to look into the GHC sources right now, but I will set aside some time after the holidays and take a close look at what the exact restrictions on proc-notation are and document them. Since you suggested a rewrite of GHC's handling of proc-syntax, are there any opinions on integrating generalized arrows (Joseph 2014) in the process? I think they would greatly improve arrows! I don't know if I have the time to attempt this, but if I find the time I would give it a try. Why wasn't this integrated while it was still actively developed? Best, Jan [Joseph 2014] https://www2.eecs.berkeley.edu/Pubs/TechRpts/2014/EECS-2014-130.pdf 2016-11-29 12:41 GMT+00:00 Simon Peyton Jones >: Jan, Type checking and desugaring for arrow syntax has received Absolutely No Love for several years. I do not understand how it works very well, and I would not be at all surprised if it is broken in corner cases. It really needs someone to look at it carefully, document it better, and perhaps refactor it – esp by using a different data type rather than piggy-backing on HsExpr. In the light of that understanding, I think rebindable syntax will be easier. I don’t know if you are up for that, but it’s a rather un-tended part of GHC. Thanks Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Richard Eisenberg Sent: 28 November 2016 22:30 To: Jan Bracker > Cc: ghc-devs at haskell.org Subject: Help needed: Restrictions of proc-notation with RebindableSyntax Jan’s question is a good one, but I don’t know enough about procs to be able to answer. 
I do know that the answer can be found by looking for uses of `tcSyntaxOp` in the TcArrows module.... but I just can’t translate it all to source Haskell, having roughly 0 understanding of this end of the language. Can anyone else help Jan here? Richard On Nov 23, 2016, at 4:34 AM, Jan Bracker via ghc-devs > wrote: Hello, I want to use the proc-notation together with RebindableSyntax. So far what I am trying to do is working fine, but I would like to know what the exact restrictions on the supplied functions are. I am introducing additional indices and constraints on the operations. The documentation [1] says the details are in flux and that I should ask directly. Best, Jan [1] https://downloads.haskell.org/~ghc/latest/docs/html/users_guide/glasgow_exts.html#rebindable-syntax-and-the-implicit-prelude-import _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Fri Dec 2 17:07:38 2016 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 2 Dec 2016 17:07:38 +0000 Subject: How to inline early in a GHC plugin? In-Reply-To: References: Message-ID: I don’t really understand your question clearly. So I’ll guess Unfoldings are added to Ids in Simplify.completeBind (look for setUnfoldingInfo). Apart from INLINE pragmas, that’s about the only place it happens. Does that help? S From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Conal Elliott Sent: 01 December 2016 21:51 To: ghc-devs at haskell.org Subject: How to inline early in a GHC plugin? I'm implementing a GHC plugin that installs a `BuiltInRule` that does the work, and I'd like to learn how to inline more flexibly. Given an identifier `v`, I'm using `maybeUnfoldingTemplate (realIdUnfolding v)` to get a `Maybe CoreExpr`. Sometimes this recipe yields `Nothing` until a later compiler phase. 
Meanwhile, I guess my variable `v` has been replaced by one with inlining info. First, am I understanding this mechanism correctly? A GHC source pointer to how inlining is made available would help me. Second, can I access the inlining info before it's made available to the rest of the simplifier? Thanks, - Conal -------------- next part -------------- An HTML attachment was scrubbed... URL: From conal at conal.net Fri Dec 2 18:12:43 2016 From: conal at conal.net (Conal Elliott) Date: Fri, 2 Dec 2016 10:12:43 -0800 Subject: How to inline early in a GHC plugin? In-Reply-To: References: Message-ID: Thanks for the pointers, Simon. Some more specific questions: * To access an unfolding, is `maybeUnfoldingTemplate (idUnfolding v)` the recommended recipe? * Is it the case that this recipe succeeds (`Just`) in some compiler phases and not others? If so, is this difference due to Ids being altered (presumably via `setUnfoldingInfo` being called between phases)? * Before an Id is ready for general inlining by the simplifier, can I get the Id's unfolding another way so that I can substitute it early? A short Skype chat might easily clear up my questions and confusions if you have time and inclination. Regards, - Conal On Fri, Dec 2, 2016 at 9:07 AM, Simon Peyton Jones wrote: > I don’t really understand your question clearly. So I’ll guess > > > > Unfoldings are added to Ids in Simplify.completeBind (look for > setUnfoldingInfo). Apart from INLINE pragmas, that’s about the only place > it happens. > > > > Does that help? > > > > S > > > > *From:* ghc-devs [mailto:ghc-devs-bounces at haskell.org] *On Behalf Of *Conal > Elliott > *Sent:* 01 December 2016 21:51 > *To:* ghc-devs at haskell.org > *Subject:* How to inline early in a GHC plugin? > > > > I'm implementing a GHC plugin that installs a `BuiltInRule` that does the > work, and I'd like to learn how to inline more flexibly. 
Given an > identifier `v`, I'm using `maybeUnfoldingTemplate (realIdUnfolding v)` to > get a `Maybe CoreExpr`. Sometimes this recipe yields `Nothing` until a > later compiler phase. Meanwhile, I guess my variable `v` has been replaced > by one with inlining info. First, am I understanding this mechanism > correctly? A GHC source pointer to how inlining is made available would > help me. Second, can I access the inlining info before it's made available > to the rest of the simplifier? > > > > Thanks, - Conal > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail at joachim-breitner.de Fri Dec 2 19:12:14 2016 From: mail at joachim-breitner.de (Joachim Breitner) Date: Fri, 02 Dec 2016 14:12:14 -0500 Subject: New perf.haskell.org/ghc builder up Message-ID: <1480705934.6351.11.camel@joachim-breitner.de> Hi, Brynmar, via Richard, has sponsored a new machine to build GHC commits for https://perf.haskell.org/ghc/ and I have set it up now. Since the numbers will be incomparable with the previous ones, I started the benchmarking from scratch, starting with 853cdaea7f8724cd071f4fa7ad6c5377a2a8a6e4 which was the last commit benchmarked before. The machine is churning through all commits of the last month at a speed of one commit per hour, so eventually https://perf.haskell.org/ghc/ will give information about up-to-date commits again. Greetings, Joachim -- Joachim “nomeata” Breitner   mail at joachim-breitner.de • https://www.joachim-breitner.de/   XMPP: nomeata at joachim-breitner.de • OpenPGP-Key: 0xF0FBF51F   Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: This is a digitally signed message part URL: From mail at joachim-breitner.de Fri Dec 2 23:22:33 2016 From: mail at joachim-breitner.de (Joachim Breitner) Date: Fri, 02 Dec 2016 18:22:33 -0500 Subject: Please don’t break travis Message-ID: <1480720953.13340.14.camel@joachim-breitner.de> Hi, again, Travis has been failing to build master for a while. Unfortunately, only the authors of commits get mailed by Travis, so I did not notice it so far. But usually, when Travis reports a build failure, this is something actionable! If in doubt, contact me. The breakage at the moment occurs only with -DDEBUG on: Compile failed (exit code 1) errors were: ghc-stage2: panic! (the 'impossible' happened)   (GHC version 8.1.20161118 for x86_64-unknown-linux): No match in record selector is_iloc Please report this as a GHC bug:   http://www.haskell.org/ghc/reportabug *** unexpected failure for rn017(normal) Compile failed (exit code 1) errors were: ghc-stage2: panic! (the 'impossible' happened)   (GHC version 8.1.20161118 for x86_64-unknown-linux): No match in record selector is_iloc Please report this as a GHC bug:  http://www.haskell.org/ghc/reportabug *** unexpected failure for T7672(normal) And started appearing, unless I am mistaken, with From: Matthew Pickering < matthewtpickering at gmail.com > Date: Fri, 18 Nov 2016 16:28:30 +0000 Subject: [PATCH] Optimise whole module exports We directly build up the correct AvailInfos rather than generating lots of singleton instances and combining them with expensive calls to unionLists. There are two other small changes. * Pushed the nubAvails call into the explicit export list   branch as we construct them correctly and uniquely ourselves. * fix_faminst only needs to check the first element of the export   list as we maintain the (yucky) invariant that the parent is the   first thing in it.
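(As an aside, the quadratic behaviour this patch removes can be illustrated with plain lists. The names below are invented for the analogy and this is not the actual AvailInfo code from the patch.)

```haskell
-- Illustration only (invented names, not GHC's AvailInfo code).
-- Combining n results by repeatedly unioning singletons performs a
-- linear scan of the accumulator for every element, i.e. quadratic work
-- overall; constructing the deduplicated result directly in one pass,
-- as the patch above does for whole-module exports, avoids that.
import Data.List (foldl', union)
import qualified Data.Set as Set

-- Old shape: lots of singletons combined with `union`.
viaSingletonUnions :: [Int] -> [Int]
viaSingletonUnions = foldl' (\acc x -> acc `union` [x]) []

-- New shape: build the deduplicated result once.
builtDirectly :: [Int] -> [Int]
builtDirectly = Set.toAscList . Set.fromList

main :: IO ()
main =
  -- Same set of elements either way; only the amount of work differs.
  print (Set.fromList (viaSingletonUnions xs) == Set.fromList (builtDirectly xs))
  where xs = [3, 1, 3, 2]
```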
Reviewers: simonpj, austin, bgamari Reviewed By: simonpj, bgamari Subscribers: simonpj, thomie, niteria Differential Revision: https://phabricator.haskell.org/D2657 Matthew, can you verify that this is a regression introduced here? Greetings, Joachim -- Joachim “nomeata” Breitner   mail at joachim-breitner.de • https://www.joachim-breitner.de/   XMPP: nomeata at joachim-breitner.de • OpenPGP-Key: 0xF0FBF51F   Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: This is a digitally signed message part URL: From michal.terepeta at gmail.com Sun Dec 4 19:47:29 2016 From: michal.terepeta at gmail.com (Michal Terepeta) Date: Sun, 04 Dec 2016 19:47:29 +0000 Subject: Measuring performance of GHC Message-ID: Hi everyone, I've been running nofib a few times recently to see the effect of some changes on compile time (not the runtime of the compiled program). And I've started wondering how representative nofib is when it comes to measuring compile time and compiler allocations? It seems that most of the nofib programs compile really quickly... Is there some collection of modules/libraries/applications that was put together with the purpose of benchmarking GHC itself and I just haven't seen/found it? If not, maybe we should create something? IMHO it sounds reasonable to have separate benchmarks for: - Performance of GHC itself. - Performance of the code generated by GHC. Thanks, Michal -------------- next part -------------- An HTML attachment was scrubbed... URL: From alan.zimm at gmail.com Sun Dec 4 19:50:54 2016 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Sun, 4 Dec 2016 21:50:54 +0200 Subject: Measuring performance of GHC In-Reply-To: References: Message-ID: I agree. I find that compilation time on things with large data structures, such as working with the GHC AST via the GHC API, gets pretty slow.
To the point where I have had to explicitly disable optimisation on HaRe, otherwise the build takes too long. Alan On Sun, Dec 4, 2016 at 9:47 PM, Michal Terepeta wrote: > Hi everyone, > > I've been running nofib a few times recently to see the effect of some > changes > on compile time (not the runtime of the compiled program). And I've started > wondering how representative nofib is when it comes to measuring compile > time > and compiler allocations? It seems that most of the nofib programs compile > really quickly... > > Is there some collections of modules/libraries/applications that were put > together with the purpose of benchmarking GHC itself and I just haven't > seen/found it? > > If not, maybe we should create something? IMHO it sounds reasonable to have > separate benchmarks for: > - Performance of GHC itself. > - Performance of the code generated by GHC. > > Thanks, > Michal > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dct25-561bs at mythic-beasts.com Sun Dec 4 20:04:27 2016 From: dct25-561bs at mythic-beasts.com (David Turner) Date: Sun, 4 Dec 2016 20:04:27 +0000 Subject: Measuring performance of GHC In-Reply-To: References: Message-ID: Nod nod. amazonka-ec2 has a particularly painful module containing just a couple of hundred type definitions and associated instances and stuff. None of the types is enormous. There's an issue open on GitHub[1] where I've guessed at some possible better ways of splitting the types up to make GHC's life easier, but it'd be great if it didn't need any such shenanigans. It's a bit of a pathological case: auto-generated 15kLoC and lots of deriving, but I still feel it should be possible to compile with less than 2.8GB RSS. 
[1] https://github.com/brendanhay/amazonka/issues/304 Cheers, David On 4 Dec 2016 19:51, "Alan & Kim Zimmerman" wrote: I agree. I find compilation time on things with large data structures, such as working with the GHC AST via the GHC API get pretty slow. To the point where I have had to explicitly disable optimisation on HaRe, otherwise the build takes too long. Alan On Sun, Dec 4, 2016 at 9:47 PM, Michal Terepeta wrote: > Hi everyone, > > I've been running nofib a few times recently to see the effect of some > changes > on compile time (not the runtime of the compiled program). And I've started > wondering how representative nofib is when it comes to measuring compile > time > and compiler allocations? It seems that most of the nofib programs compile > really quickly... > > Is there some collections of modules/libraries/applications that were put > together with the purpose of benchmarking GHC itself and I just haven't > seen/found it? > > If not, maybe we should create something? IMHO it sounds reasonable to have > separate benchmarks for: > - Performance of GHC itself. > - Performance of the code generated by GHC. > > Thanks, > Michal > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail at joachim-breitner.de Sun Dec 4 21:52:12 2016 From: mail at joachim-breitner.de (Joachim Breitner) Date: Sun, 04 Dec 2016 16:52:12 -0500 Subject: Measuring performance of GHC In-Reply-To: References: Message-ID: <1480888332.6052.7.camel@joachim-breitner.de> Hi, did you try to compile it with a profiled GHC and look at the report? I would not be surprised if it would point to some obvious sub-optimal algorithms in GHC. 
Greetings, Joachim Am Sonntag, den 04.12.2016, 20:04 +0000 schrieb David Turner: > Nod nod. > > amazonka-ec2 has a particularly painful module containing just a > couple of hundred type definitions and associated instances and > stuff. None of the types is enormous. There's an issue open on > GitHub[1] where I've guessed at some possible better ways of > splitting the types up to make GHC's life easier, but it'd be great > if it didn't need any such shenanigans. It's a bit of a pathological > case: auto-generated 15kLoC and lots of deriving, but I still feel it > should be possible to compile with less than 2.8GB RSS. >   > [1] https://github.com/brendanhay/amazonka/issues/304 > > Cheers, > > David > > On 4 Dec 2016 19:51, "Alan & Kim Zimmerman" > wrote: > I agree. > > I find compilation time on things with large data structures, such as > working with the GHC AST via the GHC API get pretty slow. > > To the point where I have had to explicitly disable optimisation on > HaRe, otherwise the build takes too long. > > Alan > > > On Sun, Dec 4, 2016 at 9:47 PM, Michal Terepeta l.com> wrote: > > Hi everyone, > > > > I've been running nofib a few times recently to see the effect of > > some changes > > on compile time (not the runtime of the compiled program). And I've > > started > > wondering how representative nofib is when it comes to measuring > > compile time > > and compiler allocations? It seems that most of the nofib programs > > compile > > really quickly... > > > > Is there some collections of modules/libraries/applications that > > were put > > together with the purpose of benchmarking GHC itself and I just > > haven't > > seen/found it? > > > > If not, maybe we should create something? IMHO it sounds reasonable > > to have > > separate benchmarks for: > > - Performance of GHC itself. > > - Performance of the code generated by GHC. 
> > > > Thanks, > > Michal > > > > > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -- Joachim “nomeata” Breitner   mail at joachim-breitner.de • https://www.joachim-breitner.de/   XMPP: nomeata at joachim-breitner.de • OpenPGP-Key: 0xF0FBF51F   Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: This is a digitally signed message part URL: From dct25-561bs at mythic-beasts.com Sun Dec 4 21:57:11 2016 From: dct25-561bs at mythic-beasts.com (David Turner) Date: Sun, 4 Dec 2016 21:57:11 +0000 Subject: Measuring performance of GHC In-Reply-To: <1480888332.6052.7.camel@joachim-breitner.de> References: <1480888332.6052.7.camel@joachim-breitner.de> Message-ID: Seems like a good idea, for sure. I have not, but I might eventually. On 4 Dec 2016 21:52, "Joachim Breitner" wrote: > Hi, > > did you try to compile it with a profiled GHC and look at the report? I > would not be surprised if it would point to some obvious sub-optimal > algorithms in GHC. > > Greetings, > Joachim > > Am Sonntag, den 04.12.2016, 20:04 +0000 schrieb David Turner: > > Nod nod. > > > > amazonka-ec2 has a particularly painful module containing just a > > couple of hundred type definitions and associated instances and > > stuff. None of the types is enormous. 
There's an issue open on > > GitHub[1] where I've guessed at some possible better ways of > > splitting the types up to make GHC's life easier, but it'd be great > > if it didn't need any such shenanigans. It's a bit of a pathological > > case: auto-generated 15kLoC and lots of deriving, but I still feel it > > should be possible to compile with less than 2.8GB RSS. > > > > [1] https://github.com/brendanhay/amazonka/issues/304 > > > > Cheers, > > > > David > > > > On 4 Dec 2016 19:51, "Alan & Kim Zimmerman" > > wrote: > > I agree. > > > > I find compilation time on things with large data structures, such as > > working with the GHC AST via the GHC API get pretty slow. > > > > To the point where I have had to explicitly disable optimisation on > > HaRe, otherwise the build takes too long. > > > > Alan > > > > > > On Sun, Dec 4, 2016 at 9:47 PM, Michal Terepeta > l.com> wrote: > > > Hi everyone, > > > > > > I've been running nofib a few times recently to see the effect of > > > some changes > > > on compile time (not the runtime of the compiled program). And I've > > > started > > > wondering how representative nofib is when it comes to measuring > > > compile time > > > and compiler allocations? It seems that most of the nofib programs > > > compile > > > really quickly... > > > > > > Is there some collections of modules/libraries/applications that > > > were put > > > together with the purpose of benchmarking GHC itself and I just > > > haven't > > > seen/found it? > > > > > > If not, maybe we should create something? IMHO it sounds reasonable > > > to have > > > separate benchmarks for: > > > - Performance of GHC itself. > > > - Performance of the code generated by GHC. 
> > > > > > Thanks, > > > Michal > > > > > > > > > _______________________________________________ > > > ghc-devs mailing list > > > ghc-devs at haskell.org > > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > > > > > > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > > > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -- > Joachim “nomeata” Breitner > mail at joachim-breitner.de • https://www.joachim-breitner.de/ > XMPP: nomeata at joachim-breitner.de • OpenPGP-Key: 0xF0FBF51F > Debian Developer: nomeata at debian.org > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Mon Dec 5 10:31:59 2016 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 5 Dec 2016 10:31:59 +0000 Subject: Measuring performance of GHC In-Reply-To: References: Message-ID: If not, maybe we should create something? IMHO it sounds reasonable to have separate benchmarks for: - Performance of GHC itself. - Performance of the code generated by GHC. I think that would be great, Michael. We have a small and unrepresentative sample in testsuite/tests/perf/compiler Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Michal Terepeta Sent: 04 December 2016 19:47 To: ghc-devs Subject: Measuring performance of GHC Hi everyone, I've been running nofib a few times recently to see the effect of some changes on compile time (not the runtime of the compiled program). And I've started wondering how representative nofib is when it comes to measuring compile time and compiler allocations? 
It seems that most of the nofib programs compile really quickly... Is there some collections of modules/libraries/applications that were put together with the purpose of benchmarking GHC itself and I just haven't seen/found it? If not, maybe we should create something? IMHO it sounds reasonable to have separate benchmarks for: - Performance of GHC itself. - Performance of the code generated by GHC. Thanks, Michal -------------- next part -------------- An HTML attachment was scrubbed... URL: From moritz at lichtzwerge.de Mon Dec 5 10:59:58 2016 From: moritz at lichtzwerge.de (Moritz Angermann) Date: Mon, 5 Dec 2016 18:59:58 +0800 Subject: Measuring performance of GHC In-Reply-To: References: Message-ID: Hi, I’ve started the GHC Performance Regression Collection Proposal[1] (Rendered [2]) a while ago with the idea of having a trivially community-curated set of small[3] real-world examples with performance regressions. I might be at fault here for not describing this to the best of my abilities. Thus if there is interest, and this sounds like a useful idea, maybe we should still pursue this proposal? Cheers, moritz [1]: https://github.com/ghc-proposals/ghc-proposals/pull/26 [2]: https://github.com/angerman/ghc-proposals/blob/prop/perf-regression/proposals/0000-perf-regression.rst [3]: for some definition of small > On Dec 5, 2016, at 6:31 PM, Simon Peyton Jones via ghc-devs wrote: > > If not, maybe we should create something? IMHO it sounds reasonable to have > > separate benchmarks for: > > - Performance of GHC itself. > > - Performance of the code generated by GHC. > > > I think that would be great, Michael.
We have a small and unrepresentative sample in testsuite/tests/perf/compiler > > Simon > > From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Michal Terepeta > Sent: 04 December 2016 19:47 > To: ghc-devs > Subject: Measuring performance of GHC > > Hi everyone, > > > > I've been running nofib a few times recently to see the effect of some changes > > on compile time (not the runtime of the compiled program). And I've started > > wondering how representative nofib is when it comes to measuring compile time > > and compiler allocations? It seems that most of the nofib programs compile > > really quickly... > > > > Is there some collections of modules/libraries/applications that were put > > together with the purpose of benchmarking GHC itself and I just haven't > > seen/found it? > > > > If not, maybe we should create something? IMHO it sounds reasonable to have > > separate benchmarks for: > > - Performance of GHC itself. > > - Performance of the code generated by GHC. > > > > Thanks, > > Michal > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From michal.terepeta at gmail.com Mon Dec 5 20:21:00 2016 From: michal.terepeta at gmail.com (Michal Terepeta) Date: Mon, 05 Dec 2016 20:21:00 +0000 Subject: Measuring performance of GHC In-Reply-To: References: Message-ID: On Mon, Dec 5, 2016 at 12:00 PM Moritz Angermann wrote: > Hi, > > I’ve started the GHC Performance Regression Collection Proposal[1] > (Rendered [2]) > a while ago with the idea of having a trivially community curated set of > small[3] > real-world examples with performance regressions. I might be at fault here > for > not describing this to the best of my abilities. Thus if there is > interested, and > this sounds like an useful idea, maybe we should still pursue this > proposal? 
> > Cheers, > moritz > > [1]: https://github.com/ghc-proposals/ghc-proposals/pull/26 > [2]: > https://github.com/angerman/ghc-proposals/blob/prop/perf-regression/proposals/0000-perf-regression.rst > [3]: for some definition of small > Interesting! I must have missed this proposal. It seems that it didn't meet with much enthusiasm though (but it also proposes to have a completely separate repo on github). Personally, I'd be happy with something more modest: - A collection of modules/programs that are more representative of real Haskell programs and stress various aspects of the compiler. (this seems to be a weakness of nofib, where >90% of modules compile in less than 0.4s) - A way to compile all of those and do "before and after" comparisons easily. To measure the time, we should probably try to compile each module at least a few times. (it seems that this is not currently possible with `tests/perf/compiler` and nofib only compiles the programs once AFAICS) Looking at the comments on the proposal from Moritz, most people would prefer to extend/improve nofib or `tests/perf/compiler` tests. So I guess the main question is - what would be better: - Extending nofib with modules that are compile only (i.e., not runnable) and focus on stressing the compiler? - Extending `tests/perf/compiler` with ability to run all the tests and do easy "before and after" comparisons? Personally, I'm slightly leaning towards `tests/perf/compiler` since this would allow sharing the same module as a test for `validate` and to be used for comparing the performance of the compiler before and after a change. What do you think? Thanks, Michal -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From matthewtpickering at gmail.com Mon Dec 5 23:25:13 2016 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Mon, 5 Dec 2016 23:25:13 +0000 Subject: =?UTF-8?Q?Re=3A_Please_don=E2=80=99t_break_travis?= In-Reply-To: <1480720953.13340.14.camel@joachim-breitner.de> References: <1480720953.13340.14.camel@joachim-breitner.de> Message-ID: I made #12930 to track this. Matt On Fri, Dec 2, 2016 at 11:22 PM, Joachim Breitner wrote: > Hi, > > again, Travis is failing to build master since a while. Unfortunately, > only the author of commits get mailed by Travis, so I did not notice it > so far. But usually, when Travis reports a build failure, this is > something actionable! If in doubt, contact me. > > The breakage at the moment occurs only with -DDEBUG on: > > Compile failed (exit code 1) errors were: > ghc-stage2: panic! (the 'impossible' happened) > (GHC version 8.1.20161118 for x86_64-unknown-linux): > No match in record selector is_iloc > > Please report this as a GHC bug: http://www.haskell.org/ghc/reportabug > > > *** unexpected failure for rn017(normal) > Compile failed (exit code 1) errors were: > ghc-stage2: panic! (the 'impossible' happened) > (GHC version 8.1.20161118 for x86_64-unknown-linux): > No match in record selector is_iloc > > Please report this as a GHC bug: http://www.haskell.org/ghc/reportabug > > > *** unexpected failure for T7672(normal) > > And started appearing, unless I am mistaken, with > > From: Matthew Pickering < matthewtpickering at gmail.com > > Date: Fri, 18 Nov 2016 16:28:30 +0000 > Subject: [PATCH] Optimise whole module exports > > We directly build up the correct AvailInfos rather than generating > lots of singleton instances and combining them with expensive calls to > unionLists. > > There are two other small changes. > > * Pushed the nubAvails call into the explicit export list > branch as we construct them correctly and uniquely ourselves. 
> * fix_faminst only needs to check the first element of the export > list as we maintain the (yucky) invariant that the parent is the > first thing in it. > > Reviewers: simonpj, austin, bgamari > > Reviewed By: simonpj, bgamari > > Subscribers: simonpj, thomie, niteria > > Differential Revision: https://phabricator.haskell.org/D2657 > > Matthew, can you verify that this is a regression introduce here? > > Greetings, > Joachim > > -- > Joachim “nomeata” Breitner > mail at joachim-breitner.de • https://www.joachim-breitner.de/ > XMPP: nomeata at joachim-breitner.de • OpenPGP-Key: 0xF0FBF51F > Debian Developer: nomeata at debian.org > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > From ben at smart-cactus.org Tue Dec 6 01:30:03 2016 From: ben at smart-cactus.org (Ben Gamari) Date: Mon, 05 Dec 2016 20:30:03 -0500 Subject: Measuring performance of GHC In-Reply-To: References: Message-ID: <87pol5rgis.fsf@ben-laptop.smart-cactus.org> Michal Terepeta writes: > Hi everyone, > > I've been running nofib a few times recently to see the effect of some > changes > on compile time (not the runtime of the compiled program). And I've started > wondering how representative nofib is when it comes to measuring compile > time > and compiler allocations? It seems that most of the nofib programs compile > really quickly... > > Is there some collections of modules/libraries/applications that were put > together with the purpose of benchmarking GHC itself and I just haven't > seen/found it? > Sadly no; I've put out a number of calls for minimal programs (e.g. small, fairly free-standing real-world applications) but the response hasn't been terribly strong. I frankly can't blame people for not wanting to take the time to strip out dependencies from their working programs. 
Joachim and I have previously discussed the possibility of manually collecting a set of popular Hackage libraries on a regular basis for use in compiler performance characterization. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From ben at smart-cactus.org Tue Dec 6 01:44:00 2016 From: ben at smart-cactus.org (Ben Gamari) Date: Mon, 05 Dec 2016 20:44:00 -0500 Subject: Measuring performance of GHC In-Reply-To: References: Message-ID: <87mvg9rfvj.fsf@ben-laptop.smart-cactus.org> Michal Terepeta writes: > Interesting! I must have missed this proposal. It seems that it didn't meet > with much enthusiasm though (but it also proposes to have a completely > separate > repo on github). > > Personally, I'd be happy with something more modest: > - A collection of modules/programs that are more representative of real > Haskell programs and stress various aspects of the compiler. > (this seems to be a weakness of nofib, where >90% of modules compile > in less than 0.4s) This would be great. > - A way to compile all of those and do "before and after" comparisons > easily. To measure the time, we should probably try to compile each > module at least a few times. (it seems that this is not currently > possible with `tests/perf/compiler` and > nofib only compiles the programs once AFAICS) > > Looking at the comments on the proposal from Moritz, most people would > prefer to > extend/improve nofib or `tests/perf/compiler` tests. So I guess the main > question is - what would be better: > - Extending nofib with modules that are compile only (i.e., not > runnable) and focus on stressing the compiler? > - Extending `tests/perf/compiler` with ability to run all the tests and do > easy "before and after" comparisons? > I don't have a strong opinion on which of these would be better. 
However, I would point out that currently the tests/perf/compiler tests are extremely labor-intensive to maintain while doing relatively little to catch performance regressions. There are a few issues here: * some tests aren't very reproducible between runs, meaning that contributors sometimes don't catch regressions in their local validations * many tests aren't very reproducible between platforms and all tests are inconsistent between differing word sizes. This means that we end up having many sets of expected performance numbers in the testsuite. In practice nearly all of these except 64-bit Linux are out-of-date. * our window-based acceptance criterion for performance metrics doesn't catch most regressions, which typically bump allocations by a couple percent or less (whereas the acceptance thresholds range from 5% to 20%). This means that the testsuite fails to catch many deltas, only failing when some unlucky person finally pushes the number over the threshold. Joachim and I discussed this issue a few months ago at Hac Phi; he had an interesting approach to tracking expected performance numbers which may both alleviate these issues and reduce the maintenance burden that the tests pose. I wrote down some terse notes in #12758. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From moritz at lichtzwerge.de Tue Dec 6 04:07:23 2016 From: moritz at lichtzwerge.de (Moritz Angermann) Date: Tue, 6 Dec 2016 12:07:23 +0800 Subject: Measuring performance of GHC In-Reply-To: <87mvg9rfvj.fsf@ben-laptop.smart-cactus.org> References: <87mvg9rfvj.fsf@ben-laptop.smart-cactus.org> Message-ID: <06091065-6EF7-4A31-8793-05F1FAA3F494@lichtzwerge.de> Hi, I see the following challenges here, which have partially be touched by the discussion in the mentioned proposal. 
- The tests we are looking at might be quite time-intensive (lots of modules that take substantial time to compile). Is this practical to run when people locally execute nofib to get *some* idea of the performance implications? Where is the threshold for the total execution time on running nofib? - One of the core issues I see in day to day programming (even though not necessarily with Haskell right now) is that the spare time I have to file bug reports, boil down performance regressions etc. and file them with open source projects is not paid for and hence minimal. Hence whenever the tools I use make it really easy for me to file a bug, performance regression or fix something that takes the least time the chances of me being able to help out increase greatly. This was one of the ideas behind using just pull requests. E.g. This code seems to be really slow, or has subjectively regressed in compilation time. I also feel confident I can legally share this code snippet. So I just create a quick pull request with a short description, and then carry on with whatever pressing task I’m trying to solve right now. - Making sure that measurements are reliable. (E.g. running on a dedicated machine with no other applications interfering.) I assume Joachim has quite some experience here. Thanks. Cheers, Moritz > On Dec 6, 2016, at 9:44 AM, Ben Gamari wrote: > > Michal Terepeta writes: > >> Interesting! I must have missed this proposal. It seems that it didn't meet >> with much enthusiasm though (but it also proposes to have a completely >> separate >> repo on github). >> >> Personally, I'd be happy with something more modest: >> - A collection of modules/programs that are more representative of real >> Haskell programs and stress various aspects of the compiler. >> (this seems to be a weakness of nofib, where >90% of modules compile >> in less than 0.4s) > > This would be great. > >> - A way to compile all of those and do "before and after" comparisons >> easily.
To measure the time, we should probably try to compile each >> module at least a few times. (it seems that this is not currently >> possible with `tests/perf/compiler` and >> nofib only compiles the programs once AFAICS) >> >> Looking at the comments on the proposal from Moritz, most people would >> prefer to >> extend/improve nofib or `tests/perf/compiler` tests. So I guess the main >> question is - what would be better: >> - Extending nofib with modules that are compile only (i.e., not >> runnable) and focus on stressing the compiler? >> - Extending `tests/perf/compiler` with ability to run all the tests and do >> easy "before and after" comparisons? >> > I don't have a strong opinion on which of these would be better. > However, I would point out that currently the tests/perf/compiler tests > are extremely labor-intensive to maintain while doing relatively little > to catch performance regressions. There are a few issues here: > > * some tests aren't very reproducible between runs, meaning that > contributors sometimes don't catch regressions in their local > validations > * many tests aren't very reproducible between platforms and all tests > are inconsistent between differing word sizes. This means that we end > up having many sets of expected performance numbers in the testsuite. > In practice nearly all of these except 64-bit Linux are out-of-date. > * our window-based acceptance criterion for performance metrics doesn't > catch most regressions, which typically bump allocations by a couple > percent or less (whereas the acceptance thresholds range from 5% to > 20%). This means that the testsuite fails to catch many deltas, only > failing when some unlucky person finally pushes the number over the > threshold. > > Joachim and I discussed this issue a few months ago at Hac Phi; he had > an interesting approach to tracking expected performance numbers which > may both alleviate these issues and reduce the maintenance burden that > the tests pose. 
I wrote down some terse notes in #12758. > > Cheers, > > - Ben From simonpj at microsoft.com Tue Dec 6 08:31:30 2016 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 6 Dec 2016 08:31:30 +0000 Subject: Measuring performance of GHC In-Reply-To: <06091065-6EF7-4A31-8793-05F1FAA3F494@lichtzwerge.de> References: <87mvg9rfvj.fsf@ben-laptop.smart-cactus.org> <06091065-6EF7-4A31-8793-05F1FAA3F494@lichtzwerge.de> Message-ID: | - One of the core issues I see in day to day programming (even though | not necessarily with haskell right now) is that the spare time I | have | to file bug reports, boil down performance regressions etc. and file | them with open source projects is not paid for and hence minimal. | Hence whenever the tools I use make it really easy for me to file a | bug, performance regression or fix something that takes the least | time | the chances of me being able to help out increase greatly. This was | one | of the ideas behind using just pull requests. | E.g. This code seems to be really slow, or has subjectively | regressed in | compilation time. I also feel confident I can legally share this | code | snipped. So I just create a quick pull request with a short | description, | and then carry on with what ever pressing task I’m trying to solve | right | now. There's the same difficulty at the other end too - people who might fix perf regressions are typically not paid for either. So they (eg me) tend to focus on things where there is a small repro case, which in turn costs work to produce. Eg #12745 which I fixed recently in part because thomie found a lovely small example. So I'm a bit concerned that lowering the barrier to entry for perf reports might not actually lead to better perf. (But undeniably the suite we built up would be a Good Thing, so we'd be a bit further forward.) 
Simon From moritz at lichtzwerge.de Tue Dec 6 09:00:22 2016 From: moritz at lichtzwerge.de (Moritz Angermann) Date: Tue, 6 Dec 2016 17:00:22 +0800 Subject: Measuring performance of GHC In-Reply-To: References: <87mvg9rfvj.fsf@ben-laptop.smart-cactus.org> <06091065-6EF7-4A31-8793-05F1FAA3F494@lichtzwerge.de> Message-ID: <36C375A8-FB82-47C3-8E87-F1B65593B9E2@lichtzwerge.de> > | - One of the core issues I see in day to day programming (even though > | not necessarily with haskell right now) is that the spare time I > | have > | to file bug reports, boil down performance regressions etc. and file > | them with open source projects is not paid for and hence minimal. > | Hence whenever the tools I use make it really easy for me to file a > | bug, performance regression or fix something that takes the least > | time > | the chances of me being able to help out increase greatly. This was > | one > | of the ideas behind using just pull requests. > | E.g. This code seems to be really slow, or has subjectively > | regressed in > | compilation time. I also feel confident I can legally share this > | code > | snipped. So I just create a quick pull request with a short > | description, > | and then carry on with what ever pressing task I’m trying to solve > | right > | now. > > There's the same difficulty at the other end too - people who might fix perf regressions are typically not paid for either. So they (eg me) tend to focus on things where there is a small repro case, which in turn costs work to produce. Eg #12745 which I fixed recently in part because thomie found a lovely small example. > > So I'm a bit concerned that lowering the barrier to entry for perf reports might not actually lead to better perf. (But undeniably the suite we built up would be a Good Thing, so we'd be a bit further forward.) 
> > Simon I did not intend to imply that there was a surplus of time on the other end :) Whether this would result in a bunch of tiny test cases that can pinpoint the underlying issue, I’m not certain. Say we would tag the test cases though (e.g. uses TH, uses GADTs, uses X, Y and Z) and run these samples on every commit or every other commit (whatever the available hardware would allow the test suite to run on (and maybe even backtest where possible)), regressions w.r.t. subsets might be identifiable. E.g. a commit made test cases predominantly with GADTs spike. Worst case scenario, we have to declare defeat and decide that this approach has not produced any viable results, and we wasted the time of contributors providing the samples. On the other hand we would never know without the samples, as they would have never been provided in the first place? Cheers, moritz From johannes.waldmann at htwk-leipzig.de Tue Dec 6 10:14:26 2016 From: johannes.waldmann at htwk-leipzig.de (Johannes Waldmann) Date: Tue, 6 Dec 2016 11:14:26 +0100 Subject: Measuring performance of GHC In-Reply-To: <871t69zayc.fsf@smart-cactus.org> References: <570CF467.4060001@htwk-leipzig.de> <871t69zayc.fsf@smart-cactus.org> Message-ID: <4e9e4089-fd7f-12a7-4d03-3c4fd9cfa625@htwk-leipzig.de> Hi, > ... to compile it with a profiled GHC and look at the report? How hard is it to build hackage or stackage with a profiled ghc? (Does it require ghc magic, or can I do it?) > ... some obvious sub-optimal algorithms in GHC. obvious to whom? you mean sub-optimality is already known, or that it would become obvious once the reports are there? Even without profiling - does hackage collect timing information from its automated builds? What needs to be done to add timing information in places like https://hackage.haskell.org/package/obdd-0.6.1/reports/1 ? - J.W.
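Moritz's tag-and-detect idea above could be sketched roughly as follows. This is a hypothetical illustration only — the `Sample` type, the tag names, and the 5% threshold are invented for the example and are not existing nofib or testsuite tooling:

```haskell
import qualified Data.Map.Strict as Map
import Data.Map.Strict (Map)

-- One compile-time sample: test-case name, its tags (e.g. "TH", "GADTs"),
-- and the ratio of the new measurement to the old one (1.0 = unchanged).
data Sample = Sample
  { sampleName :: String
  , sampleTags :: [String]
  , ratio      :: Double
  }

-- Average the new/old ratios per tag.
perTagMean :: [Sample] -> Map String Double
perTagMean samples = Map.map avg (Map.fromListWith (++) pairs)
  where
    pairs  = [ (t, [ratio s]) | s <- samples, t <- sampleTags s ]
    avg xs = sum xs / fromIntegral (length xs)

-- Report tags whose mean ratio exceeds a threshold, e.g.
-- "test cases tagged GADTs got more than 5% slower on this commit".
spikingTags :: Double -> [Sample] -> [String]
spikingTags threshold = Map.keys . Map.filter (> threshold) . perTagMean
```

For instance, `spikingTags 1.05 samples` would name the tags whose samples regressed by more than 5% on average, which is the kind of signal ("testcases predominantly with GADTs spike") the email describes.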
From simonpj at microsoft.com Tue Dec 6 14:02:30 2016 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 6 Dec 2016 14:02:30 +0000 Subject: How to inline early in a GHC plugin? In-Reply-To: References: Message-ID: * To access an unfolding, is `maybeUnfoldingTemplate (idUnfolding v)` the recommended recipe? You can see by looking at the code that idUnfolding returns nothing for a loop breaker. You have to decide if that’s what you want; if not, use realIdUnfolding. * Is it the case that this recipe succeeds (`Just`) in some compiler phases and not others? It fails for loop breakers. An Id might be a loop breaker in some phases but not others; e.g. the loop might be broken by some optimisation. * Before an Id is ready for general inlining by the simplifier, can I get the Id's unfolding another way so that I can substitute it early? realIdUnfolding always works. As the code shows:

idUnfolding :: Id -> Unfolding
-- Do not expose the unfolding of a loop breaker!
idUnfolding id
  | isStrongLoopBreaker (occInfo info) = NoUnfolding
  | otherwise                          = unfoldingInfo info
  where
    info = idInfo id

realIdUnfolding :: Id -> Unfolding
-- Expose the unfolding if there is one, including for loop breakers
realIdUnfolding id = unfoldingInfo (idInfo id)

Does that help? Simon From: conal.elliott at gmail.com [mailto:conal.elliott at gmail.com] On Behalf Of Conal Elliott Sent: 02 December 2016 18:13 To: Simon Peyton Jones Cc: ghc-devs at haskell.org Subject: Re: How to inline early in a GHC plugin? Thanks for the pointers, Simon. Some more specific questions: * To access an unfolding, is `maybeUnfoldingTemplate (idUnfolding v)` the recommended recipe? * Is it the case that this recipe succeeds (`Just`) in some compiler phases and not others? If so, is this difference due to Ids being altered (presumably via `setUnfoldingInfo` being called between phases)?
* Before an Id is ready for general inlining by the simplifier, can I get the Id's unfolding another way so that I can substitute it early? A short Skype chat might easily clear up my questions and confusions if you have time and inclination. Regards, - Conal On Fri, Dec 2, 2016 at 9:07 AM, Simon Peyton Jones > wrote: I don’t really understand your question clearly. So I’ll guess Unfoldings are added to Ids in Simplify.completeBind (look for setUnfoldingInfo). Apart from INLINE pragmas, that’s about the only place it happens. Does that help? S From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Conal Elliott Sent: 01 December 2016 21:51 To: ghc-devs at haskell.org Subject: How to inline early in a GHC plugin? I'm implementing a GHC plugin that installs a `BuiltInRule` that does the work, and I'd like to learn how to inline more flexibly. Given an identifier `v`, I'm using `maybeUnfoldingTemplate (realIdUnfolding v)` to get a `Maybe CoreExpr`. Sometimes this recipe yields `Nothing` until a later compiler phase. Meanwhile, I guess my variable `v` has been replaced by one with inlining info. First, am I understanding this mechanism correctly? A GHC source pointer to how inlining is made available would help me. Second, can I access the inlining info before it's made available to the rest of the simplifier? Thanks, - Conal -------------- next part -------------- An HTML attachment was scrubbed... URL: From michal.terepeta at gmail.com Tue Dec 6 19:27:13 2016 From: michal.terepeta at gmail.com (Michal Terepeta) Date: Tue, 06 Dec 2016 19:27:13 +0000 Subject: Measuring performance of GHC In-Reply-To: <87mvg9rfvj.fsf@ben-laptop.smart-cactus.org> References: <87mvg9rfvj.fsf@ben-laptop.smart-cactus.org> Message-ID: > On Tue, Dec 6, 2016 at 2:44 AM Ben Gamari wrote: > Michal Terepeta writes: > > [...] >> >> Looking at the comments on the proposal from Moritz, most people would >> prefer to >> extend/improve nofib or `tests/perf/compiler` tests. 
So I guess the main >> question is - what would be better: >> - Extending nofib with modules that are compile only (i.e., not >> runnable) and focus on stressing the compiler? >> - Extending `tests/perf/compiler` with ability to run all the tests and do >> easy "before and after" comparisons? >> >I don't have a strong opinion on which of these would be better. >However, I would point out that currently the tests/perf/compiler tests >are extremely labor-intensive to maintain while doing relatively little >to catch performance regressions. There are a few issues here: > > * some tests aren't very reproducible between runs, meaning that > contributors sometimes don't catch regressions in their local > validations > * many tests aren't very reproducible between platforms and all tests > are inconsistent between differing word sizes. This means that we end > up having many sets of expected performance numbers in the testsuite. > In practice nearly all of these except 64-bit Linux are out-of-date. > * our window-based acceptance criterion for performance metrics doesn't > catch most regressions, which typically bump allocations by a couple > percent or less (whereas the acceptance thresholds range from 5% to > 20%). This means that the testsuite fails to catch many deltas, only > failing when some unlucky person finally pushes the number over the > threshold. > > Joachim and I discussed this issue a few months ago at Hac Phi; he had > an interesting approach to tracking expected performance numbers which > may both alleviate these issues and reduce the maintenance burden that > the tests pose. I wrote down some terse notes in #12758. Thanks for mentioning the ticket! To be honest, I'm not a huge fan of having performance tests being treated the same as any other tests. IMHO they are quite different: - They usually need a quiet environment (e.g., cannot run two different tests at the same time). 
But with ordinary correctness tests, I can run as many as I want concurrently. - The output is not really binary (correct vs incorrect) but some kind of a number (or collection of numbers) that we want to track over time. - The decision whether to fail is harder. Since output might be noisy, you need to have either quite relaxed bounds (and miss small regressions) or try to enforce stronger bounds (and suffer from the flakiness and maintenance overhead). So for the purpose of: "I have a small change and want to check its effect on compiler performance and expect, e.g., ~1% difference" the model of running benchmarks separately from tests is much nicer. I can run them when I'm not doing anything else on the computer and then easily compare the results. (that's what I usually do for nofib). For tracking the performance over time, one could set something up to run the benchmarks when idle. (isn't that what perf.haskell.org is doing?) Due to that, if we want to extend tests/perf/compiler to support this use case, I think we should include benchmarks there that are *not* tests (and are not included in ./validate), along with some easy tool to run all of them and give you a quick comparison of what's changed. To a certain degree this would then be orthogonal to the improvements suggested in the ticket. But we could probably reuse some things (e.g., dumping .csv files for perf metrics?) How should we proceed? Should I open a new ticket focused on this? (maybe we could try to figure out all the details there?) Thanks, Michal -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ben at smart-cactus.org Tue Dec 6 21:09:52 2016 From: ben at smart-cactus.org (Ben Gamari) Date: Tue, 06 Dec 2016 16:09:52 -0500 Subject: Measuring performance of GHC In-Reply-To: References: <87mvg9rfvj.fsf@ben-laptop.smart-cactus.org> Message-ID: <87zik8pxwf.fsf@ben-laptop.smart-cactus.org> Michal Terepeta writes: >> On Tue, Dec 6, 2016 at 2:44 AM Ben Gamari wrote: >> >>I don't have a strong opinion on which of these would be better. >>However, I would point out that currently the tests/perf/compiler tests >>are extremely labor-intensive to maintain while doing relatively little >>to catch performance regressions. There are a few issues here: >> >> * some tests aren't very reproducible between runs, meaning that >> contributors sometimes don't catch regressions in their local >> validations >> * many tests aren't very reproducible between platforms and all tests >> are inconsistent between differing word sizes. This means that we end >> up having many sets of expected performance numbers in the testsuite. >> In practice nearly all of these except 64-bit Linux are out-of-date. >> * our window-based acceptance criterion for performance metrics doesn't >> catch most regressions, which typically bump allocations by a couple >> percent or less (whereas the acceptance thresholds range from 5% to >> 20%). This means that the testsuite fails to catch many deltas, only >> failing when some unlucky person finally pushes the number over the >> threshold. >> >> Joachim and I discussed this issue a few months ago at Hac Phi; he had >> an interesting approach to tracking expected performance numbers which >> may both alleviate these issues and reduce the maintenance burden that >> the tests pose. I wrote down some terse notes in #12758. > > Thanks for mentioning the ticket! > Sure! > To be honest, I'm not a huge fan of having performance tests being > treated the same as any other tests. 
IMHO they are quite different: > > - They usually need a quiet environment (e.g., cannot run two different > tests at the same time). But with ordinary correctness tests, I can > run as many as I want concurrently. > This is absolutely true; if I had a nickel for every time I saw the testsuite fail, only to pass upon re-running I would be able to fund a great deal of GHC development ;) > - The output is not really binary (correct vs incorrect) but some kind of a > number (or collection of numbers) that we want to track over time. > Yes, and this is more or less the idea which the ticket is supposed to capture; we track performance numbers in the GHC repository in git notes and have Harbormaster (or some other stable test environment) maintain them. Exact metrics would be recorded for every commit and we could warn during validate if something changes suspiciously (e.g. look at the mean and variance of the metric over the past N commits and squawk if the commit bumps the metric more than some number of sigmas). This sort of scheme could be implemented in either the testsuite or nofib. It's not clear that one is better than the other (although we would want to teach the testsuite driver to run performance tests serially). > - The decision whether to fail is harder. Since output might be noisy, you > need to have either quite relaxed bounds (and miss small > regressions) or try to enforce stronger bounds (and suffer from the > flakiness and maintenance overhead). > Yep. That is right. > So for the purpose of: > "I have a small change and want to check its effect on compiler > performance and expect, e.g., ~1% difference" > the model running of benchmarks separately from tests is much nicer. I > can run them when I'm not doing anything else on the computer and then > easily compare the results. (that's what I usually do for nofib). For > tracking the performance over time, one could set something up to run > the benchmarks when idle. 
(isn't that's what perf.haskell.org is > doing?) > > Due to that, if we want to extend tests/perf/compiler to support this > use case, I think we should include there benchmarks that are *not* > tests (and are not included in ./validate), but there's some easy tool > to run all of them and give you a quick comparison of what's changed. > When you put it like this it does sound like nofib is the natural choice here. > To a certain degree this would be then orthogonal to the improvements > suggested in the ticket. But we could probably reuse some things > (e.g., dumping .csv files for perf metrics?) > Indeed. > How should we proceed? Should I open a new ticket focused on this? > (maybe we could try to figure out all the details there?) > That sounds good to me. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From mail at joachim-breitner.de Tue Dec 6 21:20:27 2016 From: mail at joachim-breitner.de (Joachim Breitner) Date: Tue, 06 Dec 2016 16:20:27 -0500 Subject: Measuring performance of GHC In-Reply-To: References: <87mvg9rfvj.fsf@ben-laptop.smart-cactus.org> Message-ID: <1481059227.9850.1.camel@joachim-breitner.de> Hi, Am Dienstag, den 06.12.2016, 19:27 +0000 schrieb Michal Terepeta: > (isn't that's what perf.haskell.org is doing?) for compiler performance, it only reports the test suite perf test number so far. If someone modifies the nofib runner to give usable timing results for the compiler, I can easily track these numbers as well. Greetings, Joachim -- Joachim “nomeata” Breitner   mail at joachim-breitner.de • https://www.joachim-breitner.de/   XMPP: nomeata at joachim-breitner.de • OpenPGP-Key: 0xF0FBF51F   Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: This is a digitally signed message part URL: From ben at well-typed.com Tue Dec 6 22:08:09 2016 From: ben at well-typed.com (Ben Gamari) Date: Tue, 06 Dec 2016 17:08:09 -0500 Subject: Measuring performance of GHC In-Reply-To: <4e9e4089-fd7f-12a7-4d03-3c4fd9cfa625@htwk-leipzig.de> References: <570CF467.4060001@htwk-leipzig.de> <871t69zayc.fsf@smart-cactus.org> <4e9e4089-fd7f-12a7-4d03-3c4fd9cfa625@htwk-leipzig.de> Message-ID: <87twagpv7a.fsf@ben-laptop.smart-cactus.org> Johannes Waldmann writes: > Hi, > >> ... to compile it with a profiled GHC and look at the report? > > How hard is it to build hackage or stackage > with a profiled ghc? (Does it require ghc magic, or can I do it?) > Not terribly hard although it could be made smoother. To start you'll need to compile a profiled GHC. To do this you simply want to something like the following, 1. install the necessary build dependencies [1] 2. get the sources [2] 3. configure the tree to produce a profiled compiler: a. cp mk/build.mk.sample mk/build.mk b. uncomment the line `BuildFlavour=prof` in mk/build.mk 4. `./boot && ./configure --prefix=$dest && make && make install` Then for a particular package, 1. get a working directory: `cabal unpack $pkg && cd $pkg-*` 2. `args="--with-ghc=$dest/bin/ghc --allow-newer=base,ghc-prim,template-haskell,..."` 3. install dependencies: `cabal install --only-dependencies $args .` 4. run the build, `cabal configure --ghc-options="-p -hc" $args && cabal build` You should end up with a .prof and .hp file. Honestly, I often skip the `cabal` step entirely and just use `ghc` to compile a module of interest directly. [1] https://ghc.haskell.org/trac/ghc/wiki/Building/Preparation [2] https://ghc.haskell.org/trac/ghc/wiki/Building/GettingTheSources >> ... some obvious sub-optimal algorithms in GHC. > > obvious to whom? 
you mean sub-optimality is already known, > or that it would become obvious once the reports are there? > I think "obvious" may have been a bit of a strong word here. There are sub-optimal algorithms in the compiler and they can be found with a bit of work. If you have a good testcase tickling such an algorithm finding the issue can be quite straightforward; if not then the process can be a bit trickier. However, GHC is just another Haskell program and performance issues are approached just like in any other project. > Even without profiling - does hackage collect timing information from > its automated builds? > Sadly it doesn't. But... > What needs to be done to add timing information in places like > https://hackage.haskell.org/package/obdd-0.6.1/reports/1 ? > I've discussed the possibility with Herbert to add instrumentation in his matrix builder [3] to collect this sort of information. As a general note, keep in mind that timings are quite unstable, dependent upon factors beyond our control at all levels of the stack. For this reason, I generally prefer to rely on allocations, not runtimes, while profiling. As always, don't hesitate to drop by #ghc if you run into trouble. Cheers, - Ben [3] http://matrix.hackage.haskell.org/packages -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From ben at smart-cactus.org Tue Dec 6 22:14:10 2016 From: ben at smart-cactus.org (Ben Gamari) Date: Tue, 06 Dec 2016 17:14:10 -0500 Subject: Measuring performance of GHC In-Reply-To: <1481059227.9850.1.camel@joachim-breitner.de> References: <87mvg9rfvj.fsf@ben-laptop.smart-cactus.org> <1481059227.9850.1.camel@joachim-breitner.de> Message-ID: <87r35kpux9.fsf@ben-laptop.smart-cactus.org> Joachim Breitner writes: > Hi, > > Am Dienstag, den 06.12.2016, 19:27 +0000 schrieb Michal Terepeta: >> (isn't that's what perf.haskell.org is doing?) 
> > for compiler performance, it only reports the test suite perf test > number so far. > > If someone modifies the nofib runner to give usable timing results for > the compiler, I can easily track these numbers as well. > I have a module [1] that does precisely this for the PITA project (which I still have yet to put up on a public server; I'll try to make time for this soon). Cheers, - Ben [1] https://github.com/bgamari/ghc-perf-import/blob/master/SummarizeResults.hs -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From mail at joachim-breitner.de Tue Dec 6 22:20:37 2016 From: mail at joachim-breitner.de (Joachim Breitner) Date: Tue, 06 Dec 2016 17:20:37 -0500 Subject: Measuring performance of GHC In-Reply-To: <87r35kpux9.fsf@ben-laptop.smart-cactus.org> References: <87mvg9rfvj.fsf@ben-laptop.smart-cactus.org> <1481059227.9850.1.camel@joachim-breitner.de> <87r35kpux9.fsf@ben-laptop.smart-cactus.org> Message-ID: <1481062837.9850.3.camel@joachim-breitner.de> Hi, Am Dienstag, den 06.12.2016, 17:14 -0500 schrieb Ben Gamari: > Joachim Breitner writes: > > > Hi, > > > > Am Dienstag, den 06.12.2016, 19:27 +0000 schrieb Michal Terepeta: > > > (isn't that what perf.haskell.org is doing?) > > > > for compiler performance, it only reports the test suite perf test > > number so far. > > > > If someone modifies the nofib runner to give usable timing results for > > the compiler, I can easily track these numbers as well. > > > > I have a module [1] that does precisely this for the PITA project (which > I still have yet to put up on a public server; I'll try to make time for > this soon). Are you saying that the compile time measurements of a single run of the compiler are actually useful? I’d expect we first have to make nofib call the compiler repeatedly. Also, shouldn’t this then become part of nofib-analyse?
Greetings, Joachim -- Joachim “nomeata” Breitner   mail at joachim-breitner.de • https://www.joachim-breitner.de/   XMPP: nomeata at joachim-breitner.de • OpenPGP-Key: 0xF0FBF51F   Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: This is a digitally signed message part URL: From ben at smart-cactus.org Tue Dec 6 23:51:34 2016 From: ben at smart-cactus.org (Ben Gamari) Date: Tue, 06 Dec 2016 18:51:34 -0500 Subject: Measuring performance of GHC In-Reply-To: <1481062837.9850.3.camel@joachim-breitner.de> References: <87mvg9rfvj.fsf@ben-laptop.smart-cactus.org> <1481059227.9850.1.camel@joachim-breitner.de> <87r35kpux9.fsf@ben-laptop.smart-cactus.org> <1481062837.9850.3.camel@joachim-breitner.de> Message-ID: <87oa0opqex.fsf@ben-laptop.smart-cactus.org> Joachim Breitner writes: > Hi, > > Am Dienstag, den 06.12.2016, 17:14 -0500 schrieb Ben Gamari: >> Joachim Breitner writes: >> >> > Hi, >> > >> > Am Dienstag, den 06.12.2016, 19:27 +0000 schrieb Michal Terepeta: >> > > (isn't that's what perf.haskell.org is doing?) >> > >> > for compiler performance, it only reports the test suite perf test >> > number so far. >> > >> > If someone modifies the nofib runner to give usable timing results for >> > the compiler, I can easily track these numbers as well. >> > >> >> I have a module [1] that does precisely this for the PITA project (which >> I still have yet to put up on a public server; I'll try to make time for >> this soon). > > Are you saying that the compile time measurements of a single run of > the compiler are actually useful? > Not really, I generally ignore the compile times. However, knowing compiler allocations on a per-module basis is quite nice. > I’d expect we first have to make nofib call the compiler repeatedly. > This would be a good idea though. > Also, shouldn’t this then become part of nofib-analye? 
> The logic for producing these statistics is implemented by nofib-analyse's Slurp module today. All the script does is produce the statistics in a more consistent format. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From johannes.waldmann at htwk-leipzig.de Wed Dec 7 10:34:42 2016 From: johannes.waldmann at htwk-leipzig.de (Johannes Waldmann) Date: Wed, 7 Dec 2016 11:34:42 +0100 Subject: Measuring performance of GHC In-Reply-To: <87twagpv7a.fsf@ben-laptop.smart-cactus.org> References: <570CF467.4060001@htwk-leipzig.de> <871t69zayc.fsf@smart-cactus.org> <4e9e4089-fd7f-12a7-4d03-3c4fd9cfa625@htwk-leipzig.de> <87twagpv7a.fsf@ben-laptop.smart-cactus.org> Message-ID: <606f91df-bb5c-62cd-8413-fe61290d1273@htwk-leipzig.de> Hi Ben, thanks, > 4. run the build, `cabal configure --ghc-options="-p -hc" $args && cabal build` cabal configure $args --ghc-options="+RTS -p -hc -RTS" > You should end up with a .prof and .hp file. Yes, that works. - Typical output starts like this:

COST CENTRE    MODULE    %time %alloc
SimplTopBinds  SimplCore  60.7   57.3
OccAnal        SimplCore   6.0    6.0
Simplify       SimplCore   3.0    0.5

These files are always called ghc.{prof,hp}, how could this be changed? Ideally, the output file name would depend on the package being compiled, then the mechanism could probably be used with 'stack' builds. Building executables mentioned in the cabal file will already overwrite profiling info from building libraries. When I 'cabal build' the 'text' package, then the last actual compilation (which leaves the profiling info) is for cbits/cbits.c I don't see how to build Data/Text.hs alone (with ghc, not via cabal), I am getting Failed to load interface for ‘Data.Text.Show’ - J.
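Joachim's concern about single-run compile times — that one sample is too noisy to be useful — suggests repeating each measurement and keeping the best. A standalone sketch of that idea (an illustration only, not nofib code; note that `getCPUTime` reports picoseconds):

```haskell
import System.CPUTime (getCPUTime)
import Control.Monad (replicateM)

-- Time an IO action, in seconds of CPU time.
timeIt :: IO a -> IO Double
timeIt act = do
  start <- getCPUTime
  _ <- act
  end <- getCPUTime
  return (fromIntegral (end - start) / 1e12)  -- picoseconds -> seconds

-- Run the action n times and keep the minimum, which is usually the
-- least noisy statistic for timings on a busy machine.
bestOf :: Int -> IO a -> IO Double
bestOf n act = minimum <$> replicateM n (timeIt act)
```

A nofib-style runner could wrap each compiler invocation in `bestOf 3` (or similar) before reporting, rather than trusting a single run.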
From mail at joachim-breitner.de Wed Dec 7 14:51:12 2016 From: mail at joachim-breitner.de (Joachim Breitner) Date: Wed, 07 Dec 2016 09:51:12 -0500 Subject: Measuring performance of GHC In-Reply-To: <606f91df-bb5c-62cd-8413-fe61290d1273@htwk-leipzig.de> References: <570CF467.4060001@htwk-leipzig.de> <871t69zayc.fsf@smart-cactus.org> <4e9e4089-fd7f-12a7-4d03-3c4fd9cfa625@htwk-leipzig.de> <87twagpv7a.fsf@ben-laptop.smart-cactus.org> <606f91df-bb5c-62cd-8413-fe61290d1273@htwk-leipzig.de> Message-ID: <1481122272.1113.0.camel@joachim-breitner.de> Hi, Am Mittwoch, den 07.12.2016, 11:34 +0100 schrieb Johannes Waldmann: > When I 'cabal build' the 'text' package, > then the last actual compilation (which leaves > the profiling info) is for cbits/cbits.c > > I don't see how to build Data/Text.hs alone > (with ghc, not via cabal), I am getting > Failed to load interface for ‘Data.Text.Show’ you can run $ cabal build -v and then copy’n’paste the command line that you are intested in, add the flags +RTS -p -hc -RTS -fforce-recomp and run that again. Greetings, Joachim -- Joachim “nomeata” Breitner   mail at joachim-breitner.de • https://www.joachim-breitner.de/   XMPP: nomeata at joachim-breitner.de • OpenPGP-Key: 0xF0FBF51F   Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: This is a digitally signed message part URL: From ben at well-typed.com Wed Dec 7 15:02:09 2016 From: ben at well-typed.com (Ben Gamari) Date: Wed, 07 Dec 2016 10:02:09 -0500 Subject: Measuring performance of GHC In-Reply-To: <606f91df-bb5c-62cd-8413-fe61290d1273@htwk-leipzig.de> References: <570CF467.4060001@htwk-leipzig.de> <871t69zayc.fsf@smart-cactus.org> <4e9e4089-fd7f-12a7-4d03-3c4fd9cfa625@htwk-leipzig.de> <87twagpv7a.fsf@ben-laptop.smart-cactus.org> <606f91df-bb5c-62cd-8413-fe61290d1273@htwk-leipzig.de> Message-ID: <87inqvpytq.fsf@ben-laptop.smart-cactus.org> Johannes Waldmann writes: > Hi Ben, thanks, > > >> 4. run the build, `cabal configure --ghc-options="-p -hc" $args && cabal build` > > cabal configure $args --ghc-options="+RTS -p -hc -RTS" > Ahh, yes, of course. I should have tried this before hitting send. >> You should end up with a .prof and .hp file. > > Yes, that works. - Typical output starts like this > > COST CENTRE MODULE %time %alloc > > SimplTopBinds SimplCore 60.7 57.3 > OccAnal SimplCore 6.0 6.0 > Simplify SimplCore 3.0 0.5 > Ahh yes. So one of the things I neglected to mention is that the profiled build flavour includes only a few cost centers. One of the tricky aspects of the cost-center profiler is that it affects core-to-core optimizations, meaning that the act of profiling may actually shift around costs. Consequently, by default the build flavour includes a rather conservative set of cost-centers to avoid distorting the results and preserve compiler performance. Typically when I've profiled the compiler I already have a region of interest in mind. I simply add `OPTIONS_GHC -fprof-auto` pragmas to the modules involved. The build system already adds this flag to a few top-level modules, hence the cost-centers which you observe (see compiler/ghc.mk; search for GhcProfiled).
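Putting together the build steps Ben gave earlier in the thread, the corresponding mk/build.mk configuration might look roughly like this (a sketch; `BuildFlavour = prof` comes from the steps above, and the variable name `GhcStage2HcOpts` is assumed from the GHC build system of this era):

```make
# mk/build.mk -- sketch of a profiled GHC build configuration
BuildFlavour = prof

# Heavier-handed option: add cost centres to every stage-2 module.
# The resulting compiler is slower and the profile denser.
GhcStage2HcOpts += -fprof-auto
```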
If you don't have a particular piece of the compiler in mind to study, you certainly can just pepper every module with cost centers by adding -fprof-auto to GhcStage2HcOpts (e.g. in mk/build.mk). The resulting compiler may be a bit slow and you may need to be just a tad more careful in evaluating the profile. It might be nice if we had a more aggressive profiled build flavour which added cost centers to a larger fraction of the machinery of the compiler, while excluding low-level utilities like FastString, which are critical to the compiler's performance. > > These files are always called ghc.{prof,hp}, > how could this be changed? Ideally, the output file name > would depend on the package being compiled, > then the mechanism could probably be used with 'stack' builds. > We really should have a way to do this but sadly do not currently. Ideally we would also have a way to change the default eventlog destination path. > Building executables mentioned in the cabal file will > already overwrite profiling info from building libraries. > Note that you can instruct `cabal` to only build a single component of a package. For instance, in the case of the `text` package you can build just the library component with `cabal build text`. > When I 'cabal build' the 'text' package, > then the last actual compilation (which leaves > the profiling info) is for cbits/cbits.c > Ahh right. Moreover, there is likely another GHC invocation after that to link the final library. This is why I typically just use GHC directly, perhaps stealing the command line produced by `cabal` (with `-v`). > I don't see how to build Data/Text.hs alone > (with ghc, not via cabal), I am getting > Failed to load interface for ‘Data.Text.Show’ > Hmm, I'm not sure I see the issue.
In the case of `text` I can just run `ghc` from the source root (ensuring that I set the #include path with `-I`), $ git clone git://github.com/bos/text $ cd text $ ghc Data/Text.hs -Iinclude However, some other packages (particularly those that make heavy use of CPP) aren't entirely straightforward. In these cases I often find myself copying bits from the command line produced by cabal. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From michal.terepeta at gmail.com Wed Dec 7 19:27:08 2016 From: michal.terepeta at gmail.com (Michal Terepeta) Date: Wed, 07 Dec 2016 19:27:08 +0000 Subject: Measuring performance of GHC In-Reply-To: <87zik8pxwf.fsf@ben-laptop.smart-cactus.org> References: <87mvg9rfvj.fsf@ben-laptop.smart-cactus.org> <87zik8pxwf.fsf@ben-laptop.smart-cactus.org> Message-ID: On Tue, Dec 6, 2016 at 10:10 PM Ben Gamari wrote: > [...] > > How should we proceed? Should I open a new ticket focused on this? > > (maybe we could try to figure out all the details there?) > > > That sounds good to me. Cool, opened: https://ghc.haskell.org/trac/ghc/ticket/12941 to track this. Cheers, Michal -------------- next part -------------- An HTML attachment was scrubbed... URL: From conal at conal.net Thu Dec 8 03:43:38 2016 From: conal at conal.net (Conal Elliott) Date: Wed, 7 Dec 2016 19:43:38 -0800 Subject: How to inline early in a GHC plugin? In-Reply-To: References: Message-ID: Yes, thank you, Simon. It had not occurred to me that an inlining could start working in a later phase due to loss of loop-breaker status, rather than the relationship between the current phase and the identifier's declared inlining phase. -- Conal On Tue, Dec 6, 2016 at 6:02 AM, Simon Peyton Jones wrote: > * To access an unfolding, is `maybeUnfoldingTemplate (idUnfolding v)` > the recommended recipe?
> > You can see by looking at the code that idUnfolding returns nothing for a > loop breaker. You have to decide if that’s what you want; if not, use > realIdUnfolding. > > > > * Is it the case that this recipe succeeds (`Just`) in some compiler > phases and not others? > > It fails for loop breakers. An Id might be a loop breaker in some phases > but not others; e.g. the loop might be broken by some optimisation. > > > > * Before an Id is ready for general inlining by the simplifier, can I > get the Id's unfolding another way so that I can substitute it early? > > > > realIdUnfolding always works. As the code shows > > > > idUnfolding :: Id -> Unfolding > > -- Do not expose the unfolding of a loop breaker! > > idUnfolding id > > | isStrongLoopBreaker (occInfo info) = NoUnfolding > > | otherwise = unfoldingInfo info > > where > > info = idInfo id > > > > realIdUnfolding :: Id -> Unfolding > > -- Expose the unfolding if there is one, including for loop breakers > > realIdUnfolding id = unfoldingInfo (idInfo id) > > > > > > Does that help? > > > > Simon > > > > > > *From:* conal.elliott at gmail.com [mailto:conal.elliott at gmail.com] *On > Behalf Of *Conal Elliott > *Sent:* 02 December 2016 18:13 > *To:* Simon Peyton Jones > *Cc:* ghc-devs at haskell.org > *Subject:* Re: How to inline early in a GHC plugin? > > > > Thanks for the pointers, Simon. Some more specific questions: > > > > * To access an unfolding, is `maybeUnfoldingTemplate (idUnfolding v)` > the recommended recipe? > > * Is it the case that this recipe succeeds (`Just`) in some compiler > phases and not others? > > If so, is this difference due to Ids being altered (presumably via > `setUnfoldingInfo` being called between phases)? > > * Before an Id is ready for general inlining by the simplifier, can I > get the Id's unfolding another way so that I can substitute it early? > > > > A short Skype chat might easily clear up my questions and confusions if > you have time and inclination. 
> > > > Regards, - Conal > > > > On Fri, Dec 2, 2016 at 9:07 AM, Simon Peyton Jones > wrote: > > I don’t really understand your question clearly. So I’ll guess > > > > Unfoldings are added to Ids in Simplify.completeBind (look for > setUnfoldingInfo). Apart from INLINE pragmas, that’s about the only place > it happens. > > > > Does that help? > > > > S > > > > *From:* ghc-devs [mailto:ghc-devs-bounces at haskell.org] *On Behalf Of *Conal > Elliott > *Sent:* 01 December 2016 21:51 > *To:* ghc-devs at haskell.org > *Subject:* How to inline early in a GHC plugin? > > > > I'm implementing a GHC plugin that installs a `BuiltInRule` that does the > work, and I'd like to learn how to inline more flexibly. Given an > identifier `v`, I'm using `maybeUnfoldingTemplate (realIdUnfolding v)` to > get a `Maybe CoreExpr`. Sometimes this recipe yields `Nothing` until a > later compiler phase. Meanwhile, I guess my variable `v` has been replaced > by one with inlining info. First, am I understanding this mechanism > correctly? A GHC source pointer to how inlining is made available would > help me. Second, can I access the inlining info before it's made available > to the rest of the simplifier? > > > > Thanks, - Conal > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail at joachim-breitner.de Thu Dec 8 06:03:55 2016 From: mail at joachim-breitner.de (Joachim Breitner) Date: Thu, 08 Dec 2016 01:03:55 -0500 Subject: Attempt at a real world benchmark Message-ID: <1481177035.18160.1.camel@joachim-breitner.de> Hi, I have talked so much about it, it was about time to actually follow through. I took a real-world program (tttool, one of mine, how shameless), inlined all the dependencies so that it would compile without Cabal, just with a single invocation of GHC on the 277 modules.
The state, including a README, can be found at https://github.com/nomeata/tttool-nofib#turning-tttool-into-a-benchmark I am not sure how useful this is going to be: + Tests lots of common and important real-world libraries. − Takes a lot of time to compile, includes CPP macros and C code. (More details in the README linked above). It is late, so I’ll just send this out as it is for now to get the discussion going if this is a useful approach, or what should be done differently. (If this is deemed to be useful, I’d do the same for, say, pandoc next). Greetings, Joachim -- Joachim “nomeata” Breitner   mail at joachim-breitner.de • https://www.joachim-breitner.de/   XMPP: nomeata at joachim-breitner.de • OpenPGP-Key: 0xF0FBF51F   Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: This is a digitally signed message part URL: From simonpj at microsoft.com Thu Dec 8 12:06:20 2016 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 8 Dec 2016 12:06:20 +0000 Subject: Windows validate failures Message-ID: I'm getting irreproducible validate failures on Windows.
Here's the output: Unexpected failures: ghci/prog003/prog003.run prog003 [bad exit code] (ghci) plugins/plugins07.run plugins07 [bad exit code] (normal) plugins/T10420.run T10420 [bad exit code] (normal) plugins/T10294a.run T10294a [bad exit code] (normal) plugins/T11244.run T11244 [bad stderr] (normal) Framework failures: plugins/plugins07.run plugins07 [normal] (pre_cmd failed: 2) plugins/T10420.run T10420 [normal] (pre_cmd failed: 2) plugins/T10294a.run T10294a [normal] (pre_cmd failed: 2) plugins/T11244.run T11244 [normal] (pre_cmd failed: 2) th/TH_spliceE5.run TH_spliceE5 [ext-interp] ([Errno 17] File exists: '/c/Users/simonpj/AppData/Local/Temp/ghctest-zsi7djxa/test spaces/./th/TH_spliceE5.run') th/TH_reifyMkName.run TH_reifyMkName [ext-interp] ([Errno 17] File exists: '/c/Users/simonpj/AppData/Local/Temp/ghctest-zsi7djxa/test spaces/./th/TH_reifyMkName.run') th/T3395.run T3395 [ext-interp] ([Errno 17] File exists: '/c/Users/simonpj/AppData/Local/Temp/ghctest-zsi7djxa/test spaces/./th/T3395.run') th/T5508.run T5508 [ext-interp] ([Errno 17] File exists: '/c/Users/simonpj/AppData/Local/Temp/ghctest-zsi7djxa/test spaces/./th/T5508.run') th/T10828.run T10828 [ext-interp] ([Errno 17] File exists: '/c/Users/simonpj/AppData/Local/Temp/ghctest-zsi7djxa/test spaces/./th/T10828.run') I have no idea what the framework failures mean. 
But those plugin tests work ok when run one at a time .../tests/plugins$ make TEST=T10420 PYTHON="python3" "python3" ../../driver/runtests.py -e ghc_compiler_always_flags="'-dcore-lint -dcmm-lint -no-user-package-db -rtsopts -fno-warn-missed-specialisations -fshow-warning-groups -dno-debug-output'" -e config.compiler_debugged=False -e ghc_with_native_codegen=1 -e config.have_vanilla=True -e config.have_dynamic=False -e config.have_profiling=False -e ghc_with_threaded_rts=1 -e ghc_with_dynamic_rts=0 -e config.have_interp=True -e config.unregisterised=False -e config.ghc_dynamic_by_default=False -e config.ghc_dynamic=False -e ghc_with_smp=1 -e ghc_with_llvm=0 -e windows=True -e darwin=False -e config.in_tree_compiler=True -e config.cleanup=True -e config.local=True --rootdir=. --configfile=../../config/ghc -e 'config.confdir="../../config"' -e 'config.platform="x86_64-unknown-mingw32"' -e 'config.os="mingw32"' -e 'config.arch="x86_64"' -e 'config.wordsize="64"' -e 'config.timeout=int() or config.timeout' -e 'config.exeext=".exe"' -e 'config.top="/c/code/HEAD/testsuite"' --config 'compiler="/c/code/HEAD/inplace/bin/ghc-stage2.exe"' --config 'ghc_pkg="/c/code/HEAD/inplace/bin/ghc-pkg.exe"' --config 'haddock="/c/code/HEAD/inplace/bin/haddock.exe"' --config 'hp2ps="/c/code/HEAD/inplace/bin/hp2ps.exe"' --config 'hpc="/c/code/HEAD/inplace/bin/hpc.exe"' --config 'gs="gs"' --config 'timeout_prog="../../timeout/install-inplace/bin/timeout.exe"' -e "config.stage=2" \ --only=T10420 \ \ \ \ \ \ Timeout is 300 Found 1 .T files... 
Beginning test run at Thu Dec 8 12:02:23 2016 GMTST ====> Scanning ./all.T =====> T10420(normal) 1 of 1 [0, 0, 0] cd "./T10420.run" && $MAKE -s --no-print-directory -C rule-defining-plugin package.T10420 TOP=/c/code/HEAD/testsuite cd "./T10420.run" && $MAKE -s --no-print-directory T10420 SUMMARY for test run started at Thu Dec 8 12:02:23 2016 GMTST 0:00:13 spent to go through 1 total tests, which gave rise to 1 test cases, of which 0 were skipped 0 had missing libraries 1 expected passes 0 expected failures 0 caused framework failures 0 unexpected passes 0 unexpected failures 0 unexpected stat failures -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Thu Dec 8 14:46:37 2016 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 8 Dec 2016 14:46:37 +0000 Subject: Attempt at a real world benchmark In-Reply-To: <1481177035.18160.1.camel@joachim-breitner.de> References: <1481177035.18160.1.camel@joachim-breitner.de> Message-ID: I'm delighted to see all this traffic about GHC perf -- thank you. 277 modules sounds like quite a lot; but in general a test suite that took a while (minutes, not hours) to compile would be fine. We can run it on a nightly server somewhere. Having a dashboard where you can see the results would be good. Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Joachim | Breitner | Sent: 08 December 2016 06:04 | To: ghc-devs at haskell.org | Subject: Attempt at a real world benchmark | | Hi, | | I have talked so much about it, it was about time to actually follow | through. | | I took a real-world program (tttool, one of mine, show shameless), | inlined all the dependencies so that it would compile without Cabal, just | with a single invocation of GHC on the 277 modules. 
| | The state, including a README, can be found at | https://github.com/nomeata/tttool-nofib#turning-tttool-into-a-benchmark | | I am not sure how useful this is going to be: | + Tests lots of common and important real-world libraries. | − Takes a lot of time to compile, includes CPP macros and C code. | (More details in the README linked above). | | It is late, so I’ll just send this out as it is for now to get the | discussion going if this is a useful approach, or what should be done | differently. | | (If this is deemed to be useful, I’d do the same for, say, pandoc next). | | Greetings, | Joachim | | -- | Joachim “nomeata” Breitner |   mail at joachim-breitner.de • https://www.joachim-breitner.de/ |   XMPP: nomeata at joachim-breitner.de • OpenPGP-Key: 0xF0FBF51F |   Debian Developer: nomeata at debian.org From mail at joachim-breitner.de Thu Dec 8 16:04:40 2016 From: mail at joachim-breitner.de (Joachim Breitner) Date: Thu, 08 Dec 2016 11:04:40 -0500 Subject: Attempt at a real world benchmark In-Reply-To: <1481177035.18160.1.camel@joachim-breitner.de> References: <1481177035.18160.1.camel@joachim-breitner.de> Message-ID: <1481213080.1075.13.camel@joachim-breitner.de> Hi, Am Donnerstag, den 08.12.2016, 01:03 -0500 schrieb Joachim Breitner: > I am not sure how useful this is going to be: >  + Tests lots of common and important real-world libraries. >  − Takes a lot of time to compile, includes CPP macros and C code.
> (More details in the README linked above). another problem with the approach of taking modern real-world code: It uses a lot of non-boot libraries that are quite compiler-close and do low-level stuff (e.g. using Template Haskell, or stuff like that). If we add that to nofib, we’d have to maintain its compatibility with GHC as we continue developing GHC, probably using lots of CPP. This was less an issue with the Haskell98 code in nofib. But is there a way to test realistic modern code without running into this problem? Greetings, Joachim -- Joachim “nomeata” Breitner   mail at joachim-breitner.de • https://www.joachim-breitner.de/   XMPP: nomeata at joachim-breitner.de • OpenPGP-Key: 0xF0FBF51F   Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: This is a digitally signed message part URL: From ben at well-typed.com Thu Dec 8 17:55:55 2016 From: ben at well-typed.com (Ben Gamari) Date: Thu, 08 Dec 2016 12:55:55 -0500 Subject: Linux Harbormaster host Message-ID: <87fulypaok.fsf@ben-laptop.smart-cactus.org> Hello everyone, This morning I noticed that the Linux Harbormaster builder somehow got stuck. I've killed off the hung build and added a timeout to the build script to prevent this from happening again. That being said, it may take a while for it to catch up so don't be surprised if it takes a while for Harbormaster to get to testing new diffs. Sorry for the inconvenience! Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From ben at smart-cactus.org Thu Dec 8 18:20:29 2016 From: ben at smart-cactus.org (Ben Gamari) Date: Thu, 08 Dec 2016 13:20:29 -0500 Subject: Windows validate failures In-Reply-To: References: Message-ID: <87a8c6p9jm.fsf@ben-laptop.smart-cactus.org> Simon Peyton Jones via ghc-devs writes: > I'm getting irreproducible validate failures on Windows. Here's the output: > What commit are you on? The plugins issues I've seen before but I believe the "Errno 17" issues we recently fixed. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From moritz at lichtzwerge.de Fri Dec 9 01:50:34 2016 From: moritz at lichtzwerge.de (Moritz Angermann) Date: Fri, 9 Dec 2016 09:50:34 +0800 Subject: Attempt at a real world benchmark In-Reply-To: <1481213080.1075.13.camel@joachim-breitner.de> References: <1481177035.18160.1.camel@joachim-breitner.de> <1481213080.1075.13.camel@joachim-breitner.de> Message-ID: <7F06D71A-56ED-4D27-8CC3-1E89EC647B3A@lichtzwerge.de> Hi, let me thank you perusing this! >> I am not sure how useful this is going to be: >> + Tests lots of common and important real-world libraries. >> − Takes a lot of time to compile, includes CPP macros and C code. >> (More details in the README linked above). > > another problem with the approach of taking modern real-world code: > It uses a lot of non-boot libraries that are quite compiler-close and > do low-level stuff (e.g. using Template Haskell, or stuff like the). If > we add that not nofib, we’d have to maintain its compatibility with GHC > as we continue developing GHC, probably using lots of CPP. This was > less an issue with the Haskell98 code in nofib. > > But is there a way to test realistic modern code without running into > this problem? 
what are the reasons besides fragmentation for a modern real-world test suite outside of ghc (maybe even maintained by a different set of people)? At some point you would also end up having a matrix of performance measurements due to the evolution of the library and the evolution of ghc. Fixing the library to profile against ghc will likely end at some point in incompatibility with ghc. Fixing ghc will similarly at some point end with the inability to compile the library. However if both are always updated, how could one discriminate performance regressions of the library against regressions due to changes in ghc? — What measurements did you collect? Are these broken down per module? Something I’ve recently had some success with was dumping measurements into influxdb[1] (or a similar data point collections service) and hook that up to grafana[2] for visualization. cheers, moritz — [1]: https://www.influxdata.com/ [2]: http://grafana.org/ From mail at joachim-breitner.de Fri Dec 9 05:00:49 2016 From: mail at joachim-breitner.de (Joachim Breitner) Date: Fri, 09 Dec 2016 00:00:49 -0500 Subject: Attempt at a real world benchmark In-Reply-To: <7F06D71A-56ED-4D27-8CC3-1E89EC647B3A@lichtzwerge.de> References: <1481177035.18160.1.camel@joachim-breitner.de> <1481213080.1075.13.camel@joachim-breitner.de> <7F06D71A-56ED-4D27-8CC3-1E89EC647B3A@lichtzwerge.de> Message-ID: <1481259649.28496.1.camel@joachim-breitner.de> Hi, Am Freitag, den 09.12.2016, 09:50 +0800 schrieb Moritz Angermann: > Hi, > > let me thank you perusing this! > > > > I am not sure how useful this is going to be: > > >  + Tests lots of common and important real-world libraries. > > >  − Takes a lot of time to compile, includes CPP macros and C code. > > > (More details in the README linked above). > > > > another problem with the approach of taking modern real-world code: > > It uses a lot of non-boot libraries that are quite compiler-close and > > do low-level stuff (e.g. 
using Template Haskell, or stuff like the). If > > we add that not nofib, we’d have to maintain its compatibility with GHC > > as we continue developing GHC, probably using lots of CPP. This was > > less an issue with the Haskell98 code in nofib. > > > > But is there a way to test realistic modern code without running into > > this problem? > > > what are the reasons besides fragmentation for a modern real-world test > suite outside of ghc (maybe even maintained by a different set of people)? I am not sure what you are saying. Are you proposing the maintain a benchmark set outside GHC, or did you get the impression that I am proposing it? > At some point you would also end up having a matrix of performance > measurements due to the evolution of the library and the evolution of ghc. > Fixing the library to profile against ghc will likely end at some point in > incompatibility with ghc. Fixing ghc will similarly at some point end with > the inability to compile the library. My motivation right now is to provide something to measure GHC, so this would involve fixing the library. And that is what I am worried about: Too much maintenance effort in keeping this large piece of code compatible with GHC. But maybe it is ok if it part of nofib, and hence of GHC, so that every breaking change in GHC can immediately be accounted for in the benchmark code. A nice side effect of this might be that GHC developers can get a better idea of how much code their change breaks. > > What measurements did you collect? Are these broken down per module? Nothing yet, this is on the TODO list. > Something I’ve recently had some success with was dumping measurements > into influxdb[1] (or a similar data point collections service) and hook > that up to grafana[2] for visualization. Nice! Although these seem to be tailored for data-over-time, not data-over-commit. 
This mismatch in the data model was part of the motivation for me to create gipeda, which powers https://perf.haskell.org/ghc/ Greetings, Joachim -- Joachim “nomeata” Breitner   mail at joachim-breitner.de • https://www.joachim-breitner.de/   XMPP: nomeata at joachim-breitner.de • OpenPGP-Key: 0xF0FBF51F   Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: This is a digitally signed message part URL: From moritz at lichtzwerge.de Fri Dec 9 05:54:05 2016 From: moritz at lichtzwerge.de (Moritz Angermann) Date: Fri, 9 Dec 2016 13:54:05 +0800 Subject: Attempt at a real world benchmark In-Reply-To: <1481259649.28496.1.camel@joachim-breitner.de> References: <1481177035.18160.1.camel@joachim-breitner.de> <1481213080.1075.13.camel@joachim-breitner.de> <7F06D71A-56ED-4D27-8CC3-1E89EC647B3A@lichtzwerge.de> <1481259649.28496.1.camel@joachim-breitner.de> Message-ID: <25386754-3C79-4884-AFAF-8177EF827F12@lichtzwerge.de> > On Dec 9, 2016, at 1:00 PM, Joachim Breitner wrote: > > Hi, > > Am Freitag, den 09.12.2016, 09:50 +0800 schrieb Moritz Angermann: >> Hi, >> >> let me thank you perusing this! >> >>>> I am not sure how useful this is going to be: >>>> + Tests lots of common and important real-world libraries. >>>> − Takes a lot of time to compile, includes CPP macros and C code. >>>> (More details in the README linked above). >>> >>> another problem with the approach of taking modern real-world code: >>> It uses a lot of non-boot libraries that are quite compiler-close and >>> do low-level stuff (e.g. using Template Haskell, or stuff like the). If >>> we add that not nofib, we’d have to maintain its compatibility with GHC >>> as we continue developing GHC, probably using lots of CPP. This was >>> less an issue with the Haskell98 code in nofib. >>> >>> But is there a way to test realistic modern code without running into >>> this problem? 
>> >> >> what are the reasons besides fragmentation for a modern real-world test >> suite outside of ghc (maybe even maintained by a different set of people)? > > I am not sure what you are saying. Are you proposing the maintain a > benchmark set outside GHC, or did you get the impression that I am > proposing it? Yes, that’s what *I* am proposing for the reasons I mentioned; one I did not yet mention is time. Running nofib takes time, adding more time consuming performance tests would reduce their likelihood of being run in my experience. As I see this as being almost completely scriptable, this could live outside of ghc i think. > >> At some point you would also end up having a matrix of performance >> measurements due to the evolution of the library and the evolution of ghc. >> Fixing the library to profile against ghc will likely end at some point in >> incompatibility with ghc. Fixing ghc will similarly at some point end with >> the inability to compile the library. > > My motivation right now is to provide something to measure GHC, so this > would involve fixing the library. And that is what I am worried about: > Too much maintenance effort in keeping this large piece of code > compatible with GHC. Well, we won’t know until we try :-) > But maybe it is ok if it part of nofib, and hence of GHC, so that every > breaking change in GHC can immediately be accounted for in the > benchmark code. > > A nice side effect of this might be that GHC developers can get a > better idea of how much code their change breaks. I’m not much a fan of this, but that’s just my opinion :-) >> >> What measurements did you collect? Are these broken down per module? > > Nothing yet, this is on the TODO list. > >> Something I’ve recently had some success with was dumping measurements >> into influxdb[1] (or a similar data point collections service) and hook >> that up to grafana[2] for visualization. > > Nice! 
Although these seem to be tailored for data-over-time, not > data-over-commit. This mismatch in the data model was part of the > motivation for me to create gipeda, which powers > https://perf.haskell.org/ghc/ Assuming we confine this to a particular branch, or discriminate by branch, commits would be measured in sequence anyway, and the timestamp could be the time of the reporting of the measurement, and the respective ghc commit hash end up being an annotation. While this is not very pretty (and I would hope that grafana has some other ability to enrich the hover-tooltips) it could present a flexible solution without requiring additional engineering effort. However, if gipeda is sufficient, please ignore my comment :) Cheers, moritz From spam at scientician.net Fri Dec 9 07:31:31 2016 From: spam at scientician.net (Bardur Arantsson) Date: Fri, 9 Dec 2016 08:31:31 +0100 Subject: Attempt at a real world benchmark In-Reply-To: <1481213080.1075.13.camel@joachim-breitner.de> References: <1481177035.18160.1.camel@joachim-breitner.de> <1481213080.1075.13.camel@joachim-breitner.de> Message-ID: On 2016-12-08 17:04, Joachim Breitner wrote: > Hi, > > Am Donnerstag, den 08.12.2016, 01:03 -0500 schrieb Joachim Breitner: >> I am not sure how useful this is going to be: >> + Tests lots of common and important real-world libraries. >> − Takes a lot of time to compile, includes CPP macros and C code. >> (More details in the README linked above). > > another problem with the approach of taking modern real-world code: > It uses a lot of non-boot libraries that are quite compiler-close and > do low-level stuff (e.g. using Template Haskell, or stuff like the). If > we add that not nofib, we’d have to maintain its compatibility with GHC > as we continue developing GHC, probably using lots of CPP. This was > less an issue with the Haskell98 code in nofib. > > But is there a way to test realistic modern code without running into > this problem? 
> This may be a totally crazy idea, but has any thought been given to a "Phone Home"-type model? Very simplistic approach: a) Before it compiles, GHC computes a hash of the file. b) GHC has internal profiling "markers" in its compilation pipeline. c) GHC sends those "markers" + hash to some semi-centralized highly-available service somewhere under *.haskell.org. The idea is that the fact that "hashes are equal" => "performance should be comparable". Ideally, it'd probably be best to be able to have the full source, but that may be a tougher sell, obviously. (Obviously would have to be opt-in, either way.) There are a few obvious problems with this, but an obvious win would be that it could be done on a massively *decentralized* scale. Most problematic part might be that it wouldn't be able to track things like "I changed $this_line and now it compiles twice as slow". Actually, now that I think about it: What about if this were integrated into the Cabal infrastructure? If I specify "upload-perf-numbers: True" in my .cabal file, any project on (e.g.) GitHub that wanted to opt-in could do so, they could build using Travis, and voila! What do you think? Totally crazy, or could it be workable? Regards, From spam at scientician.net Fri Dec 9 07:56:22 2016 From: spam at scientician.net (Bardur Arantsson) Date: Fri, 9 Dec 2016 08:56:22 +0100 Subject: Attempt at a real world benchmark In-Reply-To: References: <1481177035.18160.1.camel@joachim-breitner.de> <1481213080.1075.13.camel@joachim-breitner.de> Message-ID: On 2016-12-09 08:31, Bardur Arantsson wrote: > Actually, now that I think about it: What about if this were integrated > into the Cabal infrastructure? If I specify "upload-perf-numbers: True" > in my .cabal file, any project on (e.g.) GitHub that wanted to opt-in > could do so, they could build using Travis, and voila! > Post-shower addendum: If we had the right hooks in Cabal we could even also track the *runtimes* of all the tests.
(Obviously a bit more brittle because one expects that adding tests would cause a performance hit, but could still be valuable information for the projects themselves to have -- which could be a motivating factor for opting in to this scheme.) Obviously it would have to be made very easy[1] to compile with GHC HEAD on travis for this to have much value for tracking regressions "as they happen" and perhaps a "hey-travis-rebuild-project" trigger would have to be implemented to get daily/weekly builds even when the project itself has no changes. We could perhaps also marshal a bit of the Hackage infrastructure instead? Anyway, loads of variations on this theme. The key point here is that the burden of keeping the "being tested" code working with GHC HEAD is on the maintainers of said projects... and they already have motivation to do so if they can get early feedback on breakage og regressions on compile times and run times. Regards, From moritz at lichtzwerge.de Fri Dec 9 09:37:11 2016 From: moritz at lichtzwerge.de (Moritz Angermann) Date: Fri, 9 Dec 2016 17:37:11 +0800 Subject: Attempt at a real world benchmark In-Reply-To: References: <1481177035.18160.1.camel@joachim-breitner.de> <1481213080.1075.13.camel@joachim-breitner.de> Message-ID: <39E26BB4-439A-49FB-BDAC-2856C7B9D3E1@lichtzwerge.de> >> Actually, now that I think about it: What about if this were integrated >> into the Cabal infrastructure? If I specify "upload-perf-numbers: True" >> in my .cabal file, any project on (e.g.) GitHub that wanted to opt-in >> could do so, they could build using Travis, and voila! >> > > Post-shower addendum: > > If we had the right hooks in Cabal we could even also track the > *runtimes* of all the tests. (Obviously a bit more brittle because one > expects that adding tests would cause a performance hit, but could still > be valuable information for the projects themselves to have -- which > could be a motivating factor for opting in to this scheme.) 
> > Obviously it would have to be made very easy[1] to compile with GHC HEAD > on travis for this to have much value for tracking regressions "as they > happen" and perhaps a "hey-travis-rebuild-project" trigger would have to > be implemented to get daily/weekly builds even when the project itself > has no changes. > > We could perhaps also marshal a bit of the Hackage infrastructure > instead? Anyway, loads of variations on this theme. The key point here > is that the burden of keeping the "being tested" code working with GHC > HEAD is on the maintainers of said projects... and they already have > motivation to do so if they can get early feedback on breakage og > regressions on compile times and run times. How would we normalize the results? Different architectures, components, configurations, and work load during cabal runs could influence the performance measurements, no? cheers, moritz From simonpj at microsoft.com Fri Dec 9 09:50:09 2016 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 9 Dec 2016 09:50:09 +0000 Subject: Attempt at a real world benchmark In-Reply-To: References: <1481177035.18160.1.camel@joachim-breitner.de> <1481213080.1075.13.camel@joachim-breitner.de> Message-ID: I have wanted telemetry for years. ("Telemetry" is the term Microsoft, and I think others, use for the phone-home feature.) It would tell us how many people are using GHC; currently I have literally no idea. It could tell us which language features are most used. Perhaps it could tell us about performance, but I'm not sure how we could make use of that info without access to the actual source. The big issue is (a) design and implementation effort, and (b) dealing with the privacy issues. I think (b) used to be a big deal, but nowadays people mostly assume that their software is doing telemetry, so it feels more plausible. But someone would need to work out whether it had to be opt-in or opt-out, and how to actually make it work in practice. 
Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of | Bardur Arantsson | Sent: 09 December 2016 07:32 | To: ghc-devs at haskell.org | Subject: Re: Attempt at a real world benchmark | | On 2016-12-08 17:04, Joachim Breitner wrote: | > Hi, | > | > Am Donnerstag, den 08.12.2016, 01:03 -0500 schrieb Joachim Breitner: | >> I am not sure how useful this is going to be: | >> + Tests lots of common and important real-world libraries. | >> − Takes a lot of time to compile, includes CPP macros and C code. | >> (More details in the README linked above). | > | > another problem with the approach of taking modern real-world code: | > It uses a lot of non-boot libraries that are quite compiler-close | and | > do low-level stuff (e.g. using Template Haskell, or stuff like the). | > If we add that not nofib, we’d have to maintain its compatibility | with | > GHC as we continue developing GHC, probably using lots of CPP. This | > was less an issue with the Haskell98 code in nofib. | > | > But is there a way to test realistic modern code without running | into | > this problem? | > | | This may be a totally crazy idea, but has any thought been given a | "Phone Home"-type model? | | Very simplistic approach: | | a) Before it compiles, GHC computes a hash of the file. | b) GHC has internal profiling "markers" in its compilation pipeline. | c) GHC sends those "markers" + hash to some semi-centralized highly- | available service somewhere under *.haskell.org. | | The idea is that the fact that "hashes are equal" => "performance | should be comparable". Ideally, it'd probably be best to be able to | have the full source, but that may be a tougher sell, obviously. | | (Obviously would have to be opt-in, either way.) | | There are a few obvious problems with this, but an obvious win would | be that it could be done on a massively *decentralized* scale. 
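[The payload sketched in steps (a)-(c) above could be prototyped roughly as follows. This is purely illustrative: the record shapes are invented, the hash function is a toy stand-in, and a real implementation would use a cryptographic hash such as SHA-256 together with GHC's actual phase timings.]

```haskell
-- Sketch of the "markers + source hash" payload from the proposal above.
import Data.List (foldl')
import Data.Word (Word64)

-- Toy stand-in for a real content hash of the module being compiled.
-- (A real version would use SHA-256 or similar.)
sourceHash :: String -> Word64
sourceHash = foldl' (\h c -> h * 131 + fromIntegral (fromEnum c)) 0

-- One internal profiling "marker": a pipeline phase and its wall-clock time.
data Marker = Marker { phase :: String, millis :: Double }

-- What would be sent to a haskell.org-hosted service: equal hashes mean
-- "same input", so the attached timings should be comparable across the
-- machines that reported them.
data Report = Report { srcHash :: Word64, markers :: [Marker] }

mkReport :: String -> [Marker] -> Report
mkReport src = Report (sourceHash src)
```

[Note the toy hash makes the "hashes equal => same input" property only probabilistic; that caveat applies to any hash, but a cryptographic one makes collisions negligible.]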
Most | problematic part might be that it wouldn't be able to track things | like "I changed $this_line and now it compiles twice as slow". | | Actually, now that I think about it: What about if this were | integrated into the Cabal infrastructure? If I specify "upload-perf- | numbers: True" | in my .cabal file, any project on (e.g.) GitHub that wanted to opt-in | could do so, they could build using Travis, and voila! | | What do you think? Totally crazy, or could it be workable? | | Regards, | | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From karel.gardas at centrum.cz Fri Dec 9 13:17:32 2016 From: karel.gardas at centrum.cz (Karel Gardas) Date: Fri, 09 Dec 2016 14:17:32 +0100 Subject: Attempt at a real world benchmark In-Reply-To: References: <1481177035.18160.1.camel@joachim-breitner.de> <1481213080.1075.13.camel@joachim-breitner.de> Message-ID: <584AAEEC.5080706@centrum.cz> Sorry for hijacking the thread, but On 12/ 9/16 10:50 AM, Simon Peyton Jones via ghc-devs wrote: > I have wanted telemetry for years. ("Telemetry" is the term Microsoft, and I think others, use for the phone-home feature.) Telemetry, or better "call-home": this is a very dangerous idea to even mention in the context of a compiler. In some circles even mentioning this may result in losing the trust put in the compiler. > It would tell us how many people are using GHC; currently I have literally no idea. For this you don't need any kind of telemetry, but you can use numbers from various distributions' popularity contests. E.g.
debian: https://qa.debian.org/popcon.php?package=ghc https://qa.debian.org/popcon.php?package=alex https://qa.debian.org/popcon.php?package=happy https://qa.debian.org/popcon.php?package=haskell-platform > It could tell us which language features are most used. Language features are hard if they are not available in separate libs. If in libs, then IIRC debian is packaging those in separate packages, so again you can use their package popularity contest. > Perhaps it could tell us about performance, but I'm not sure how we could make use of that info without access to the actual source. So then, how can GHC users trust GHC not to send their own precious sources "home" just for GHC performance improvements -- which, btw, may not be in the interest of the users, as they may be happy with the current state? > The big issue is (a) design and implementation effort, and (b) dealing with the privacy issues. I think (b) used to be a big deal, but nowadays people mostly assume that their software is doing telemetry, so it feels more plausible. But someone would need to work out whether it had to be opt-in or opt-out, and how to actually make it work in practice. Privacy here is a complete can of worms (keep in mind you are dealing with a lot of different legal systems), so I strongly suggest not to even think about it for a second. Your note "but nowadays people mostly assume that their software is doing telemetry" may perhaps be true in the sick mobile-apps world, but I guess it is not true in the world of developing secure and security-related applications for either server usage or embedded. So if I may ask: please, no, do not do any telemetry/calling home in GHC or in its runtime system, and do not even think about it. This is IMHO extremely dangerous. Thanks!
Karel From ben at well-typed.com Fri Dec 9 13:25:20 2016 From: ben at well-typed.com (Ben Gamari) Date: Fri, 09 Dec 2016 08:25:20 -0500 Subject: GHC 8.2 status Message-ID: <87wpf9nsjj.fsf@ben-laptop.smart-cactus.org> Hello everyone, While we are still trying to get 8.0.2 out the door, 8.2.1 is quickly approaching. If you have a feature that you would like to see in 8.2, please add it to the 8.2 status page [1] as soon as possible and let us know; we would like to cut the ghc-8.2 branch in late December for a release in February so the hour is growing late. Cheers, - Ben [1] https://ghc.haskell.org/trac/ghc/wiki/Status/GHC-8.2.1 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From mail at joachim-breitner.de Fri Dec 9 14:26:43 2016 From: mail at joachim-breitner.de (Joachim Breitner) Date: Fri, 09 Dec 2016 09:26:43 -0500 Subject: Attempt at a real world benchmark In-Reply-To: <25386754-3C79-4884-AFAF-8177EF827F12@lichtzwerge.de> References: <1481177035.18160.1.camel@joachim-breitner.de> <1481213080.1075.13.camel@joachim-breitner.de> <7F06D71A-56ED-4D27-8CC3-1E89EC647B3A@lichtzwerge.de> <1481259649.28496.1.camel@joachim-breitner.de> <25386754-3C79-4884-AFAF-8177EF827F12@lichtzwerge.de> Message-ID: <1481293603.1117.1.camel@joachim-breitner.de> Hi, Am Freitag, den 09.12.2016, 13:54 +0800 schrieb Moritz Angermann: > > I am not sure what you are saying. Are you proposing the maintain a > > benchmark set outside GHC, or did you get the impression that I am > > proposing it? > > Yes, that’s what *I* am proposing for the reasons I mentioned; one I > did not yet mention is time. Running nofib takes time, adding more time > consuming performance tests would reduce their likelihood of being run > in my experience.  As I see this as being almost completely scriptable, > this could live outside of ghc i think.  
I don’t think the running time of nofib is a constraint at the moment, and I expect most who run nofib to happily let it run for a few minutes more in order to get more meaningful results. > > > But maybe it is ok if it part of nofib, and hence of GHC, so that every > > breaking change in GHC can immediately be accounted for in the > > benchmark code. > > > > A nice side effect of this might be that GHC developers can get a > > better idea of how much code their change breaks. > > I’m not much a fan of this, but that’s just my opinion :-) What is the alternative? Keep updating the libraries? But libraries change APIs. Then you need to keep updating the program itself? That seems to be too many moving parts for a benchmark suite. > > > Something I’ve recently had some success with was dumping measurements > > > into influxdb[1] (or a similar data point collections service) and hook > > > that up to grafana[2] for visualization. > > > > Nice! Although these seem to be tailored for data-over-time, not > > data-over-commit. This mismatch in the data model was part of the > > motivation for me to create gipeda, which powers > > https://perf.haskell.org/ghc/ > > Assuming we confine this to a particular branch, or discriminate by branch, > commits would be measured in sequence anyway, and the timestamp could be the > time of the reporting of the measurement, and the respective ghc commit hash > end up being an annotation. While this is not very pretty (and I would hope > that grafana has some other ability to enrich the hover-tooltips) it could > present a flexible solution without requiring additional engineering effort. > > However, if gipeda is sufficient, please ignore my comment :) Oh, we could certainly do better here! (But it serves my purposes for now, so I’ll stick to it until someone sets up something better, in which case I happily dump my code.) 
Greetings, Joachim -- Joachim “nomeata” Breitner   mail at joachim-breitner.de • https://www.joachim-breitner.de/   XMPP: nomeata at joachim-breitner.de • OpenPGP-Key: 0xF0FBF51F   Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: This is a digitally signed message part URL: From monkleyon at googlemail.com Fri Dec 9 14:52:26 2016 From: monkleyon at googlemail.com (MarLinn) Date: Fri, 9 Dec 2016 15:52:26 +0100 Subject: Telemetry (WAS: Attempt at a real world benchmark) In-Reply-To: <584AAEEC.5080706@centrum.cz> References: <1481177035.18160.1.camel@joachim-breitner.de> <1481213080.1075.13.camel@joachim-breitner.de> <584AAEEC.5080706@centrum.cz> Message-ID: <75b1cea0-f2fb-f7cf-c41d-06eea5b75c51@gmail.com> >> It could tell us which language features are most used. > > Language features are hard if they are not available in separate libs. > If in libs, then IIRC debian is packaging those in separate packages, > again you can use their package contest. What in particular makes them hard? Sorry if this seems like a stupid question to you, I'm just not that knowledgeable yet. One reason I can think of would be that we would want attribution, i.e. did the developer turn on the extension himself, or is it just used in a lib or template – but that should be easy to solve with a source hash, right? That source hash itself might need a bit of thought though. Maybe it should not be a hash of a source file, but of the parse tree. >> The big issue is (a) design and implementation effort, and (b) >> dealing with the privacy issues. I think (b) used to be a big deal, >> but nowadays people mostly assume that their software is doing >> telemetry, so it feels more plausible. But someone would need to >> work out whether it had to be opt-in or opt-out, and how to actually >> make it work in practice. 
> > Privacy here is complete can of worms (keep in mind you are dealing > with a lot of different law systems), I strongly suggest not to even > think about it for a second. Your note "but nowadays people mostly > assume that their software is doing telemetry" may perhaps be true in > sick mobile apps world, but I guess is not true in the world of > developing secure and security related applications for either server > usage or embedded. My first reaction to "nowadays people mostly assume that their software is doing telemetry" was to amend it with "* in the USA" in my mind. But yes, mobile is another place. Nowadays I do assume most software uses some sort of phone-home feature, but that's because it's on my To Do list of things to search for on first configuration. Note that I am using "phone home" instead of "telemetry" because some companies hide it in "check for updates" or mix it with some useless "account" stuff. Finding out where it's hidden and how much information they give about the details tells a lot about the developers, as does opt-in vs opt-out. Therefore it can be a reason to not choose a piece of software or even an ecosystem after a first try. (Let's say an operating system almost forces me to create an online account on installation. That not only tells me I might not want to use that operating system, it also sends a marketing message that the whole ecosystem is potentially toxic to my privacy because they live in a bubble where that appears to be acceptable.) So I do have that aversion even in non-security-related contexts. I would say people are aware that telemetry exists, and developers in particular. I would also say developers are aware of the potential benefits, so they might be open to it. But what they care and worry about is /what/ is reported and how they can /control/ it. Software being Open Source is a huge factor in that, because they know that, at least in theory, they could vet the source. 
But the reaction might still be very mixed – see Mozilla Firefox. My suggestion would be a solution that gives the developer the feeling of making the choices, and puts them in control. It should also be compatible with configuration management so that it can be integrated into company policies as easily as possible. Therefore my suggestions would be * Opt-In. Nothing takes away the feeling of being in control more than perceived "hijacking" of a device with "spy ware". This also helps circumvent legal problems because the users or their employers now have the responsibility. * The switches to turn it on or off should be in a configuration file. There should be several staged configuration files, one for a project, one for a user, one system-wide. This is for compatibility with configuration management. Configuration higher up the hierarchy override ones lower in the hierarchy, but they can't force telemetry to be on – at least not the sensitive kind. * There should be several levels or a set of options that can be switched on or off individually, for fine-grained control. All should be very well documented. Once integrated and documented, they can never change without also changing the configuration flag that switches them on. There still might be some backlash, but a careful approach like this could soothe the minds. If you are worried that we might get too little data this way, here's another thought, leading back to performance data: The most benefit in that regard would come from projects that are built regularly, on different architectures, with sources that can be inspected and with an easy way to get diffs. In other words, projects that live on github and travis anyway. Their maintainers should be easy to convince to set that little switch to "on". Regards, MarLinn -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From simonpj at microsoft.com Fri Dec 9 15:15:47 2016 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 9 Dec 2016 15:15:47 +0000 Subject: Telemetry (WAS: Attempt at a real world benchmark) In-Reply-To: <75b1cea0-f2fb-f7cf-c41d-06eea5b75c51@gmail.com> References: <1481177035.18160.1.camel@joachim-breitner.de> <1481213080.1075.13.camel@joachim-breitner.de> <584AAEEC.5080706@centrum.cz> <75b1cea0-f2fb-f7cf-c41d-06eea5b75c51@gmail.com> Message-ID: Just to say: · Telemetry is a good topic · It is clearly a delicate one as we’ve already seen from two widely differing reactions. That’s why I have never seriously contemplated doing anything about it. · I’d love a consensus to emerge on this, but I don’t have the bandwidth to drive it. Incidentally, when I said “telemetry is common” I meant that almost every piece of software I run on my PC these days automatically checks for updates. It no longer even asks me if I want to do that... it just does it. That’s telemetry right there: the supplier knows how many people are running each version of their software. Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of MarLinn via ghc-devs Sent: 09 December 2016 14:52 To: ghc-devs at haskell.org Subject: Re: Telemetry (WAS: Attempt at a real world benchmark) It could tell us which language features are most used. Language features are hard if they are not available in separate libs. If in libs, then IIRC debian is packaging those in separate packages, again you can use their package contest. What in particular makes them hard? Sorry if this seems like a stupid question to you, I'm just not that knowledgeable yet. One reason I can think of would be that we would want attribution, i.e. did the developer turn on the extension himself, or is it just used in a lib or template – but that should be easy to solve with a source hash, right? That source hash itself might need a bit of thought though.
Maybe it should not be a hash of a source file, but of the parse tree. The big issue is (a) design and implementation effort, and (b) dealing with the privacy issues. I think (b) used to be a big deal, but nowadays people mostly assume that their software is doing telemetry, so it feels more plausible. But someone would need to work out whether it had to be opt-in or opt-out, and how to actually make it work in practice. Privacy here is complete can of worms (keep in mind you are dealing with a lot of different law systems), I strongly suggest not to even think about it for a second. Your note "but nowadays people mostly assume that their software is doing telemetry" may perhaps be true in sick mobile apps world, but I guess is not true in the world of developing secure and security related applications for either server usage or embedded. My first reaction to "nowadays people mostly assume that their software is doing telemetry" was to amend it with "* in the USA" in my mind. But yes, mobile is another place. Nowadays I do assume most software uses some sort of phone-home feature, but that's because it's on my To Do list of things to search for on first configuration. Note that I am using "phone home" instead of "telemetry" because some companies hide it in "check for updates" or mix it with some useless "account" stuff. Finding out where it's hidden and how much information they give about the details tells a lot about the developers, as does opt-in vs opt-out. Therefore it can be a reason to not choose a piece of software or even an ecosystem after a first try. (Let's say an operating system almost forces me to create an online account on installation. That not only tells me I might not want to use that operating system, it also sends a marketing message that the whole ecosystem is potentially toxic to my privacy because they live in a bubble where that appears to be acceptable.) So I do have that aversion even in non-security-related contexts. 
I would say people are aware that telemetry exists, and developers in particular. I would also say developers are aware of the potential benefits, so they might be open to it. But what they care and worry about is what is reported and how they can control it. Software being Open Source is a huge factor in that, because they know that, at least in theory, they could vet the source. But the reaction might still be very mixed – see Mozilla Firefox. My suggestion would be a solution that gives the developer the feeling of making the choices, and puts them in control. It should also be compatible with configuration management so that it can be integrated into company policies as easily as possible. Therefore my suggestions would be · Opt-In. Nothing takes away the feeling of being in control more than perceived "hijacking" of a device with "spy ware". This also helps circumvent legal problems because the users or their employers now have the responsibility. · The switches to turn it on or off should be in a configuration file. There should be several staged configuration files, one for a project, one for a user, one system-wide. This is for compatibility with configuration management. Configuration higher up the hierarchy override ones lower in the hierarchy, but they can't force telemetry to be on – at least not the sensitive kind. · There should be several levels or a set of options that can be switched on or off individually, for fine-grained control. All should be very well documented. Once integrated and documented, they can never change without also changing the configuration flag that switches them on. There still might be some backlash, but a careful approach like this could soothe the minds. 
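[Read one way, the staged-configuration rule above -- more specific levels override broader ones, but no level can force telemetry on against a more local opt-out, and the default is off -- could be resolved like this. A hypothetical sketch only, not an existing GHC or Cabal mechanism; the `Choice` type and level ordering are invented.]

```haskell
-- Resolving the staged opt-in configuration described above.
-- Levels are listed least- to most-specific: system-wide, user, project.
data Choice = On | Off | Unset deriving (Eq, Show)

-- Opt-in semantics: telemetry is off unless some level says On and no
-- more specific level has said Off again. Unset inherits the broader
-- setting, so a broad Off cannot be silently turned into an On.
resolve :: [Choice] -> Bool
resolve = go False
  where
    go acc []             = acc
    go _   (On    : rest) = go True rest   -- a level may opt in...
    go _   (Off   : rest) = go False rest  -- ...or opt out again
    go acc (Unset : rest) = go acc rest    -- inherit the broader setting

main :: IO ()
main = do
  print (resolve [Unset, Unset, Unset])  -- nothing set: stays off
  print (resolve [On,    Unset, Off  ])  -- project opts out: off
  print (resolve [Unset, On,    Unset])  -- user opted in: on
```

[The interesting design point is the default: with opt-in semantics `Unset` everywhere must resolve to off, which the fold's `False` seed guarantees.]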
If you are worried that we might get too little data this way, here's another thought, leading back to performance data: The most benefit in that regard would come from projects that are built regularly, on different architectures, with sources that can be inspected and with an easy way to get diffs. In other words, projects that live on github and travis anyway. Their maintainers should be easy to convince to set that little switch to "on". Regards, MarLinn -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Fri Dec 9 15:30:21 2016 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 9 Dec 2016 15:30:21 +0000 Subject: GHC 8.2 status In-Reply-To: <87wpf9nsjj.fsf@ben-laptop.smart-cactus.org> References: <87wpf9nsjj.fsf@ben-laptop.smart-cactus.org> Message-ID: I added join-points, which Luke is engaged in https://ghc.haskell.org/trac/ghc/wiki/SequentCore Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Ben | Gamari | Sent: 09 December 2016 13:25 | To: GHC developers | Subject: GHC 8.2 status | | Hello everyone, | | While we are still trying to get 8.0.2 out the door, 8.2.1 is quickly | approaching. If you have a feature that you would like to see in 8.2, | please add it to the 8.2 status page [1] as soon as possible and let | us know; we would like to cut the ghc-8.2 branch in late December for | a release in February so the hour is growing late. 
| | Cheers, | | - Ben | | | [1] https://ghc.haskell.org/trac/ghc/wiki/Status/GHC-8.2.1 From ben at smart-cactus.org Fri Dec 9 15:53:15 2016 From: ben at smart-cactus.org (Ben Gamari) Date: Fri, 09 Dec 2016 10:53:15 -0500 Subject: Telemetry (WAS: Attempt at a real world benchmark) In-Reply-To: References: <1481177035.18160.1.camel@joachim-breitner.de> <1481213080.1075.13.camel@joachim-breitner.de> <584AAEEC.5080706@centrum.cz> <75b1cea0-f2fb-f7cf-c41d-06eea5b75c51@gmail.com> Message-ID: <87inqtnlp0.fsf@ben-laptop.smart-cactus.org> Simon Peyton Jones via ghc-devs writes: > Just to say: > > > · Telemetry is a good topic > > · It is clearly a delicate one as we’ve already seen from two widely > differing reactions. That’s why I have never seriously contemplated > doing anything about it. > > · I’m love a consensus to emerge on this, but I don’t have the > bandwidth to drive it. > > Incidentally, when I said “telemetry is common” I meant that almost > every piece of software I run on my PC these days automatically checks > for updates. It no longer even asks me if I want to do that.. it just > does it. That’s telemetry right there: the supplier knows how many > people are running each version of their software. > Does this necessarily count as telemetry? To be useful for statistics each installation would need to be uniquely identifiable; it's not clear to me for what fraction of software this holds. Certainly in the open-source world it's rather uncommon to tie telemetry to updates. I suppose in the Windows world this sort of thing may be more common. I'll point out that in general telemetry isn't a terribly common thing to find in open-source software save a few major projects (e.g. Firefox, Debian's popcon). I think we would be the first widely-used compiler to use such technology which does give me pause. Developers in particular tend to be more sensitive to this sort of thing than your average user. 
Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From ben at smart-cactus.org Fri Dec 9 15:56:04 2016 From: ben at smart-cactus.org (Ben Gamari) Date: Fri, 09 Dec 2016 10:56:04 -0500 Subject: Please =?utf-8?Q?don=E2=80=99t?= break travis In-Reply-To: <1480720953.13340.14.camel@joachim-breitner.de> References: <1480720953.13340.14.camel@joachim-breitner.de> Message-ID: <87fulxnlkb.fsf@ben-laptop.smart-cactus.org> Joachim Breitner writes: > Hi, > > again, Travis is failing to build master since a while. Unfortunately, > only the author of commits get mailed by Travis, so I did not notice it > so far. But usually, when Travis reports a build failure, this is > something actionable! If in doubt, contact me. > It seems we have once again run in to the Travis build time limit: https://travis-ci.org/ghc/ghc/jobs/182594725. I seem to recall that this isn't the first time that this has happened. Given that our testsuite is only growing, what is the long-term plan for managing this? Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From mail at joachim-breitner.de Fri Dec 9 16:06:29 2016 From: mail at joachim-breitner.de (Joachim Breitner) Date: Fri, 09 Dec 2016 11:06:29 -0500 Subject: Please =?UTF-8?Q?don=E2=80=99t?= break travis In-Reply-To: <87fulxnlkb.fsf@ben-laptop.smart-cactus.org> References: <1480720953.13340.14.camel@joachim-breitner.de> <87fulxnlkb.fsf@ben-laptop.smart-cactus.org> Message-ID: <1481299589.1117.12.camel@joachim-breitner.de> Am Freitag, den 09.12.2016, 10:56 -0500 schrieb Ben Gamari: > > Joachim Breitner writes: > > > Hi, > > > > again, Travis is failing to build master since a while. Unfortunately, > > only the author of commits get mailed by Travis, so I did not notice it > > so far. 
But usually, when Travis reports a build failure, this is > > something actionable! If in doubt, contact me. > > > > It seems we have once again run in to the Travis build time limit: > https://travis-ci.org/ghc/ghc/jobs/182594725. > > I seem to recall that this isn't the first time that this has happened. > Given that our testsuite is only growing, what is the long-term plan for > managing this? for many months I had the appearance that the time limit was no longer enforced for us. Maybe they have fixed that :-) Let’s try this: https://twitter.com/nomeata/status/807254988338630656?lang=de Greetings, Joachim -- Joachim “nomeata” Breitner   mail at joachim-breitner.de • https://www.joachim-breitner.de/   XMPP: nomeata at joachim-breitner.de • OpenPGP-Key: 0xF0FBF51F   Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: This is a digitally signed message part URL: From ben at well-typed.com Fri Dec 9 16:09:18 2016 From: ben at well-typed.com (Ben Gamari) Date: Fri, 09 Dec 2016 11:09:18 -0500 Subject: Differential builds with on Darwin now enabled Message-ID: <87eg1hnky9.fsf@ben-laptop.smart-cactus.org> Hello everyone, Note that the Mac Mini builder will now build submitted Differentials as well as commits to master. Originally I was hesitant to take this step since unreviewed differentials are essentially untrusted code; however it has become clear that to keep regressions from entering the tree we will need to be a bit more proactive in testing code before it is committed. Hopefully this helps! Lastly, I'd like to take this opportunity to once again thank Futurice for providing this hardware to us. We all owe them a debt of gratitude. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From m at tweag.io Fri Dec 9 16:57:24 2016 From: m at tweag.io (Boespflug, Mathieu) Date: Fri, 9 Dec 2016 17:57:24 +0100 Subject: =?UTF-8?Q?Re=3A_Please_don=E2=80=99t_break_travis?= In-Reply-To: <1481299589.1117.12.camel@joachim-breitner.de> References: <1480720953.13340.14.camel@joachim-breitner.de> <87fulxnlkb.fsf@ben-laptop.smart-cactus.org> <1481299589.1117.12.camel@joachim-breitner.de> Message-ID: Or this route: https://mail.haskell.org/pipermail/ghc-devs/2015-June/009234.html. -- Mathieu Boespflug Founder at http://tweag.io. On 9 December 2016 at 17:06, Joachim Breitner wrote: > Am Freitag, den 09.12.2016, 10:56 -0500 schrieb Ben Gamari: > > > Joachim Breitner writes: > > > > > Hi, > > > > > > again, Travis is failing to build master since a while. Unfortunately, > > > only the author of commits get mailed by Travis, so I did not notice it > > > so far. But usually, when Travis reports a build failure, this is > > > something actionable! If in doubt, contact me. > > > > > > > It seems we have once again run in to the Travis build time limit: > > https://travis-ci.org/ghc/ghc/jobs/182594725. > > > > I seem to recall that this isn't the first time that this has happened. > > Given that our testsuite is only growing, what is the long-term plan for > > managing this? > > for many months I had the appearance that the time limit was no longer > enforced for us. 
Maybe they have fixed that :-) > > Let’s try this: > https://twitter.com/nomeata/status/807254988338630656?lang=de > > Greetings, > Joachim > > -- > Joachim “nomeata” Breitner > mail at joachim-breitner.de • https://www.joachim-breitner.de/ > XMPP: nomeata at joachim-breitner.de • OpenPGP-Key: 0xF0FBF51F > Debian Developer: nomeata at debian.org > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail at joachim-breitner.de Fri Dec 9 16:59:29 2016 From: mail at joachim-breitner.de (Joachim Breitner) Date: Fri, 09 Dec 2016 11:59:29 -0500 Subject: Please =?UTF-8?Q?don=E2=80=99t?= break travis In-Reply-To: References: <1480720953.13340.14.camel@joachim-breitner.de> <87fulxnlkb.fsf@ben-laptop.smart-cactus.org> <1481299589.1117.12.camel@joachim-breitner.de> Message-ID: <1481302769.1117.14.camel@joachim-breitner.de> Hi, I was not aware of that discussion. Great! If the twitter message does not do the job already, I will write a polite mail to Mathias Meyer. Greetings, Joachim Am Freitag, den 09.12.2016, 17:57 +0100 schrieb Boespflug, Mathieu: > Or this route: https://mail.haskell.org/pipermail/ghc-devs/2015-June/ > 009234.html. > > -- > Mathieu Boespflug > Founder at http://tweag.io. > > On 9 December 2016 at 17:06, Joachim Breitner de> wrote: > > Am Freitag, den 09.12.2016, 10:56 -0500 schrieb Ben Gamari: > > > > Joachim Breitner writes: > > > > > > > Hi, > > > > > > > > again, Travis is failing to build master since a while. > > Unfortunately, > > > > only the author of commits get mailed by Travis, so I did not > > notice it > > > > so far. But usually, when Travis reports a build failure, this > > is > > > > something actionable! If in doubt, contact me. 
> > > > > > > > > > It seems we have once again run in to the Travis build time > > limit: > > > https://travis-ci.org/ghc/ghc/jobs/182594725. > > > > > > I seem to recall that this isn't the first time that this has > > happened. > > > Given that our testsuite is only growing, what is the long-term > > plan for > > > managing this? > > > > for many months I had the appearance that the time limit was no > > longer > > enforced for us. Maybe they have fixed that :-) > > > > Let’s try this: > > https://twitter.com/nomeata/status/807254988338630656?lang=de > > > > Greetings, > > Joachim > > > > -- > > Joachim “nomeata” Breitner > >   mail at joachim-breitner.de • https://www.joachim-breitner.de/ > >   XMPP: nomeata at joachim-breitner.de • OpenPGP-Key: 0xF0FBF51F > >   Debian Developer: nomeata at debian.org > > > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -- Joachim “nomeata” Breitner   mail at joachim-breitner.de • https://www.joachim-breitner.de/   XMPP: nomeata at joachim-breitner.de • OpenPGP-Key: 0xF0FBF51F   Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: This is a digitally signed message part URL: From monkleyon at googlemail.com Fri Dec 9 17:13:34 2016 From: monkleyon at googlemail.com (MarLinn) Date: Fri, 9 Dec 2016 18:13:34 +0100 Subject: Telemetry In-Reply-To: <87inqtnlp0.fsf@ben-laptop.smart-cactus.org> References: <1481177035.18160.1.camel@joachim-breitner.de> <1481213080.1075.13.camel@joachim-breitner.de> <584AAEEC.5080706@centrum.cz> <75b1cea0-f2fb-f7cf-c41d-06eea5b75c51@gmail.com> <87inqtnlp0.fsf@ben-laptop.smart-cactus.org> Message-ID: <4abb8df5-003d-4550-85a4-b0feadd880b4@gmail.com> Pretty random idea: What if ghc exposed measurement points for performance and telemetry, but a separate tool would handle the read-out, configuration, upload etc. That would keep the telemetry from being built-in, while still being a way to get *some* information. Such a support tool might be interesting for other projects, too, or even for slightly different use cases like monitoring servers. The question is if such a tool would bring enough benefit to enough projects for buy-in and to attract contributors. And just separating it doesn't solve the underlying issues of course, so attracting contributors and buy-in might be even harder than it already is for "normal" projects. Close ties to ghc might improve that, but I doubt how big such an effect would be. Additionally, this approach would just shift many of the questions over to Haskell-platform and/or Stack instead of addressing them – or even further, on that volatile front-line space where inner-community conflict roared recently. It wouldn't be the worst place to address them, but I would hesitate to throw yet another potential point of contention onto that burned field. Basically: I like that idea, but I might just have proven it fruitless anyway. 
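A minimal sketch of what such measurement points might look like. All names here (MetricStore, tick, renderMetrics) are hypothetical illustrations of the idea, not an actual GHC API:

```haskell
import qualified Data.Map.Strict as Map
import Data.IORef (IORef, newIORef, modifyIORef', readIORef)

-- A measurement point is just a named counter; the compiler side only
-- bumps counters in memory and never touches the network.
newtype MetricStore = MetricStore (IORef (Map.Map String Int))

newMetricStore :: IO MetricStore
newMetricStore = MetricStore <$> newIORef Map.empty

-- Called at an instrumented point, e.g. once per parsed module or per
-- language extension encountered.
tick :: MetricStore -> String -> IO ()
tick (MetricStore ref) name = modifyIORef' ref (Map.insertWith (+) name 1)

-- Render the counters in a plain-text form that a *separate*, explicitly
-- opt-in tool could read, show to the user, and (only then) upload.
renderMetrics :: MetricStore -> IO String
renderMetrics (MetricStore ref) = do
  counters <- readIORef ref
  pure (unlines [ name ++ " " ++ show n | (name, n) <- Map.toList counters ])
```

The point of the split is that the compiler side stays inert: it only counts. Publishing the rendered text would be a deliberate act of the separate tool, keeping the upload decision out of the compiler itself.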
Cheers, MarLinn From ryan.trinkle at gmail.com Fri Dec 9 17:22:15 2016 From: ryan.trinkle at gmail.com (Ryan Trinkle) Date: Fri, 9 Dec 2016 12:22:15 -0500 Subject: Telemetry In-Reply-To: <4abb8df5-003d-4550-85a4-b0feadd880b4@gmail.com> References: <1481177035.18160.1.camel@joachim-breitner.de> <1481213080.1075.13.camel@joachim-breitner.de> <584AAEEC.5080706@centrum.cz> <75b1cea0-f2fb-f7cf-c41d-06eea5b75c51@gmail.com> <87inqtnlp0.fsf@ben-laptop.smart-cactus.org> <4abb8df5-003d-4550-85a4-b0feadd880b4@gmail.com> Message-ID: I certainly see the value of telemetry in being able to produce a higher quality product through understanding user behavior. However, I am not sure it is realistic. My clients are very conscious of intellectual property and data privacy concerns, and for some of them, even discussing the possibility of allowing telemetry in any part of the technology stack would damage their trust in me. Even if there were an easy opt-out feature, I would be very concerned that I or someone on my team would accidentally build client code without opting out, and I would have to take serious steps to ensure that this could never occur. I would be thrilled, of course, if such analyses could be easily performed against publicly available code on Hackage, as many tests have done in the past. This would also provide an additional small incentive for people to open source code that they do not have a strategic need to keep secret. MarLinn's idea sounds like a good approach to me, although I agree that it has difficulties. I think the key would be to make the report produced brief, human-readable, and clear enough that a CTO or other executive could easily sign off on "declassifying" it. We could then ask that companies voluntarily submit this report if they wish to have an impact on prioritizing the future of the language. 
I suppose this is a very strict version of "opt-in", and generally I think that opt-in would be fine, as long as we're very confident that we'll never have a bug that makes it opt-out instead. Ryan On Fri, Dec 9, 2016 at 12:13 PM, MarLinn via ghc-devs wrote: > Pretty random idea: What if ghc exposed measurement points for performance > and telemetry, but a separate tool would handle the read-out, > configuration, upload etc. That would keep the telemetry from being > built-in, while still being a way to get *some* information. > > Such a support tool might be interesting for other projects, too, or even > for slightly different use cases like monitoring servers. The question is > if such a tool would bring enough benefit to enough projects for buy-in and > to attract contributors. And just separating it doesn't solve the > underlying issues of course, so attracting contributors and buy-in might be > even harder than it already is for "normal" projects. Close ties to ghc > might improve that, but I doubt how big such an effect would be. > > Additionally, this approach would just shift many of the questions over to > Haskell-platform and/or Stack instead of addressing them – or even further, > on that volatile front-line space where inner-community conflict roared > recently. It wouldn't be the worst place to address them, but I would > hesitate to throw yet another potential point of contention onto that > burned field. > > Basically: I like that idea, but I might just have proven it fruitless > anyway. > > > Cheers, > MarLinn > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From amindfv at gmail.com Fri Dec 9 17:46:16 2016 From: amindfv at gmail.com (Tom Murphy) Date: Fri, 9 Dec 2016 12:46:16 -0500 Subject: Attempt at a real world benchmark In-Reply-To: References: <1481177035.18160.1.camel@joachim-breitner.de> <1481213080.1075.13.camel@joachim-breitner.de> Message-ID: On Fri, Dec 9, 2016 at 4:50 AM, Simon Peyton Jones via ghc-devs < ghc-devs at haskell.org> wrote: > I have wanted telemetry for years. ("Telemetry" is the term Microsoft, > and I think others, use for the phone-home feature.) > > It would tell us how many people are using GHC; currently I have literally > no idea. > > In practice I think the best data we could get is "how many people are using GHC && are willing to opt into phone-home," which seems like a rougher number than e.g. downloads of ghc/HP or number of downloads of base/containers or something similar. I also would not opt in. Tom -------------- next part -------------- An HTML attachment was scrubbed... URL: From allbery.b at gmail.com Fri Dec 9 18:02:50 2016 From: allbery.b at gmail.com (Brandon Allbery) Date: Fri, 9 Dec 2016 18:02:50 +0000 Subject: Attempt at a real world benchmark In-Reply-To: References: <1481177035.18160.1.camel@joachim-breitner.de> <1481213080.1075.13.camel@joachim-breitner.de> Message-ID: On Fri, Dec 9, 2016 at 9:50 AM, Simon Peyton Jones via ghc-devs < ghc-devs at haskell.org> wrote: > The big issue is (a) design and implementation effort, and (b) dealing > with the privacy issues. And (c) not everyone is going to upgrade their ghc, even if you backport the telemetry to older versions (potentially back to 7.6.3 or even earlier), so likely you'd only get telemetry from new users. -- brandon s allbery kf8nh sine nomine associates allbery.b at gmail.com ballbery at sinenomine.net unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From allbery.b at gmail.com Fri Dec 9 18:18:28 2016 From: allbery.b at gmail.com (Brandon Allbery) Date: Fri, 9 Dec 2016 18:18:28 +0000 Subject: =?UTF-8?Q?Re=3A_Please_don=E2=80=99t_break_travis?= In-Reply-To: <87fulxnlkb.fsf@ben-laptop.smart-cactus.org> References: <1480720953.13340.14.camel@joachim-breitner.de> <87fulxnlkb.fsf@ben-laptop.smart-cactus.org> Message-ID: On Fri, Dec 9, 2016 at 3:56 PM, Ben Gamari wrote: > I seem to recall that this isn't the first time that this has happened. > Given that our testsuite is only growing, what is the long-term plan for > managing this? > Consider running the test suite as a separate job? -- brandon s allbery kf8nh sine nomine associates allbery.b at gmail.com ballbery at sinenomine.net unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From lonetiger at gmail.com Fri Dec 9 19:31:45 2016 From: lonetiger at gmail.com (Phyx) Date: Fri, 9 Dec 2016 19:31:45 +0000 Subject: Telemetry (WAS: Attempt at a real world benchmark) In-Reply-To: <87inqtnlp0.fsf@ben-laptop.smart-cactus.org> References: <1481177035.18160.1.camel@joachim-breitner.de> <1481213080.1075.13.camel@joachim-breitner.de> <584AAEEC.5080706@centrum.cz> <75b1cea0-f2fb-f7cf-c41d-06eea5b75c51@gmail.com> <87inqtnlp0.fsf@ben-laptop.smart-cactus.org> Message-ID: On Fri, Dec 9, 2016 at 3:53 PM, Ben Gamari wrote: > Simon Peyton Jones via ghc-devs writes: > > > Just to say: > > > > > > · Telemetry is a good topic > > > > · It is clearly a delicate one as we’ve already seen from two widely > > differing reactions. That’s why I have never seriously contemplated > > doing anything about it. > > > > · I’d love a consensus to emerge on this, but I don’t have the > > bandwidth to drive it. > > > > Incidentally, when I said “telemetry is common” I meant that almost > > every piece of software I run on my PC these days automatically checks > > for updates.
It no longer even asks me if I want to do that.. it just > > does it. That’s telemetry right there: the supplier knows how many > > people are running each version of their software. > > > Does this necessarily count as telemetry? To be useful for statistics > each installation would need to be uniquely identifiable; it's not clear > to me for what fraction of software this holds. Certainly in the > open-source world it's rather uncommon to tie telemetry to updates. I > suppose in the Windows world this sort of thing may be more common. > Even in the Windows world this would be a hard thing to swallow. I'd like to point to when Microsoft tried this with Visual Studio 2015 Beta. The intention was that, while using the beta, if your code didn't compile or crashed, you could choose to send the feedback data back to Microsoft. The backlash when this was found was huge, even though legally you had agreed to it when accepting the beta's EULA. https://www.reddit.com/r/cpp/comments/4ibauu/visual_studio_adding_telemetry_function_calls_to/d30dmvu/ Do we really want to do this? For so very, very little gain? Trust is hard to gain but easily lost. > > I'll point out that in general telemetry isn't a terribly common thing > to find in open-source software save a few major projects (e.g. Firefox, > Debian's popcon). I think we would be the first widely-used compiler to > use such technology, which does give me pause. Developers in particular > tend to be more sensitive to this sort of thing than your average user. > Not only developers. Currently, for instance, GHC is on the approved software list at work. Mainly because of its open-source status, its license, and the small number of sensibly licensed libraries it ships with. If GHC adds telemetry, I'm pretty sure I'll have an uphill, if not impossible, battle to get GHC approved again. And the lawyers would have a good point in blocking it too.
> > Cheers, > > - Ben > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rf at rufflewind.com Fri Dec 9 21:45:48 2016 From: rf at rufflewind.com (Phil Ruffwind) Date: Fri, 09 Dec 2016 16:45:48 -0500 Subject: Attempt at a real world benchmark In-Reply-To: References: <1481177035.18160.1.camel@joachim-breitner.de> <1481213080.1075.13.camel@joachim-breitner.de> Message-ID: <1481319948.591613.814242217.59F1C539@webmail.messagingengine.com> > It could tell us which language features are most used. A lot could be gleaned just by analyzing the packages on Hackage though. For example: https://www.reddit.com/r/haskell/comments/31t2y9/distribution_of_ghc_extensions_on_hackage/ From george.colpitts at gmail.com Fri Dec 9 21:48:45 2016 From: george.colpitts at gmail.com (George Colpitts) Date: Fri, 09 Dec 2016 21:48:45 +0000 Subject: Attempt at a real world benchmark In-Reply-To: References: <1481177035.18160.1.camel@joachim-breitner.de> <1481213080.1075.13.camel@joachim-breitner.de> Message-ID: I would opt-in. I also agree with Simon that privacy is no longer a big deal although I do believe that most companies do telemetry with an opt in policy. If it's opt-in why would anyone have a problem with telemetry? On Fri, Dec 9, 2016 at 1:46 PM Tom Murphy wrote: > On Fri, Dec 9, 2016 at 4:50 AM, Simon Peyton Jones via ghc-devs < > ghc-devs at haskell.org> wrote: > > I have wanted telemetry for years. ("Telemetry" is the term Microsoft, > and I think others, use for the phone-home feature.) > > It would tell us how many people are using GHC; currently I have literally > no idea. > > > In practice I think the best data we could get is "how many people are > using GHC && are willing to opt into phone-home," which seems like a > rougher number than e.g. 
downloads of ghc/HP or number of downloads of > base/containers or something similar. I also would not opt in. > > Tom > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eacameron at gmail.com Fri Dec 9 21:55:07 2016 From: eacameron at gmail.com (Elliot Cameron) Date: Fri, 9 Dec 2016 16:55:07 -0500 Subject: Attempt at a real world benchmark In-Reply-To: References: <1481177035.18160.1.camel@joachim-breitner.de> <1481213080.1075.13.camel@joachim-breitner.de> Message-ID: I'd imagine that "opt-in" could even mean you have to install a separate program/package to send data that's been collected. If it were very separate from the compiler itself, would these security concerns still be a problem? I for one would go through the effort of opting in since I want the ecosystem to improve and I have the luxury not to be dealing with high-security code bases. On Fri, Dec 9, 2016 at 4:48 PM, George Colpitts wrote: > I would opt-in. I also agree with Simon that privacy is no longer a big > deal although I do believe that most companies do telemetry with an opt in > policy. If it's opt-in why would anyone have a problem with telemetry? > > On Fri, Dec 9, 2016 at 1:46 PM Tom Murphy wrote: > >> On Fri, Dec 9, 2016 at 4:50 AM, Simon Peyton Jones via ghc-devs < >> ghc-devs at haskell.org> wrote: >> >> I have wanted telemetry for years. ("Telemetry" is the term Microsoft, >> and I think others, use for the phone-home feature.) >> >> It would tell us how many people are using GHC; currently I have >> literally no idea. >> >> >> In practice I think the best data we could get is "how many people are >> using GHC && are willing to opt into phone-home," which seems like a >> rougher number than e.g. downloads of ghc/HP or number of downloads of >> base/containers or something similar.
I also would not opt in. >> >> Tom >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Fri Dec 9 22:29:26 2016 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 9 Dec 2016 22:29:26 +0000 Subject: Windows build again Message-ID: Windows build is broken in a new way. When I run validate I end up with sh.exe processes that consume a full CPU forever. See the process log below. Note that these are not GHC processes: they are shells! I have no conception of what they are doing. Any ideas, or things I can do to gather more evidence? This is an up-to-date HEAD, with some small changes to GHC. I suppose I can try a completely clean HEAD, but I can't see how my changes could make the shell loop. Simon [cid:image003.jpg at 01D2526B.A8994140] -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.jpg Type: image/jpeg Size: 597709 bytes Desc: image003.jpg URL: From simonpj at microsoft.com Fri Dec 9 22:44:14 2016 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 9 Dec 2016 22:44:14 +0000 Subject: More windows woe Message-ID: I see that anything involving ghci fails: /c/code/HEAD/inplace/bin/ghc-stage2 --interactive GHCi, version 8.1.20161209: http://www.haskell.org/ghc/ :?
for help ghc-stage2.exe: unable to load package `base-4.9.0.0' ghc-stage2.exe: C:\code\HEAD\inplace\mingw\x86_64-w64-mingw32\lib\libmingwex.a: unknown symbol `_lock_file' ghc-stage2.exe: Could not on-demand load symbol '__mingw_vfprintf' ghc-stage2.exe: C:\code\HEAD\libraries\base\dist-install\build\HSbase-4.9.0.0.o: unknown symbol `__mingw_vfprintf' It's frustrating. It used to work! Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at well-typed.com Sat Dec 10 03:44:28 2016 From: ben at well-typed.com (Ben Gamari) Date: Fri, 09 Dec 2016 22:44:28 -0500 Subject: More windows woe In-Reply-To: References: Message-ID: <87y3zomorn.fsf@ben-laptop.smart-cactus.org> Simon Peyton Jones via ghc-devs writes: > I see that anything involving ghci fails: > > /c/code/HEAD/inplace/bin/ghc-stage2 --interactive > > GHCi, version 8.1.20161209: http://www.haskell.org/ghc/ :? for help > > ghc-stage2.exe: unable to load package `base-4.9.0.0' > > ghc-stage2.exe: C:\code\HEAD\inplace\mingw\x86_64-w64-mingw32\lib\libmingwex.a: unknown symbol `_lock_file' > Yes, Tamar and I were working on tracking this down over the last few days. The patch (which I will merge after a running validation finishes) is D2817. In short, the problem is that we recently upgraded the Windows toolchain. For better or worse, the new mingw-w64 toolchain now has an atomic printf implementation, which requires the use of the _lock_file function provided by Microsoft's C runtime. However, the _lock_file symbol is only exported by certain variants of msvcrt (e.g. msvcrt90.dll), but not the distribution which mingw-w64 uses (apparently due to license considerations [1], although the exact reason isn't clear). To hack around this, mingw-w64 ships a static library, msvcrt.a, which wraps msvcrt.dll and provides hand-rolled implementations of some needed symbols, including _lock_file. 
However, this means that the static library msvcrt.a, and the dynamic library msvcrt.dll don't export the same set of symbols, which causes GHCi to blow up if dynamically linked. Consequently we need to All of this coupled with another recent but quite unrelated cleanup (D2579) breaking the Windows build when bootstrapped with GHC 7.10, the recent testsuite debacle, as well as a number of other Windows quirks I've discovered in the past few weeks, meant that figuring all of this out took quite some time (which is why the Windows builder *still* isn't quite up). On the bright side, one happy side-effect of this is that it prompted me to write down some notes on the interactions between the many components of our Windows toolchain [2]. Anyways, we are getting quite close. I expect we'll finally have the Windows builder up by next week. Hopefully from that point forth it will be considerably harder to break the Windows build. Cheers, - Ben [1] https://sourceforge.net/p/mingw-w64/discussion/723797/thread/55520785/ [2] https://ghc.haskell.org/trac/ghc/wiki/SurvivingWIndows -------------- next part -------------- A non-text attachment was scrubbed...
However, this means that the static > library msvcrt.a, and the dynamic library msvcrt.dll don't export the > same set of symbols, which causes GHCi to blow up if dynamically linked. > Consequently we need to > The last sentence should have read, > Consequently we need to ensure that the runtime linker seeds its > own symbol table with these symbols. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From ben at smart-cactus.org Sat Dec 10 05:06:06 2016 From: ben at smart-cactus.org (Ben Gamari) Date: Sat, 10 Dec 2016 00:06:06 -0500 Subject: Windows build again In-Reply-To: References: Message-ID: <87pol0mkzl.fsf@ben-laptop.smart-cactus.org> Simon Peyton Jones via ghc-devs writes: > Windows build is broken in a new way. > When I run validate I end up with sh.exe processes that consume a full CPU forever. See the process log below. > > Note that these are not GHC processes: they are shells! I have no conception of what they are doing. > Any ideas, or things I can to do gather more evidence? > This is an up to date HEAD, with some small changes to GHC. I suppose > I can try a completely clean HEAD, but I can't see how my changes > could make the shell loop. Oh dear; this sounds like an msys2 issue. Tamar, any idea what might be going on here? How up-to-date is your msys2 installation, Simon? Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From ben at well-typed.com Sat Dec 10 05:07:56 2016 From: ben at well-typed.com (Ben Gamari) Date: Sat, 10 Dec 2016 00:07:56 -0500 Subject: More windows woe In-Reply-To: <87y3zomorn.fsf@ben-laptop.smart-cactus.org> References: <87y3zomorn.fsf@ben-laptop.smart-cactus.org> Message-ID: <87lgvomkwj.fsf@ben-laptop.smart-cactus.org> Ben Gamari writes: > Simon Peyton Jones via ghc-devs writes: > >> I see that anything involving ghci fails: >> >> /c/code/HEAD/inplace/bin/ghc-stage2 --interactive >> >> GHCi, version 8.1.20161209: http://www.haskell.org/ghc/ :? for help >> >> ghc-stage2.exe: unable to load package `base-4.9.0.0' >> >> ghc-stage2.exe: C:\code\HEAD\inplace\mingw\x86_64-w64-mingw32\lib\libmingwex.a: unknown symbol `_lock_file' >> > Yes, Tamar and I were working on tracking this down over the last few > days. The patch (which I will merge after a running validation finishes) > is D2817. > For the record I was able to validate this locally and merged it. Unfortunately it seems that Harbormaster still chokes, so your mileage may vary. I'll have a look tomorrow. The saga continues... Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From chak at justtesting.org Sat Dec 10 05:34:49 2016 From: chak at justtesting.org (Manuel M T Chakravarty) Date: Sat, 10 Dec 2016 16:34:49 +1100 Subject: Telemetry (WAS: Attempt at a real world benchmark) In-Reply-To: References: <1481177035.18160.1.camel@joachim-breitner.de> <1481213080.1075.13.camel@joachim-breitner.de> <584AAEEC.5080706@centrum.cz> <75b1cea0-f2fb-f7cf-c41d-06eea5b75c51@gmail.com> Message-ID: <68DB5E24-FD8F-49EC-8BD7-2D71C131620D@justtesting.org> > Simon Peyton Jones via ghc-devs : > > Just to say: > > · Telemetry is a good topic > · It is clearly a delicate one as we’ve already seen from two widely differing reactions. That’s why I have never seriously contemplated doing anything about it. > · I’d love a consensus to emerge on this, but I don’t have the bandwidth to drive it. > > Incidentally, when I said “telemetry is common” I meant that almost every piece of software I run on my PC these days automatically checks for updates. It no longer even asks me if I want to do that.. it just does it. That’s telemetry right there: the supplier knows how many people are running each version of their software. I think it is important to note that the expectations of users vary quite significantly from platform to platform. For example, macOS users on average expect more privacy protections than Windows users, and Linux users expect more than macOS users. In particular, a lot of 3rd party software on macOS still asks whether you want to enable automatic update checks. Moreover, while most people tolerate that end user GUI software performs some analytics, I am sure that most users of command line (and especially developer) tools would be very surprised to learn that it performs analytics.
Finally, once you gather analytics you need to have a privacy policy in many/most jurisdictions (certainly in EU and AU) these days, which explains what data is gathered, where it is stored, etc. This typically also involves statements about sharing that data. All quite easily covered by a software business, but hard to do in an open source project unless you limit access to the data to a few people. (Even if you ask users for permission to gather data, I am quite sure, you still need a privacy policy.) Manuel > From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of MarLinn via ghc-devs > Sent: 09 December 2016 14:52 > To: ghc-devs at haskell.org > Subject: Re: Telemetry (WAS: Attempt at a real world benchmark) > > > > It could tell us which language features are most used. > > Language features are hard if they are not available in separate libs. If in libs, then IIRC debian is packaging those in separate packages, again you can use their package contest. > > What in particular makes them hard? Sorry if this seems like a stupid question to you, I'm just not that knowledgeable yet. One reason I can think of would be that we would want attribution, i.e. did the developer turn on the extension himself, or is it just used in a lib or template – but that should be easy to solve with a source hash, right? That source hash itself might need a bit of thought though. Maybe it should not be a hash of a source file, but of the parse tree. > > > The big issue is (a) design and implementation effort, and (b) dealing with the privacy issues. I think (b) used to be a big deal, but nowadays people mostly assume that their software is doing telemetry, so it feels more plausible. But someone would need to work out whether it had to be opt-in or opt-out, and how to actually make it work in practice. > > Privacy here is complete can of worms (keep in mind you are dealing with a lot of different law systems), I strongly suggest not to even think about it for a second. 
Your note "but nowadays people mostly assume that their software is doing telemetry" may perhaps be true in sick mobile apps world, but I guess is not true in the world of developing secure and security related applications for either server usage or embedded. > > My first reaction to "nowadays people mostly assume that their software is doing telemetry" was to amend it with "* in the USA" in my mind. But yes, mobile is another place. Nowadays I do assume most software uses some sort of phone-home feature, but that's because it's on my To Do list of things to search for on first configuration. Note that I am using "phone home" instead of "telemetry" because some companies hide it in "check for updates" or mix it with some useless "account" stuff. Finding out where it's hidden and how much information they give about the details tells a lot about the developers, as does opt-in vs opt-out. Therefore it can be a reason to not choose a piece of software or even an ecosystem after a first try. (Let's say an operating system almost forces me to create an online account on installation. That not only tells me I might not want to use that operating system, it also sends a marketing message that the whole ecosystem is potentially toxic to my privacy because they live in a bubble where that appears to be acceptable.) So I do have that aversion even in non-security-related contexts. > > I would say people are aware that telemetry exists, and developers in particular. I would also say developers are aware of the potential benefits, so they might be open to it. But what they care and worry about is what is reported and how they can control it. Software being Open Source is a huge factor in that, because they know that, at least in theory, they could vet the source. But the reaction might still be very mixed – see Mozilla Firefox. > > My suggestion would be a solution that gives the developer the feeling of making the choices, and puts them in control. 
It should also be compatible with configuration management so that it can be integrated into company policies as easily as possible. Therefore my suggestions would be > > · Opt-In. Nothing takes away the feeling of being in control more than perceived "hijacking" of a device with "spy ware". This also helps circumvent legal problems because the users or their employers now have the responsibility. > > · The switches to turn it on or off should be in a configuration file. There should be several staged configuration files, one for a project, one for a user, one system-wide. This is for compatibility with configuration management. Configuration higher up the hierarchy override ones lower in the hierarchy, but they can't force telemetry to be on – at least not the sensitive kind. > > · There should be several levels or a set of options that can be switched on or off individually, for fine-grained control. All should be very well documented. Once integrated and documented, they can never change without also changing the configuration flag that switches them on. > > There still might be some backlash, but a careful approach like this could soothe the minds. > > If you are worried that we might get too little data this way, here's another thought, leading back to performance data: The most benefit in that regard would come from projects that are built regularly, on different architectures, with sources that can be inspected and with an easy way to get diffs. In other words, projects that live on github and travis anyway. Their maintainers should be easy to convince to set that little switch to "on". > > > > Regards, > MarLinn > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From moritz at lichtzwerge.de Sat Dec 10 08:10:00 2016 From: moritz at lichtzwerge.de (Moritz Angermann) Date: Sat, 10 Dec 2016 16:10:00 +0800 Subject: Telemetry (WAS: Attempt at a real world benchmark) In-Reply-To: <68DB5E24-FD8F-49EC-8BD7-2D71C131620D@justtesting.org> References: <1481177035.18160.1.camel@joachim-breitner.de> <1481213080.1075.13.camel@joachim-breitner.de> <584AAEEC.5080706@centrum.cz> <75b1cea0-f2fb-f7cf-c41d-06eea5b75c51@gmail.com> <68DB5E24-FD8F-49EC-8BD7-2D71C131620D@justtesting.org> Message-ID: <99EB7982-4E03-466E-83EB-E9EB649AE909@lichtzwerge.de> Hi, I’m mostly against any tracking. For privacy reasons, but also: what is the data going to tell? Would I track timings, used extensions and ghc version, module size, per compiled module, per compiled project or per ghc invocation? What are the reasons we believe the packages in hackage, or the more restrictive stackage set, are non-representative? If we can agree that they are representative of the language and its uses, analyzing the publicly available code should provide almost identical results to large-scale compiler telemetry, no? I have no idea about the pervasiveness of telemetry on windows. Nor do I know how much macOS actually phones home, or all the applications that are shipped by default with it. Two items I would like to note that *do* phone home and are *opt-out*: - homebrew[1] package manager that I assume quite a few people use (because it works rather well), see the Analytics.md[2], especially the opt-out section[3]. - cocoapods[4] (iOS/macOS library repository), which sends back statistics about package usage[5] In both cases, I would say the community didn’t really appreciate the change but was unable to change the direction the maintainers/authors were taking the tool in. I think we first need a consensus on what questions we would like to answer. And then figure out which of these questions can only be answered properly by calling home from the compiler.
I am still opposed to the idea of having a compiler call home, and would try to make sure that my compiler does not (most likely by only using custom built compilers that have this functionality surgically removed; which would end up being a continuous burden to keep up with), so that I would not accidentally risk sending potentially sensitive data. In whole it would undermine my trust in the compiler. cheers, moritz — [1]: http://brew.sh/ [2]: https://github.com/Homebrew/brew/blob/master/docs/Analytics.md [3]: https://github.com/Homebrew/brew/blob/master/docs/Analytics.md#opting-out [4]: https://cocoapods.org/ [5]: http://blog.cocoapods.org/Stats/ > On Dec 10, 2016, at 1:34 PM, Manuel M T Chakravarty wrote: > >> Simon Peyton Jones via ghc-devs : >> >> Just to say: >> >> · Telemetry is a good topic >> · It is clearly a delicate one as we’ve already seen from two widely differing reactions. That’s why I have never seriously contemplated doing anything about it. >> · I’m love a consensus to emerge on this, but I don’t have the bandwidth to drive it. >> >> Incidentally, when I said “telemetry is common” I meant that almost every piece of software I run on my PC these days automatically checks for updates. It no longer even asks me if I want to do that.. it just does it. That’s telemetry right there: the supplier knows how many people are running each version of their software. > > I think, it is important to notice that the expectations of users varies quite significantly from platform to platform. For example, macOS users on average expect more privacy protections than Windows users and Linux users expect more than macOS users. In particular, a lot of 3rd party software on macOS still asks whether you want to enable automatic update checks. > > Moreover, while most people tolerate that end user GUI software performs some analytics, I am sure that most users of command line (and especially developer tools) would be very surprised to learn that it performs analytics. 
> > Finally, once you gather analytics you need to have a privacy policy in many/most jurisdictions (certainly in EU and AU) these days, which explains what data is gathered, where it is stored, etc. This typically also involves statements about sharing that data. All quite easily covered by a software business, but hard to do in an open source project unless you limit access to the data to a few people. (Even if you ask users for permission to gather data, I am quite sure, you still need a privacy policy.) > > Manuel > > >> From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of MarLinn via ghc-devs >> Sent: 09 December 2016 14:52 >> To: ghc-devs at haskell.org >> Subject: Re: Telemetry (WAS: Attempt at a real world benchmark) >> >> >> >> It could tell us which language features are most used. >> >> Language features are hard if they are not available in separate libs. If in libs, then IIRC debian is packaging those in separate packages, again you can use their package contest. >> >> What in particular makes them hard? Sorry if this seems like a stupid question to you, I'm just not that knowledgeable yet. One reason I can think of would be that we would want attribution, i.e. did the developer turn on the extension himself, or is it just used in a lib or template – but that should be easy to solve with a source hash, right? That source hash itself might need a bit of thought though. Maybe it should not be a hash of a source file, but of the parse tree. >> >> >> The big issue is (a) design and implementation effort, and (b) dealing with the privacy issues. I think (b) used to be a big deal, but nowadays people mostly assume that their software is doing telemetry, so it feels more plausible. But someone would need to work out whether it had to be opt-in or opt-out, and how to actually make it work in practice. 
>> >> Privacy here is complete can of worms (keep in mind you are dealing with a lot of different law systems), I strongly suggest not to even think about it for a second. Your note "but nowadays people mostly assume that their software is doing telemetry" may perhaps be true in sick mobile apps world, but I guess is not true in the world of developing secure and security related applications for either server usage or embedded. >> >> My first reaction to "nowadays people mostly assume that their software is doing telemetry" was to amend it with "* in the USA" in my mind. But yes, mobile is another place. Nowadays I do assume most software uses some sort of phone-home feature, but that's because it's on my To Do list of things to search for on first configuration. Note that I am using "phone home" instead of "telemetry" because some companies hide it in "check for updates" or mix it with some useless "account" stuff. Finding out where it's hidden and how much information they give about the details tells a lot about the developers, as does opt-in vs opt-out. Therefore it can be a reason to not choose a piece of software or even an ecosystem after a first try. (Let's say an operating system almost forces me to create an online account on installation. That not only tells me I might not want to use that operating system, it also sends a marketing message that the whole ecosystem is potentially toxic to my privacy because they live in a bubble where that appears to be acceptable.) So I do have that aversion even in non-security-related contexts. >> >> I would say people are aware that telemetry exists, and developers in particular. I would also say developers are aware of the potential benefits, so they might be open to it. But what they care and worry about is what is reported and how they can control it. Software being Open Source is a huge factor in that, because they know that, at least in theory, they could vet the source. 
But the reaction might still be very mixed – see Mozilla Firefox. >> >> My suggestion would be a solution that gives the developer the feeling of making the choices, and puts them in control. It should also be compatible with configuration management so that it can be integrated into company policies as easily as possible. Therefore my suggestions would be >> >> · Opt-In. Nothing takes away the feeling of being in control more than perceived "hijacking" of a device with "spy ware". This also helps circumvent legal problems because the users or their employers now have the responsibility. >> >> · The switches to turn it on or off should be in a configuration file. There should be several staged configuration files, one for a project, one for a user, one system-wide. This is for compatibility with configuration management. Configuration higher up the hierarchy override ones lower in the hierarchy, but they can't force telemetry to be on – at least not the sensitive kind. >> >> · There should be several levels or a set of options that can be switched on or off individually, for fine-grained control. All should be very well documented. Once integrated and documented, they can never change without also changing the configuration flag that switches them on. >> >> There still might be some backlash, but a careful approach like this could soothe the minds. >> >> If you are worried that we might get too little data this way, here's another thought, leading back to performance data: The most benefit in that regard would come from projects that are built regularly, on different architectures, with sources that can be inspected and with an easy way to get diffs. In other words, projects that live on github and travis anyway. Their maintainers should be easy to convince to set that little switch to "on". 
>> >> >> >> Regards, >> MarLinn >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From chak at justtesting.org Sat Dec 10 10:36:21 2016 From: chak at justtesting.org (Manuel M T Chakravarty) Date: Sat, 10 Dec 2016 21:36:21 +1100 Subject: Telemetry (WAS: Attempt at a real world benchmark) In-Reply-To: <99EB7982-4E03-466E-83EB-E9EB649AE909@lichtzwerge.de> References: <1481177035.18160.1.camel@joachim-breitner.de> <1481213080.1075.13.camel@joachim-breitner.de> <584AAEEC.5080706@centrum.cz> <75b1cea0-f2fb-f7cf-c41d-06eea5b75c51@gmail.com> <68DB5E24-FD8F-49EC-8BD7-2D71C131620D@justtesting.org> <99EB7982-4E03-466E-83EB-E9EB649AE909@lichtzwerge.de> Message-ID: <213D8FD0-510F-4779-AAEE-4C3BEDC4DB9E@justtesting.org> > Am 10.12.2016 um 19:10 schrieb Moritz Angermann : > Two items I would like to note, that *do* phone home and are *out out*: > > - homebrew[1] package manager that I assume quite a few people use (because it works > rather well), see the Analytics.md[2], especially the opt-out section[3]. > - cocoapods[4] (iOS/macOS library repository), which sends back statistics about package > usage[5] > Package managers are inherently different to compilers as the *core* functionality of a package manager requires network traffic. As network traffic on every sane server infrastructure is logged, the use of a package manager obviously creates a trail. Now, the package manager can capture more or less local information and server side the data can be kept for varying amounts of time. In contrast, it is a reasonable expectation that a compiler does not initiate any network traffic as it is not needed for its core functionality. 
In other words, adding more analytics to cabal would probably be fairly uncontroversial (if it anonymises the data properly), but adding it to GHC will make some people very unhappy. Manuel > >> On Dec 10, 2016, at 1:34 PM, Manuel M T Chakravarty wrote: >> >>> Simon Peyton Jones via ghc-devs : >>> >>> Just to say: >>> >>> · Telemetry is a good topic >>> · It is clearly a delicate one as we’ve already seen from two widely differing reactions. That’s why I have never seriously contemplated doing anything about it. >>> · I’m love a consensus to emerge on this, but I don’t have the bandwidth to drive it. >>> >>> Incidentally, when I said “telemetry is common” I meant that almost every piece of software I run on my PC these days automatically checks for updates. It no longer even asks me if I want to do that.. it just does it. That’s telemetry right there: the supplier knows how many people are running each version of their software. >> >> I think, it is important to notice that the expectations of users varies quite significantly from platform to platform. For example, macOS users on average expect more privacy protections than Windows users and Linux users expect more than macOS users. In particular, a lot of 3rd party software on macOS still asks whether you want to enable automatic update checks. >> >> Moreover, while most people tolerate that end user GUI software performs some analytics, I am sure that most users of command line (and especially developer tools) would be very surprised to learn that it performs analytics. >> >> Finally, once you gather analytics you need to have a privacy policy in many/most jurisdictions (certainly in EU and AU) these days, which explains what data is gathered, where it is stored, etc. This typically also involves statements about sharing that data. All quite easily covered by a software business, but hard to do in an open source project unless you limit access to the data to a few people. 
(Even if you ask users for permission to gather data, I am quite sure, you still need a privacy policy.) >> >> Manuel >> >> >>> From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of MarLinn via ghc-devs >>> Sent: 09 December 2016 14:52 >>> To: ghc-devs at haskell.org >>> Subject: Re: Telemetry (WAS: Attempt at a real world benchmark) >>> >>> >>> >>> It could tell us which language features are most used. >>> >>> Language features are hard if they are not available in separate libs. If in libs, then IIRC debian is packaging those in separate packages, again you can use their package contest. >>> >>> What in particular makes them hard? Sorry if this seems like a stupid question to you, I'm just not that knowledgeable yet. One reason I can think of would be that we would want attribution, i.e. did the developer turn on the extension himself, or is it just used in a lib or template – but that should be easy to solve with a source hash, right? That source hash itself might need a bit of thought though. Maybe it should not be a hash of a source file, but of the parse tree. >>> >>> >>> The big issue is (a) design and implementation effort, and (b) dealing with the privacy issues. I think (b) used to be a big deal, but nowadays people mostly assume that their software is doing telemetry, so it feels more plausible. But someone would need to work out whether it had to be opt-in or opt-out, and how to actually make it work in practice. >>> >>> Privacy here is complete can of worms (keep in mind you are dealing with a lot of different law systems), I strongly suggest not to even think about it for a second. Your note "but nowadays people mostly assume that their software is doing telemetry" may perhaps be true in sick mobile apps world, but I guess is not true in the world of developing secure and security related applications for either server usage or embedded. 
>>> >>> My first reaction to "nowadays people mostly assume that their software is doing telemetry" was to amend it with "* in the USA" in my mind. But yes, mobile is another place. Nowadays I do assume most software uses some sort of phone-home feature, but that's because it's on my To Do list of things to search for on first configuration. Note that I am using "phone home" instead of "telemetry" because some companies hide it in "check for updates" or mix it with some useless "account" stuff. Finding out where it's hidden and how much information they give about the details tells a lot about the developers, as does opt-in vs opt-out. Therefore it can be a reason to not choose a piece of software or even an ecosystem after a first try. (Let's say an operating system almost forces me to create an online account on installation. That not only tells me I might not want to use that operating system, it also sends a marketing message that the whole ecosystem is potentially toxic to my privacy because they live in a bubble where that appears to be acceptable.) So I do have that aversion even in non-security-related contexts. >>> >>> I would say people are aware that telemetry exists, and developers in particular. I would also say developers are aware of the potential benefits, so they might be open to it. But what they care and worry about is what is reported and how they can control it. Software being Open Source is a huge factor in that, because they know that, at least in theory, they could vet the source. But the reaction might still be very mixed – see Mozilla Firefox. >>> >>> My suggestion would be a solution that gives the developer the feeling of making the choices, and puts them in control. It should also be compatible with configuration management so that it can be integrated into company policies as easily as possible. Therefore my suggestions would be >>> >>> · Opt-In. 
Nothing takes away the feeling of being in control more than perceived "hijacking" of a device with "spy ware". This also helps circumvent legal problems because the users or their employers now have the responsibility. >>> >>> · The switches to turn it on or off should be in a configuration file. There should be several staged configuration files, one for a project, one for a user, one system-wide. This is for compatibility with configuration management. Configuration higher up the hierarchy override ones lower in the hierarchy, but they can't force telemetry to be on – at least not the sensitive kind. >>> >>> · There should be several levels or a set of options that can be switched on or off individually, for fine-grained control. All should be very well documented. Once integrated and documented, they can never change without also changing the configuration flag that switches them on. >>> >>> There still might be some backlash, but a careful approach like this could soothe the minds. >>> >>> If you are worried that we might get too little data this way, here's another thought, leading back to performance data: The most benefit in that regard would come from projects that are built regularly, on different architectures, with sources that can be inspected and with an easy way to get diffs. In other words, projects that live on github and travis anyway. Their maintainers should be easy to convince to set that little switch to "on". 
>>> >>> >>> Regards, >>> MarLinn >>> >>> _______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From simonpj at microsoft.com Sat Dec 10 10:57:20 2016 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Sat, 10 Dec 2016 10:57:20 +0000 Subject: More windows woe In-Reply-To: References: Message-ID: Reverting didn’t work. Applying D2817 didn’t work: /c/code/HEAD$ c:/code/HEAD/inplace/bin/ghc-stage2 --interactive GHCi, version 8.1.20161209: http://www.haskell.org/ghc/ :? for help ghc-stage2.exe: unable to load package `base-4.9.0.0' ghc-stage2.exe: C:\code\HEAD\inplace\mingw\x86_64-w64-mingw32\lib\libmingwex.a: unknown symbol `_unlock_file' Now it’s ‘unlock_file’. I’ll try adding that. Simon From: Phyx [mailto:lonetiger at gmail.com] Sent: 09 December 2016 23:07 To: Simon Peyton Jones Subject: Re: More windows woe When fixing bootstrapping with pre-7.10.3 compilers this accidentally broke. There's a patch up to fix it: https://phabricator.haskell.org/D2817 If you're on 7.10.3 or later for your bootstrapping compiler, revert 6da62535469149d69ec98674db1c51dbde0efab1 and it should work again. Just waiting for a buildbot build to commit the above patch. On Fri, Dec 9, 2016 at 10:44 PM, Simon Peyton Jones via ghc-devs > wrote: I see that anything involving ghci fails: /c/code/HEAD/inplace/bin/ghc-stage2 --interactive GHCi, version 8.1.20161209: http://www.haskell.org/ghc/ :?
for help ghc-stage2.exe: unable to load package `base-4.9.0.0' ghc-stage2.exe: C:\code\HEAD\inplace\mingw\x86_64-w64-mingw32\lib\libmingwex.a: unknown symbol `_lock_file' ghc-stage2.exe: Could not on-demand load symbol '__mingw_vfprintf' ghc-stage2.exe: C:\code\HEAD\libraries\base\dist-install\build\HSbase-4.9.0.0.o: unknown symbol `__mingw_vfprintf' It’s frustrating. It used to work! Simon _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Sat Dec 10 10:59:00 2016 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Sat, 10 Dec 2016 10:59:00 +0000 Subject: More windows woe In-Reply-To: <87y3zomorn.fsf@ben-laptop.smart-cactus.org> References: <87y3zomorn.fsf@ben-laptop.smart-cactus.org> Message-ID: But meanwhile is there any way to get a working windows build? I'm totally stalled on all fronts. It was ok a few days ago Thanks Simon | -----Original Message----- | From: Ben Gamari [mailto:ben at well-typed.com] | Sent: 10 December 2016 03:44 | To: Simon Peyton Jones ; GHC developers | Subject: Re: More windows woe | | Simon Peyton Jones via ghc-devs writes: | | > I see that anything involving ghci fails: | > | > /c/code/HEAD/inplace/bin/ghc-stage2 --interactive | > | > GHCi, version 8.1.20161209: http://www.haskell.org/ghc/ :? for help | > | > ghc-stage2.exe: unable to load package `base-4.9.0.0' | > | > ghc-stage2.exe: C:\code\HEAD\inplace\mingw\x86_64-w64- | mingw32\lib\libmingwex.a: unknown symbol `_lock_file' | > | Yes, Tamar and I were working on tracking this down over the last few | days. The patch (which I will merge after a running validation finishes) | is D2817. | | In short, the problem is that we recently upgraded the Windows toolchain. 
| For better or worse, the new mingw-w64 toolchain now has an atomic printf | implementation, which requires the use of the _lock_file function | provided by Microsoft's C runtime. However, the _lock_file symbol is only | exported by certain variants of msvcrt (e.g. msvcrt90.dll), but not the | distribution which mingw-w64 uses (apparently due to license | considerations [1], although the exact reason isn't clear). | | To hack around this, mingw-w64 ships a static library, msvcrt.a, which | wraps msvcrt.dll and provides hand-rolled implementations of some needed | symbols, including _lock_file. However, this means that the static | library msvcrt.a, and the dynamic library msvcrt.dll don't export the | same set of symbols, which causes GHCi to blow up if dynamically linked. | Consequently we need to | | All of this coupled with another recent but quite unrelated cleanup | (D2579) breaking the Windows build when bootstrapped with GHC 7.10, the | recent testsuite debacle, as well as a number of other Windows quirks | I've discovered in the past few weeks, meant that figuring all of this | out took quite some time (which is why the Windows builder *still* isn't | quite up). On the bright side, one happy side-effect of this is that it | prompted me to write down some notes on the interactions between the many | components of our Windows toolchain [2]. | | Anyways, we are getting quite close. I expect we'll finally have the | Windows builder up by next week. Hopefully from that point forth it will | be considerably harder to break the Windows build. | | Cheers, | | - Ben | | | [1] https://sourceforge.net/p/mingw- | w64/discussion/723797/thread/55520785/ | [2] https://ghc.haskell.org/trac/ghc/wiki/SurvivingWIndows From simonpj at microsoft.com Sat Dec 10 11:23:57 2016 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Sat, 10 Dec 2016 11:23:57 +0000 Subject: More windows woe In-Reply-To: References: Message-ID: Adding unlock_file makes ghci work.
Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Simon Peyton Jones via ghc-devs Sent: 10 December 2016 10:57 To: Phyx Cc: GHC developers Subject: RE: More windows woe Reverting didn’t work Appluying D2817 didn’t work: /c/code/HEAD$ c:/code/HEAD/inplace/bin/ghc-stage2 --interactive GHCi, version 8.1.20161209: http://www.haskell.org/ghc/ :? for help ghc-stage2.exe: unable to load package `base-4.9.0.0' ghc-stage2.exe: C:\code\HEAD\inplace\mingw\x86_64-w64-mingw32\lib\libmingwex.a: unknown symbol `_unlock_file' Now it’s ‘unlock_file’. I’ll try adding that. Simon From: Phyx [mailto:lonetiger at gmail.com] Sent: 09 December 2016 23:07 To: Simon Peyton Jones > Subject: Re: More windows woe When fixing bootstrapping with pre 7.10.3 compilers this accidentally broke. There's a patch up to fix it https://phabricator.haskell.org/D2817 If you're on 7.10.3 or later for your bootstrapping compiler revert 6da62535469149d69ec98674db1c51dbde0efab1 and it should work again. Just waiting for a buildbot build to commit the above patch. On Fri, Dec 9, 2016 at 10:44 PM, Simon Peyton Jones via ghc-devs > wrote: I see that anything involving ghci fails: /c/code/HEAD/inplace/bin/ghc-stage2 --interactive GHCi, version 8.1.20161209: http://www.haskell.org/ghc/ :? for help ghc-stage2.exe: unable to load package `base-4.9.0.0' ghc-stage2.exe: C:\code\HEAD\inplace\mingw\x86_64-w64-mingw32\lib\libmingwex.a: unknown symbol `_lock_file' ghc-stage2.exe: Could not on-demand load symbol '__mingw_vfprintf' ghc-stage2.exe: C:\code\HEAD\libraries\base\dist-install\build\HSbase-4.9.0.0.o: unknown symbol `__mingw_vfprintf' It’s frustrating. It used to work! Simon _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From david.feuer at gmail.com Sun Dec 11 05:01:25 2016 From: david.feuer at gmail.com (David Feuer) Date: Sun, 11 Dec 2016 00:01:25 -0500 Subject: Magical function to support reflection Message-ID: The following proposal (with fancier formatting and some improved wording) can be viewed at https://ghc.haskell.org/trac/ghc/wiki/MagicalReflectionSupport Using the Data.Reflection package has some runtime costs. Notably, there can be no inlining or unboxing of reified values. I think it would be nice to add a GHC special to support it. I'll get right to the point of what I want, and then give a bit of background about why. === What I want I propose the following absurdly over-general lie: reify# :: (forall s . c s a => t s r) -> a -> r `c` is assumed to be a single-method class with no superclasses whose dictionary representation is exactly the same as the representation of `a`, and `t s r` is assumed to be a newtype wrapper around `r`. In desugaring, reify# f would be compiled to f @S, where S is a fresh type. I believe it's necessary to use a fresh type to prevent specialization from mixing up different reified values. === Background Let me set up a few pieces. These pieces are slightly modified from what the package actually does to make things cleaner under the hood, but the differences are fairly shallow. newtype Tagged s a = Tagged { unTagged :: a } unproxy :: (Proxy s -> a) -> Tagged s a unproxy f = Tagged (f Proxy) class Reifies s a | s -> a where reflect' :: Tagged s a -- For convenience reflect :: forall s a proxy . Reifies s a => proxy s -> a reflect _ = unTagged (reflect' :: Tagged s a) -- The key function--see below regarding implementation reify' :: (forall s . Reifies s a => Tagged s r) -> a -> r -- For convenience reify :: a -> (forall s . Reifies s a => Proxy s -> r) -> r reify a f = reify' (unproxy f) a The key idea of reify' is that something of type forall s .
Reifies s a => Tagged s r is represented in memory exactly the same as a function of type a -> r So we can currently use unsafeCoerce to interpret one as the other. Following the general approach of the library, we can do this as such: newtype Magic a r = Magic (forall s . Reifies s a => Tagged s r) reify' :: (forall s . Reifies s a => Tagged s r) -> a -> r reify' f = unsafeCoerce (Magic f) This certainly works. The trouble is that any knowledge about what is reflected is totally lost. For instance, if I write reify 12 $ \p -> reflect p + 3 then GHC will not see, at compile time, that the result is 15. If I write reify (+1) $ \p -> reflect p x then GHC will never inline the application of (+1). Etc. I'd like to replace reify' with reify# to avoid this problem. Thanks, David Feuer From george.colpitts at gmail.com Mon Dec 12 12:44:14 2016 From: george.colpitts at gmail.com (George Colpitts) Date: Mon, 12 Dec 2016 12:44:14 +0000 Subject: [GHC] #876: Length is not a good consumer In-Reply-To: <069.fcb57596958d6b091ac3cea89ccb9fce@haskell.org> References: <054.b373028d5cb568a8380002fb5d2d74f4@haskell.org> <069.fcb57596958d6b091ac3cea89ccb9fce@haskell.org> Message-ID: my apologies, sorry for the terrible bug report On Sun, Dec 11, 2016 at 11:05 AM GHC wrote: > #876: Length is not a good consumer > -------------------------------------+------------------------------------- > Reporter: ariep@… | Owner: > Type: bug | Status: new > Priority: lowest | Milestone: 7.6.2 > Component: libraries/base | Version: 6.5 > Resolution: | Keywords: length > Operating System: Linux | Architecture: > | Unknown/Multiple > Type of failure: Runtime | Test Case: > performance bug | perf/should_run/T876 > Blocked By: | Blocking: > Related Tickets: | Differential Rev(s): > Wiki Page: | > -------------------------------------+------------------------------------- > > Comment (by nomeata): > > This code, compiled with `-O`, does fuse, and allocates nothing (or > constant amounts) > {{{#!hs > >
module Foo where > x :: Int -> Int > x n = length [0..(10^n)::Int] > }}} > > {{{ > $ ghci -fobject-code -O Foo > GHCi, version 7.10.3: http://www.haskell.org/ghc/ :? for help > [1 of 1] Compiling Foo ( Foo.hs, Foo.o ) > Ok, modules loaded: Foo. > Prelude Foo> :set +s > Prelude Foo> x 1 > 11 > (0.03 secs, 14,976,744 bytes) > Prelude Foo> x 7 > 10000001 > (0.02 secs, 0 bytes) > Prelude Foo> x 8 > 100000001 > (0.04 secs, 0 bytes) > }}} > > (almost) HEAD: > {{{ > GHCi, version 8.1.20161117: http://www.haskell.org/ghc/ :? for help > [1 of 1] Compiling Foo (.hs -> .o) > WARNING: file compiler/simplCore/SimplCore.hs, line 663 > Simplifier bailing out after 4 iterations [58, 14, 2, 2] > Size = {terms: 96, types: 32, coercions: 0} > Ok, modules loaded: Foo (Foo.o). > Prelude Foo> :set +s > Prelude Foo> x 1 > 11 > (0.19 secs, 94,792 bytes) > Prelude Foo> x 2 > 101 > (0.01 secs, 94,648 bytes) > Prelude Foo> x 7 > 10000001 > (0.01 secs, 98,568 bytes) > Prelude Foo> x 8 > 100000001 > (0.05 secs, 98,448 bytes) > }}} > > Testing this in with interpreted code is not sufficient, as the optimizer > does less in that case. So so far, everything seems as expected to me. > > -- > Ticket URL: > GHC > The Glasgow Haskell Compiler > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail at joachim-breitner.de Mon Dec 12 15:19:49 2016 From: mail at joachim-breitner.de (Joachim Breitner) Date: Mon, 12 Dec 2016 10:19:49 -0500 Subject: Please =?UTF-8?Q?don=E2=80=99t?= break travis In-Reply-To: <1481302769.1117.14.camel@joachim-breitner.de> References: <1480720953.13340.14.camel@joachim-breitner.de> <87fulxnlkb.fsf@ben-laptop.smart-cactus.org> <1481299589.1117.12.camel@joachim-breitner.de> <1481302769.1117.14.camel@joachim-breitner.de> Message-ID: <1481555989.19142.0.camel@joachim-breitner.de> Hi, taken care off, for now, it seems, see attachement. 
Greetings, Joachim -- Joachim “nomeata” Breitner   mail at joachim-breitner.de • https://www.joachim-breitner.de/   XMPP: nomeata at joachim-breitner.de • OpenPGP-Key: 0xF0FBF51F   Debian Developer: nomeata at debian.org -------------- next part -------------- An embedded message was scrubbed... From: Joep van Delft Subject: Re: Extended job runtime for GHC Date: Mon, 12 Dec 2016 10:37:40 +0000 Size: 9940 URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: This is a digitally signed message part URL: From mail at joachim-breitner.de Mon Dec 12 15:44:41 2016 From: mail at joachim-breitner.de (Joachim Breitner) Date: Mon, 12 Dec 2016 10:44:41 -0500 Subject: [GHC] #876: Length is not a good consumer In-Reply-To: References: <054.b373028d5cb568a8380002fb5d2d74f4@haskell.org> <069.fcb57596958d6b091ac3cea89ccb9fce@haskell.org> Message-ID: <1481557481.19142.3.camel@joachim-breitner.de> Am Montag, den 12.12.2016, 12:44 +0000 schrieb George Colpitts: > my apologies, sorry for the terrible bug report No worries! Better a bug report closed as invalid than a real bug unreported. Greetings, Joachim -- -- Joachim “nomeata” Breitner   mail at joachim-breitner.de • https://www.joachim-breitner.de/   XMPP: nomeata at joachim-breitner.de • OpenPGP-Key: 0xF0FBF51F   Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: This is a digitally signed message part URL: From ekmett at gmail.com Mon Dec 12 18:15:46 2016 From: ekmett at gmail.com (Edward Kmett) Date: Mon, 12 Dec 2016 13:15:46 -0500 Subject: Magical function to support reflection In-Reply-To: References: Message-ID: A few thoughts in no particular order: Unlike this proposal, the existing 'reify' itself as core can actually be made well typed. 
Tagged in the example could be replaced with explicit type application if backwards compatibility isn't a concern. OTOH, it is. The form of reify' there is actually an uncomfortable middle ground between the current implementation and perhaps the more "ghc-like" implementation that uses a type family to determine 'a'. On the other hand, giving the type above with the type family in it would be rather awkward, and generalizing it further without it would make it even more brittle. On the other other hand, if you're going to be magic, you might as well go all the way to something like: reify# :: (p => r) -> a -> r and admit both fundep and TF forms. I mean, if you're going to lie you might as well lie big. It'd be nice to show that this can be used to reify KnownNat, Typeable, KnownSymbol, etc. and other commonly hacked dictionaries as well as Reifies. There are a very large number of instances out there scattered across dozens of packages that would be broken by switching from Proxy to Tagged or explicit type application internally. (I realize that this is a lesser concern that can be resolved by a major version bump and some community friction, but it does mean pragmatically that migrating to something like this would need a plan.) -Edward On Sun, Dec 11, 2016 at 12:01 AM, David Feuer wrote: > The following proposal (with fancier formatting and some improved > wording) can be viewed at > https://ghc.haskell.org/trac/ghc/wiki/MagicalReflectionSupport > > Using the Data.Reflection has some runtime costs. Notably, there can > be no inlining or unboxing of reified values. I think it would be nice > to add a GHC special to support it. I'll get right to the point of > what I want, and then give a bit of background about why. > > === What I want > > I propose the following absurdly over-general lie: > > reify# :: (forall s . 
c s a => t s r) -> a -> r > > `c` is assumed to be a single-method class with no superclasses whose > dictionary representation is exactly the same as the representation of > `a`, and `t s r` is assumed to be a newtype wrapper around `r`. In > desugaring, reify# f would be compiled to f @S, where S is a fresh > type. I believe it's necessary to use a fresh type to prevent > specialization from mixing up different reified values. > > === Background > > Let me set up a few pieces. These pieces are slightly modified from > what the package actually does to make things cleaner under the hood, > but the differences are fairly shallow. > > newtype Tagged s a = Tagged { unTagged :: a } > > unproxy :: (Proxy s -> a) -> Tagged s a > unproxy f = Tagged (f Proxy) > > class Reifies s a | s -> a where > reflect' :: Tagged s a > > -- For convenience > reflect :: forall s a proxy . Reifies s a => proxy s -> a > reflect _ = unTagged (reflect' :: Tagged s a) > > -- The key function--see below regarding implementation > reify' :: (forall s . Reifies s a => Tagged s r) -> a -> r > > -- For convenience > reify :: a -> (forall s . Reifies s a => Proxy s -> r) -> r > reify a f = reify' (unproxy f) a > > The key idea of reify' is that something of type > > forall s . Reifies s a => Tagged s r > > is represented in memory exactly the same as a function of type > > a -> r > > So we can currently use unsafeCoerce to interpret one as the other. > Following the general approach of the library, we can do this as such: > > newtype Magic a r = Magic (forall s . Reifies s a => Tagged s r) > reify' :: (forall s . Reifies s a => Tagged s r) -> a -> r > reify' f = unsafeCoerce (Magic f) > > This certainly works. The trouble is that any knowledge about what is > reflected is totally lost. For instance, if I write > > reify 12 $ \p -> reflect p + 3 > > then GHC will not see, at compile time, that the result is 15.
If I write > > reify (+1) $ \p -> reflect p x > > then GHC will never inline the application of (+1). Etc. > > I'd like to replace reify' with reify# to avoid this problem. > > Thanks, > David Feuer > -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.feuer at gmail.com Mon Dec 12 18:31:24 2016 From: david.feuer at gmail.com (David Feuer) Date: Mon, 12 Dec 2016 13:31:24 -0500 Subject: Magical function to support reflection In-Reply-To: References: Message-ID: On Dec 12, 2016 1:15 PM, "Edward Kmett" wrote: A few thoughts in no particular order: Unlike this proposal, the existing 'reify' itself as core can actually be made well typed. Can you explain this? Tagged in the example could be replaced with explicit type application if backwards compatibility isn't a concern. OTOH, it is. Would that help Core typing? On the other other hand, if you're going to be magic, you might as well go all the way to something like: reify# :: (p => r) -> a -> r How would we implement reify in terms of this variant? and admit both fundep and TF forms. I mean, if you're going to lie you might as well lie big. Definitely. There are a very large number of instances out there scattered across dozens of packages that would be broken by switching from Proxy to Tagged or explicit type application internally. (I realize that this is a lesser concern that can be resolved by a major version bump and some community friction, but it does mean pragmatically that migrating to something like this would need a plan.) I just want to make sure that we do what we need to get Really Good Code, if we're going to the trouble of adding compiler support. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From david.feuer at gmail.com Tue Dec 13 04:34:57 2016 From: david.feuer at gmail.com (David Feuer) Date: Mon, 12 Dec 2016 23:34:57 -0500 Subject: Explicit inequality evidence Message-ID: According to Ben Gamari's wiki page[1], the new Typeable is expected to offer eqTypeRep :: forall k (a :: k) (b :: k). TypeRep a -> TypeRep b -> Maybe (a :~: b) Ideally, we'd prefer to get either evidence of equality or evidence of inequality. The traditional approach is to use Dec (a :~: b), where data Dec a = Yes a | No (a -> Void). But a :~: b -> Void isn't strong enough for all purposes. In particular, if we want to use inequality to drive type family reduction, we could be in trouble. I'm wondering if we could expose inequality much as we expose equality. Under an a # b constraint, GHC would recognize a and b as unequal. Some rules: Base rules 1. f x # a -> b 2. If C is a constructor, then C # f x and C # a -> b 3. If C and D are distinct constructors, then C # D Propagation rules 1. x # y <=> (x -> z) # (y -> z) <=> (z -> x) # (z -> y) 2. x # y <=> (x z) # (y z) <=> (z x) # (z y) 3. If x # y then y # x Irreflexivity 1. x # x is unsatisfiable (this rule would be used for checking patterns). With this hypothetical machinery in place, we could get something like data a :#: b where Unequal :: a # b => Unequal (a :#: b) eqTypeRep' :: forall k (a :: k) (b :: k). TypeRep a -> TypeRep b -> Either (a :#: b) (a :~: b) Pattern matching on an Unequal constructor would reveal the inequality, allowing closed type families relying on it to reduce. Evidence structure: Whereas (:~:) has just one value, Refl, it would be possible to imagine richer evidence of inequality. If two types are unequal, then they must be unequal in some particular fashion. I conjecture that we don't actually gain much value by using rich evidence here. If the types are Typeable, then we can explore them ourselves, using eqTypeRep' recursively to locate one or more differences. 
If they're not, I don't think we can track the source(s) of inequality in a coherent fashion. The information we got would only be suitable for use in an error message. So one option would be to bundle up some strings describing the known mismatch, and warn the user very sternly that they shouldn't try to do anything too terribly fancy with them. [1] https://ghc.haskell.org/trac/ghc/wiki/Typeable/BenGamari -------------- next part -------------- An HTML attachment was scrubbed... URL: From oleg.grenrus at iki.fi Tue Dec 13 05:49:32 2016 From: oleg.grenrus at iki.fi (Oleg Grenrus) Date: Tue, 13 Dec 2016 07:49:32 +0200 Subject: Explicit inequality evidence In-Reply-To: References: Message-ID: <9A96BA66-ECC1-4FD0-AF29-1836B10385C5@iki.fi> Hi, I was thinking about (and almost needing) inequality evidence myself, so I’m :+1: to exploration. First the bike shedding: I’d prefer /~ and :/~:. -- new Typeable [1] would actually provide heterogenous equality: eqTypeRep' :: forall k1 k2 (a :: k1) (b :: k2). TypeRep a -> TypeRep b -> Maybe (a :~~: b) And this one is tricky, should it be: eqTypeRep' :: forall k1 k2 (a :: k1) (b :: k2). TypeRep a -> TypeRep b -> Either (Either (k1 :/~: k2) (a :/~: b)) (a :~~: b) i.e. how kind inequality would work? -- I'm not sure about propagation rules, with inequality you have to be *very* careful! irreflexivity, x /~ x and symmetry x /~ y <=> y /~ x are clear. I assume that in your rules, variables are not type families, otherwise x /~ y => f x /~ f y doesn't hold if `f` isn't injective. (e.g. type family F x where F x = ()) other direction is true though. Also: f x ~ a -> b, is true with f ~ (->) a, x ~ b. -- Oleg - [1]: https://github.com/ghc-proposals/ghc-proposals/pull/16 > On 13 Dec 2016, at 06:34, David Feuer wrote: > > According to Ben Gamari's wiki page[1], the new Typeable is expected to offer > > eqTypeRep :: forall k (a :: k) (b :: k). 
TypeRep a -> TypeRep b -> Maybe (a :~: b) > > Ideally, we'd prefer to get either evidence of equality or evidence of inequality. The traditional approach is to use Dec (a :~: b), where data Dec a = Yes a | No (a -> Void). But a :~: b -> Void isn't strong enough for all purposes. In particular, if we want to use inequality to drive type family reduction, we could be in trouble. > > I'm wondering if we could expose inequality much as we expose equality. Under an a # b constraint, GHC would recognize a and b as unequal. Some rules: > > Base rules > 1. f x # a -> b > 2. If C is a constructor, then C # f x and C # a -> b > 3. If C and D are distinct constructors, then C # D > > Propagation rules > 1. x # y <=> (x -> z) # (y -> z) <=> (z -> x) # (z -> y) > 2. x # y <=> (x z) # (y z) <=> (z x) # (z y) > 3. If x # y then y # x > > Irreflexivity > 1. x # x is unsatisfiable (this rule would be used for checking patterns). > > With this hypothetical machinery in place, we could get something like > > data a :#: b where > Unequal :: a # b => Unequal (a :#: b) > > eqTypeRep' :: forall k (a :: k) (b :: k). TypeRep a -> TypeRep b -> Either (a :#: b) (a :~: b) > > Pattern matching on an Unequal constructor would reveal the inequality, allowing closed type families relying on it to reduce. > > Evidence structure: > > Whereas (:~:) has just one value, Refl, it would be possible to imagine richer evidence of inequality. If two types are unequal, then they must be unequal in some particular fashion. I conjecture that we don't actually gain much value by using rich evidence here. If the types are Typeable, then we can explore them ourselves, using eqTypeRep' recursively to locate one or more differences. If they're not, I don't think we can track the source(s) of inequality in a coherent fashion. The information we got would only be suitable for use in an error message. 
So one option would be to bundle up some strings describing the known mismatch, and warn the user very sternly that they shouldn't try to do anything too terribly fancy with them. > > [1] https://ghc.haskell.org/trac/ghc/wiki/Typeable/BenGamari > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From david.feuer at gmail.com Tue Dec 13 06:02:23 2016 From: david.feuer at gmail.com (David Feuer) Date: Tue, 13 Dec 2016 01:02:23 -0500 Subject: Explicit inequality evidence In-Reply-To: <9A96BA66-ECC1-4FD0-AF29-1836B10385C5@iki.fi> References: <9A96BA66-ECC1-4FD0-AF29-1836B10385C5@iki.fi> Message-ID: On Tue, Dec 13, 2016 at 12:49 AM, Oleg Grenrus wrote: > First the bike shedding: I’d prefer /~ and :/~:. Those are indeed better. > new Typeable [1] would actually provide heterogenous equality: > > eqTypeRep' :: forall k1 k2 (a :: k1) (b :: k2). > TypeRep a -> TypeRep b -> Maybe (a :~~: b) > > And this one is tricky, should it be: > > eqTypeRep' :: forall k1 k2 (a :: k1) (b :: k2). > TypeRep a -> TypeRep b -> > Either (Either (k1 :/~: k2) (a :/~: b)) (a :~~: b) > > i.e. how kind inequality would work? I don't know. It sounds like some details of how kinds are expressed in TypeRep might still be a bit uncertain, but I'm not tuned in. Maybe we should punt and use heterogeneous inequality? That's over my head. > I'm not sure about propagation rules, with inequality you have to be *very* careful! > > irreflexivity, x /~ x and symmetry x /~ y <=> y /~ x are clear. > > I assume that in your rules, variables are not type families, otherwise > > x /~ y => f x /~ f y doesn't hold if `f` isn't injective. (e.g. type family F x where F x = ()) > other direction is true though. 
I was definitely imagining them as first-class types; your point that f x /~ f y => x /~ y even if f is a type family is an excellent one. > Also: > > f x ~ a -> b, is true with f ~ (->) a, x ~ b. Whoops! Yeah, I momentarily forgot that (->) is a constructor. Just leave out that bogus piece. Thanks, David Feuer From george.colpitts at gmail.com Tue Dec 13 12:24:48 2016 From: george.colpitts at gmail.com (George Colpitts) Date: Tue, 13 Dec 2016 12:24:48 +0000 Subject: [GHC] #876: Length is not a good consumer In-Reply-To: <1481557481.19142.3.camel@joachim-breitner.de> References: <054.b373028d5cb568a8380002fb5d2d74f4@haskell.org> <069.fcb57596958d6b091ac3cea89ccb9fce@haskell.org> <1481557481.19142.3.camel@joachim-breitner.de> Message-ID: Joachim, thanks for the kind words, but I'll be more careful not to waste people's time with bad bug reports like that. I got confused; when I google "haskell list length" I end up at https://hackage.haskell.org/package/base-4.9.0.0/docs/Data-List.html. When I look at the source code for length by clicking on "Source", it takes me to the start of the file https://hackage.haskell.org/package/base-4.9.0.0/docs/src/Data.Foldable.html#length, instead of the definition of length in https://hackage.haskell.org/package/base-4.9.0.0/docs/src/GHC.List.html#length. To me, this seems like a bug in Haddock. In Data.List.html, when I click on the source code for init I go to https://hackage.haskell.org/package/base-4.9.0.0/docs/src/GHC.List.html#init. This is the file I should go to for the source code for length also. Perhaps the problem is that the type of length in Data.List is Foldable t => t a -> Int while init is [a] -> [a]? Should I file a Haddock bug for the preceding? There seem to be two minor related problems with the Users Guide (8.0.1.20161117) in section 10.32.6, List fusion. First, it should mention length as a good consumer.
Secondly, it says: "If you want to write your own good consumers or producers, look at the Prelude definitions of the above functions to see how to do so." However, if you go to https://hackage.haskell.org/package/base-4.9.0.0/docs/Prelude.html and look at the source code for length, you end up at https://hackage.haskell.org/package/base-4.9.0.0/docs/src/Data.Foldable.html#length, which is not a good consumer. I think the User's Guide should be changed to replace "Prelude" with "Data.List" in the quoted sentence. I'll file a doc bug on the User's Guide for these two issues. Also, making a function implement list fusion is not always easy; e.g. the bug "length is not a good consumer" was open for six years before being fixed. Thus I will suggest in the bug that we delete "readily" and change "Prelude" to "Data.List" in "This list could readily be extended; if there are Prelude functions that you use a lot which are not included, please tell us." Of course the preceding is not an excuse for giving space allocations of interpreted code when I reopened the bug, but at that point I was convinced that there was a problem and wasn't critical of the "evidence" I was giving to support my claim. Thanks George On Mon, Dec 12, 2016 at 11:44 AM Joachim Breitner wrote: On Monday, 12.12.2016 at 12:44 +0000, George Colpitts wrote: > my apologies, sorry for the terrible bug report No worries! Better a bug report closed as invalid than a real bug unreported. Greetings, Joachim -- Joachim “nomeata” Breitner mail at joachim-breitner.de • https://www.joachim-breitner.de/ XMPP: nomeata at joachim-breitner.de • OpenPGP-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- An HTML attachment was scrubbed...
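The foldr-based pattern that makes a function a good consumer can be sketched in a few lines. Here myLength is an illustrative name, not GHC's definition; the real length in GHC.List additionally pairs a strict accumulator loop with RULES pragmas so that the fused and unfused forms stay interchangeable:

```haskell
{-# LANGUAGE BangPatterns #-}

-- A "good consumer" version of length: the list is consumed solely by
-- foldr, so when the argument is a good producer the foldr/build rule
-- can eliminate the intermediate list entirely.
myLength :: [a] -> Int
myLength xs = foldr step finish xs 0
  where
    step _ k = \ !acc -> k (acc + 1)  -- count one element, keep the accumulator strict
    finish !acc = acc

main :: IO ()
main = print (myLength [1 .. 1000000 :: Int])  -- prints 1000000
```

Compiled with -O, the enumeration fuses with the foldr and no list cells are allocated; interpreted or unoptimized code will not show this, which is the point made earlier in the thread about not benchmarking fusion in GHCi.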
URL: From rae at cs.brynmawr.edu Tue Dec 13 15:01:04 2016 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Tue, 13 Dec 2016 10:01:04 -0500 Subject: Explicit inequality evidence In-Reply-To: References: <9A96BA66-ECC1-4FD0-AF29-1836B10385C5@iki.fi> Message-ID: <7FB0558D-EF5C-4766-A21D-062E0438540C@cs.brynmawr.edu> I've thought about inequality on and off for years now, but it's a hard nut to crack. If we want this evidence to affect closed type family reduction, then we would need evidence of inequality in Core, and a brand-spanking-new type safety proof. I don't wish to discourage this inquiry, but I also don't think this battle will be won easily. Richard > On Dec 13, 2016, at 1:02 AM, David Feuer wrote: > > On Tue, Dec 13, 2016 at 12:49 AM, Oleg Grenrus wrote: >> First the bike shedding: I’d prefer /~ and :/~:. > > Those are indeed better. > >> new Typeable [1] would actually provide heterogenous equality: >> >> eqTypeRep' :: forall k1 k2 (a :: k1) (b :: k2). >> TypeRep a -> TypeRep b -> Maybe (a :~~: b) >> >> And this one is tricky, should it be: >> >> eqTypeRep' :: forall k1 k2 (a :: k1) (b :: k2). >> TypeRep a -> TypeRep b -> >> Either (Either (k1 :/~: k2) (a :/~: b)) (a :~~: b) >> >> i.e. how kind inequality would work? > > I don't know. It sounds like some details of how kinds are expressed > in TypeRep might still be a bit uncertain, but I'm not tuned in. Maybe > we should punt and use heterogeneous inequality? That's over my head. > >> I'm not sure about propagation rules, with inequality you have to be *very* careful! >> >> irreflexivity, x /~ x and symmetry x /~ y <=> y /~ x are clear. >> >> I assume that in your rules, variables are not type families, otherwise >> >> x /~ y => f x /~ f y doesn't hold if `f` isn't injective. (e.g. type family F x where F x = ()) >> other direction is true though. > > I was definitely imagining them as first-class types; your point that > f x /~ f y => x /~ y even if f is a type family is an excellent one. 
> >> Also: >> >> f x ~ a -> b, is true with f ~ (->) a, x ~ b. > > Whoops! Yeah, I momentarily forgot that (->) is a constructor. Just > leave out that bogus piece. > > Thanks, > David Feuer > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From mail at joachim-breitner.de Tue Dec 13 15:11:17 2016 From: mail at joachim-breitner.de (Joachim Breitner) Date: Tue, 13 Dec 2016 10:11:17 -0500 Subject: [GHC] #876: Length is not a good consumer In-Reply-To: References: <054.b373028d5cb568a8380002fb5d2d74f4@haskell.org> <069.fcb57596958d6b091ac3cea89ccb9fce@haskell.org> <1481557481.19142.3.camel@joachim-breitner.de> Message-ID: <1481641877.29813.1.camel@joachim-breitner.de> Dear George, On Tuesday, 13.12.2016 at 12:24 +0000, George Colpitts wrote: > I got confused; when I  I google   "haskell list length" I end up at  > https://hackage.haskell.org/package/base-4.9.0.0/docs/Data-List.html. > When I look at the source code for length by clicking on "Source" It > takes me to the start of the file https://hackage.haskell.org/package > /base-4.9.0.0/docs/src/Data.Foldable.html#length. instead of the > definition of length in https://hackage.haskell.org/package/base-4.9. > 0.0/docs/src/GHC.List.html#length.  > > To me, this seems like a bug in haddock. In Data.List.html when I > click on the source code for init I go to https://hackage.haskell.org > /package/base-4.9.0.0/docs/src/GHC.List.html#init.  This is the file >  I should go to for the source code for length also. Perhaps the > problem is that the type of length in Data.List is Foldable t => t a > -> Int while init is [a] -> [a] ? Should I file a haddock bug for the > preceding? no, this is all right and intentional. Data.List re-exports Data.Foldable.length so that you do not get import conflicts when importing both.
This was a design decision back then when the FTP (Foldable/Traversable) proposal was enacted. > There seems to be two minor related problems with the Users Guide > (8.0.1.20161117) in section 10.32.6,List fusion. First, it should > mention length as a good consumer. Secondly, it says: "If you want to > write your own good consumers or producers, look at the Prelude > definitions of the above functions to see how to do so." However if > you go to https://hackage.haskell.org/package/base- > 4.9.0.0/docs/Prelude.html and look at the source code for length you > end up at https://hackage.haskell.org/package/base- > 4.9.0.0/docs/src/Data.Foldable.html#length which is not a good > consumer. I think the User's Guide should be changed to replace > "Prelude" with "Data.List" in the quoted sentence. I'll file a doc > bug on the User's Guide for these two issues. Yes, that would be helpful. The text has not been added since FTP. Maybe even better, the user’s guide could simply contain a section that explains how to make good consumers and producer, including the hoops that one has to jump through when one wants to use the library’s version of a function when no fusion happens. Maybe together with David Feuer, who most recently battled with that. Maybe I can write that, I just had to write about list fusion for a paper anyways. Greetings, Joachim -- Joachim “nomeata” Breitner   mail at joachim-breitner.de • https://www.joachim-breitner.de/   XMPP: nomeata at joachim-breitner.de • OpenPGP-Key: 0xF0FBF51F   Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: This is a digitally signed message part URL: From shea at shealevy.com Tue Dec 13 17:48:25 2016 From: shea at shealevy.com (Shea Levy) Date: Tue, 13 Dec 2016 12:48:25 -0500 Subject: Reason for fixing minimum bootstrap version at 2 major releases ago? 
Message-ID: <878trjlnyu.fsf@shlevy-laptop.i-did-not-set--mail-host-address--so-tickle-me> Hi all, I'm wondering, why do we require ghc to be bootstrappable with the past 2 major releases instead of just the past 1? Is it a common case that someone is compiling GHC but can't easily get the latest release? Thanks, Shea -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 832 bytes Desc: not available URL: From kili at outback.escape.de Tue Dec 13 20:14:40 2016 From: kili at outback.escape.de (Matthias Kilian) Date: Tue, 13 Dec 2016 21:14:40 +0100 Subject: Reason for fixing minimum bootstrap version at 2 major releases ago? In-Reply-To: <878trjlnyu.fsf@shlevy-laptop.i-did-not-set--mail-host-address--so-tickle-me> References: <878trjlnyu.fsf@shlevy-laptop.i-did-not-set--mail-host-address--so-tickle-me> Message-ID: <20161213201440.GA60302@nutty.outback.escape.de> Hi, On Tue, Dec 13, 2016 at 12:48:25PM -0500, Shea Levy wrote: > I'm wondering, why do we require ghc to be bootstrappable with the past > 2 major releases instead of just the past 1? Is it a common case that > someone is compiling GHC but can't easily get the latest release? Well, I can't speak for those who made this decision, but supporting more than one major release eases the life of OS distribution package maintainers, especially in cases where the distribution package had been left behind for too long, which may happen for many reasons, like a maintainer dropped maintainership and a new one has to be found first, or like a maintainer missing a major release because he's too busy or too lazy.
(The latter sometimes applies to me ;-)) Ciao, Kili From ben at well-typed.com Tue Dec 13 22:44:22 2016 From: ben at well-typed.com (Ben Gamari) Date: Tue, 13 Dec 2016 17:44:22 -0500 Subject: Reverted f723ba2f3b6d778f903fb1de4a5af93fe65eed10 Message-ID: <87k2b3la9l.fsf@ben-laptop.smart-cactus.org> Hi Simon, Earlier today I noticed that the testsuite started failing with f723ba2f3b6d778f903fb1de4a5af93fe65eed10 due to break024 and break011. See https://phabricator.haskell.org/harbormaster/build/16407/ (I've included the output differences below). I've reverted the patch to keep the tree buildable but obviously feel free to commit again when it validates. Cheers, - Ben --- "/tmp/ghctest-zc_e5ng4/test spaces/./ghci.debugger/scripts/break011.run/break011.stdout.normalised" 2016-12-13 01:09:56.868988119 +0000 +++ "/tmp/ghctest-zc_e5ng4/test spaces/./ghci.debugger/scripts/break011.run/break011.run.stdout.normalised" 2016-12-13 01:09:56.868988119 +0000 @@ -40,17 +40,9 @@ CallStack (from HasCallStack): error, called at Test7.hs:: in :Main Stopped in , -_exception :: e = SomeException - (ErrorCallWithLocation - "foo" - "CallStack (from HasCallStack): - error, called at Test7.hs:: in :Main") +_exception :: e = _ Stopped in , -_exception :: e = SomeException - (ErrorCallWithLocation - "foo" - "CallStack (from HasCallStack): - error, called at Test7.hs:: in :Main") +_exception :: e = _ *** Exception: foo CallStack (from HasCallStack): error, called at Test7.hs:: in :Main --- "/tmp/ghctest-zc_e5ng4/test spaces/./ghci.debugger/scripts/break024.run/break024.stdout.normalised" 2016-12-13 01:09:57.464988119 +0000 +++ "/tmp/ghctest-zc_e5ng4/test spaces/./ghci.debugger/scripts/break024.run/break024.run.stdout.normalised" 2016-12-13 01:09:57.464988119 +0000 @@ -11,8 +11,7 @@ (GHC.IO.Exception.IOError Nothing GHC.IO.Exception.UserError [] "error" Nothing Nothing) Stopped in , -_exception :: e = SomeException - (GHC.IO.Exception.IOError Nothing GHC.IO.Exception.UserError ....) 
+_exception :: e = _ Stopped in , _exception :: e = _ _exception = SomeException -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From simonpj at microsoft.com Tue Dec 13 23:13:20 2016 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 13 Dec 2016 23:13:20 +0000 Subject: Reverted f723ba2f3b6d778f903fb1de4a5af93fe65eed10 In-Reply-To: <87k2b3la9l.fsf@ben-laptop.smart-cactus.org> References: <87k2b3la9l.fsf@ben-laptop.smart-cactus.org> Message-ID: Sorry about that. Didn't break for me. I'll check. Simon | -----Original Message----- | From: Ben Gamari [mailto:ben at well-typed.com] | Sent: 13 December 2016 22:44 | To: Simon Peyton Jones | Cc: GHC developers | Subject: Reverted f723ba2f3b6d778f903fb1de4a5af93fe65eed10 | | Hi Simon, | | Earlier today I noticed that the testsuite started failing with | f723ba2f3b6d778f903fb1de4a5af93fe65eed10 due to break024 and break011. | See https://phabricator.haskell.org/harbormaster/build/16407/ (I've | included the output differences below). | | I've reverted the patch to keep the tree buildable but obviously feel | free to commit again when it validates. 
| | Cheers, | | - Ben | | | | --- "/tmp/ghctest-zc_e5ng4/test | spaces/./ghci.debugger/scripts/break011.run/break011.stdout.normalised" | 2016-12-13 01:09:56.868988119 +0000 | +++ "/tmp/ghctest-zc_e5ng4/test | spaces/./ghci.debugger/scripts/break011.run/break011.run.stdout.normalise | d" 2016-12-13 01:09:56.868988119 +0000 | @@ -40,17 +40,9 @@ | CallStack (from HasCallStack): | error, called at Test7.hs:: in :Main | Stopped in , -_exception :: e = SomeException | - (ErrorCallWithLocation | - "foo" | - "CallStack (from HasCallStack): | - error, called at Test7.hs:: in :Main") | +_exception :: e = _ | Stopped in , -_exception :: e = | SomeException | - (ErrorCallWithLocation | - "foo" | - "CallStack (from HasCallStack): | - error, called at Test7.hs:: in :Main") | +_exception :: e = _ | *** Exception: foo | CallStack (from HasCallStack): | error, called at Test7.hs:: in :Main | --- "/tmp/ghctest-zc_e5ng4/test | spaces/./ghci.debugger/scripts/break024.run/break024.stdout.normalised" | 2016-12-13 01:09:57.464988119 +0000 | +++ "/tmp/ghctest-zc_e5ng4/test | spaces/./ghci.debugger/scripts/break024.run/break024.run.stdout.normalise | d" 2016-12-13 01:09:57.464988119 +0000 | @@ -11,8 +11,7 @@ | (GHC.IO.Exception.IOError | Nothing GHC.IO.Exception.UserError [] "error" Nothing | Nothing) Stopped in , -_exception :: e = | SomeException | - (GHC.IO.Exception.IOError Nothing | GHC.IO.Exception.UserError ....) | +_exception :: e = _ | Stopped in , _exception :: e = _ _exception | = SomeException From carter.schonwald at gmail.com Wed Dec 14 15:07:41 2016 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Wed, 14 Dec 2016 10:07:41 -0500 Subject: Explicit inequality evidence In-Reply-To: <7FB0558D-EF5C-4766-A21D-062E0438540C@cs.brynmawr.edu> References: <9A96BA66-ECC1-4FD0-AF29-1836B10385C5@iki.fi> <7FB0558D-EF5C-4766-A21D-062E0438540C@cs.brynmawr.edu> Message-ID: Possibly naive question: do we have decidable inequality in a meta theoretical sense? 
I feel like we have definite equality and fuzzy might not always be equal but could be for polymorphic types. And that definite inequality on non polymorphic terms is a lot smaller than what folks likely want? On Dec 13, 2016 10:01 AM, "Richard Eisenberg" wrote: > I've thought about inequality on and off for years now, but it's a hard > nut to crack. If we want this evidence to affect closed type family > reduction, then we would need evidence of inequality in Core, and a > brand-spanking-new type safety proof. I don't wish to discourage this > inquiry, but I also don't think this battle will be won easily. > > Richard > > > On Dec 13, 2016, at 1:02 AM, David Feuer wrote: > > > > On Tue, Dec 13, 2016 at 12:49 AM, Oleg Grenrus > wrote: > >> First the bike shedding: I’d prefer /~ and :/~:. > > > > Those are indeed better. > > > >> new Typeable [1] would actually provide heterogenous equality: > >> > >> eqTypeRep' :: forall k1 k2 (a :: k1) (b :: k2). > >> TypeRep a -> TypeRep b -> Maybe (a :~~: b) > >> > >> And this one is tricky, should it be: > >> > >> eqTypeRep' :: forall k1 k2 (a :: k1) (b :: k2). > >> TypeRep a -> TypeRep b -> > >> Either (Either (k1 :/~: k2) (a :/~: b)) (a :~~: b) > >> > >> i.e. how kind inequality would work? > > > > I don't know. It sounds like some details of how kinds are expressed > > in TypeRep might still be a bit uncertain, but I'm not tuned in. Maybe > > we should punt and use heterogeneous inequality? That's over my head. > > > >> I'm not sure about propagation rules, with inequality you have to be > *very* careful! > >> > >> irreflexivity, x /~ x and symmetry x /~ y <=> y /~ x are clear. > >> > >> I assume that in your rules, variables are not type families, otherwise > >> > >> x /~ y => f x /~ f y doesn't hold if `f` isn't injective. (e.g. type > family F x where F x = ()) > >> other direction is true though. 
> > > > I was definitely imagining them as first-class types; your point that > > f x /~ f y => x /~ y even if f is a type family is an excellent one. > > > >> Also: > >> > >> f x ~ a -> b, is true with f ~ (->) a, x ~ b. > > > > Whoops! Yeah, I momentarily forgot that (->) is a constructor. Just > > leave out that bogus piece. > > > > Thanks, > > David Feuer > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rae at cs.brynmawr.edu Wed Dec 14 16:07:53 2016 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Wed, 14 Dec 2016 11:07:53 -0500 Subject: Explicit inequality evidence In-Reply-To: References: <9A96BA66-ECC1-4FD0-AF29-1836B10385C5@iki.fi> <7FB0558D-EF5C-4766-A21D-062E0438540C@cs.brynmawr.edu> Message-ID: <868FF82F-6EF3-4E59-9704-52661FEFD6FC@cs.brynmawr.edu> > On Dec 14, 2016, at 10:07 AM, Carter Schonwald wrote: > > Possibly naive question: do we have decidable inequality in a meta theoretical sense? I feel like we have definite equality and fuzzy might not always be equal but could be for polymorphic types. And that definite inequality on non polymorphic terms is a lot smaller than what folks likely want? Not sure what you mean here. FC/Core has a definitional equality which is decidable (and must be). And if definitional equality is decidable, it follows that definitional inequality is decidable. On the other hand, what we are talking about in this thread is *propositional* inequality -- that is, an inequality supported by a proof. Propositional equality must be a larger relation than definitional equality: this is what the Refl constructor, or in the Greek, shows. 
It then follows that propositional inequality must be smaller than definitional inequality. This is a Good Thing, because F Int and Bool are definitionally inequal, but we don't want them to be propositionally inequal. Propositional inequality is almost surely undecidable, because of looping type families (at least). But that's OK -- propositional equality is also undecidable, and that hasn't slowed us down. :) Richard From oleg.grenrus at iki.fi Thu Dec 15 06:30:02 2016 From: oleg.grenrus at iki.fi (Oleg Grenrus) Date: Thu, 15 Dec 2016 08:30:02 +0200 Subject: Explicit inequality evidence In-Reply-To: <7FB0558D-EF5C-4766-A21D-062E0438540C@cs.brynmawr.edu> References: <9A96BA66-ECC1-4FD0-AF29-1836B10385C5@iki.fi> <7FB0558D-EF5C-4766-A21D-062E0438540C@cs.brynmawr.edu> Message-ID: Out of curiosity: where's the current type safety proof, and is it mechanized? Oleg On 13.12.2016 17:01, Richard Eisenberg wrote: > I've thought about inequality on and off for years now, but it's a hard nut to crack. If we want this evidence to affect closed type family reduction, then we would need evidence of inequality in Core, and a brand-spanking-new type safety proof. I don't wish to discourage this inquiry, but I also don't think this battle will be won easily. > > Richard > >> On Dec 13, 2016, at 1:02 AM, David Feuer wrote: >> >> On Tue, Dec 13, 2016 at 12:49 AM, Oleg Grenrus wrote: >>> First the bike shedding: I’d prefer /~ and :/~:. >> Those are indeed better. >> >>> new Typeable [1] would actually provide heterogenous equality: >>> >>> eqTypeRep' :: forall k1 k2 (a :: k1) (b :: k2). >>> TypeRep a -> TypeRep b -> Maybe (a :~~: b) >>> >>> And this one is tricky, should it be: >>> >>> eqTypeRep' :: forall k1 k2 (a :: k1) (b :: k2). >>> TypeRep a -> TypeRep b -> >>> Either (Either (k1 :/~: k2) (a :/~: b)) (a :~~: b) >>> >>> i.e. how kind inequality would work? >> I don't know. 
It sounds like some details of how kinds are expressed >> in TypeRep might still be a bit uncertain, but I'm not tuned in. Maybe >> we should punt and use heterogeneous inequality? That's over my head. >> >>> I'm not sure about propagation rules, with inequality you have to be *very* careful! >>> >>> irreflexivity, x /~ x and symmetry x /~ y <=> y /~ x are clear. >>> >>> I assume that in your rules, variables are not type families, otherwise >>> >>> x /~ y => f x /~ f y doesn't hold if `f` isn't injective. (e.g. type family F x where F x = ()) >>> other direction is true though. >> I was definitely imagining them as first-class types; your point that >> f x /~ f y => x /~ y even if f is a type family is an excellent one. >> >>> Also: >>> >>> f x ~ a -> b, is true with f ~ (->) a, x ~ b. >> Whoops! Yeah, I momentarily forgot that (->) is a constructor. Just >> leave out that bogus piece. >> >> Thanks, >> David Feuer >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From alan.zimm at gmail.com Thu Dec 15 10:55:29 2016 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Thu, 15 Dec 2016 12:55:29 +0200 Subject: Fwd: [Ann] Haskell Ecosystem Proposals In-Reply-To: References: Message-ID: I am forwarding this mail to ghc-devs and cabal-devs in case anyone missed the original which went to haskell-cafe only. Alan ---------- Forwarded message ---------- From: Alan & Kim Zimmerman Date: Sun, Dec 11, 2016 at 9:39 PM Subject: [Ann] Haskell Ecosystem Proposals To: haskell Earlier this year Simon Peyton Jones wrote about respect [1], and said "It's worth separating two things 1. Publicly debating an issue where judgements differ 2. 
Using offensive or adversarial language in that debate" There is now a repository[2] for us as a community to have the first kind of discussion about issues that affect the community as a whole. The intention is that this becomes a neutral place where discussion can take place about coordinating the various services offered to the haskell community. This is partly to expose the thinking and constraints on a particular approach, so proponents of other approaches can have a better understanding of how things can evolve. The idea is that through an honest understanding of the various parts we can achieve consensus on how to improve things. If this all sounds a bit handwavy, the first concrete example of this approach is a pull request [3] discussing the management of implicit or speculative version bounds between cabal-install/hackage and stack/stackage. This has reached a point where there is a clearer understanding of the actual problem, and a viable solution must be agreed. The structure of the repository is shamelessly copied from the one for GHC proposals, so the actual process description is way off. It should probably just state that we discuss until consensus is reached if possible, but that we are always open for further discussion. It is up to all of us to make this work. Regards Alan [1] https://mail.haskell.org/pipermail/haskell/2016-September/024995.html [2] https://github.com/haskell/ecosystem-proposals [3] https://github.com/haskell/ecosystem-proposals/pull/1 -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Thu Dec 15 11:00:31 2016 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 15 Dec 2016 11:00:31 +0000 Subject: Join points! Message-ID: Everyone: please take a look. Luke Very good. · I think it’s fine to work from your repo; no need to use the main repo. · One big patch is fine. The exception is late lambda-lifting which would best be done separately. 
· Can you identify any bits that you are less happy with? · Before long, can you put up nofib figures? Make a Trac ticket for this too. On Phab, you can have dialogue about “what does this line of code mean”. On Trac you can have longer-term strategic concerns. There isn’t a clear boundary. But Trac persists and Phab really doesn’t. We should talk about your question about floating. Simon From: Luke Maurer [mailto:maurerl at cs.uoregon.edu] Sent: 15 December 2016 10:52 To: Simon Peyton Jones Subject: Phab diff up Okay, after some further cleanups, I've put up a Phabricator diff: https://phabricator.haskell.org/D2853 (Has some lint failures, but I figure better to put it up sooner … will fix after I get some sleep) Was going to push to a branch in the official GHC repo, too, but I don't think I have push access? Anyway, should I try and split it up into pieces? Hard to see how that would work, given how many interconnected pieces there are. I suppose if you apply the changes to Core Lint last, it might work … Also, the patch includes the stuff from the late lambda-lifting branch, which is perhaps more than we want to push at once! Certainly that much is splittable, if desired. I'm also just not as happy with that code. - Luke -------------- next part -------------- An HTML attachment was scrubbed... URL: From rae at cs.brynmawr.edu Thu Dec 15 15:06:32 2016 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Thu, 15 Dec 2016 10:06:32 -0500 Subject: Explicit inequality evidence In-Reply-To: References: <9A96BA66-ECC1-4FD0-AF29-1836B10385C5@iki.fi> <7FB0558D-EF5C-4766-A21D-062E0438540C@cs.brynmawr.edu> Message-ID: <0B8CCC17-96DF-4A56-ADD4-DC9EAD0D1D3D@cs.brynmawr.edu> Hi Oleg, I'm afraid to say that there is no one current type safety proof. Instead, there are lots of bits and pieces: - A system with roles (but no TypeInType or kind polymorphism) is proved in "Safe Zero-cost Coercions for Haskell" (JFP '16) [1]. 
- A system with TypeInType but no roles is proved in "System FC with Explicit Kind Equality (extended version)" (ICFP '13) [2]. This type safety proof is broken (see [3], section 5.10.5.2), but we have no counterexample to safety. - Closed type families are proved safe in "Closed Type Families with Overlapping Equations" (POPL '14) [4]. This system has no roles nor kind polymorphism. It also assumed that type family reductions terminate, explicitly leaving the challenge of proving safety with non-terminating type families as an open problem (see Section 6 of that paper). There may be a solution in work that has since been completed ("Non-ω-overlapping TRSs are UN" (LIPIcs '16) [5]), but I'm not aware of work that has adapted that solution to work with Haskell. - My thesis (Univ. of Pennsylvania '16) [3] has a proof of a version of Haskell with dependent types. Closed type families have been converted into type-level lambdas; the full proof does not consider the possibility of non-linear patterns in type families. A start toward such an approach is described (Section 5.13.2) but not fleshed out. Roles are not included. - A draft paper, never published, ("An overabundance of equality: Implementing kind equalities into Haskell" (2015) [6]) considers the possibility of combining roles with TypeInType. The proof is somewhat sparse, and it has not gotten the level of scrutiny in the other proofs. Furthermore, the way roles and TypeInType are integrated in GHC is different than what appears in this draft. - Forthcoming work, by Stephanie Weirich, Pedro Amorim, Antoine Voizard, and myself, contains a mechanized proof of safety of a dependently typed Haskell-like system, but with no roles, closed type families, or even datatypes. I do not believe there is a public link to this work; we expect to submit to ICFP. - There is a formally written, but unproved, description of what is implemented in GHC [7]. 
It is useful for understanding the GHC source code in relation to other published work. There is no proof whatsoever. This is a sorry state of affairs, I know. It remains my hope that we will have a formal, mechanized proof of this all Some Day, and progress is indeed slowly marching toward that goal. Richard [1]: http://cs.brynmawr.edu/~rae/papers/2016/coercible-jfp/coercible-jfp.pdf [2]: http://cs.brynmawr.edu/~rae/papers/2013/fckinds/fckinds-extended.pdf [3]: http://cs.brynmawr.edu/~rae/papers/2016/thesis/eisenberg-thesis.pdf [4]: http://cs.brynmawr.edu/~rae/papers/2014/axioms/axioms-extended.pdf [5]: http://kar.kent.ac.uk/55349/1/proc-kahrs.pdf [6]: http://cs.brynmawr.edu/~rae/papers/2015/equalities/equalities.pdf [7]: https://github.com/ghc/ghc/blob/master/docs/core-spec/core-spec.pdf > On Dec 15, 2016, at 1:30 AM, Oleg Grenrus wrote: > > Out of curiosity: where's the current type safety proof, and is it > mechanized? > > Oleg > > > On 13.12.2016 17:01, Richard Eisenberg wrote: >> I've thought about inequality on and off for years now, but it's a hard nut to crack. If we want this evidence to affect closed type family reduction, then we would need evidence of inequality in Core, and a brand-spanking-new type safety proof. I don't wish to discourage this inquiry, but I also don't think this battle will be won easily. >> >> Richard >> >>> On Dec 13, 2016, at 1:02 AM, David Feuer wrote: >>> >>> On Tue, Dec 13, 2016 at 12:49 AM, Oleg Grenrus wrote: >>>> First the bike shedding: I’d prefer /~ and :/~:. >>> Those are indeed better. >>> >>>> new Typeable [1] would actually provide heterogenous equality: >>>> >>>> eqTypeRep' :: forall k1 k2 (a :: k1) (b :: k2). >>>> TypeRep a -> TypeRep b -> Maybe (a :~~: b) >>>> >>>> And this one is tricky, should it be: >>>> >>>> eqTypeRep' :: forall k1 k2 (a :: k1) (b :: k2). >>>> TypeRep a -> TypeRep b -> >>>> Either (Either (k1 :/~: k2) (a :/~: b)) (a :~~: b) >>>> >>>> i.e. how kind inequality would work? >>> I don't know. 
It sounds like some details of how kinds are expressed >>> in TypeRep might still be a bit uncertain, but I'm not tuned in. Maybe >>> we should punt and use heterogeneous inequality? That's over my head. >>> >>>> I'm not sure about propagation rules, with inequality you have to be *very* careful! >>>> >>>> irreflexivity, x /~ x and symmetry x /~ y <=> y /~ x are clear. >>>> >>>> I assume that in your rules, variables are not type families, otherwise >>>> >>>> x /~ y => f x /~ f y doesn't hold if `f` isn't injective. (e.g. type family F x where F x = ()) >>>> other direction is true though. >>> I was definitely imagining them as first-class types; your point that >>> f x /~ f y => x /~ y even if f is a type family is an excellent one. >>> >>>> Also: >>>> >>>> f x ~ a -> b, is true with f ~ (->) a, x ~ b. >>> Whoops! Yeah, I momentarily forgot that (->) is a constructor. Just >>> leave out that bogus piece. >>> >>> Thanks, >>> David Feuer >>> _______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > > From marlowsd at gmail.com Thu Dec 15 15:24:50 2016 From: marlowsd at gmail.com (Simon Marlow) Date: Thu, 15 Dec 2016 15:24:50 +0000 Subject: Compile GHC with -prof to get a stack trace on panic Message-ID: I think this has been mentioned before but it's probably not widely known: if you compile GHC profiled (that is, enable GhcProfiled=YES in your mk/ build.mk), then every panic comes with a stack trace. Here's one I just saw: ghc-stage2: panic! 
(the 'impossible' happened) (GHC version 8.1.20161206 for x86_64-unknown-linux): Ix{Int}.index: Index (65536) out of range ((0,28)) CallStack (from -prof): HscTypes.bin_fixities (compiler/main/HscTypes.hs:1050:51-56) HscMain.checkOldIface (compiler/main/HscMain.hs:(586,20)-(587,60)) HscMain.hscIncrementalFrontend (compiler/main/HscMain.hs:(556,1)-(618,81)) HscMain.hscIncrementalCompile (compiler/main/HscMain.hs:(644,1)-(699,32)) GHC.withCleanupSession (compiler/main/GHC.hs:(464,1)-(473,27)) GHC.runGhc (compiler/main/GHC.hs:(439,1)-(444,26)) GHC.defaultErrorHandler (compiler/main/GHC.hs:(379,1)-(411,7)) To get more detail in the stack trace you need to add GhcStage2HcOpts += -fprof-auto-top Or -fprof-auto, depending on how much detail you want. Cheers Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at well-typed.com Thu Dec 15 18:11:59 2016 From: ben at well-typed.com (Ben Gamari) Date: Thu, 15 Dec 2016 13:11:59 -0500 Subject: FYI: base version bump landing soon Message-ID: <87eg19jc40.fsf@ben-laptop.smart-cactus.org> Hello fellow Haskellers, Sometime soon (likely today) I'll be landing a commit to `master` which will bump the version of the `base` library to 4.10.0.0 This will involve bumping a number of submodules as well. This will mean that testing against Hackage will typically require that you pass `--allow-newer=base` to cabal. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From ezyang at mit.edu Sun Dec 18 05:45:20 2016 From: ezyang at mit.edu (Edward Z. Yang) Date: Sat, 17 Dec 2016 21:45:20 -0800 Subject: Patch for time repository Message-ID: <1482039815-sup-5848@sabre> Hi all, I'd like to push the following patch (see bottom of email) to GHC's time repository, but I do not seem to have permissions. 
Upstream has already taken the fix but the version we currently have in the repo is quite a bit older than upstream. Can someone do it for me / give me bits? Thanks. Edward commit 44c23839f964592946c889626f8acbd1f4f72e55 Author: Edward Z. Yang Date: Sat Dec 17 20:05:11 2016 -0800 Remove useless internal library version bounds. These bounds never do anything, and in Cabal HEAD cause errors which cause GHC's build system to choke. Signed-off-by: Edward Z. Yang diff --git a/time.cabal b/time.cabal index 4a6eb02..28f2c21 100644 --- a/time.cabal +++ b/time.cabal @@ -100,7 +100,7 @@ test-suite ShowDefaultTZAbbreviations ghc-options: -Wall -fwarn-tabs build-depends: base, - time == 1.6.0.1 + time main-is: ShowDefaultTZAbbreviations.hs test-suite tests @@ -122,7 +122,7 @@ test-suite tests build-depends: base, deepseq, - time == 1.6.0.1, + time, QuickCheck >= 2.5.1, test-framework >= 0.8, test-framework-quickcheck2 >= 0.3, From ezyang at mit.edu Sun Dec 18 06:12:33 2016 From: ezyang at mit.edu (Edward Z. Yang) Date: Sat, 17 Dec 2016 22:12:33 -0800 Subject: haskell.org not sending intermediate certs Message-ID: <1482041114-sup-1001@sabre> See: https://www.sslshopper.com/ssl-checker.html#hostname=www.haskell.org This is causing curl to fail to download it: ezyang at sabre:~/Downloads$ curl https://www.haskell.org/cabal/release/cabal-install-1.24.0.0/cabal-install-1.24.0.0-x86_64-unknown-mingw32.zip curl: (60) server certificate verification failed. CAfile: /etc/ssl/certs/ca-certificates.crt CRLfile: none More details here: http://curl.haxx.se/docs/sslcerts.html curl performs SSL certificate verification by default, using a "bundle" of Certificate Authority (CA) public keys (CA certs). If the default bundle file isn't adequate, you can specify an alternate file using the --cacert option. 
If this HTTPS server uses a certificate signed by a CA represented in the bundle, the certificate verification probably failed due to a problem with the certificate (it might be expired, or the name might not match the domain name in the URL). If you'd like to turn off curl's verification of the certificate, use the -k (or --insecure) option. Apologies if this is the wrong list. Thanks, Edward From ezyang at mit.edu Sun Dec 18 08:13:43 2016 From: ezyang at mit.edu (Edward Z. Yang) Date: Sun, 18 Dec 2016 00:13:43 -0800 Subject: Patch for time repository In-Reply-To: <1482039815-sup-5848@sabre> References: <1482039815-sup-5848@sabre> Message-ID: <1482048804-sup-304@sabre> I resolved this by just bumping our submodule to latest HEAD in the repo (which was taken by upstream.) Edward Excerpts from Edward Z. Yang's message of 2016-12-17 21:45:20 -0800: > Hi all, > > I'd like to push the following patch (see bottom of email) > to GHC's time repository, but I do not seem to have permissions. > Upstream has already taken the fix but the version we currently > have in the repo is quite a bit older than upstream. > > Can someone do it for me / give me bits? Thanks. > > Edward > > commit 44c23839f964592946c889626f8acbd1f4f72e55 > Author: Edward Z. Yang > Date: Sat Dec 17 20:05:11 2016 -0800 > > Remove useless internal library version bounds. > > These bounds never do anything, and in Cabal HEAD cause > errors which cause GHC's build system to choke. > > Signed-off-by: Edward Z. 
Yang > > diff --git a/time.cabal b/time.cabal > index 4a6eb02..28f2c21 100644 > --- a/time.cabal > +++ b/time.cabal > @@ -100,7 +100,7 @@ test-suite ShowDefaultTZAbbreviations > ghc-options: -Wall -fwarn-tabs > build-depends: > base, > - time == 1.6.0.1 > + time > main-is: ShowDefaultTZAbbreviations.hs > > test-suite tests > @@ -122,7 +122,7 @@ test-suite tests > build-depends: > base, > deepseq, > - time == 1.6.0.1, > + time, > QuickCheck >= 2.5.1, > test-framework >= 0.8, > test-framework-quickcheck2 >= 0.3, From hesselink at gmail.com Sun Dec 18 19:12:23 2016 From: hesselink at gmail.com (Erik Hesselink) Date: Sun, 18 Dec 2016 20:12:23 +0100 Subject: haskell.org not sending intermediate certs In-Reply-To: <1482041114-sup-1001@sabre> References: <1482041114-sup-1001@sabre> Message-ID: I noticed this as well, since my work VPN does fairly strict certificate checking and didn't allow me to connect to any haskell.org urls due to this. I'm not sure about the right list, I've added admin at haskell.org to the CC list. Erik On 18 December 2016 at 07:12, Edward Z. Yang wrote: > See: https://www.sslshopper.com/ssl-checker.html#hostname=www.haskell.org > > This is causing curl to fail to download it: > > ezyang at sabre:~/Downloads$ curl https://www.haskell.org/cabal/ > release/cabal-install-1.24.0.0/cabal-install-1.24.0.0-x86_ > 64-unknown-mingw32.zip > curl: (60) server certificate verification failed. CAfile: > /etc/ssl/certs/ca-certificates.crt CRLfile: none > More details here: http://curl.haxx.se/docs/sslcerts.html > > curl performs SSL certificate verification by default, using a "bundle" > of Certificate Authority (CA) public keys (CA certs). If the default > bundle file isn't adequate, you can specify an alternate file > using the --cacert option. 
> If this HTTPS server uses a certificate signed by a CA represented in > the bundle, the certificate verification probably failed due to a > problem with the certificate (it might be expired, or the name might > not match the domain name in the URL). > If you'd like to turn off curl's verification of the certificate, use > the -k (or --insecure) option. > > Apologies if this is the wrong list. > > Thanks, > Edward > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From davean at xkcd.com Sun Dec 18 22:49:43 2016 From: davean at xkcd.com (davean) Date: Sun, 18 Dec 2016 17:49:43 -0500 Subject: haskell.org not sending intermediate certs In-Reply-To: References: <1482041114-sup-1001@sabre> Message-ID: admin at h.o is the correct list though I expect all of us are on ghc-devs at h.o also :) I at least read admin with a far higher priority though. We've gone and added the full chain for clients that don't self-acquire them and also tightened up the allowed cipher list. Please let us know if you encounter any further issues. -davean On Sun, Dec 18, 2016 at 2:12 PM, Erik Hesselink wrote: > I noticed this as well, since my work VPN does fairly strict certificate > checking and didn't allow me to connect to any haskell.org urls due to > this. > > I'm not sure about the right list, I've added admin at haskell.org to the CC > list. > > Erik > > On 18 December 2016 at 07:12, Edward Z. Yang wrote: > >> See: https://www.sslshopper.com/ssl-checker.html#hostname=www.haskell.org >> >> This is causing curl to fail to download it: >> >> ezyang at sabre:~/Downloads$ curl https://www.haskell.org/cabal/ >> release/cabal-install-1.24.0.0/cabal-install-1.24.0.0-x86_64 >> -unknown-mingw32.zip >> curl: (60) server certificate verification failed. 
CAfile: >> /etc/ssl/certs/ca-certificates.crt CRLfile: none >> More details here: http://curl.haxx.se/docs/sslcerts.html >> >> curl performs SSL certificate verification by default, using a "bundle" >> of Certificate Authority (CA) public keys (CA certs). If the default >> bundle file isn't adequate, you can specify an alternate file >> using the --cacert option. >> If this HTTPS server uses a certificate signed by a CA represented in >> the bundle, the certificate verification probably failed due to a >> problem with the certificate (it might be expired, or the name might >> not match the domain name in the URL). >> If you'd like to turn off curl's verification of the certificate, use >> the -k (or --insecure) option. >> >> Apologies if this is the wrong list. >> >> Thanks, >> Edward >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rae at cs.brynmawr.edu Mon Dec 19 03:08:27 2016 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Sun, 18 Dec 2016 22:08:27 -0500 Subject: Help needed: Restrictions of proc-notation with RebindableSyntax In-Reply-To: <20161217141928.vari6ewges5o6cmh@city.ac.uk> References: <84B44086-45A5-41D8-AAC9-DCB848C1CD39@cs.brynmawr.edu> <20161217141928.vari6ewges5o6cmh@city.ac.uk> Message-ID: <57004E1B-9121-43D8-B4B2-8FB45FF632A6@cs.brynmawr.edu> > On Dec 17, 2016, at 9:19 AM, Ross Paterson wrote: > > On Tue, Nov 29, 2016 at 12:41:53PM +0000, Simon Peyton Jones wrote: >> Type checking and desugaring for arrow syntax has received Absolutely >> No Love for several years. I do not understand how it works very well, >> and I would not be at all surprised if it is broken in corner cases. 
>> >> It really needs someone to look at it carefully, document it better, and >> perhaps refactor it – esp by using a different data type rather than >> piggy-backing on HsExpr. > > HsCmd was split from HsExpr in 2012. It still re-uses MatchGroup, Stmt, > etc, though. > > The desugaring is made more complicated by doing a lot of analysis that > might be better done in the renamer. And -- unrelated to the original post in this thread -- these complications in desugarer (specifically, the use of fixM) are making my incoming levity-polymorphism update much harder (see https://phabricator.haskell.org/D2852). Even if you don't have time to make the edits yourself, if you could give a 10,000 ft view as to how to remove fixM from the desugarer, I'd be very grateful. I've not looked deeply at this, mostly because it's hard for me to make anything but very local changes to code I don't understand. Thanks! Richard From ezyang at mit.edu Mon Dec 19 04:04:51 2016 From: ezyang at mit.edu (Edward Z. Yang) Date: Sun, 18 Dec 2016 20:04:51 -0800 Subject: haskell.org not sending intermediate certs In-Reply-To: References: <1482041114-sup-1001@sabre> Message-ID: <1482120218-sup-660@sabre> curl is working now, and the SSL checker is all green. Thanks! Edward Excerpts from davean's message of 2016-12-18 17:49:43 -0500: > admin at h.o is the correct list though I expect all of us are on ghc-devs at h.o > also :) > I at least read admin with a far higher priority though. > > We've gone and added the full chain for clients that don't self-acquire > them and also tightened up the allowed cipher list. > Please let us know if you encounter any further issues. > > -davean > > On Sun, Dec 18, 2016 at 2:12 PM, Erik Hesselink wrote: > > > I noticed this as well, since my work VPN does fairly strict certificate > > checking and didn't allow me to connect to any haskell.org urls due to > > this. > > > > I'm not sure about the right list, I've added admin at haskell.org to the CC > > list. 
> > > > Erik > > > > On 18 December 2016 at 07:12, Edward Z. Yang wrote: > > > >> See: https://www.sslshopper.com/ssl-checker.html#hostname=www.haskell.org > >> > >> This is causing curl to fail to download it: > >> > >> ezyang at sabre:~/Downloads$ curl https://www.haskell.org/cabal/ > >> release/cabal-install-1.24.0.0/cabal-install-1.24.0.0-x86_64 > >> -unknown-mingw32.zip > >> curl: (60) server certificate verification failed. CAfile: > >> /etc/ssl/certs/ca-certificates.crt CRLfile: none > >> More details here: http://curl.haxx.se/docs/sslcerts.html > >> > >> curl performs SSL certificate verification by default, using a "bundle" > >> of Certificate Authority (CA) public keys (CA certs). If the default > >> bundle file isn't adequate, you can specify an alternate file > >> using the --cacert option. > >> If this HTTPS server uses a certificate signed by a CA represented in > >> the bundle, the certificate verification probably failed due to a > >> problem with the certificate (it might be expired, or the name might > >> not match the domain name in the URL). > >> If you'd like to turn off curl's verification of the certificate, use > >> the -k (or --insecure) option. > >> > >> Apologies if this is the wrong list. > >> > >> Thanks, > >> Edward > >> _______________________________________________ > >> ghc-devs mailing list > >> ghc-devs at haskell.org > >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > >> > > > > From asr at eafit.edu.co Mon Dec 19 22:21:28 2016 From: asr at eafit.edu.co (=?UTF-8?B?QW5kcsOpcyBTaWNhcmQtUmFtw61yZXo=?=) Date: Mon, 19 Dec 2016 17:21:28 -0500 Subject: Will directory 1.3.0.0 be shipped with GHC 8.0.2? Message-ID: Dear all, I got directory 1.3.0.0 after installing GHC 8.0.2 RC2 (which hasn't been announced) using: $ git clone http://git.haskell.org/ghc.git $ cd ghc $ git checkout ghc-8.0.2-rc $ git submodule update --init $ perl boot $ ./configure $ make $ make install Will GHC 8.0.2 do a major version bump of directory? 
Best, -- Andrés La información contenida en este correo electrónico está dirigida únicamente a su destinatario y puede contener información confidencial, material privilegiado o información protegida por derecho de autor. Está prohibida cualquier copia, utilización, indebida retención, modificación, difusión, distribución o reproducción total o parcial. Si usted recibe este mensaje por error, por favor contacte al remitente y elimínelo. La información aquí contenida es responsabilidad exclusiva de su remitente por lo tanto la Universidad EAFIT no se hace responsable de lo que el mensaje contenga. The information contained in this email is addressed to its recipient only and may contain confidential information, privileged material or information protected by copyright. Its prohibited any copy, use, improper retention, modification, dissemination, distribution or total or partial reproduction. If you receive this message by error, please contact the sender and delete it. The information contained herein is the sole responsibility of the sender therefore Universidad EAFIT is not responsible for what the message contains. From ben at well-typed.com Tue Dec 20 00:32:18 2016 From: ben at well-typed.com (Ben Gamari) Date: Mon, 19 Dec 2016 19:32:18 -0500 Subject: Will directory 1.3.0.0 be shipped with GHC 8.0.2? In-Reply-To: References: Message-ID: <8737hjzbhp.fsf@ben-laptop.smart-cactus.org> Andrés Sicard-Ramírez writes: > Dear all, > > I got directory 1.3.0.0 after installing GHC 8.0.2 RC2 (which hasn't > been announced) using: > The source release went out last week. I'll likely announce the availability of builds tomorrow or Wednesday. Indeed GHC 8.0.2 will ship with directory 1.3; this is unfortunate, but we concluded that this would be the only sensible option considering there is a rather subtle change in semantics in this release (namely fixing directory #63). Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From ben at smart-cactus.org Tue Dec 20 00:54:36 2016 From: ben at smart-cactus.org (Ben Gamari) Date: Mon, 19 Dec 2016 19:54:36 -0500 Subject: Reading source annotations during type checking In-Reply-To: References: Message-ID: <87zijrxvw3.fsf@ben-laptop.smart-cactus.org> Alan, did you see this? Alejandro Serrano Mena writes: > Dear GHC devs, > Is there a way to retrieve "source annotations" (as defined by > https://downloads.haskell.org/~ghc/latest/docs/html/users_guide/extending_ghc.html#source-annotations) > during type checking. In particular, I am interested in reading them in > TcExpr and TcCanonical. > > Regards, > Alejandro > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From alan.zimm at gmail.com Tue Dec 20 07:47:09 2016 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Tue, 20 Dec 2016 09:47:09 +0200 Subject: Reading source annotations during type checking In-Reply-To: <87zijrxvw3.fsf@ben-laptop.smart-cactus.org> References: <87zijrxvw3.fsf@ben-laptop.smart-cactus.org> Message-ID: I did, and thought I saw a reply. They are captured in the AST. data AnnDecl name = HsAnnotation SourceText -- Note [Pragma source text] in BasicTypes (AnnProvenance name) (Located (HsExpr name)) Alan On Tue, Dec 20, 2016 at 2:54 AM, Ben Gamari wrote: > Alan, did you see this? > > Alejandro Serrano Mena writes: > > > Dear GHC devs, > > Is there a way to retrieve "source annotations" (as defined by > > https://downloads.haskell.org/~ghc/latest/docs/html/users_ > guide/extending_ghc.html#source-annotations) > > during type checking. 
In particular, I am interested in reading them in > > TcExpr and TcCanonical. > > > > Regards, > > Alejandro > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From trupill at gmail.com Tue Dec 20 09:58:54 2016 From: trupill at gmail.com (Alejandro Serrano Mena) Date: Tue, 20 Dec 2016 10:58:54 +0100 Subject: Reading source annotations during type checking In-Reply-To: References: <87zijrxvw3.fsf@ben-laptop.smart-cactus.org> Message-ID: Thanks very much! New things to try during Christmas :) Alejandro 2016-12-20 8:47 GMT+01:00 Alan & Kim Zimmerman : > I did, and thought I saw a reply. > > They are captured in the AST. > > data AnnDecl name = HsAnnotation > SourceText -- Note [Pragma source text] in BasicTypes > (AnnProvenance name) (Located (HsExpr name)) > > Alan > > On Tue, Dec 20, 2016 at 2:54 AM, Ben Gamari wrote: > >> Alan, did you see this? >> >> Alejandro Serrano Mena writes: >> >> > Dear GHC devs, >> > Is there a way to retrieve "source annotations" (as defined by >> > https://downloads.haskell.org/~ghc/latest/docs/html/users_gu >> ide/extending_ghc.html#source-annotations) >> > during type checking. In particular, I am interested in reading them in >> > TcExpr and TcCanonical. >> > >> > Regards, >> > Alejandro >> > _______________________________________________ >> > ghc-devs mailing list >> > ghc-devs at haskell.org >> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From ekmett at gmail.com Wed Dec 21 05:15:03 2016
From: ekmett at gmail.com (Edward Kmett)
Date: Wed, 21 Dec 2016 00:15:03 -0500
Subject: Help needed: Restrictions of proc-notation with RebindableSyntax
In-Reply-To: References: <84B44086-45A5-41D8-AAC9-DCB848C1CD39@cs.brynmawr.edu>
Message-ID:

Arrows haven't seen much love for a while. In part this is because many of the original applications for arrows have been shown to be perfectly suited to being handled by Applicatives, e.g. the Swierstra/Duponcheel parser that sort of kickstarted everything.

There are several options for improved arrow desugaring.

Megacz's work on GArrows at first feels like it should be applicable here, as it lets you change out the choice of pseudo-product while preserving the general arrow feel. Unfortunately, the GArrow class isn't sufficient for most arrow desugaring, due to the fact that the arrow desugaring inherently involves breaking apart patterns for almost any non-trivial use, and nothing really requires the GArrow 'product' to actually even be product like.

Cale Gibbard and Ryan Trinkle on the other hand like to use a more CCC-like basis for arrows. This stays in the spirit of the GArrow class, but you still have the problems around pattern matching. I don't think they actually wrote anything to deal with the actual arrow notation and just programmed in the alternate style to get better introspection on the operations involved. I think the key insight there is that much of the notation can be made to work with weaker categorical structures than full arrows, but the existing class hierarchy around arrows is very coarse.

As a minor data point, both of these sorts of encodings of arrow problems start to drag in language extensions that make the notation harder to standardize. Currently they work with bog-standard Haskell 98/2010.

If you're looking for an interesting theoretical direction to extend Arrow notation: an arrow is a strong monad in the category of profunctors [1]. Using the profunctors library [2], (Strong p, Category p) is equivalent in power to Arrow p. Exploiting that, a profunctor-based desugaring could get away with much weaker constraints than Arrow depending on how much of proc notation you use.

Alternately, a separate class hierarchy that only required covariance in the second argument is an option, but my vague recollection from the last time that I looked into this is that while such a desugaring only uses covariance in the second argument of the profunctor, you can prove that contravariance in the first argument follows from the pile of laws. This subject came up the last time someone thought to extend the Arrow desugaring. You can probably find a thread on the mailing list from Ross Paterson a few years ago. This version has the benefit of fitting pretty close to the existing arrow desugaring and not needing new language extensions.

On the other hand, refactoring the Arrow class in this (or any other) way is somewhat of an invasive exercise. The profunctors package offers moral equivalents to most of the Arrow subclasses, but no effort has been made to match the existing Arrow hierarchy. Given that little new code seems to be being written with Arrows in mind, while some older code makes heavy use of it (hxt, etc.), refactoring the arrow hierarchy is kind of a hard sell. It is by no means impossible, just something that would require a fair bit of community wrangling and a lot of work showing clear advantages to a new status quo at a time when it's very hard to get anybody to care about arrow notation at all.

-Edward

[1] http://www-kb.is.s.u-tokyo.ac.jp/~asada/papers/arrStrMnd.pdf
[2] http://hackage.haskell.org/package/profunctors-5.2/docs/Data-Profunctor-Strong.html

On Fri, Dec 2, 2016 at 10:57 AM, Jan Bracker via ghc-devs <ghc-devs at haskell.org> wrote:

> Simon, Richard,
>
> thank you for your answer!
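[Archive note] Kmett's point that (Strong p, Category p) matches the power of Arrow p can be made concrete. A minimal, self-contained sketch, with the two classes inlined as stand-ins for the real ones in Data.Profunctor / Data.Profunctor.Strong:

```haskell
import Control.Category (Category (..))
import Prelude hiding (id, (.))

-- Inlined stand-ins for the profunctors package's classes.
class Profunctor p where
  dimap :: (a -> b) -> (c -> d) -> p b c -> p a d

class Profunctor p => Strong p where
  first' :: p a b -> p (a, c) (b, c)

-- With Category on top, Arrow's two primitives fall out:
-- 'arr' is dimap applied to the identity morphism...
arrOf :: (Category p, Profunctor p) => (a -> b) -> p a b
arrOf f = dimap f id id

-- ...and 'first' is Strong's first' verbatim.
firstOf :: Strong p => p a b -> p (a, c) (b, c)
firstOf = first'

-- Plain functions form the simplest instance.
instance Profunctor (->) where
  dimap f g h = g . h . f

instance Strong (->) where
  first' f (a, c) = (f a, c)

main :: IO ()
main = do
  print (arrOf (+ 1) (2 :: Int))           -- 3
  print (firstOf show (42 :: Int, "kept")) -- ("42","kept")
```

Going the other direction, every Arrow already has a Category superclass in base, and first' is just first, which is what makes the two vocabularies interchangeable.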
I don't have time to look into the GHC sources > right now, but I will set aside some time after the holidays and take a > close look at what the exact restrictions on proc-notation are and document > them. > > Since you suggested a rewrite of GHC's handling of proc-syntax, are there > any opinions on integrating generalized arrows (Joseph 2014) in the > process? I think they would greatly improve arrows! I don't know if I have > the time to attempt this, but if I find the time I would give it a try. Why > wasn't this integrated while it was still actively developed? > > Best, > Jan > > [Joseph 2014] https://www2.eecs.berkeley.edu/Pubs/TechRpts/2014/ > EECS-2014-130.pdf > > > > 2016-11-29 12:41 GMT+00:00 Simon Peyton Jones : > >> Jan, >> >> >> >> Type checking and desugaring for arrow syntax has received Absolutely No >> Love for several years. I do not understand how it works very well, and I >> would not be at all surprised if it is broken in corner cases. >> >> >> >> It really needs someone to look at it carefully, document it better, and >> perhaps refactor it – esp by using a different data type rather than >> piggy-backing on HsExpr. >> >> >> >> In the light of that understanding, I think rebindable syntax will be >> easier. >> >> >> >> I don’t know if you are up for that, but it’s a rather un-tended part of >> GHC. >> >> >> >> Thanks >> >> >> >> Simon >> >> >> >> *From:* ghc-devs [mailto:ghc-devs-bounces at haskell.org] *On Behalf Of *Richard >> Eisenberg >> *Sent:* 28 November 2016 22:30 >> *To:* Jan Bracker >> *Cc:* ghc-devs at haskell.org >> *Subject:* Help needed: Restrictions of proc-notation with >> RebindableSyntax >> >> >> >> Jan’s question is a good one, but I don’t know enough about procs to be >> able to answer. I do know that the answer can be found by looking for uses >> of `tcSyntaxOp` in the TcArrows module.... but I just can’t translate it >> all to source Haskell, having roughly 0 understanding of this end of the >> language. 
>> >> >> >> Can anyone else help Jan here? >> >> >> >> Richard >> >> >> >> On Nov 23, 2016, at 4:34 AM, Jan Bracker via ghc-devs < >> ghc-devs at haskell.org> wrote: >> >> >> >> Hello, >> >> >> >> I want to use the proc-notation together with RebindableSyntax. So far >> what I am trying to do is working fine, but I would like to know what the >> exact restrictions on the supplied functions are. I am introducing >> additional indices and constraints on the operations. The documentation [1] >> says the details are in flux and that I should ask directly. >> >> >> >> Best, >> >> Jan >> >> >> >> [1] https://downloads.haskell.org/~ghc/latest/docs/html/user >> s_guide/glasgow_exts.html#rebindable-syntax-and-the-implicit >> -prelude-import >> >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> >> >> > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mle+hs at mega-nerd.com Wed Dec 21 06:33:36 2016 From: mle+hs at mega-nerd.com (Erik de Castro Lopo) Date: Wed, 21 Dec 2016 17:33:36 +1100 Subject: Confused about the sub-modules Message-ID: <20161221173336.bfa2a14da3410d072751ff67@mega-nerd.com> Hi all, I'm a bit confused about how the GHC dev tree handles submodules like libraries/Cabal, libraries/process, libraries/directory and libraries/containers. All of these libraries/submodules seem to have their own github projects where people can submit PRs, but once the commits have been made there, what is the process to get submodules updated in the GHC tree? Any light people can shed on this process would be appreciated. 
Erik -- ---------------------------------------------------------------------- Erik de Castro Lopo http://www.mega-nerd.com/ From monkleyon at googlemail.com Wed Dec 21 06:43:02 2016 From: monkleyon at googlemail.com (MarLinn) Date: Wed, 21 Dec 2016 07:43:02 +0100 Subject: Help needed: Restrictions of proc-notation with RebindableSyntax In-Reply-To: References: <84B44086-45A5-41D8-AAC9-DCB848C1CD39@cs.brynmawr.edu> Message-ID: <808e9d01-6eb1-f02d-ffff-b18fec8bffd5@gmail.com> Sorry to barge into the discussion with neither much knowledge of the theory nor the implementation. I tried to look at both, but my understanding is severely lacking. However I do feel a tiny bit emboldened because my own findings turned out to at least have the same shadow as the contents of this more thorough overview. The one part of the existing story I personally found the most promising was to explore the category hierarchy around Arrows, in other words the Gibbard/Trinkle perspective. Therefore I want to elaborate my own naive findings a tiny bit. Bear in mind that much of this is gleaned from experimental implementations or interpreted, but I do not have proofs, or even theory. Almost all parts necessary for an Arrow seem to already be contained in a symmetrical braided category. Fascinatingly, even the braiding might be superfluous in some cases, leaving only the need for a monoidal category. But to get from a braided category to a full Arrow, there seems to be a need for "constructors" like (arr $ \x -> (x,x)) and "destructors" like (arr fst). There seem to be several options for those, and a choice would have to be made. Notably: is introduction done by duplicating existing values, or by introducing new "unit" values (for a suitable definition of "unit")? That choice doesn't seem impactful, but my gut feeling is that that's just because I cannot see the potential points of impact. 
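[Archive note] The "constructor"/"destructor" arrows MarLinn mentions, written out as a runnable sketch (the names dup and dropSnd are illustrative, not from the thread):

```haskell
import Control.Arrow (Arrow, arr, (>>>))

-- The "constructor": duplicate a value into the pseudo-product.
dup :: Arrow a => a b (b, b)
dup = arr (\x -> (x, x))

-- A "destructor": forget one component of the pseudo-product again.
dropSnd :: Arrow a => a (b, c) b
dropSnd = arr fst

-- Combining them: square a number by pairing it with itself.
square :: Arrow a => a Int Int
square = dup >>> arr (uncurry (*))

main :: IO ()
main = print (square (7 :: Int))  -- plain functions are Arrows: prints 49
```

In a weaker, merely monoidal setting neither dup nor dropSnd is available for free, which is exactly the choice point the message describes.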
What makes this story worse is that the currently known hierarchies around ArrowChoice and ArrowLoop seem to be coarser still – although the work around profunctors might help. That said, my understanding is so bad that I can not even see any benefits or drawbacks of the structure of ArrowLoop's "loop" versus a more "standard" fix-point structure. I do, however, think there is something to be gained. The good old Rosetta Stone paper still makes me think that what is now Arrow notation might be turned into a much more potent tool – exactly because we might be able to lift those restrictions. One particular idea I have in mind: If the notation can support purely braided categories, it might be used to describe reversible computation, which in turn is used in describing quantum computation. The frustrating part for me is that I would like to contribute to this effort. But again, my understanding of each and every component is fleeting at best. MarLinn On 2016-12-21 06:15, Edward Kmett wrote: > Arrows haven't seen much love for a while. In part this is because > many of the original applications for arrows have been shown to be > perfectly suited to being handled by Applicatives. e.g. the > Swiestra/Duponcheel parser that sort of kickstarted everything. > > There are several options for improved arrow desugaring. > > Megacz's work on GArrows at first feels like it should be applicable > here, as it lets you change out the choice of pseudo-product while > preserving the general arrow feel. Unfortunately, the GArrow class > isn't sufficient for most arrow desguaring, due to the fact that the > arrow desugaring inherently involves breaking apart patterns for > almost any non-trivial use and nothing really requires the GArrow > 'product' to actually even be product like. > > Cale Gibbard and Ryan Trinkle on the other hand like to use a more > CCC-like basis for arrows. This stays in the spirit to the GArrow > class, but you still have the problems around pattern matching. 
I > don't think they actually wrote anything to deal with the actual arrow > notation and just programmed in the alternate style to get better > introspection on the operations involved. I think the key insight > there is that much of the notation can be made to work with weaker > categorical structures than full arrows, but the existing class > hierarchy around arrows is very coarse. > > As a minor data point both of these sorts of encodings of arrow > problems start to drag in language extensions that make the notation > harder to standardize. Currently they work with bog standard Haskell > 98/2010. > > If you're looking for an interesting theoretical direction to extend > Arrow notation: > > An arrow is a strong monad in the category of profunctors [1]. > > Using the profunctors library [2] (Strong p, Category p) is equivalent > in power to Arrow p. > > Exploiting that, a profunctor-based desugaring could get away with > much weaker constraints than Arrow depending on how much of proc > notation you use. > > Alternately a separate class hierarchy that only required covariance > in the second argument is an option, but my vague recollection from > the last time that I looked into this is that while such a desguaring > only uses covariance in the second argument of the profunctor, you can > prove that contravariance in the first argument follows from the pile > of laws. This subject came up the last time someone thought to extend > the Arrow desguaring. You can probably find a thread on the mailing > list from Ross Paterson a few years ago. > > This version has the benefit of fitting pretty close to the existing > arrow desugaring and not needing new language extensions. > > On the other hand, refactoring the Arrow class in this (or any other) > way is somewhat of an invasive exercise. The profunctors package > offers moral equivalents to most of the Arrow subclasses, but no > effort has been made to match the existing Arrow hierarchy. 
> > Given that little new code seems to be being written with Arrows in > mind, while some older code makes heavy use of it (hxt, etc.), > refactoring the arrow hierarchy is kind of a hard sell. It is by no > means impossible, just something that would require a fair bit of > community wrangling and a lot of work showing clear advantages to a > new status quo at a time when its very hard to get anybody to care > about arrow notation at all. > > -Edward > > [1] http://www-kb.is.s.u-tokyo.ac.jp/~asada/papers/arrStrMnd.pdf > > [2] > http://hackage.haskell.org/package/profunctors-5.2/docs/Data-Profunctor-Strong.html > > On Fri, Dec 2, 2016 at 10:57 AM, Jan Bracker via ghc-devs > > wrote: > > Simon, Richard, > > thank you for your answer! I don't have time to look into the GHC > sources right now, but I will set aside some time after the > holidays and take a close look at what the exact restrictions on > proc-notation are and document them. > > Since you suggested a rewrite of GHC's handling of proc-syntax, > are there any opinions on integrating generalized arrows (Joseph > 2014) in the process? I think they would greatly improve arrows! I > don't know if I have the time to attempt this, but if I find the > time I would give it a try. Why wasn't this integrated while it > was still actively developed? > > Best, > Jan > > [Joseph 2014] > https://www2.eecs.berkeley.edu/Pubs/TechRpts/2014/EECS-2014-130.pdf > > > > > 2016-11-29 12:41 GMT+00:00 Simon Peyton Jones > >: > > Jan, > > Type checking and desugaring for arrow syntax has received > Absolutely No Love for several years. I do not understand how > it works very well, and I would not be at all surprised if it > is broken in corner cases. > > It really needs someone to look at it carefully, document it > better, and perhaps refactor it – esp by using a different > data type rather than piggy-backing on HsExpr. > > In the light of that understanding, I think rebindable syntax > will be easier. 
> > I don’t know if you are up for that, but it’s a rather > un-tended part of GHC. > > Thanks > > Simon > > *From:*ghc-devs [mailto:ghc-devs-bounces at haskell.org > ] *On Behalf Of *Richard > Eisenberg > *Sent:* 28 November 2016 22:30 > *To:* Jan Bracker > > *Cc:* ghc-devs at haskell.org > *Subject:* Help needed: Restrictions of proc-notation with > RebindableSyntax > > Jan’s question is a good one, but I don’t know enough about > procs to be able to answer. I do know that the answer can be > found by looking for uses of `tcSyntaxOp` in the TcArrows > module.... but I just can’t translate it all to source > Haskell, having roughly 0 understanding of this end of the > language. > > Can anyone else help Jan here? > > Richard > > On Nov 23, 2016, at 4:34 AM, Jan Bracker via ghc-devs > > wrote: > > Hello, > > I want to use the proc-notation together with > RebindableSyntax. So far what I am trying to do is working > fine, but I would like to know what the exact restrictions > on the supplied functions are. I am introducing additional > indices and constraints on the operations. The > documentation [1] says the details are in flux and that I > should ask directly. > > Best, > > Jan > > [1] > https://downloads.haskell.org/~ghc/latest/docs/html/users_guide/glasgow_exts.html#rebindable-syntax-and-the-implicit-prelude-import > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ezyang at mit.edu Wed Dec 21 06:48:46 2016 From: ezyang at mit.edu (Edward Z. Yang) Date: Wed, 21 Dec 2016 01:48:46 -0500 Subject: Confused about the sub-modules In-Reply-To: <20161221173336.bfa2a14da3410d072751ff67@mega-nerd.com> References: <20161221173336.bfa2a14da3410d072751ff67@mega-nerd.com> Message-ID: <1482302869-sup-287@sabre> Once the commit is upstream, I just checkout a newer commit from master and then commit it as a submodule update. Maybe it's wrong but no one has ever told me otherwise. Around release time the release manager makes sure all the libraries correspond to actual releases. Edward Excerpts from Erik de Castro Lopo's message of 2016-12-21 17:33:36 +1100: > Hi all, > > I'm a bit confused about how the GHC dev tree handles submodules like > libraries/Cabal, libraries/process, libraries/directory and > libraries/containers. > > All of these libraries/submodules seem to have their own github projects > where people can submit PRs, but once the commits have been made there, > what is the process to get submodules updated in the GHC tree? > > Any light people can shed on this process would be appreciated. > > Erik From alan.zimm at gmail.com Wed Dec 21 07:20:15 2016 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Wed, 21 Dec 2016 09:20:15 +0200 Subject: Confused about the sub-modules In-Reply-To: <1482302869-sup-287@sabre> References: <20161221173336.bfa2a14da3410d072751ff67@mega-nerd.com> <1482302869-sup-287@sabre> Message-ID: For the utils/haddock submodule there is a ghc-head branch, and the commit should be on that before pushing to GHC master with a submodule update. I do not know if that convention is followed on any of the other libraries. Alan On Wed, Dec 21, 2016 at 8:48 AM, Edward Z. Yang wrote: > Once the commit is upstream, I just checkout a newer commit from > master and then commit it as a submodule update. Maybe it's > wrong but no one has ever told me otherwise. 
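[Archive note] The checkout-and-commit submodule bump Edward describes can be rehearsed outside the real tree. A self-contained sketch using two throwaway repos (all repo and path names are illustrative, not the actual GHC layout):

```shell
set -e
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.org
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.org
tmp=$(mktemp -d) && cd "$tmp"

# An "upstream" library repo and a "ghc" superproject tracking it.
git init -q upstream
git -C upstream commit -q --allow-empty -m "lib v1"
git init -q ghc
git -C ghc commit -q --allow-empty -m "initial"
git -C ghc -c protocol.file.allow=always submodule add -q "$tmp/upstream" libraries/lib
git -C ghc commit -q -m "add libraries/lib submodule"

# A new commit lands upstream (e.g. via a merged GitHub PR)...
git -C upstream commit -q --allow-empty -m "lib v2"

# ...then, in the superproject: check out the newer commit inside the
# submodule and record the new gitlink pointer with an ordinary commit.
git -C ghc/libraries/lib fetch -q origin HEAD
git -C ghc/libraries/lib checkout -q FETCH_HEAD
git -C ghc add libraries/lib
git -C ghc commit -q -m "Bump libraries/lib submodule"
git -C ghc log -1 --format=%s
```

The point of the later replies is the one extra constraint: the commit the gitlink points at must exist in the remote repository the submodule references, which is what the lint check enforces.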
Around release > time the release manager makes sure all the libraries correspond to > actual releases. > > Edward > > Excerpts from Erik de Castro Lopo's message of 2016-12-21 17:33:36 +1100: > > Hi all, > > > > I'm a bit confused about how the GHC dev tree handles submodules like > > libraries/Cabal, libraries/process, libraries/directory and > > libraries/containers. > > > > All of these libraries/submodules seem to have their own github projects > > where people can submit PRs, but once the commits have been made there, > > what is the process to get submodules updated in the GHC tree? > > > > Any light people can shed on this process would be appreciated. > > > > Erik > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ezyang at mit.edu Wed Dec 21 07:26:09 2016 From: ezyang at mit.edu (Edward Z. Yang) Date: Wed, 21 Dec 2016 02:26:09 -0500 Subject: Confused about the sub-modules In-Reply-To: References: <20161221173336.bfa2a14da3410d072751ff67@mega-nerd.com> <1482302869-sup-287@sabre> Message-ID: <1482305154-sup-3861@sabre> Not any more. The commit just has to exist in the remote repo (that's what the lint checks.) Excerpts from Alan & Kim Zimmerman's message of 2016-12-21 09:20:15 +0200: > For the utils/haddock submodule there is a ghc-head branch, and the commit > should be on that before pushing to GHC master with a submodule update. > > I do not know if that convention is followed on any of the other libraries. > > Alan > > On Wed, Dec 21, 2016 at 8:48 AM, Edward Z. Yang wrote: > > > Once the commit is upstream, I just checkout a newer commit from > > master and then commit it as a submodule update. Maybe it's > > wrong but no one has ever told me otherwise. Around release > > time the release manager makes sure all the libraries correspond to > > actual releases. 
> > > > Edward > > > > Excerpts from Erik de Castro Lopo's message of 2016-12-21 17:33:36 +1100: > > > Hi all, > > > > > > I'm a bit confused about how the GHC dev tree handles submodules like > > > libraries/Cabal, libraries/process, libraries/directory and > > > libraries/containers. > > > > > > All of these libraries/submodules seem to have their own github projects > > > where people can submit PRs, but once the commits have been made there, > > > what is the process to get submodules updated in the GHC tree? > > > > > > Any light people can shed on this process would be appreciated. > > > > > > Erik > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > From simonpj at microsoft.com Wed Dec 21 08:30:19 2016 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 21 Dec 2016 08:30:19 +0000 Subject: Confused about the sub-modules In-Reply-To: <1482302869-sup-287@sabre> References: <20161221173336.bfa2a14da3410d072751ff67@mega-nerd.com> <1482302869-sup-287@sabre> Message-ID: Info here. I hope it is up to date https://ghc.haskell.org/trac/ghc/wiki/Repositories If it's out of date, please fix! Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Edward Z. | Yang | Sent: 21 December 2016 06:49 | To: Erik de Castro Lopo | Cc: ghc-devs | Subject: Re: Confused about the sub-modules | | Once the commit is upstream, I just checkout a newer commit from master and | then commit it as a submodule update. Maybe it's wrong but no one has ever | told me otherwise. Around release time the release manager makes sure all | the libraries correspond to actual releases. 
|
| Edward
|
| Excerpts from Erik de Castro Lopo's message of 2016-12-21 17:33:36 +1100:
| > Hi all,
| >
| > I'm a bit confused about how the GHC dev tree handles submodules like
| > libraries/Cabal, libraries/process, libraries/directory and
| > libraries/containers.
| >
| > All of these libraries/submodules seem to have their own github
| > projects where people can submit PRs, but once the commits have been
| > made there, what is the process to get submodules updated in the GHC tree?
| >
| > Any light people can shed on this process would be appreciated.
| >
| > Erik
| _______________________________________________
| ghc-devs mailing list
| ghc-devs at haskell.org
| http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs

From simonpj at microsoft.com Wed Dec 21 08:36:48 2016
From: simonpj at microsoft.com (Simon Peyton Jones)
Date: Wed, 21 Dec 2016 08:36:48 +0000
Subject: Help needed: Restrictions of proc-notation with RebindableSyntax
In-Reply-To: <808e9d01-6eb1-f02d-ffff-b18fec8bffd5@gmail.com>
References: <84B44086-45A5-41D8-AAC9-DCB848C1CD39@cs.brynmawr.edu> <808e9d01-6eb1-f02d-ffff-b18fec8bffd5@gmail.com>
Message-ID:

The frustrating part for me is that I would like to contribute to this effort. But again, my understanding of each and every component is fleeting at best.

Don’t be discouraged – you can learn! And you would not be displacing anyone… as I say, the entire arrows story in GHC lacks leadership and vision. I even wonder (whisper it) about taking it out altogether, when Edward says “many of the original applications for arrows have been shown to be perfectly suited to being handled by Applicatives” (i.e. with no extensions except ApplicativeDo).
But I have no data on whether anyone (at all) is using arrow notation these days, and if so how mission-critical it is to them; and old packages like Yampa certainly use it. So arrow notation will probably stay. But I don’t really understand the code, and it’s in “keep it limping along” mode as far as I am concerned. All it needs is love. But as Edward suggests, it's not just a technical question; love would involve building a community consensus about what we want.

Simon

From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of MarLinn via ghc-devs
Sent: 21 December 2016 06:43
To: ghc-devs at haskell.org
Subject: Re: Help needed: Restrictions of proc-notation with RebindableSyntax

Sorry to barge into the discussion with neither much knowledge of the theory nor the implementation. I tried to look at both, but my understanding is severely lacking. However I do feel a tiny bit emboldened because my own findings turned out to at least have the same shadow as the contents of this more thorough overview.

The one part of the existing story I personally found the most promising was to explore the category hierarchy around Arrows, in other words the Gibbard/Trinkle perspective. Therefore I want to elaborate my own naive findings a tiny bit. Bear in mind that much of this is gleaned from experimental implementations or interpreted, but I do not have proofs, or even theory.

Almost all parts necessary for an Arrow seem to already be contained in a symmetrical braided category. Fascinatingly, even the braiding might be superfluous in some cases, leaving only the need for a monoidal category. But to get from a braided category to a full Arrow, there seems to be a need for "constructors" like (arr $ \x -> (x,x)) and "destructors" like (arr fst). There seem to be several options for those, and a choice would have to be made. Notably: is introduction done by duplicating existing values, or by introducing new "unit" values (for a suitable definition of "unit")?
That choice doesn't seem impactful, but my gut feeling is that that's just because I cannot see the potential points of impact. What makes this story worse is that the currently known hierarchies around ArrowChoice and ArrowLoop seem to be coarser still – although the work around profunctors might help. That said, my understanding is so bad that I can not even see any benefits or drawbacks of the structure of ArrowLoop's "loop" versus a more "standard" fix-point structure. I do, however, think there is something to be gained. The good old Rosetta Stone paper still makes me think that what is now Arrow notation might be turned into a much more potent tool – exactly because we might be able to lift those restrictions. One particular idea I have in mind: If the notation can support purely braided categories, it might be used to describe reversible computation, which in turn is used in describing quantum computation. The frustrating part for me is that I would like to contribute to this effort. But again, my understanding of each and every component is fleeting at best. MarLinn On 2016-12-21 06:15, Edward Kmett wrote: Arrows haven't seen much love for a while. In part this is because many of the original applications for arrows have been shown to be perfectly suited to being handled by Applicatives. e.g. the Swiestra/Duponcheel parser that sort of kickstarted everything. There are several options for improved arrow desugaring. Megacz's work on GArrows at first feels like it should be applicable here, as it lets you change out the choice of pseudo-product while preserving the general arrow feel. Unfortunately, the GArrow class isn't sufficient for most arrow desguaring, due to the fact that the arrow desugaring inherently involves breaking apart patterns for almost any non-trivial use and nothing really requires the GArrow 'product' to actually even be product like. Cale Gibbard and Ryan Trinkle on the other hand like to use a more CCC-like basis for arrows. 
This stays in the spirit to the GArrow class, but you still have the problems around pattern matching. I don't think they actually wrote anything to deal with the actual arrow notation and just programmed in the alternate style to get better introspection on the operations involved. I think the key insight there is that much of the notation can be made to work with weaker categorical structures than full arrows, but the existing class hierarchy around arrows is very coarse. As a minor data point both of these sorts of encodings of arrow problems start to drag in language extensions that make the notation harder to standardize. Currently they work with bog standard Haskell 98/2010. If you're looking for an interesting theoretical direction to extend Arrow notation: An arrow is a strong monad in the category of profunctors [1]. Using the profunctors library [2] (Strong p, Category p) is equivalent in power to Arrow p. Exploiting that, a profunctor-based desugaring could get away with much weaker constraints than Arrow depending on how much of proc notation you use. Alternately a separate class hierarchy that only required covariance in the second argument is an option, but my vague recollection from the last time that I looked into this is that while such a desguaring only uses covariance in the second argument of the profunctor, you can prove that contravariance in the first argument follows from the pile of laws. This subject came up the last time someone thought to extend the Arrow desguaring. You can probably find a thread on the mailing list from Ross Paterson a few years ago. This version has the benefit of fitting pretty close to the existing arrow desugaring and not needing new language extensions. On the other hand, refactoring the Arrow class in this (or any other) way is somewhat of an invasive exercise. The profunctors package offers moral equivalents to most of the Arrow subclasses, but no effort has been made to match the existing Arrow hierarchy. 
Given that little new code seems to be being written with Arrows in mind, while some older code makes heavy use of it (hxt, etc.), refactoring the arrow hierarchy is kind of a hard sell. It is by no means impossible, just something that would require a fair bit of community wrangling and a lot of work showing clear advantages to a new status quo at a time when its very hard to get anybody to care about arrow notation at all. -Edward [1] http://www-kb.is.s.u-tokyo.ac.jp/~asada/papers/arrStrMnd.pdf [2] http://hackage.haskell.org/package/profunctors-5.2/docs/Data-Profunctor-Strong.html On Fri, Dec 2, 2016 at 10:57 AM, Jan Bracker via ghc-devs > wrote: Simon, Richard, thank you for your answer! I don't have time to look into the GHC sources right now, but I will set aside some time after the holidays and take a close look at what the exact restrictions on proc-notation are and document them. Since you suggested a rewrite of GHC's handling of proc-syntax, are there any opinions on integrating generalized arrows (Joseph 2014) in the process? I think they would greatly improve arrows! I don't know if I have the time to attempt this, but if I find the time I would give it a try. Why wasn't this integrated while it was still actively developed? Best, Jan [Joseph 2014] https://www2.eecs.berkeley.edu/Pubs/TechRpts/2014/EECS-2014-130.pdf 2016-11-29 12:41 GMT+00:00 Simon Peyton Jones >: Jan, Type checking and desugaring for arrow syntax has received Absolutely No Love for several years. I do not understand how it works very well, and I would not be at all surprised if it is broken in corner cases. It really needs someone to look at it carefully, document it better, and perhaps refactor it – esp by using a different data type rather than piggy-backing on HsExpr. In the light of that understanding, I think rebindable syntax will be easier. I don’t know if you are up for that, but it’s a rather un-tended part of GHC. 
Thanks Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Richard Eisenberg Sent: 28 November 2016 22:30 To: Jan Bracker Cc: ghc-devs at haskell.org Subject: Help needed: Restrictions of proc-notation with RebindableSyntax Jan’s question is a good one, but I don’t know enough about procs to be able to answer. I do know that the answer can be found by looking for uses of `tcSyntaxOp` in the TcArrows module... but I just can’t translate it all to source Haskell, having roughly 0 understanding of this end of the language. Can anyone else help Jan here? Richard On Nov 23, 2016, at 4:34 AM, Jan Bracker via ghc-devs wrote: Hello, I want to use the proc-notation together with RebindableSyntax. So far what I am trying to do is working fine, but I would like to know what the exact restrictions on the supplied functions are. I am introducing additional indices and constraints on the operations. The documentation [1] says the details are in flux and that I should ask directly. Best, Jan [1] https://downloads.haskell.org/~ghc/latest/docs/html/users_guide/glasgow_exts.html#rebindable-syntax-and-the-implicit-prelude-import _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed...
URL: From mle+hs at mega-nerd.com Wed Dec 21 09:13:23 2016 From: mle+hs at mega-nerd.com (Erik de Castro Lopo) Date: Wed, 21 Dec 2016 20:13:23 +1100 Subject: Confused about the sub-modules In-Reply-To: <1482305154-sup-3861@sabre> References: <20161221173336.bfa2a14da3410d072751ff67@mega-nerd.com> <1482302869-sup-287@sabre> <1482305154-sup-3861@sabre> Message-ID: <20161221201323.37da9eb918d12743556feb79@mega-nerd.com> Edward Z. Yang wrote: > Not any more. The commit just has to exist in the remote repo (that's > what the lint checks.) So this is where I am running into trouble. Everything for process and directory is fine, but for Cabal and containers, the git repo on git.haskell.org is missing the commits I need. (No need to CC me, I'm subscribed to ghc-devs). Erik -- ---------------------------------------------------------------------- Erik de Castro Lopo http://www.mega-nerd.com/ From matthewtpickering at gmail.com Wed Dec 21 10:12:56 2016 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Wed, 21 Dec 2016 10:12:56 +0000 Subject: Trac to Phabricator (Maniphest) migration prototype Message-ID: Dear devs, I have completed writing a migration which moves tickets from trac to phabricator. The conversion is essentially lossless. The trac transaction history is replayed which means all events are transferred with their original authors and timestamps. I welcome comments on the work I have done so far, especially bugs as I have definitely not looked at all 12000 tickets. http://ec2-52-213-249-242.eu-west-1.compute.amazonaws.com All the user accounts are automatically generated. If you want to see the tracker from your perspective then send me an email or ping me on IRC and I can set the password of the relevant account. NOTE: This is not a decision, the existence of this prototype is to show that the migration is feasible in a satisfactory way and to remove hypothetical arguments from the discussion. 
I must also thank Dan Palmer and Herbert who helped me along the way. Dan was responsible for the first implementation and setting up much of the infrastructure at the Haskell Exchange hackathon in October. We extensively used the API bindings which Herbert had been working on. Further information below! Matt ===================================================================== Reasons ====== Why this change? The main argument is consolidation. Having many different services is confusing for new and old contributors. Phabricator has proved effective as a code review tool. It is modern and actively developed with a powerful feature set which we currently only use a small fraction of. Trac is showing signs of its age. It is old and slow, and users regularly lose comments through accidentally refreshing their browser. Further to this, the integration with other services is quite poor. Commits do not close tickets which mention them, and the only link to commits is a comment. Querying the tickets is also quite difficult; I usually resort to using google search or my emails to find the relevant ticket. Why is Phabricator better? ==================== Through learning more about Phabricator, there are many small things that I think it does better which will improve the usability of the issue tracker. I will list a few but I urge you to try it out. * Commits which mention ticket numbers are currently posted as trac comments. There is better integration in phabricator as linking to commits has first-class support. * Links with differentials are also more direct than the current custom field which means you must update two places when posting a differential. * Fields are verified so that misspelling user names is not possible (see #12623 where Ben misspelled his name for example) * This is also true for projects and other fields. Inspecting these fields on trac you will find that the formatting on each ticket is often quite different.
* Keywords are much more useful as the set of used keywords is discoverable. * Related tickets are much more substantial as the status of related tickets is reflected in the parent ticket. (http://ec2-52-213-249-242.eu-west-1.compute.amazonaws.com/T7724) Implementation ============ Keywords are implemented as projects. A project is a combination of a tag which can be used with any Phabricator object, a workboard to organise tasks and a group of people who care about the topic. Not all keywords are migrated. Only keywords with at least 5 tickets were added to avoid lots of useless projects. The state of keywords is still a bit unsatisfactory but I wanted to take this chance to clean them up. Custom fields such as architecture and OS are replaced by *projects* just like keywords. This has the same advantage as other projects. Users can be subscribed to projects and receive emails when new tickets are tagged with a project. The large majority of tickets have very little additional metadata set. I also implemented these as custom fields but found the result to be less satisfactory. Some users who have trac accounts do not have phab accounts. Fortunately it is easy to create new user accounts for these users which have empty passwords which can be recovered by the appropriate email address. This means tickets can be properly attributed in the migration. The ticket numbers are maintained. I still advocate moving the infrastructure tickets in order to maintain this mapping. Especially as there has been little activity in the last year. Tickets are linked to the relevant commits, differentials and other tickets. There are 3000 dummy differentials which are used to test that the linking works correctly. Of course with real data, the proper differential would be linked. (http://ec2-52-213-249-242.eu-west-1.compute.amazonaws.com/T11044) There are a couple of issues currently with the migration. There are a few issues in the parser which converts trac markup to remarkup.
Most comments are very simple, with just paragraphs and code blocks, but complex items like lists are sometimes parsed incorrectly. Definition lists are converted to tables as there is no equivalent in remarkup. Trac ticket links are converted to phab ticket links. The ideal time to migrate is before the end of January. The busiest time for the issue tracker is before and after a new major release. With 8.2 planned for around April this gives the transition a few months to settle. We can close the trac issue tracker and continue to serve it, or preferably redirect users to the new tickets. I don't plan to migrate the wiki at this stage as I do not feel that the parser is robust enough, although there are now few other technical challenges blocking this direction. From ekmett at gmail.com Wed Dec 21 13:10:39 2016 From: ekmett at gmail.com (Edward Kmett) Date: Wed, 21 Dec 2016 08:10:39 -0500 Subject: Help needed: Restrictions of proc-notation with RebindableSyntax In-Reply-To: <585A44A1.8070508@exmail.nottingham.ac.uk> References: <84B44086-45A5-41D8-AAC9-DCB848C1CD39@cs.brynmawr.edu> <585A44A1.8070508@exmail.nottingham.ac.uk> Message-ID: The S&D parser I was referring to was based on tracking FIRST sets, and provided a nice linear-time parsing bound for (infinite) LL(1) grammars. (You can't really compute FOLLOW sets without knowing the grammar has a finite number of productions, but FIRST sets work perfectly well with infinite grammars.) By doing so you can transform parsing into more or less a series of map lookups for dispatch. You need to carry a set of all characters that a parser will consume in the case of legal parses, and whether or not the parser accepts the empty parse. http://www.cse.chalmers.se/~rjmh/afp-arrows.pdf mentions this style of FIRST-set tracking parser as the original motivation for arrows. Of course, they didn't see fit to stop puttering around with parsers after 1998, so referring to "the S&D parser" is quite ambiguous!
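Sketched as a toy, the idea looks like the following. This is a simplified illustration with invented names, not the actual Swierstra/Duponcheel implementation (it is deterministic and Maybe-based, with no error correction): each parser carries its static information — a nullable flag and a FIRST set — and choice dispatches on the next character instead of trying both alternatives.

```haskell
import qualified Data.Set as Set
import Data.Set (Set)

-- A parser paired with its static analysis: can it succeed on the
-- empty input, and which characters can a legal parse start with?
data P a = P
  { nullable :: Bool
  , firstSet :: Set Char
  , runP     :: String -> Maybe (a, String)
  }

symbol :: Char -> P Char
symbol c = P False (Set.singleton c) $ \s -> case s of
  (x:xs) | x == c -> Just (c, xs)
  _               -> Nothing

succeed :: a -> P a
succeed a = P True Set.empty $ \s -> Just (a, s)

-- Choice: dispatch on the next character via the FIRST sets,
-- so no backtracking is ever needed.
(<||>) :: P a -> P a -> P a
p <||> q = P (nullable p || nullable q)
             (firstSet p `Set.union` firstSet q)
             $ \s -> case s of
  (x:_) | x `Set.member` firstSet p -> runP p s
        | x `Set.member` firstSet q -> runP q s
  _ | nullable p -> runP p s
    | nullable q -> runP q s
    | otherwise  -> Nothing

-- Applicative-style sequencing. Note that the static parts combine
-- without inspecting any parse result.
(<**>) :: P (a -> b) -> P a -> P b
p <**> q = P (nullable p && nullable q)
             (if nullable p
                then firstSet p `Set.union` firstSet q
                else firstSet p)
             $ \s -> do (f, s')  <- runP p s
                        (a, s'') <- runP q s'
                        pure (f a, s'')
```

The point is that the static halves compose without ever looking at a parsed value, which is exactly why an Applicative (or Arrow) interface suffices for this analysis and a monadic bind would destroy it.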
=) -Edward On Wed, Dec 21, 2016 at 4:00 AM, Henrik Nilsson < Henrik.Nilsson at nottingham.ac.uk> wrote: > Hi Edward, > CC Others, > > On 12/21/2016 05:15 AM, Edward Kmett wrote: > >> Arrows haven't seen much love for a while. In part this is because many >> of the original applications for arrows have been shown to be perfectly >> suited to being handled by Applicatives. e.g. the Swierstra/Duponcheel >> parser that sort of kickstarted everything. >> > > Thanks for a very thorough reply. > > A quick side-remark: a parser library due to Swierstra (and maybe > Duponcheel, I can't remember) used an applicative structure a long time > before applicatives became applicatives, or even idioms. (I used a > variation of this library myself for the Freja compiler around 1995. > Freja was part of my PhD work and was close to what Haskell looked like at > the time.) > > I've never used arrows for parsing, or seen the need for arrows in that > context, but find arrows a very good fit for many EDSLs, including > stream-processing/FRP/Yampa of course, along with other circuit-like > abstractions, which I'd say were the original motivation for arrows. > Altenkirch has also used arrow-like notions in the context of quantum > computation. More recently for probabilistic programming and > Bayesian inference. Except that the current hard-wired "pseudo-product" > in particular often gets in the way. Along with the fact > that there is no good support for constrained arrows (or monads). > > Best, > > /Henrik > > > > > > This message and any attachment are intended solely for the addressee > and may contain confidential information. If you have received this > message in error, please send it back to me, and immediately delete it. > Please do not use, copy or disclose the information contained in this > message or in any attachment. Any views or opinions expressed by the > author of this email do not necessarily reflect the views of the > University of Nottingham.
> > This message has been checked for viruses but the contents of an > attachment may still contain software viruses which could damage your > computer system, you are advised to perform your own checks. Email > communications with the University of Nottingham may be monitored as > permitted by UK legislation. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lonetiger at gmail.com Wed Dec 21 13:46:38 2016 From: lonetiger at gmail.com (Phyx) Date: Wed, 21 Dec 2016 13:46:38 +0000 Subject: Trac to Phabricator (Maniphest) migration prototype In-Reply-To: References: Message-ID: Hi Matthew, Great work! I must admit I'm one of the few that generally likes Trac but I'm liking this quite a lot. Just two questions, Is it possible for those like me who have a different username on trac and phabricator to get mapped correctly? And how does the ticket creation process look? I tried it but needed to login. Do you get a set of projects you have to pick? Like the custom fields we have now or is it just one box where you have to add stuff to all in one line? Kind regards, Tamar -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthewtpickering at gmail.com Wed Dec 21 13:59:20 2016 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Wed, 21 Dec 2016 13:59:20 +0000 Subject: Trac to Phabricator (Maniphest) migration prototype In-Reply-To: References: Message-ID: > Is it possible for those like me who have a different username on trac and > phabricator to get mapped correctly? Yes, definitely. I would need to work out exactly what to do about user accounts, for this I just created accounts with the same names. There are two things to consider about a more intelligent mapping. 1.
People who have different names on each site. 2. People who don't have phab accounts but do have trac accounts. > > And how does the ticket creation process look? I tried it but needed to > login. Do you get a set of projects you have to pick? Like the custom fields > we have now or is it just one box where you have to add stuff to all in one > line? > I sent you some login details so you can try it out. > Kind regards, > Tamar Matt From sylvain at haskus.fr Wed Dec 21 14:01:46 2016 From: sylvain at haskus.fr (Sylvain Henry) Date: Wed, 21 Dec 2016 15:01:46 +0100 Subject: Trac to Phabricator (Maniphest) migration prototype In-Reply-To: References: Message-ID: <4e51c411-48ac-6ced-6672-e188cb90bab6@haskus.fr> Nice work! Would it be possible to convert comment references too? For instance in http://ec2-52-213-249-242.eu-west-1.compute.amazonaws.com/T10547#182793 "comment:21" should be a link to the label #178747 If we do the transfer, we should redirect: https://ghc.haskell.org/trac/ghc/ticket/{NN}#comment:{CC} to phabricator.haskell.org/T{NN}#{tracToPhabComment(NN,CC)} where "tracToPhabComment" function remains to be written ;-) Thanks, Sylvain From matthewtpickering at gmail.com Wed Dec 21 14:05:06 2016 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Wed, 21 Dec 2016 14:05:06 +0000 Subject: Trac to Phabricator (Maniphest) migration prototype In-Reply-To: <7a416d04-659a-4e1e-7b52-02f8e933e28f@haskus.fr> References: <7a416d04-659a-4e1e-7b52-02f8e933e28f@haskus.fr> Message-ID: I wondered if someone would ask me about this.
In principle I don't see why not but I don't immediately know how to get the correct label. In fact, this is an interesting example as "comment:21" refers to a commit comment (https://ghc.haskell.org/trac/ghc/ticket/10547#comment:21) which I have filtered out whilst doing the conversion and replaced by actual links to the commits in a more idiomatic style. Thus "comment:21" should actually point to #178376 (http://ec2-52-213-249-242.eu-west-1.compute.amazonaws.com/T10547#178376). On Wed, Dec 21, 2016 at 1:54 PM, Sylvain Henry wrote: > Nice work! > > Would it be possible to convert comment references too? For instance in > http://ec2-52-213-249-242.eu-west-1.compute.amazonaws.com/T10547#182793 > "comment:21" should be a link to the label #178747 > > If we do the transfer, we should redirect: > https://ghc.haskell.org/trac/ghc/ticket/{NN}#comment:{CC} > to > phabricator.haskell.org/T{NN}#{tracToPhabComment(NN,CC)} > where "tracToPhabComment" function remains to be written ;-) > > Thanks, > Sylvain > > > On 21/12/2016 11:12, Matthew Pickering wrote: >> >> Dear devs, >> >> I have completed writing a migration which moves tickets from trac to >> phabricator. The conversion is essentially lossless. The trac >> transaction history is replayed which means all events are transferred >> with their original authors and timestamps. I welcome comments on the >> work I have done so far, especially bugs as I have definitely not >> looked at all 12000 tickets. >> >> http://ec2-52-213-249-242.eu-west-1.compute.amazonaws.com >> >> All the user accounts are automatically generated. If you want to see >> the tracker from your perspective then send me an email or ping me on >> IRC and I can set the password of the relevant account. >> >> NOTE: This is not a decision, the existence of this prototype is to >> show that the migration is feasible in a satisfactory way and to >> remove hypothetical arguments from the discussion. 
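The redirect scheme Sylvain proposes above hinges on the `tracToPhabComment` function he says "remains to be written". A minimal sketch of what such a mapping could look like, assuming the migration records a lookup table from (trac ticket number, trac comment number) to the numeric Phabricator transaction anchor — all type and function names here are illustrative, not part of the actual migration code:

```haskell
import qualified Data.Map as Map

-- Hypothetical types; the real migration code presumably has its own.
type TicketNo  = Int
type CommentNo = Int
type PhabXact  = Int  -- the numeric anchor Phabricator uses, e.g. 178376

-- A table built once, while replaying the trac transaction history.
type CommentTable = Map.Map (TicketNo, CommentNo) PhabXact

-- Turn a trac "comment:CC" reference on ticket NN into a Phabricator
-- anchor URL, if the pair is known.
tracToPhabComment :: CommentTable -> TicketNo -> CommentNo -> Maybe String
tracToPhabComment table nn cc =
  (\pid -> "phabricator.haskell.org/T" ++ show nn ++ "#" ++ show pid)
    <$> Map.lookup (nn, cc) table
```

With an entry ((10547, 21), 178376) — the example discussed in this thread — this would produce `Just "phabricator.haskell.org/T10547#178376"`; unknown references fall through to `Nothing` so the redirect can degrade to the plain ticket URL.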
>> >> I must also thank Dan Palmer and Herbert who helped me along the way. >> Dan was responsible for the first implementation and setting up much >> of the infrastructure at the Haskell Exchange hackathon in October. We >> extensively used the API bindings which Herbert had been working on. >> >> Further information below! >> >> Matt >> >> ===================================================================== >> >> Reasons >> ====== >> >> Why this change? The main argument is consolidation. Having many >> different services is confusing for new and old contributors. >> Phabricator has proved effective as a code review tool. It is modern >> and actively developed with a powerful feature set which we currently >> only use a small fraction of. >> >> Trac is showing signs of its age. It is old and slow, users regularly >> lose comments through accidently refreshing their browser. Further to >> this, the integration with other services is quite poor. Commits do >> not close tickets which mention them and the only link to commits is a >> comment. Querying the tickets is also quite difficult, I usually >> resort to using google search or my emails to find the relevant >> ticket. >> >> >> Why is Phabricator better? >> ==================== >> >> Through learning more about Phabricator, there are many small things >> that I think it does better which will improve the usability of the >> issue tracker. I will list a few but I urge you to try it out. >> >> * Commits which mention ticket numbers are currently posted as trac >> comments. There is better integration in phabricator as linking to >> commits has first-class support. >> * Links with differentials are also more direct than the current >> custom field which means you must update two places when posting a >> differential. >> * Fields are verified so that mispelling user names is not possible >> (see #12623 where Ben mispelled his name for example) >> * This is also true for projects and other fields. 
Inspecting these >> fields on trac you will find that the formatting on each ticket is >> often quite different. >> * Keywords are much more useful as the set of used keywords is >> discoverable. >> * Related tickets are much more substantial as the status of related >> tickets is reflected to parent ticket. >> (http://ec2-52-213-249-242.eu-west-1.compute.amazonaws.com/T7724) >> >> Implementation >> ============ >> >> Keywords are implemented as projects. A project is a combination of a >> tag which can be used with any Phabricator object, a workboard to >> organise tasks and a group of people who care about the topic. Not all >> keywords are migrated. Only keywords with at least 5 tickets were >> added to avoid lots of useless projects. The state of keywords is >> still a bit unsatisfactory but I wanted to take this chance to clean >> them up. >> >> Custom fields such as architecture and OS are replaced by *projects* >> just like keywords. This has the same advantage as other projects. >> Users can be subscribed to projects and receive emails when new >> tickets are tagged with a project. The large majority of tickets have >> very little additional metadata set. I also implemented these as >> custom fields but found the the result to be less satisfactory. >> >> Some users who have trac accounts do not have phab accounts. >> Fortunately it is easy to create new user accounts for these users >> which have empty passwords which can be recovered by the appropriate >> email address. This means tickets can be properly attributed in the >> migration. >> >> The ticket numbers are maintained. I still advocate moving the >> infrastructure tickets in order to maintain this mapping. Especially >> as there has been little activity in thr the last year. >> >> Tickets are linked to the relevant commits, differentials and other >> tickets. There are 3000 dummy differentials which are used to test >> that the linking works correctly. 
Of course with real data, the proper >> differential would be >> linked.(http://ec2-52-213-249-242.eu-west-1.compute.amazonaws.com/T11044) >> >> There are a couple of issues currently with the migration. There are a >> few issues in the parser which converts trac markup to remarkup. Most >> comments have very simple with just paragraphs and code blocks but >> complex items like lists are sometimes parsed incorrectly. Definition >> lists are converted to tables as there are no equivalent in remarkup. >> Trac ticket links are converted to phab ticket links. >> >> The ideal time to migrate is before the end of January The busiest >> time for the issue tracker is before and after a new major release. >> With 8.2 planned for around April this gives the transition a few >> months to settle. We can close the trac issue tracker and continue to >> serve it or preferably redirect users to the new ticket. I don't plan >> to migrate the wiki at this stage as I do not feel that the parser is >> robust enough although there are now few other technical challenges >> blocking this direction. >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > From rae at cs.brynmawr.edu Wed Dec 21 14:39:59 2016 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Wed, 21 Dec 2016 09:39:59 -0500 Subject: Trac to Phabricator (Maniphest) migration prototype In-Reply-To: References: <7a416d04-659a-4e1e-7b52-02f8e933e28f@haskus.fr> Message-ID: I regularly use comment references on Trac, and I know others do, too. While I'm not saying they need to be supported in a prototype before we elect to go ahead with this route, I would say that preserving comment references is a necessary part of this migration. Along similar lines, the comment numbers in Trac are useful. Does Phab support human-readable comment numbers? Or only those hashes? 
(I consider a 6-digit number too long to be human-readable.) Having nice comment numbers isn't a necessary feature for me, but losing them would be a small loss that might have to be balanced out by other gains. Thanks, Matthew, for doing this! For the record, this email does not express an opinion about the overall merit of this move, just a few technical points. I do not have a considered position on overall merit. Richard > On Dec 21, 2016, at 9:05 AM, Matthew Pickering wrote: > > I wondered if someone would ask me about this. In principle I don't > see why not but I don't immediately know how to get the correct label. > > In fact, this is an interesting example as "comment:21" refers to a > commit comment (https://ghc.haskell.org/trac/ghc/ticket/10547#comment:21) > which I have filtered out whilst doing the conversion and replaced by > actual links to the commits in a more idiomatic style. Thus > "comment:21" should actually point to #178376 > (http://ec2-52-213-249-242.eu-west-1.compute.amazonaws.com/T10547#178376). > > > > On Wed, Dec 21, 2016 at 1:54 PM, Sylvain Henry wrote: >> Nice work! >> >> Would it be possible to convert comment references too? For instance in >> http://ec2-52-213-249-242.eu-west-1.compute.amazonaws.com/T10547#182793 >> "comment:21" should be a link to the label #178747 >> >> If we do the transfer, we should redirect: >> https://ghc.haskell.org/trac/ghc/ticket/{NN}#comment:{CC} >> to >> phabricator.haskell.org/T{NN}#{tracToPhabComment(NN,CC)} >> where "tracToPhabComment" function remains to be written ;-) >> >> Thanks, >> Sylvain >> >> >> On 21/12/2016 11:12, Matthew Pickering wrote: >>> >>> Dear devs, >>> >>> I have completed writing a migration which moves tickets from trac to >>> phabricator. The conversion is essentially lossless. The trac >>> transaction history is replayed which means all events are transferred >>> with their original authors and timestamps. 
I welcome comments on the >>> work I have done so far, especially bugs as I have definitely not >>> looked at all 12000 tickets. >>> >>> http://ec2-52-213-249-242.eu-west-1.compute.amazonaws.com >>> >>> All the user accounts are automatically generated. If you want to see >>> the tracker from your perspective then send me an email or ping me on >>> IRC and I can set the password of the relevant account. >>> >>> NOTE: This is not a decision, the existence of this prototype is to >>> show that the migration is feasible in a satisfactory way and to >>> remove hypothetical arguments from the discussion. >>> >>> I must also thank Dan Palmer and Herbert who helped me along the way. >>> Dan was responsible for the first implementation and setting up much >>> of the infrastructure at the Haskell Exchange hackathon in October. We >>> extensively used the API bindings which Herbert had been working on. >>> >>> Further information below! >>> >>> Matt >>> >>> ===================================================================== >>> >>> Reasons >>> ====== >>> >>> Why this change? The main argument is consolidation. Having many >>> different services is confusing for new and old contributors. >>> Phabricator has proved effective as a code review tool. It is modern >>> and actively developed with a powerful feature set which we currently >>> only use a small fraction of. >>> >>> Trac is showing signs of its age. It is old and slow, users regularly >>> lose comments through accidently refreshing their browser. Further to >>> this, the integration with other services is quite poor. Commits do >>> not close tickets which mention them and the only link to commits is a >>> comment. Querying the tickets is also quite difficult, I usually >>> resort to using google search or my emails to find the relevant >>> ticket. >>> >>> >>> Why is Phabricator better? 
>>> ==================== >>> >>> Through learning more about Phabricator, there are many small things >>> that I think it does better which will improve the usability of the >>> issue tracker. I will list a few but I urge you to try it out. >>> >>> * Commits which mention ticket numbers are currently posted as trac >>> comments. There is better integration in phabricator as linking to >>> commits has first-class support. >>> * Links with differentials are also more direct than the current >>> custom field which means you must update two places when posting a >>> differential. >>> * Fields are verified so that mispelling user names is not possible >>> (see #12623 where Ben mispelled his name for example) >>> * This is also true for projects and other fields. Inspecting these >>> fields on trac you will find that the formatting on each ticket is >>> often quite different. >>> * Keywords are much more useful as the set of used keywords is >>> discoverable. >>> * Related tickets are much more substantial as the status of related >>> tickets is reflected to parent ticket. >>> (http://ec2-52-213-249-242.eu-west-1.compute.amazonaws.com/T7724) >>> >>> Implementation >>> ============ >>> >>> Keywords are implemented as projects. A project is a combination of a >>> tag which can be used with any Phabricator object, a workboard to >>> organise tasks and a group of people who care about the topic. Not all >>> keywords are migrated. Only keywords with at least 5 tickets were >>> added to avoid lots of useless projects. The state of keywords is >>> still a bit unsatisfactory but I wanted to take this chance to clean >>> them up. >>> >>> Custom fields such as architecture and OS are replaced by *projects* >>> just like keywords. This has the same advantage as other projects. >>> Users can be subscribed to projects and receive emails when new >>> tickets are tagged with a project. The large majority of tickets have >>> very little additional metadata set. 
I also implemented these as >>> custom fields but found the the result to be less satisfactory. >>> >>> Some users who have trac accounts do not have phab accounts. >>> Fortunately it is easy to create new user accounts for these users >>> which have empty passwords which can be recovered by the appropriate >>> email address. This means tickets can be properly attributed in the >>> migration. >>> >>> The ticket numbers are maintained. I still advocate moving the >>> infrastructure tickets in order to maintain this mapping. Especially >>> as there has been little activity in thr the last year. >>> >>> Tickets are linked to the relevant commits, differentials and other >>> tickets. There are 3000 dummy differentials which are used to test >>> that the linking works correctly. Of course with real data, the proper >>> differential would be >>> linked.(http://ec2-52-213-249-242.eu-west-1.compute.amazonaws.com/T11044) >>> >>> There are a couple of issues currently with the migration. There are a >>> few issues in the parser which converts trac markup to remarkup. Most >>> comments have very simple with just paragraphs and code blocks but >>> complex items like lists are sometimes parsed incorrectly. Definition >>> lists are converted to tables as there are no equivalent in remarkup. >>> Trac ticket links are converted to phab ticket links. >>> >>> The ideal time to migrate is before the end of January The busiest >>> time for the issue tracker is before and after a new major release. >>> With 8.2 planned for around April this gives the transition a few >>> months to settle. We can close the trac issue tracker and continue to >>> serve it or preferably redirect users to the new ticket. I don't plan >>> to migrate the wiki at this stage as I do not feel that the parser is >>> robust enough although there are now few other technical challenges >>> blocking this direction. 
>>> _______________________________________________
>>> ghc-devs mailing list
>>> ghc-devs at haskell.org
>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>>
>>
> _______________________________________________
> ghc-devs mailing list
> ghc-devs at haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs

From matthewtpickering at gmail.com Wed Dec 21 15:02:33 2016
From: matthewtpickering at gmail.com (Matthew Pickering)
Date: Wed, 21 Dec 2016 15:02:33 +0000
Subject: Trac to Phabricator (Maniphest) migration prototype
In-Reply-To: 
References: <7a416d04-659a-4e1e-7b52-02f8e933e28f@haskus.fr>
Message-ID: 
I was interested to see how many times the raw comment syntax was used
as I don't use it myself. Here are the three queries I ran.

-- Occurrences of the piece of syntax
SELECT COUNT(*) FROM ticket_change WHERE field='comment' AND newvalue LIKE '%comment:%';
> 3783

-- Instances of the syntax from using the reply button
SELECT COUNT(*) FROM ticket_change WHERE field='comment' AND newvalue LIKE '%[comment:%';
> 2957

-- Total comments
SELECT COUNT(*) FROM ticket_change WHERE field='comment';
> 75967

So the syntax is only used in about 1% of all comments.

Then looking at the culprits for some fun:

(simonpj,192) (goldfire,123) (bgamari,116) (thomie,102) (nomeata,30)
(rwbarton,28) (RyanGlScott,19) (simonmar,18)

were the most frequent comment referencers.

I don't think keeping these internal references would be too difficult.
I have now worked out where the number comes from and it is easy to get.

Matt

On Wed, Dec 21, 2016 at 2:39 PM, Richard Eisenberg wrote:
> I regularly use comment references on Trac, and I know others do, too. While I'm not saying they need to be supported in a prototype before we elect to go ahead with this route, I would say that preserving comment references is a necessary part of this migration. Along similar lines, the comment numbers in Trac are useful. Does Phab support human-readable comment numbers?
Or only those hashes? (I consider a 6-digit number too long to be human-readable.) Having nice comment numbers isn't a necessary feature for me, but losing them would be a small loss that might have to be balanced out by other gains. > > Thanks, Matthew, for doing this! > > For the record, this email does not express an opinion about the overall merit of this move, just a few technical points. I do not have a considered position on overall merit. > > Richard > >> On Dec 21, 2016, at 9:05 AM, Matthew Pickering wrote: >> >> I wondered if someone would ask me about this. In principle I don't >> see why not but I don't immediately know how to get the correct label. >> >> In fact, this is an interesting example as "comment:21" refers to a >> commit comment (https://ghc.haskell.org/trac/ghc/ticket/10547#comment:21) >> which I have filtered out whilst doing the conversion and replaced by >> actual links to the commits in a more idiomatic style. Thus >> "comment:21" should actually point to #178376 >> (http://ec2-52-213-249-242.eu-west-1.compute.amazonaws.com/T10547#178376). >> >> >> >> On Wed, Dec 21, 2016 at 1:54 PM, Sylvain Henry wrote: >>> Nice work! >>> >>> Would it be possible to convert comment references too? For instance in >>> http://ec2-52-213-249-242.eu-west-1.compute.amazonaws.com/T10547#182793 >>> "comment:21" should be a link to the label #178747 >>> >>> If we do the transfer, we should redirect: >>> https://ghc.haskell.org/trac/ghc/ticket/{NN}#comment:{CC} >>> to >>> phabricator.haskell.org/T{NN}#{tracToPhabComment(NN,CC)} >>> where "tracToPhabComment" function remains to be written ;-) >>> >>> Thanks, >>> Sylvain >>> >>> >>> On 21/12/2016 11:12, Matthew Pickering wrote: >>>> >>>> Dear devs, >>>> >>>> I have completed writing a migration which moves tickets from trac to >>>> phabricator. The conversion is essentially lossless. 
The trac >>>> transaction history is replayed which means all events are transferred >>>> with their original authors and timestamps. I welcome comments on the >>>> work I have done so far, especially bugs as I have definitely not >>>> looked at all 12000 tickets. >>>> >>>> http://ec2-52-213-249-242.eu-west-1.compute.amazonaws.com >>>> >>>> All the user accounts are automatically generated. If you want to see >>>> the tracker from your perspective then send me an email or ping me on >>>> IRC and I can set the password of the relevant account. >>>> >>>> NOTE: This is not a decision, the existence of this prototype is to >>>> show that the migration is feasible in a satisfactory way and to >>>> remove hypothetical arguments from the discussion. >>>> >>>> I must also thank Dan Palmer and Herbert who helped me along the way. >>>> Dan was responsible for the first implementation and setting up much >>>> of the infrastructure at the Haskell Exchange hackathon in October. We >>>> extensively used the API bindings which Herbert had been working on. >>>> >>>> Further information below! >>>> >>>> Matt >>>> >>>> ===================================================================== >>>> >>>> Reasons >>>> ====== >>>> >>>> Why this change? The main argument is consolidation. Having many >>>> different services is confusing for new and old contributors. >>>> Phabricator has proved effective as a code review tool. It is modern >>>> and actively developed with a powerful feature set which we currently >>>> only use a small fraction of. >>>> >>>> Trac is showing signs of its age. It is old and slow, users regularly >>>> lose comments through accidently refreshing their browser. Further to >>>> this, the integration with other services is quite poor. Commits do >>>> not close tickets which mention them and the only link to commits is a >>>> comment. 
Querying the tickets is also quite difficult, I usually >>>> resort to using google search or my emails to find the relevant >>>> ticket. >>>> >>>> >>>> Why is Phabricator better? >>>> ==================== >>>> >>>> Through learning more about Phabricator, there are many small things >>>> that I think it does better which will improve the usability of the >>>> issue tracker. I will list a few but I urge you to try it out. >>>> >>>> * Commits which mention ticket numbers are currently posted as trac >>>> comments. There is better integration in phabricator as linking to >>>> commits has first-class support. >>>> * Links with differentials are also more direct than the current >>>> custom field which means you must update two places when posting a >>>> differential. >>>> * Fields are verified so that mispelling user names is not possible >>>> (see #12623 where Ben mispelled his name for example) >>>> * This is also true for projects and other fields. Inspecting these >>>> fields on trac you will find that the formatting on each ticket is >>>> often quite different. >>>> * Keywords are much more useful as the set of used keywords is >>>> discoverable. >>>> * Related tickets are much more substantial as the status of related >>>> tickets is reflected to parent ticket. >>>> (http://ec2-52-213-249-242.eu-west-1.compute.amazonaws.com/T7724) >>>> >>>> Implementation >>>> ============ >>>> >>>> Keywords are implemented as projects. A project is a combination of a >>>> tag which can be used with any Phabricator object, a workboard to >>>> organise tasks and a group of people who care about the topic. Not all >>>> keywords are migrated. Only keywords with at least 5 tickets were >>>> added to avoid lots of useless projects. The state of keywords is >>>> still a bit unsatisfactory but I wanted to take this chance to clean >>>> them up. >>>> >>>> Custom fields such as architecture and OS are replaced by *projects* >>>> just like keywords. 
This has the same advantage as other projects. >>>> Users can be subscribed to projects and receive emails when new >>>> tickets are tagged with a project. The large majority of tickets have >>>> very little additional metadata set. I also implemented these as >>>> custom fields but found the the result to be less satisfactory. >>>> >>>> Some users who have trac accounts do not have phab accounts. >>>> Fortunately it is easy to create new user accounts for these users >>>> which have empty passwords which can be recovered by the appropriate >>>> email address. This means tickets can be properly attributed in the >>>> migration. >>>> >>>> The ticket numbers are maintained. I still advocate moving the >>>> infrastructure tickets in order to maintain this mapping. Especially >>>> as there has been little activity in thr the last year. >>>> >>>> Tickets are linked to the relevant commits, differentials and other >>>> tickets. There are 3000 dummy differentials which are used to test >>>> that the linking works correctly. Of course with real data, the proper >>>> differential would be >>>> linked.(http://ec2-52-213-249-242.eu-west-1.compute.amazonaws.com/T11044) >>>> >>>> There are a couple of issues currently with the migration. There are a >>>> few issues in the parser which converts trac markup to remarkup. Most >>>> comments have very simple with just paragraphs and code blocks but >>>> complex items like lists are sometimes parsed incorrectly. Definition >>>> lists are converted to tables as there are no equivalent in remarkup. >>>> Trac ticket links are converted to phab ticket links. >>>> >>>> The ideal time to migrate is before the end of January The busiest >>>> time for the issue tracker is before and after a new major release. >>>> With 8.2 planned for around April this gives the transition a few >>>> months to settle. We can close the trac issue tracker and continue to >>>> serve it or preferably redirect users to the new ticket. 
I don't plan >>>> to migrate the wiki at this stage as I do not feel that the parser is >>>> robust enough although there are now few other technical challenges >>>> blocking this direction. >>>> _______________________________________________ >>>> ghc-devs mailing list >>>> ghc-devs at haskell.org >>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >>> >>> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > From allbery.b at gmail.com Wed Dec 21 15:31:29 2016 From: allbery.b at gmail.com (Brandon Allbery) Date: Wed, 21 Dec 2016 10:31:29 -0500 Subject: Help needed: Restrictions of proc-notation with RebindableSyntax In-Reply-To: References: <84B44086-45A5-41D8-AAC9-DCB848C1CD39@cs.brynmawr.edu> Message-ID: On Wed, Dec 21, 2016 at 12:15 AM, Edward Kmett wrote: > > Given that little new code seems to be being written with Arrows in mind, > while some older code makes heavy use of it (hxt, etc.), refactoring the > arrow hierarchy is kind of a hard sell. It is by no means impossible, just > something that would require a fair bit of community wrangling and a lot of > work showing clear advantages to a new status quo at a time when its very > hard to get anybody to care about arrow notation at all. > The arrowized-FRP folks seem to care a fair bit. -- brandon s allbery kf8nh sine nomine associates allbery.b at gmail.com ballbery at sinenomine.net unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From m at tweag.io Wed Dec 21 16:52:34 2016 From: m at tweag.io (Boespflug, Mathieu) Date: Wed, 21 Dec 2016 17:52:34 +0100 Subject: Help needed: Restrictions of proc-notation with RebindableSyntax In-Reply-To: References: <84B44086-45A5-41D8-AAC9-DCB848C1CD39@cs.brynmawr.edu> Message-ID: And Opaleye (a successor to haskellDB, for safe interaction with SQL databases) also uses arrow notation last I checked. As I recall do-notation is too powerful, whereas proc-notation provides exactly the right expressive power (no illegal SQL queries can be expressed). But that's not to say Tom (author of Opaleye) couldn't be content with a profunctor-based desugaring rather than an Arrow-based one? -- Mathieu Boespflug Founder at http://tweag.io. On 21 December 2016 at 16:31, Brandon Allbery wrote: > On Wed, Dec 21, 2016 at 12:15 AM, Edward Kmett wrote: >> >> Given that little new code seems to be being written with Arrows in mind, >> while some older code makes heavy use of it (hxt, etc.), refactoring the >> arrow hierarchy is kind of a hard sell. It is by no means impossible, just >> something that would require a fair bit of community wrangling and a lot of >> work showing clear advantages to a new status quo at a time when its very >> hard to get anybody to care about arrow notation at all. >> > > The arrowized-FRP folks seem to care a fair bit. > > -- > brandon s allbery kf8nh sine nomine > associates > allbery.b at gmail.com > ballbery at sinenomine.net > unix, openafs, kerberos, infrastructure, xmonad > http://sinenomine.net > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From nicolas.frisby at gmail.com Wed Dec 21 17:08:42 2016
From: nicolas.frisby at gmail.com (Nicolas Frisby)
Date: Wed, 21 Dec 2016 17:08:42 +0000
Subject: Help needed: Restrictions of proc-notation with RebindableSyntax
In-Reply-To: 
References: <84B44086-45A5-41D8-AAC9-DCB848C1CD39@cs.brynmawr.edu>
Message-ID: 
Exploring alternative formulations is great, but I think it's (mostly?) orthogonal to this thread's original email: Jan found the RebindableSyntax support for Arrow to be disappointingly hamstrung. I've had a similar experience in the past; the occurrences of the combinators seem to have overly restrictive type ascriptions in the desugared terms.

I don't think resolving that necessarily involves changing the Arrow class. Just the desugaring algorithm would have to change (hopefully).

On Wed, Dec 21, 2016, 08:52 Boespflug, Mathieu wrote:

> And Opaleye (a successor to haskellDB, for safe interaction with SQL
> databases) also uses arrow notation last I checked. As I recall do-notation
> is too powerful, whereas proc-notation provides exactly the right
> expressive power (no illegal SQL queries can be expressed). But that's not
> to say Tom (author of Opaleye) couldn't be content with a profunctor-based
> desugaring rather than an Arrow-based one?
>
> --
> Mathieu Boespflug
> Founder at http://tweag.io.
>
> On 21 December 2016 at 16:31, Brandon Allbery wrote:
>
> On Wed, Dec 21, 2016 at 12:15 AM, Edward Kmett wrote:
>
> Given that little new code seems to be being written with Arrows in mind,
> while some older code makes heavy use of it (hxt, etc.), refactoring the
> arrow hierarchy is kind of a hard sell. It is by no means impossible, just
> something that would require a fair bit of community wrangling and a lot of
> work showing clear advantages to a new status quo at a time when it's very
> hard to get anybody to care about arrow notation at all.
>
>
> The arrowized-FRP folks seem to care a fair bit.
> > -- > brandon s allbery kf8nh sine nomine > associates > allbery.b at gmail.com > ballbery at sinenomine.net > unix, openafs, kerberos, infrastructure, xmonad > http://sinenomine.net > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk Wed Dec 21 17:18:51 2016 From: tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk (Tom Ellis) Date: Wed, 21 Dec 2016 17:18:51 +0000 Subject: Help needed: Restrictions of proc-notation with RebindableSyntax In-Reply-To: References: <84B44086-45A5-41D8-AAC9-DCB848C1CD39@cs.brynmawr.edu> Message-ID: <20161221171851.GR12302@weber> On Wed, Dec 21, 2016 at 05:52:34PM +0100, Boespflug, Mathieu wrote: > And Opaleye (a successor to haskellDB, for safe interaction with SQL > databases) also uses arrow notation last I checked. As I recall do-notation > is too powerful, whereas proc-notation provides exactly the right > expressive power (no illegal SQL queries can be expressed). But that's not > to say Tom (author of Opaleye) couldn't be content with a profunctor-based > desugaring rather than an Arrow-based one? I don't see any particular reason to oppose a profunctor-based desugaring. The structure of the computation would be the same, just encoded using different classes. 
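For readers following this thread who have not used proc-notation: the discussion above is about how GHC desugars `proc` blocks into Arrow combinators. A small illustrative sketch, using the standard `addA` example from the GHC documentation — the hand-desugared form shown is one possible equivalent translation, not the exact output of GHC's desugaring algorithm:

```haskell
{-# LANGUAGE Arrows #-}

import Control.Arrow

-- proc-notation version: run f and g on the same input, add the results.
addA :: Arrow a => a b Int -> a b Int -> a b Int
addA f g = proc x -> do
  y <- f -< x
  z <- g -< x
  returnA -< y + z

-- A hand-desugared equivalent written with the Arrow combinators directly.
addA' :: Arrow a => a b Int -> a b Int -> a b Int
addA' f g = f &&& g >>> arr (uncurry (+))
```

Instantiated at the plain function arrow `(->)`, `addA (+1) (*2) 3` gives `10`, and the point of the thread is that RebindableSyntax should let the combinators on the right-hand side (`&&&`, `>>>`, `arr`) be rebound, e.g. to profunctor-based equivalents.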
Tom From amindfv at gmail.com Wed Dec 21 19:49:33 2016 From: amindfv at gmail.com (amindfv at gmail.com) Date: Wed, 21 Dec 2016 13:49:33 -0600 Subject: Help needed: Restrictions of proc-notation with RebindableSyntax In-Reply-To: References: <84B44086-45A5-41D8-AAC9-DCB848C1CD39@cs.brynmawr.edu> <808e9d01-6eb1-f02d-ffff-b18fec8bffd5@gmail.com> Message-ID: <1B3F2638-2ECA-4ABD-B098-3D7AF1A57C15@gmail.com> > On 21 Dec 2016, at 02:36, Simon Peyton Jones via ghc-devs wrote: > > > > I even wonder (whisper it) about taking it out altogether, when Edward says “many of the original applications for arrows have been shown to be perfectly suited to being handled by Applicatives” (i.e. with no extensions except ApplicativeDo). But I have no data on whether anyone (at all) is using arrow notation these days, and if so how mission-critical it is to them; and old packages like Yampa certainly use it. Unfortunately ApplicativeDo is for a very limited use-case, of the form:

    do a0 <- x0
       a1 <- x1  -- x1 cannot refer to a0
       ...
       pure ...  -- last line must be "pure", "pure $", "return" or "return $"

Additionally, Opaleye uses Arrow syntax pretty heavily iirc. I haven't actually prototyped it, but I dream of an ApplicativeDo or ArrowDo which desugars do blocks with join in place of >>= , so any do-block which doesn't use any joins doesn't require the monad constraint... Tom -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.feuer at gmail.com Wed Dec 21 19:04:55 2016 From: david.feuer at gmail.com (David Feuer) Date: Wed, 21 Dec 2016 14:04:55 -0500 Subject: Retro-Haskell: can we get seq somewhat under control? Message-ID: In the Old Days (some time before Haskell 98), `seq` wasn't fully polymorphic. It could only be applied to instances of a certain class. I don't know the name that class had, but let's say Seq. Apparently, some people didn't like that, and now it's gone.
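[Editorial note: a user-level approximation of such a class — all names here are hypothetical, not taken from any historical Prelude — might look like the following. It restricts *which* types may be forced while still delegating to the primitive `seq`.]

```haskell
-- Hypothetical class-restricted seq, sketched at user level.
class Seq a where
  seq' :: a -> b -> b
  seq' = Prelude.seq  -- every instance just delegates to the primitive

instance Seq Int
instance Seq [a]
-- deliberately no instance for (a -> b): forcing a function with
-- seq' would then be a type error, unlike with today's seq

infixr 0 $!!
($!!) :: Seq a => (a -> b) -> a -> b
f $!! x = x `seq'` f x

main :: IO ()
main = print ((+ 1) $!! (2 + 2 :: Int))  -- prints 5
```

As the email goes on to note, this much can be written today; what cannot be done at user level is making bang patterns and strict constructor fields go through such a class.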
I'd love to be able to turn on a language extension, use an alternate Prelude, and get it back. I'm not ready to put up a full-scale proposal yet; I'm hoping some people may have suggestions for details. Some thoughts:

1. Why do you want that crazy thing, David?

When implementing general-purpose lazy data structures, a *lot* of things need to be done strictly for efficiency. Often, the easiest way to do this is using either bang patterns or strict data constructors. Care is necessary to only ever force pieces of the data structure, and not the polymorphic data a user has stored in it.

2. Why does it need GHC support?

It would certainly be possible to write alternative versions of `seq`, `$!`, and `evaluate` to use a user-supplied Seq class. It should even be possible to deal with strict data constructors by hand or (probably) using Template Haskell. For instance,

    data Foo a = Foo !Int !a

would translate to normal GHC Haskell as

    data Foo a = Seq a => Foo !Int !a

But only GHC can extend this to bang patterns, deal with the interactions with coercions, and optimize it thoroughly.

3. How does Seq interact with coercions and roles?

I believe we'd probably want a special rule that

    (Seq a, Coercible a b) => Seq b

Thanks to this rule, a Seq constraint on a type variable shouldn't prevent it from having a representational role.

The downside of this rule is that if something *can* be forced, but we don't *want* it to be, then we have to hide it a little more carefully than we might like. This shouldn't be too hard, however, using a newtype defined in a separate module that exports a pattern synonym instead of a constructor, to hide the coercibility.

4. Optimize? What?

Nobody wants Seq constraints blocking up specialization. Today, a function

    foo :: (Seq a, Foldable f) => f a -> ()

won't specialize to the Foldable instance if the Seq instance is unknown. This is lousy. Furthermore, all Seq instances are the same.
The RTS doesn't actually need a dictionary to force something to WHNF. The situation is somewhat similar to that of Coercible, *but more so*. Coercible sometimes needs to pass evidence at runtime to maintain type safety. But Seq carries no type safety hazard whatsoever--when compiling in "production mode", we can just *assume* that Seq evidence is valid, and erase it immediately after type checking; the worst thing that could possibly happen is that someone will force a function and get weird semantics. Further, we should *unconditionally* erase Seq evidence from datatypes; this is necessary to maintain compatibility with the usual data representations. I don't know if this unconditional erasure could cause "laziness safety" issues, but the system would be essentially unusable without it.

5. What would the language extension do, exactly?

a. Automatically satisfy Seq for data types and families.
b. Propagate Seq constraints using the usual rules and the special Coercible rule.
c. Modify the translation of strict fields to add Seq constraints as required.

David Feuer From vlad.z.4096 at gmail.com Wed Dec 21 19:14:01 2016 From: vlad.z.4096 at gmail.com (Index Int) Date: Wed, 21 Dec 2016 22:14:01 +0300 Subject: Retro-Haskell: can we get seq somewhat under control? In-Reply-To: References: Message-ID: There's a related GHC Proposal: https://github.com/ghc-proposals/ghc-proposals/pull/27 On Wed, Dec 21, 2016 at 10:04 PM, David Feuer wrote: > In the Old Days (some time before Haskell 98), `seq` wasn't fully > polymorphic. It could only be applied to instances of a certain class. > I don't know the name that class had, but let's say Seq. Apparently, > some people didn't like that, and now it's gone. I'd love to be able > to turn on a language extension, use an alternate Prelude, and get it > back. I'm not ready to put up a full-scale proposal yet; I'm hoping > some people may have suggestions for details. Some thoughts: > > 1. Why do you want that crazy thing, David?
> > When implementing general-purpose lazy data structures, a *lot* of > things need to be done strictly for efficiency. Often, the easiest way > to do this is using either bang patterns or strict data constructors. > Care is necessary to only ever force pieces of the data structure, and > not the polymorphic data a user has stored in it. > > 2. Why does it need GHC support? > > It would certainly be possible to write alternative versions of `seq`, > `$!`, and `evaluate` to use a user-supplied Seq class. It should even > be possible to deal with strict data constructors by hand or > (probably) using Template Haskell. For instance, > > data Foo a = Foo !Int !a > > would translate to normal GHC Haskell as > > data Foo a = Seq a => Foo !Int !a > > But only GHC can extend this to bang patterns, deal with the > interactions with coercions, and optimize it thoroughly. > > 3. How does Seq interact with coercions and roles? > > I believe we'd probably want a special rule that > > (Seq a, Coercible a b) => Seq b > > Thanks to this rule, a Seq constraint on a type variable shouldn't > prevent it from having a representational role. > > The downside of this rule is that if something *can* be forced, but we > don't *want* it to be, then we have to hide it a little more carefully > than we might like. This shouldn't be too hard, however, using a > newtype defined in a separate module that exports a pattern synonym > instead of a constructor, to hide the coercibility. > > 4. Optimize? What? > > Nobody wants Seq constraints blocking up specialization. Today, a function > > foo :: (Seq a, Foldable f) => f a -> () > > won't specialize to the Foldable instance if the Seq instance is > unknown. This is lousy. Furthermore, all Seq instances are the same. > The RTS doesn't actually need a dictionary to force something to WHNF. > The situation is somewhat similar to that of Coercible, *but more so*. > Coercible sometimes needs to pass evidence at runtime to maintain type > safety. 
But Seq carries no type safety hazard whatsoever--when > compiling in "production mode", we can just *assume* that Seq evidence > is valid, and erase it immediately after type checking; the worst > thing that could possibly happen is that someone will force a function > and get weird semantics. Further, we should *unconditionally* erase > Seq evidence from datatypes; this is necessary to maintain > compatibility with the usual data representations. I don't know if > this unconditional erasure could cause "laziness safety" issues, but > the system would be essentially unusable without it. > > 4. What would the language extension do, exactly? > > a. Automatically satisfy Seq for data types and families. > b. Propagate Seq constraints using the usual rules and the special > Coercible rule. > c. Modify the translation of strict fields to add Seq constraints as required. > > David Feuer > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From ben at well-typed.com Wed Dec 21 19:14:37 2016 From: ben at well-typed.com (Ben Gamari) Date: Wed, 21 Dec 2016 14:14:37 -0500 Subject: Confused about the sub-modules In-Reply-To: <20161221201323.37da9eb918d12743556feb79@mega-nerd.com> References: <20161221173336.bfa2a14da3410d072751ff67@mega-nerd.com> <1482302869-sup-287@sabre> <1482305154-sup-3861@sabre> <20161221201323.37da9eb918d12743556feb79@mega-nerd.com> Message-ID: <87inqd5c2q.fsf@ben-laptop.smart-cactus.org> Erik de Castro Lopo writes: > Edward Z. Yang wrote: > >> Not any more. The commit just has to exist in the remote repo (that's >> what the lint checks.) > > So this is where I am running into trouble. Everything for process > and directory is fine, but for Cabal and containers, the git repo > on git.haskell.org is missing the commits I need. > Hmm, what in particular is missing? 
Cabal seems up-to-date (both git.haskell.org:packages/Cabal and github.com/haskell/Cabal master branches point to 09865f60caa55a7b02880f2a779c9dd8e1be5ac0). As does containers (both point to 71c64747120c3cd1b91f06731167009b0e5b2454). In general all of this should be reasonably automatic. However, when upstreams push non-fast-forward updates to their branches a bit of manual intervention is necessary; if in doubt just ask as you've done here. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From ezyang at mit.edu Wed Dec 21 19:26:16 2016 From: ezyang at mit.edu (Edward Z. Yang) Date: Wed, 21 Dec 2016 14:26:16 -0500 Subject: Confused about the sub-modules In-Reply-To: <87inqd5c2q.fsf@ben-laptop.smart-cactus.org> References: <20161221173336.bfa2a14da3410d072751ff67@mega-nerd.com> <1482302869-sup-287@sabre> <1482305154-sup-3861@sabre> <20161221201323.37da9eb918d12743556feb79@mega-nerd.com> <87inqd5c2q.fsf@ben-laptop.smart-cactus.org> Message-ID: <1482348359-sup-9989@sabre> I *just* pushed a Cabal submodule update, so Erik probably hadn't gotten it. Excerpts from Ben Gamari's message of 2016-12-21 14:14:37 -0500: > Erik de Castro Lopo writes: > > > Edward Z. Yang wrote: > > > >> Not any more. The commit just has to exist in the remote repo (that's > >> what the lint checks.) > > > > So this is where I am running into trouble. Everything for process > > and directory is fine, but for Cabal and containers, the git repo > > on git.haskell.org is missing the commits I need. > > > Hmm, what in particular is missing? Cabal seems up-to-date (both > git.haskell.org:packages/Cabal and github.com/haskell/Cabal master > branches point to 09865f60caa55a7b02880f2a779c9dd8e1be5ac0). As does > containers (both point to 71c64747120c3cd1b91f06731167009b0e5b2454). > > In general all of this should be reasonably automatic. 
However, when > upstreams push non-fast-forward updates to their branches a bit of > manual intervention is necessary; if in doubt just ask as you've done > here. > > Cheers, > > - Ben From tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk Wed Dec 21 22:09:45 2016 From: tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk (Tom Ellis) Date: Wed, 21 Dec 2016 22:09:45 +0000 Subject: Help needed: Restrictions of proc-notation with RebindableSyntax In-Reply-To: <1B3F2638-2ECA-4ABD-B098-3D7AF1A57C15@gmail.com> References: <84B44086-45A5-41D8-AAC9-DCB848C1CD39@cs.brynmawr.edu> <808e9d01-6eb1-f02d-ffff-b18fec8bffd5@gmail.com> <1B3F2638-2ECA-4ABD-B098-3D7AF1A57C15@gmail.com> Message-ID: <20161221220945.GD22125@weber> On Wed, Dec 21, 2016 at 01:49:33PM -0600, amindfv at gmail.com wrote: > Additionally, Opaleye uses Arrow syntax pretty heavily iirc. If I were writing the Opaleye tutorial today (and if I rewrite it) I would shy away from arrows and encourage users to use applicative style. There's only one operator where applicative is not enough, 'restrict', and that can be wrapped up as a different combinator so that no one knows they're ever using arrows. Tom From matthewtpickering at gmail.com Wed Dec 21 23:02:25 2016 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Wed, 21 Dec 2016 23:02:25 +0000 Subject: Trac to Phabricator (Maniphest) migration prototype In-Reply-To: References: Message-ID: I just noticed that the instance was down because the disk quota had been reached. I expanded the size of storage and it should be working again. Matt On Wed, Dec 21, 2016 at 10:12 AM, Matthew Pickering wrote: > Dear devs, > > I have completed writing a migration which moves tickets from trac to > phabricator. The conversion is essentially lossless. The trac > transaction history is replayed which means all events are transferred > with their original authors and timestamps.
I welcome comments on the > work I have done so far, especially bugs as I have definitely not > looked at all 12000 tickets. > > http://ec2-52-213-249-242.eu-west-1.compute.amazonaws.com > > All the user accounts are automatically generated. If you want to see > the tracker from your perspective then send me an email or ping me on > IRC and I can set the password of the relevant account. > > NOTE: This is not a decision, the existence of this prototype is to > show that the migration is feasible in a satisfactory way and to > remove hypothetical arguments from the discussion. > > I must also thank Dan Palmer and Herbert who helped me along the way. > Dan was responsible for the first implementation and setting up much > of the infrastructure at the Haskell Exchange hackathon in October. We > extensively used the API bindings which Herbert had been working on. > > Further information below! > > Matt > > ===================================================================== > > Reasons > ====== > > Why this change? The main argument is consolidation. Having many > different services is confusing for new and old contributors. > Phabricator has proved effective as a code review tool. It is modern > and actively developed with a powerful feature set which we currently > only use a small fraction of. > > Trac is showing signs of its age. It is old and slow, users regularly > lose comments through accidentally refreshing their browser. Further to > this, the integration with other services is quite poor.
I will list a few but I urge you to try it out. > > * Commits which mention ticket numbers are currently posted as trac > comments. There is better integration in phabricator as linking to > commits has first-class support. > * Links with differentials are also more direct than the current > custom field which means you must update two places when posting a > differential. > * Fields are verified so that mispelling user names is not possible > (see #12623 where Ben mispelled his name for example) > * This is also true for projects and other fields. Inspecting these > fields on trac you will find that the formatting on each ticket is > often quite different. > * Keywords are much more useful as the set of used keywords is discoverable. > * Related tickets are much more substantial as the status of related > tickets is reflected to parent ticket. > (http://ec2-52-213-249-242.eu-west-1.compute.amazonaws.com/T7724) > > Implementation > ============ > > Keywords are implemented as projects. A project is a combination of a > tag which can be used with any Phabricator object, a workboard to > organise tasks and a group of people who care about the topic. Not all > keywords are migrated. Only keywords with at least 5 tickets were > added to avoid lots of useless projects. The state of keywords is > still a bit unsatisfactory but I wanted to take this chance to clean > them up. > > Custom fields such as architecture and OS are replaced by *projects* > just like keywords. This has the same advantage as other projects. > Users can be subscribed to projects and receive emails when new > tickets are tagged with a project. The large majority of tickets have > very little additional metadata set. I also implemented these as > custom fields but found the the result to be less satisfactory. > > Some users who have trac accounts do not have phab accounts. 
> Fortunately it is easy to create new user accounts for these users > which have empty passwords which can be recovered by the appropriate > email address. This means tickets can be properly attributed in the > migration. > > The ticket numbers are maintained. I still advocate moving the > infrastructure tickets in order to maintain this mapping. Especially > as there has been little activity in the last year. > > Tickets are linked to the relevant commits, differentials and other > tickets. There are 3000 dummy differentials which are used to test > that the linking works correctly. Of course with real data, the proper > differential would be > linked.(http://ec2-52-213-249-242.eu-west-1.compute.amazonaws.com/T11044) > > There are a couple of issues currently with the migration. There are a > few issues in the parser which converts trac markup to remarkup. Most > comments are very simple, with just paragraphs and code blocks, but > complex items like lists are sometimes parsed incorrectly. Definition > lists are converted to tables as there is no equivalent in remarkup. > Trac ticket links are converted to phab ticket links. > > The ideal time to migrate is before the end of January. The busiest > time for the issue tracker is before and after a new major release. > With 8.2 planned for around April this gives the transition a few > months to settle. We can close the trac issue tracker and continue to > serve it or preferably redirect users to the new ticket. I don't plan > to migrate the wiki at this stage as I do not feel that the parser is > robust enough although there are now few other technical challenges > blocking this direction. From matthewtpickering at gmail.com Wed Dec 21 23:05:36 2016 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Wed, 21 Dec 2016 23:05:36 +0000 Subject: Trac to Phabricator (Maniphest) migration prototype In-Reply-To: References: Message-ID: Ahh, nightmare.
The address changed on the reboot -- here is the new address. http://ec2-52-211-40-21.eu-west-1.compute.amazonaws.com Matt On Wed, Dec 21, 2016 at 10:12 AM, Matthew Pickering wrote: > Dear devs, > > I have completed writing a migration which moves tickets from trac to > phabricator. The conversion is essentially lossless. The trac > transaction history is replayed which means all events are transferred > with their original authors and timestamps. I welcome comments on the > work I have done so far, especially bugs as I have definitely not > looked at all 12000 tickets. > > http://ec2-52-213-249-242.eu-west-1.compute.amazonaws.com > > All the user accounts are automatically generated. If you want to see > the tracker from your perspective then send me an email or ping me on > IRC and I can set the password of the relevant account. > > NOTE: This is not a decision, the existence of this prototype is to > show that the migration is feasible in a satisfactory way and to > remove hypothetical arguments from the discussion. > > I must also thank Dan Palmer and Herbert who helped me along the way. > Dan was responsible for the first implementation and setting up much > of the infrastructure at the Haskell Exchange hackathon in October. We > extensively used the API bindings which Herbert had been working on. > > Further information below! > > Matt > > ===================================================================== > > Reasons > ====== > > Why this change? The main argument is consolidation. Having many > different services is confusing for new and old contributors. > Phabricator has proved effective as a code review tool. It is modern > and actively developed with a powerful feature set which we currently > only use a small fraction of. > > Trac is showing signs of its age. It is old and slow, users regularly > lose comments through accidentally refreshing their browser. Further to > this, the integration with other services is quite poor.
Commits do > not close tickets which mention them and the only link to commits is a > comment. Querying the tickets is also quite difficult, I usually > resort to using google search or my emails to find the relevant > ticket. > > > Why is Phabricator better? > ==================== > > Through learning more about Phabricator, there are many small things > that I think it does better which will improve the usability of the > issue tracker. I will list a few but I urge you to try it out. > > * Commits which mention ticket numbers are currently posted as trac > comments. There is better integration in phabricator as linking to > commits has first-class support. > * Links with differentials are also more direct than the current > custom field which means you must update two places when posting a > differential. > * Fields are verified so that misspelling user names is not possible > (see #12623 where Ben misspelled his name for example) > * This is also true for projects and other fields. Inspecting these > fields on trac you will find that the formatting on each ticket is > often quite different. > * Keywords are much more useful as the set of used keywords is discoverable. > * Related tickets are much more substantial as the status of related > tickets is reflected to parent ticket. > (http://ec2-52-213-249-242.eu-west-1.compute.amazonaws.com/T7724) > > Implementation > ============ > > Keywords are implemented as projects. A project is a combination of a > tag which can be used with any Phabricator object, a workboard to > organise tasks and a group of people who care about the topic. Not all > keywords are migrated. Only keywords with at least 5 tickets were > added to avoid lots of useless projects. The state of keywords is > still a bit unsatisfactory but I wanted to take this chance to clean > them up. > > Custom fields such as architecture and OS are replaced by *projects* > just like keywords. This has the same advantage as other projects.
> Users can be subscribed to projects and receive emails when new > tickets are tagged with a project. The large majority of tickets have > very little additional metadata set. I also implemented these as > custom fields but found the result to be less satisfactory. > > Some users who have trac accounts do not have phab accounts. > Fortunately it is easy to create new user accounts for these users > which have empty passwords which can be recovered by the appropriate > email address. This means tickets can be properly attributed in the > migration. > > The ticket numbers are maintained. I still advocate moving the > infrastructure tickets in order to maintain this mapping. Especially > as there has been little activity in the last year. > > Tickets are linked to the relevant commits, differentials and other > tickets. There are 3000 dummy differentials which are used to test > that the linking works correctly. Of course with real data, the proper > differential would be > linked.(http://ec2-52-213-249-242.eu-west-1.compute.amazonaws.com/T11044) > > There are a couple of issues currently with the migration. There are a > few issues in the parser which converts trac markup to remarkup. Most > comments are very simple, with just paragraphs and code blocks, but > complex items like lists are sometimes parsed incorrectly. Definition > lists are converted to tables as there is no equivalent in remarkup. > Trac ticket links are converted to phab ticket links. > > The ideal time to migrate is before the end of January. The busiest > time for the issue tracker is before and after a new major release. > With 8.2 planned for around April this gives the transition a few > months to settle. We can close the trac issue tracker and continue to > serve it or preferably redirect users to the new ticket.
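[Editorial note: the trac-markup-to-remarkup conversion discussed above can be sketched in a much-simplified form as follows. This is a toy illustration, not Matthew's actual migration code, and it assumes code fences sit alone on their lines.]

```haskell
-- Toy trac-markup to remarkup converter: rewrites {{{ / }}} code
-- fences into remarkup's triple-backtick fences. Real trac markup
-- also has inline fences, processor arguments, lists, tables, etc.,
-- which is where the parsing difficulties described above come from.
tracToRemarkup :: String -> String
tracToRemarkup = unlines . map convert . lines
  where
    convert line
      | stripped == "{{{" || stripped == "}}}" = "```"
      | otherwise                              = line
      where stripped = dropWhile (== ' ') line

main :: IO ()
main = putStr (tracToRemarkup "see:\n{{{\nf x = x + 1\n}}}\ndone\n")
```

Anything that needs real structure (definition lists, nested blocks) falls outside what a line-by-line rewrite like this can handle, which matches the limitations described in the email.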
I don't plan > to migrate the wiki at this stage as I do not feel that the parser is > robust enough although there are now few other technical challenges > blocking this direction. From david.feuer at gmail.com Thu Dec 22 01:19:06 2016 From: david.feuer at gmail.com (David Feuer) Date: Wed, 21 Dec 2016 20:19:06 -0500 Subject: Improving DeriveTraversable Message-ID: The role system is not currently able to use GND to derive Traversable instances. While we wait for future research to solve that problem, I think it would be nice to address a problem that can arise with DeriveTraversable: when newtypes stack up, fmaps also stack up. I've come up with a trick that I think could help solve the problem in at least some important cases. There may be a nicer solution (perhaps using associated types?), but I haven't found it yet. What I don't know is whether this arrangement works for all important "shapes" of newtypes, or what might be involved in automating it.

    -- Represents a traversal that may come up with a type that's
    -- a bit off, but not too far off. If you think about Coyoneda, this type
    -- might make more sense. Whereas Coyoneda builds up larger and
    -- larger *function compositions*, we just keep changing the coercion
    -- types.
    data Trav t b where
      Trav :: Coercible x (t b)
           => (forall f a . Applicative f => (a -> f b) -> t a -> f x)
           -> Trav t b

    class (Foldable t, Functor t) => Traversable t where
      traverse :: Applicative f => (a -> f b) -> t a -> f (t b)

      -- This new method is not intended to be exported by Data.Traversable,
      -- but only by some ghc-special module.
      trav :: Trav t b
      trav = Trav traverse
      {-# INLINE trav #-}

Here are some sample newtype instances.

    -- Convenience function from Data.Profunctor.Unsafe
    (#.) :: Coercible b c => (b -> c) -> (a -> b) -> a -> c
    _ #. g = coerce g
    {-# INLINE (#.) #-}

    -- Convenience function for changing a Trav type
    retrav :: Coercible u t => (forall a . u a -> t a) -> Trav t b -> Trav u b
    retrav extr (Trav t) = Trav ((. extr) #. t)

    -- Function for defining traverse proper. Note that this should
    -- *only* be used to define traverse for newtype wrappers;
    -- for other types, it will add an unnecessary fmap.
    travTraverse :: forall f t a b . (Traversable t, Applicative f)
                 => (a -> f b) -> t a -> f (t b)
    travTraverse = case trav :: Trav t b of
      Trav t -> \f xs -> fmap coerce (t f xs)
    {-# INLINE travTraverse #-}

    -- Sample types
    newtype F t x = F {getF :: t x} deriving (Functor, Foldable)
    newtype G t x = G {getG :: t x} deriving (Functor, Foldable)
    newtype H f x = H {getH :: F (G f) x} deriving (Functor, Foldable)

    instance Traversable t => Traversable (F t) where
      traverse = travTraverse
      trav = retrav getF trav

    instance Traversable t => Traversable (G t) where
      traverse = travTraverse
      trav = retrav getG trav

    instance Traversable t => Traversable (H t) where
      traverse = travTraverse
      trav = retrav getH trav

With these instances, traversing H t a will perform one fmap instead of three. David Feuer From mail at joachim-breitner.de Thu Dec 22 04:29:32 2016 From: mail at joachim-breitner.de (Joachim Breitner) Date: Wed, 21 Dec 2016 20:29:32 -0800 Subject: Mailing list CCs (Was: Confused about the sub-modules) In-Reply-To: <20161221201323.37da9eb918d12743556feb79@mega-nerd.com> References: <20161221173336.bfa2a14da3410d072751ff67@mega-nerd.com> <1482302869-sup-287@sabre> <1482305154-sup-3861@sabre> <20161221201323.37da9eb918d12743556feb79@mega-nerd.com> Message-ID: <1482380972.1090.6.camel@joachim-breitner.de> Hi Erik, On Wednesday, 21.12.2016, at 20:13 +1100, Erik de Castro Lopo wrote: > (No need to CC me, I'm subscribed to ghc-devs). I see where you are coming from, I also joined the Haskell community after having been socialized on Debian mailing lists. Slightly unfortunately, unsolicited CCs are the common social norm here, and we immigrants will have to adjust. It’s ok.
:-) Greetings, Joachim -- Joachim Breitner mail at joachim-breitner.de http://www.joachim-breitner.de/ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: This is a digitally signed message part URL: From ekmett at gmail.com Thu Dec 22 04:55:43 2016 From: ekmett at gmail.com (Edward Kmett) Date: Wed, 21 Dec 2016 23:55:43 -0500 Subject: Retro-Haskell: can we get seq somewhat under control? In-Reply-To: References: Message-ID: Actually, if you go back to the original form of Seq it would translate to

    data Seq a => Foo a = Foo !Int !a

which requires resurrecting DatatypeContexts, and not

    data Foo a = Seq a => Foo !Int !a

The former requires Seq to call the constructor, but doesn't pack the dictionary into the constructor. The latter lets you get the dictionary out when you pattern match on it, meaning it has to carry the dictionary around! Unfortunately, non-trivial functionality is lost. With the old DatatypeContext translation you can't always unpack and repack a constructor. Whereas with a change to an existential encoding you're carrying around a lot of dictionaries in precisely the structures that least want to carry extra weight. Both of these options suck relative to the status quo for different reasons. -Edward On Wed, Dec 21, 2016 at 2:14 PM, Index Int wrote: > There's a related GHC Proposal: > https://github.com/ghc-proposals/ghc-proposals/pull/27 > > On Wed, Dec 21, 2016 at 10:04 PM, David Feuer > wrote: > > In the Old Days (some time before Haskell 98), `seq` wasn't fully > > polymorphic. It could only be applied to instances of a certain class. > > I don't know the name that class had, but let's say Seq. Apparently, > > some people didn't like that, and now it's gone. I'd love to be able > > to turn on a language extension, use an alternate Prelude, and get it > > back.
I'm not ready to put up a full-scale proposal yet; I'm hoping > > some people may have suggestions for details. Some thoughts: > > > > 1. Why do you want that crazy thing, David? > > > > When implementing general-purpose lazy data structures, a *lot* of > > things need to be done strictly for efficiency. Often, the easiest way > > to do this is using either bang patterns or strict data constructors. > > Care is necessary to only ever force pieces of the data structure, and > > not the polymorphic data a user has stored in it. > > > > 2. Why does it need GHC support? > > > > It would certainly be possible to write alternative versions of `seq`, > > `$!`, and `evaluate` to use a user-supplied Seq class. It should even > > be possible to deal with strict data constructors by hand or > > (probably) using Template Haskell. For instance, > > > > data Foo a = Foo !Int !a > > > > would translate to normal GHC Haskell as > > > > data Foo a = Seq a => Foo !Int !a > > > > But only GHC can extend this to bang patterns, deal with the > > interactions with coercions, and optimize it thoroughly. > > > > 3. How does Seq interact with coercions and roles? > > > > I believe we'd probably want a special rule that > > > > (Seq a, Coercible a b) => Seq b > > > > Thanks to this rule, a Seq constraint on a type variable shouldn't > > prevent it from having a representational role. > > > > The downside of this rule is that if something *can* be forced, but we > > don't *want* it to be, then we have to hide it a little more carefully > > than we might like. This shouldn't be too hard, however, using a > > newtype defined in a separate module that exports a pattern synonym > > instead of a constructor, to hide the coercibility. > > > > 4. Optimize? What? > > > > Nobody wants Seq constraints blocking up specialization. Today, a > function > > > > foo :: (Seq a, Foldable f) => f a -> () > > > > won't specialize to the Foldable instance if the Seq instance is > > unknown. This is lousy. 
Furthermore, all Seq instances are the same. > > The RTS doesn't actually need a dictionary to force something to WHNF. > > The situation is somewhat similar to that of Coercible, *but more so*. > > Coercible sometimes needs to pass evidence at runtime to maintain type > > safety. But Seq carries no type safety hazard whatsoever--when > > compiling in "production mode", we can just *assume* that Seq evidence > > is valid, and erase it immediately after type checking; the worst > > thing that could possibly happen is that someone will force a function > > and get weird semantics. Further, we should *unconditionally* erase > > Seq evidence from datatypes; this is necessary to maintain > > compatibility with the usual data representations. I don't know if > > this unconditional erasure could cause "laziness safety" issues, but > > the system would be essentially unusable without it. > > > > 4. What would the language extension do, exactly? > > > > a. Automatically satisfy Seq for data types and families. > > b. Propagate Seq constraints using the usual rules and the special > > Coercible rule. > > c. Modify the translation of strict fields to add Seq constraints as > required. > > > > David Feuer > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.feuer at gmail.com Thu Dec 22 05:13:37 2016 From: david.feuer at gmail.com (David Feuer) Date: Thu, 22 Dec 2016 00:13:37 -0500 Subject: Retro-Haskell: can we get seq somewhat under control? In-Reply-To: References: Message-ID: I don't want to actually put the dictionary there. I want to *pretend* to put the dictionary there. 
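[Editor's note: the library-only emulation of Seq mentioned in point 2 of the quoted message can be sketched as below. The names Seq, seq', and strictApply are invented for illustration; they are not part of GHC or of any proposal.]

```haskell
-- A user-supplied class restricting what can be forced, in the spirit of
-- the pre-Haskell 98 Seq class discussed in this thread. Every instance
-- shares the default method, which delegates to the primitive seq.
class Seq a where
  seq' :: a -> b -> b
  seq' = seq

instance Seq Int
instance Seq Bool
instance Seq a => Seq [a]   -- forcing a list only forces its outermost constructor

-- Restricted strict application, analogous to ($!) but usable only at
-- Seq instances, so polymorphic user data cannot be forced by accident.
strictApply :: Seq a => (a -> b) -> a -> b
strictApply f x = x `seq'` f x

main :: IO ()
main = print (strictApply length [1 :: Int, 2, 3])  -- prints 3
```

As the thread notes, this user-level version cannot cover bang patterns or strict data fields; that is precisely what the requested GHC support would add.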
In testing mode, I want to be able to "take one out" by making it out of whole cloth; in production mode I want to just assume there are no bottoms in the constraints and never ever make the dictionaries. But this all is probably better discussed on the existing proposal, now that I know it exists. There are some considerable complications raised there. On Dec 21, 2016 11:55 PM, "Edward Kmett" wrote: > Actually, if you go back to the original form of Seq it would translate to > > data Seq a => Foo a = Foo !Int !a > > which requires resurrecting DatatypeContexts, and not > > data Foo a = Seq a => Foo !Int !a > > The former requires Seq to call the constructor, but doesn't pack the > dictionary into the constructor. The latter lets you get the dictionary out > when you pattern match on it. meaning it has to carry the dictionary around! > > Unfortunately, non-trivial functionality is lost. With the old > DatatypeContext translation you can't always unpack and repack a > constructor. Whereas with a change to an existential encoding you're > carrying around a lot of dictionaries in precisely the structures that > least want to carry extra weight. > > Both of these options suck relative to the status quo for different > reasons. > > -Edward > > On Wed, Dec 21, 2016 at 2:14 PM, Index Int wrote: > >> There's a related GHC Proposal: >> https://github.com/ghc-proposals/ghc-proposals/pull/27 >> >> On Wed, Dec 21, 2016 at 10:04 PM, David Feuer >> wrote: >> > In the Old Days (some time before Haskell 98), `seq` wasn't fully >> > polymorphic. It could only be applied to instances of a certain class. >> > I don't know the name that class had, but let's say Seq. Apparently, >> > some people didn't like that, and now it's gone. I'd love to be able >> > to turn on a language extension, use an alternate Prelude, and get it >> > back. I'm not ready to put up a full-scale proposal yet; I'm hoping >> > some people may have suggestions for details. Some thoughts: >> > >> > 1. 
Why do you want that crazy thing, David? >> > >> > When implementing general-purpose lazy data structures, a *lot* of >> > things need to be done strictly for efficiency. Often, the easiest way >> > to do this is using either bang patterns or strict data constructors. >> > Care is necessary to only ever force pieces of the data structure, and >> > not the polymorphic data a user has stored in it. >> > >> > 2. Why does it need GHC support? >> > >> > It would certainly be possible to write alternative versions of `seq`, >> > `$!`, and `evaluate` to use a user-supplied Seq class. It should even >> > be possible to deal with strict data constructors by hand or >> > (probably) using Template Haskell. For instance, >> > >> > data Foo a = Foo !Int !a >> > >> > would translate to normal GHC Haskell as >> > >> > data Foo a = Seq a => Foo !Int !a >> > >> > But only GHC can extend this to bang patterns, deal with the >> > interactions with coercions, and optimize it thoroughly. >> > >> > 3. How does Seq interact with coercions and roles? >> > >> > I believe we'd probably want a special rule that >> > >> > (Seq a, Coercible a b) => Seq b >> > >> > Thanks to this rule, a Seq constraint on a type variable shouldn't >> > prevent it from having a representational role. >> > >> > The downside of this rule is that if something *can* be forced, but we >> > don't *want* it to be, then we have to hide it a little more carefully >> > than we might like. This shouldn't be too hard, however, using a >> > newtype defined in a separate module that exports a pattern synonym >> > instead of a constructor, to hide the coercibility. >> > >> > 4. Optimize? What? >> > >> > Nobody wants Seq constraints blocking up specialization. Today, a >> function >> > >> > foo :: (Seq a, Foldable f) => f a -> () >> > >> > won't specialize to the Foldable instance if the Seq instance is >> > unknown. This is lousy. Furthermore, all Seq instances are the same. 
>> > The RTS doesn't actually need a dictionary to force something to WHNF. >> > The situation is somewhat similar to that of Coercible, *but more so*. >> > Coercible sometimes needs to pass evidence at runtime to maintain type >> > safety. But Seq carries no type safety hazard whatsoever--when >> > compiling in "production mode", we can just *assume* that Seq evidence >> > is valid, and erase it immediately after type checking; the worst >> > thing that could possibly happen is that someone will force a function >> > and get weird semantics. Further, we should *unconditionally* erase >> > Seq evidence from datatypes; this is necessary to maintain >> > compatibility with the usual data representations. I don't know if >> > this unconditional erasure could cause "laziness safety" issues, but >> > the system would be essentially unusable without it. >> > >> > 4. What would the language extension do, exactly? >> > >> > a. Automatically satisfy Seq for data types and families. >> > b. Propagate Seq constraints using the usual rules and the special >> > Coercible rule. >> > c. Modify the translation of strict fields to add Seq constraints as >> required. >> > >> > David Feuer >> > _______________________________________________ >> > ghc-devs mailing list >> > ghc-devs at haskell.org >> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chak at justtesting.org Thu Dec 22 06:24:59 2016 From: chak at justtesting.org (Manuel M T Chakravarty) Date: Thu, 22 Dec 2016 17:24:59 +1100 Subject: Trac to Phabricator (Maniphest) migration prototype In-Reply-To: References: Message-ID: <46C8494F-C98C-45BC-B0B0-D34546350671@justtesting.org> This looks good! 
Manuel > On 21.12.2016 at 21:12, Matthew Pickering wrote: > > Dear devs, > > I have completed writing a migration which moves tickets from trac to > phabricator. The conversion is essentially lossless. The trac > transaction history is replayed which means all events are transferred > with their original authors and timestamps. I welcome comments on the > work I have done so far, especially bugs as I have definitely not > looked at all 12000 tickets. > > http://ec2-52-213-249-242.eu-west-1.compute.amazonaws.com > > All the user accounts are automatically generated. If you want to see > the tracker from your perspective then send me an email or ping me on > IRC and I can set the password of the relevant account. > > NOTE: This is not a decision, the existence of this prototype is to > show that the migration is feasible in a satisfactory way and to > remove hypothetical arguments from the discussion. > > I must also thank Dan Palmer and Herbert who helped me along the way. > Dan was responsible for the first implementation and setting up much > of the infrastructure at the Haskell Exchange hackathon in October. We > extensively used the API bindings which Herbert had been working on. > > Further information below! > > Matt > > ===================================================================== > > Reasons > ====== > > Why this change? The main argument is consolidation. Having many > different services is confusing for new and old contributors. > Phabricator has proved effective as a code review tool. It is modern > and actively developed with a powerful feature set which we currently > only use a small fraction of. > > Trac is showing signs of its age. It is old and slow, users regularly > lose comments through accidentally refreshing their browser. Further to > this, the integration with other services is quite poor. Commits do > not close tickets which mention them and the only link to commits is a > comment.
Querying the tickets is also quite difficult; I usually > resort to using google search or my emails to find the relevant > ticket. > > > Why is Phabricator better? > ==================== > > Through learning more about Phabricator, there are many small things > that I think it does better which will improve the usability of the > issue tracker. I will list a few but I urge you to try it out. > > * Commits which mention ticket numbers are currently posted as trac > comments. There is better integration in phabricator as linking to > commits has first-class support. > * Links with differentials are also more direct than the current > custom field which means you must update two places when posting a > differential. > * Fields are verified so that misspelling user names is not possible > (see #12623 where Ben misspelled his name for example) > * This is also true for projects and other fields. Inspecting these > fields on trac you will find that the formatting on each ticket is > often quite different. > * Keywords are much more useful as the set of used keywords is discoverable. > * Related tickets are much more substantial as the status of related > tickets is reflected in the parent ticket. > (http://ec2-52-213-249-242.eu-west-1.compute.amazonaws.com/T7724) > > Implementation > ============ > > Keywords are implemented as projects. A project is a combination of a > tag which can be used with any Phabricator object, a workboard to > organise tasks and a group of people who care about the topic. Not all > keywords are migrated. Only keywords with at least 5 tickets were > added to avoid lots of useless projects. The state of keywords is > still a bit unsatisfactory but I wanted to take this chance to clean > them up. > > Custom fields such as architecture and OS are replaced by *projects* > just like keywords. This has the same advantage as other projects. > Users can be subscribed to projects and receive emails when new > tickets are tagged with a project.
The large majority of tickets have > very little additional metadata set. I also implemented these as > custom fields but found the result to be less satisfactory. > > Some users who have trac accounts do not have phab accounts. > Fortunately it is easy to create new user accounts for these users > which have empty passwords which can be recovered by the appropriate > email address. This means tickets can be properly attributed in the > migration. > > The ticket numbers are maintained. I still advocate moving the > infrastructure tickets in order to maintain this mapping. Especially > as there has been little activity in the last year. > > Tickets are linked to the relevant commits, differentials and other > tickets. There are 3000 dummy differentials which are used to test > that the linking works correctly. Of course with real data, the proper > differential would be > linked. (http://ec2-52-213-249-242.eu-west-1.compute.amazonaws.com/T11044) > > There are a couple of issues currently with the migration. There are a > few issues in the parser which converts trac markup to remarkup. Most > comments are very simple, with just paragraphs and code blocks, but > complex items like lists are sometimes parsed incorrectly. Definition > lists are converted to tables as there is no equivalent in remarkup. > Trac ticket links are converted to phab ticket links. > > The ideal time to migrate is before the end of January. The busiest > time for the issue tracker is before and after a new major release. > With 8.2 planned for around April this gives the transition a few > months to settle. We can close the trac issue tracker and continue to > serve it or preferably redirect users to the new tickets. I don't plan > to migrate the wiki at this stage as I do not feel that the parser is > robust enough although there are now few other technical challenges > blocking this direction.
> _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From gracjanpolak at gmail.com Thu Dec 22 11:08:38 2016 From: gracjanpolak at gmail.com (Gracjan Polak) Date: Thu, 22 Dec 2016 12:08:38 +0100 Subject: Trying to resurrect nofib/fibon Message-ID: Hi all, I went into the rabbit hole that starts here https://ghc.haskell.org/trac/ghc/ticket/11501 So far I know that: 1. fibon is not built regularly by any CI and it bitrotted significantly. 2. Makefiles for fibon use undefined variables like INPLACE_HSC2HS_PGM. Probably those were sourced from ghc/mk/*.mk, but not now. 3. Haskell sources are pre-AMP (this is easiest to fix). 4. Makefiles build object files one by one, and that fails when modules depend on each other. ghc-paths.mk sorts objects and that gives the wrong compile order. Does anybody know a commit hash or date when it last compiled? Knowing that this module (fibon) wasn't used for so long, does it make sense to fix it? How about removing it? -- Gracjan -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Thu Dec 22 12:26:16 2016 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 22 Dec 2016 12:26:16 +0000 Subject: User manual broken Message-ID: I’m getting tons of this stuff from the user manual typesetting. Might someone fix it? Simon /5playpen/simonpj/HEAD-4/docs/users_guide/8.0.2-notes.rst:162: WARNING: Inline interpreted text or phrase reference start-string without end-string. /5playpen/simonpj/HEAD-4/docs/users_guide/8.2.1-notes.rst:242: WARNING: Bullet list ends without a blank line; unexpected unindent. /5playpen/simonpj/HEAD-4/docs/users_guide/editing-guide.rst:350: ERROR: Error in "ghci-cmd" directive: invalid option block. .. ghci-cmd:: :module [*] Load a module /5playpen/simonpj/HEAD-4/docs/users_guide/eventlog-formats.rst:35: ERROR: Unexpected indentation.
/5playpen/simonpj/HEAD-4/docs/users_guide/eventlog-formats.rst:41: WARNING: Block quote ends without a blank line; unexpected unindent. /5playpen/simonpj/HEAD-4/docs/users_guide/eventlog-formats.rst:58: ERROR: Unexpected indentation. /5playpen/simonpj/HEAD-4/docs/users_guide/eventlog-formats.rst:104: ERROR: Unexpected indentation. /5playpen/simonpj/HEAD-4/docs/users_guide/ghci.rst:509: ERROR: Error in "warning" directive: invalid option block. .. warning:: Temporary bindings introduced at the prompt only last until the next :ghci-cmd:`:load` or :ghci-cmd:`:reload` command, at which time they will be simply lost. However, they do survive a change of context with :ghci-cmd:`:module`: the temporary bindings just move to the new location. /5playpen/simonpj/HEAD-4/docs/users_guide/ghci.rst:515: ERROR: Error in "hint" directive: invalid option block. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jwiegley at gmail.com Thu Dec 22 16:55:48 2016 From: jwiegley at gmail.com (John Wiegley) Date: Thu, 22 Dec 2016 08:55:48 -0800 Subject: Work on mail.haskell.org beginning, please report any problems Message-ID: Hello Haskellers, Beginning today, I am upgrading our Postfix installation on mail.haskell.org, and introducing some new options to reduce the amount of spam that hits our mailman server. If you experience delivery problems, or any bounced mail, please send a copy of the full bounce message to my address: jwiegley at gmail.com. I'll be making more changes gradually over the next few days, and watching the mail logs, but it's possible that mail accepted before will suddenly start getting rejected, depending on how well-behaved your sending mail server is. 
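[Editor's note: for readers unfamiliar with the tooling, the pre-queue spam filtering John describes is done with Postfix's postscreen and might be configured in main.cf roughly as below. The DNSBL weights and the second blocklist name are illustrative placeholders, not the actual haskell.org settings.]

```
# main.cf fragment (illustrative values only)
postscreen_greet_action = enforce
postscreen_dnsbl_sites = zen.spamhaus.org*2, bl.example-dnsbl.invalid*1
postscreen_dnsbl_threshold = 2
postscreen_dnsbl_action = enforce
```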
Activities planned for this Christmas break are: - [x] Upgrade Postfix to 2.11 - [X] Enable postscreen for pre-queue RBL filtering - [ ] DKIM sign messages sent from mailman - [ ] Implement DMARC policy (i.e., reject incoming messages improperly DKIM signed, or failing SPF check) - [ ] Prevent mail being spoofed from haskell.org addresses - [ ] Tighten sender and recipient restrictions - [ ] Re-assess inbound and outbound rate limits - [ ] Use SpamAssassin for post-queue filtering - [ ] If helpful, enable deep protocol pre-filtering - [ ] Document all the above, so others can help with e-mail admin Thank you, John Wiegley Haskell.org, infrastructure team From ben at well-typed.com Thu Dec 22 17:42:44 2016 From: ben at well-typed.com (Ben Gamari) Date: Thu, 22 Dec 2016 12:42:44 -0500 Subject: User manual broken In-Reply-To: References: Message-ID: <8737hf97xn.fsf@ben-laptop.smart-cactus.org> Simon Peyton Jones via ghc-devs writes: > I’m getting tons of this stuff from the user manual type setting. Might someone fix it? Yes, I'm on it. Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From ben at well-typed.com Thu Dec 22 18:12:14 2016 From: ben at well-typed.com (Ben Gamari) Date: Thu, 22 Dec 2016 13:12:14 -0500 Subject: User manual broken In-Reply-To: References: Message-ID: <87zijn7s01.fsf@ben-laptop.smart-cactus.org> Simon Peyton Jones via ghc-devs writes: > I’m getting tons of this stuff from the user manual type setting. Might someone fix it? Hmm, I've tried a few environments and have so far been unable to reproduce this. What version of sphinx-build are you using (e.g. sphinx-build --version)? Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 454 bytes Desc: not available URL: From ben at well-typed.com Thu Dec 22 19:38:58 2016 From: ben at well-typed.com (Ben Gamari) Date: Thu, 22 Dec 2016 14:38:58 -0500 Subject: [ANNOUNCE] GHC 8.0.2 release candidate 2 Message-ID: <87vaub7nzh.fsf@ben-laptop.smart-cactus.org> Hello everyone, The GHC team is happy to announce the second candidate of the 8.0.2 release of the Glasgow Haskell Compiler. Source and binary distributions are available at http://downloads.haskell.org/~ghc/8.0.2-rc2/ This is the second and likely final release candidate leading up to the 8.0.2 release. This release will fix a number of bugs found in 8.0.1 including: * Interface file build determinism (#4012). * Compatibility with macOS Sierra and GCC compilers which compile position-independent executables by default * Runtime linker fixes on Windows (see #12797) * A compiler bug which resulted in undefined reference errors while compiling some packages (see #12076) * Compatibility with systems which use the gold linker * A number of memory consistency bugs in the runtime system * A number of efficiency issues in the threaded runtime which manifest on larger core counts and large numbers of bound threads. * A typechecker bug which caused some programs using -XDefaultSignatures to be incorrectly accepted. * More than two-hundred other bugs. See Trac [1] for a complete listing. This release candidate fixes a number of issues present in -rc1: * #12757, which led to broken runtime behavior and even crashes in the presence of primitive strings. * #12844, a type inference issue affecting partial type signatures. * A bump of the `directory` library, fixing buggy path canonicalization behavior (#12894). Unfortunately this required a major version bump in `directory` and minor bumps in several other libraries. * #12912, where use of the `select` system call would lead to runtime system failures with large numbers of open file handles.
If all goes well we should have a final 8.0.2 release out shortly after the new year. As always, let us know if you encounter trouble. Thanks to everyone who has contributed so far! Happy testing, - Ben [1] https://ghc.haskell.org/trac/ghc/query?status=closed&milestone=8.0.2&col=id&col=summary&col=status&col=type&col=priority&col=milestone&col=component&order=priority -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From chrisdone at gmail.com Thu Dec 22 21:53:07 2016 From: chrisdone at gmail.com (Christopher Done) Date: Thu, 22 Dec 2016 21:53:07 +0000 Subject: Competing with C in a simple loop In-Reply-To: References: Message-ID: Purely as an experiment, I've written a function that uses ByteString to simply elemIndex its way across a string here. Look for <, then look for >. Repeat until done. https://github.com/chrisdone/xeno (under src/Xeno.hs) But if you scroll down the README to the 182kb file example, you see that hexml takes 33us and xeno takes 111us. That's surprising to me because I'm doing just a walk across a string and hexml is doing a full parse. It's written in C, but still, 3x faster AND doing allocations and more work. I tried replacing the ByteString with a raw Ptr Word8 and it didn't make a difference; it actually increased the time a little bit. My weigh results indicate that it's not doing any allocations during the process, at least nothing linear or above. So, I haven't looked at the core or asm yet, but I'm guessing it's simply doing more instructions and/or indirections than necessary. You can reproduce this with stack build --bench xeno Can anyone make an improvement to the speed? I already nerd-sniped myself enough with this, so I'm spreading the bug elsewhere. I think it's a pretty good "raw" performance exercise and possibly something that could serve as a tutorial on Haskell performance. Ciao!
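[Editor's note: the loop described above (hop to the next '<' with elemIndex, then to the closing '>', and repeat) can be sketched as follows. countTags is an invented name for illustration; this is not the actual Xeno source.]

```haskell
{-# LANGUAGE BangPatterns #-}
import qualified Data.ByteString as B
import qualified Data.ByteString.Char8 as C
import Data.ByteString (ByteString)

-- Walk the document by hopping between angle brackets with elemIndex,
-- counting the tags passed; no real parsing is done.
countTags :: ByteString -> Int
countTags = go 0
  where
    go !n bs =
      case B.elemIndex 60 bs of                        -- 60 == '<'
        Nothing -> n
        Just i  ->
          case B.elemIndex 62 (B.drop (i + 1) bs) of   -- 62 == '>'
            Nothing -> n
            Just j  -> go (n + 1) (B.drop (i + j + 2) bs)

main :: IO ()
main = print (countTags (C.pack "<a><b>text</b></a>"))  -- prints 4
```

Since elemIndex on strict ByteStrings compiles down to memchr, the interesting question raised in the thread is where the remaining 3x over hexml goes.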
-------------- next part -------------- An HTML attachment was scrubbed... URL: From ekmett at gmail.com Thu Dec 22 21:58:44 2016 From: ekmett at gmail.com (Edward Kmett) Date: Thu, 22 Dec 2016 16:58:44 -0500 Subject: Magical function to support reflection In-Reply-To: References: Message-ID: On Mon, Dec 12, 2016 at 1:31 PM, David Feuer wrote: > On Dec 12, 2016 1:15 PM, "Edward Kmett" wrote: > > A few thoughts in no particular order: > > Unlike this proposal, the existing 'reify' itself as core can actually be > made well typed. > > > Can you explain this? > I mean just that. If you look at the core generated by the existing 'reify' combinator, nothing it does is 'evil'. We're allowing it to construct a dictionary. That isn't unsound where core is concerned. Where the surface language is concerned the uniqueness of that dictionary is preserved by the quantifier introducing a new type generatively in the local context, so the usual problems with dictionary construction are defused. Tagged in the example could be replaced with explicit type application if > backwards compatibility isn't a concern. OTOH, it is. > > > Would that help Core typing? > It doesn't make a difference there. The only thing is it avoids needing to make up something like Tagged. > > On the other other hand, if you're going to be magic, you might as well > go all the way to something like: > > reify# :: (p => r) -> a -> r > > > How would we implement reify in terms of this variant? > That I don't have the answer to. It seems like it should work though. and admit both fundep and TF forms. I mean, if you're going to lie you > might as well lie big. > > > Definitely. > > There are a very large number of instances out there scattered across > dozens of packages that would be broken by switching from Proxy to Tagged > or explicit type application internally. 
(I realize that this is a lesser > concern that can be resolved by a major version bump and some community > friction, but it does mean pragmatically that migrating to something like > this would need a plan.) > > > I just want to make sure that we do what we need to get Really Good Code, > if we're going to the trouble of adding compiler support. > That makes sense to me. -Edward -------------- next part -------------- An HTML attachment was scrubbed... URL: From mle+hs at mega-nerd.com Thu Dec 22 22:43:51 2016 From: mle+hs at mega-nerd.com (Erik de Castro Lopo) Date: Fri, 23 Dec 2016 09:43:51 +1100 Subject: Competing with C in a simple loop In-Reply-To: References: Message-ID: <20161223094351.e0783d1a73918642d7703ab5@mega-nerd.com> Christopher Done wrote: > But if you scroll down the README to the 182kb file example, you see that > hexml takes 33us and xeno takes 111us. That's surprising to me because I'm > doing just a walk across a string and hexml is doing a full parse. It's > written in C, but still, 3x faster AND doing allocations and more work. > > I tried replacing the ByteString with a raw Ptr Word8 and it didn't make a > difference, actually increased time a little bit. The code you have written still looks like Haskell code. When I write Haskell code that needs to compete speedwise with C, it usually ends up looking like C as well. My suggestion is to drop `Data.ByteString.elemIndex` in favour of direct unsafe array accesses. If I find a bit of time over the next couple of days I might have a crack at this. 
Erik -- ---------------------------------------------------------------------- Erik de Castro Lopo http://www.mega-nerd.com/ From george.colpitts at gmail.com Thu Dec 22 23:23:58 2016 From: george.colpitts at gmail.com (George Colpitts) Date: Thu, 22 Dec 2016 23:23:58 +0000 Subject: [ANNOUNCE] GHC 8.0.2 release candidate 2 In-Reply-To: <87vaub7nzh.fsf@ben-laptop.smart-cactus.org> References: <87vaub7nzh.fsf@ben-laptop.smart-cactus.org> Message-ID: compiled from source with no issues on Mac OS 10.12.2 with XCode 8.2.1. Compiled vector package with it, did some smoke testing of the runtime, seems fine On Thu, Dec 22, 2016 at 3:39 PM Ben Gamari wrote: > > Hello everyone, > > The GHC team is happy to announce the second candiate of the > 8.0.2 release of the Glasgow Haskell Compiler. Source and binary > distributions are available at > > http://downloads.haskell.org/~ghc/8.0.2-rc2/ > > This is the second and likely final release candidate leading up the > 8.0.2 release. This release will fix a number of bugs found in 8.0.1 > including, > > * Interface file build determinism (#4012). > > * Compatibility with macOS Sierra and GCC compilers which compile > position-independent executables by default > > * Runtime linker fixes on Windows (see #12797) > > * A compiler bug which resulted in undefined reference errors while > compiling some packages (see #12076) > > * Compatability with systems which use the gold linker > > * A number of memory consistency bugs in the runtime system > > * A number of efficiency issues in the threaded runtime which manifest > on larger core counts and large numbers of bound threads. > > * A typechecker bug which caused some programs using > -XDefaultSignatures to be incorrectly accepted. > > * More than two-hundred other bugs. See Trac [1] for a complete > listing. > > This release candidate fixes a number of issues present in -rc1, > > * #12757, which lead to broken runtime behavior and even crashes in > the presence of primitive strings. 
> > * #12844, a type inference issue affecting partial type signatures. > > * A bump of the `directory` library, fixing buggy path > canonicalization behavior (#12894). Unfortunately this required a > major version bump in `directory` and minor bumps in several other > libraries. > > * #12912, where use of the `select` system call would lead to runtime > system failures with large numbers of open file handles. > > If all goes well we should have a final 8.0.2 release out shortly after > the new year. As always, let us know if you encounter trouble. Thanks to > everyone who has contributed so far! > > Happy testing, > > - Ben > > > [1] > https://ghc.haskell.org/trac/ghc/query?status=closed&milestone=8.0.2&col=id&col=summary&col=status&col=type&col=priority&col=milestone&col=component&order=priority > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.feuer at gmail.com Thu Dec 22 23:55:41 2016 From: david.feuer at gmail.com (David Feuer) Date: Thu, 22 Dec 2016 18:55:41 -0500 Subject: Magical function to support reflection In-Reply-To: References: Message-ID: On Thu, Dec 22, 2016 at 4:58 PM, Edward Kmett wrote: > On Mon, Dec 12, 2016 at 1:31 PM, David Feuer wrote: >> >> On Dec 12, 2016 1:15 PM, "Edward Kmett" wrote: >> >> A few thoughts in no particular order: >> >> Unlike this proposal, the existing 'reify' itself as core can actually be >> made well typed. >> >> >> Can you explain this? > > I mean just that. If you look at the core generated by the existing 'reify' > combinator, nothing it does is 'evil'. We're allowing it to construct a > dictionary. That isn't unsound where core is concerned. So what *is* evil about my Tagged approach? Or do you just mean that the excessive polymorphism is evil? 
There's no doubt that it is, but the only ways I see to avoid it are to bake in a particular Reifies class, which is a different kind of evil, or to come up with a way to express the constraint that the class has exactly one method, which is Extreme Overkill. > Where the surface language is concerned the uniqueness of that dictionary is > preserved by the quantifier introducing a new type generatively in the local > context, so the usual problems with dictionary construction are defused. >> On the other other hand, if you're going to be magic, you might as well >> go all the way to something like: >> >> reify# :: (p => r) -> a -> r >> >> >> How would we implement reify in terms of this variant? > > That I don't have the answer to. It seems like it should work though. I think it does. I've changed the reify# type a bit to avoid an ambiguity I couldn't resolve. newtype Constrain p r = Constrain (p => r) reify# :: Constrain p r -> a -> r Using my Tagged definition of Reifies, we get reify' :: forall a r . (forall s . Reifies s a => Tagged s r) -> a -> r reify' f = reify# (Constrain (unTagged (f :: Tagged s r)) :: forall s . Constrain (Reifies s a) r) reify :: forall a r . a -> (forall s . Reifies s a => Proxy s -> r) -> r reify a f = reify# (Constrain (f (Proxy :: Proxy s)) :: forall s . Constrain (Reifies s a) r) a Using your proxy version, things are trickier, but I think it's reify :: forall a r . a -> (forall s . Reifies s a => Proxy s -> r) -> r reify a f = (reify# (Constrain (f (Proxy :: Proxy s)) :: forall s . Constrain (Reifies s a) r)) (const a :: forall proxy s . 
proxy s -> a)
David
From george.colpitts at gmail.com Fri Dec 23 00:05:49 2016 From: george.colpitts at gmail.com (George Colpitts) Date: Fri, 23 Dec 2016 00:05:49 +0000 Subject: [ANNOUNCE] GHC 8.0.2 release candidate 2 In-Reply-To: References: <87vaub7nzh.fsf@ben-laptop.smart-cactus.org> Message-ID: binary works fine also, same env, same testing as with compiled source. I did notice a very minor infelicity in the TOC for the pdf User's Guide. I think it was there in rc1 also. The formatting seems to assume there are never more than 3 digits on lines such as the following and is missing a space when there are 4 digits, e.g.
10.7 Pattern synonyms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
10.8 Class and instances declarations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
10.9 Type families . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294
10.10Datatype promotion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
10.11Kind polymorphism and Type-in-Type . . . . . . . . . . . . . . . . . . . . . . . . . . 311
10.12Runtime representation polymorphism . .
On Thu, Dec 22, 2016 at 7:23 PM George Colpitts wrote: > compiled from source with no issues on Mac OS 10.12.2 with XCode 8.2.1. > Compiled vector package with it, did some smoke testing of the runtime, > seems fine > > On Thu, Dec 22, 2016 at 3:39 PM Ben Gamari wrote: > > > Hello everyone, > > The GHC team is happy to announce the second candidate of the > 8.0.2 release of the Glasgow Haskell Compiler. Source and binary > distributions are available at > > http://downloads.haskell.org/~ghc/8.0.2-rc2/ > > This is the second and likely final release candidate leading up to the > 8.0.2 release. This release will fix a number of bugs found in 8.0.1 > including, > > * Interface file build determinism (#4012).
> > * Compatibility with macOS Sierra and GCC compilers which compile > position-independent executables by default > > * Runtime linker fixes on Windows (see #12797) > > * A compiler bug which resulted in undefined reference errors while > compiling some packages (see #12076) > > * Compatability with systems which use the gold linker > > * A number of memory consistency bugs in the runtime system > > * A number of efficiency issues in the threaded runtime which manifest > on larger core counts and large numbers of bound threads. > > * A typechecker bug which caused some programs using > -XDefaultSignatures to be incorrectly accepted. > > * More than two-hundred other bugs. See Trac [1] for a complete > listing. > > This release candidate fixes a number of issues present in -rc1, > > * #12757, which lead to broken runtime behavior and even crashes in > the presence of primitive strings. > > * #12844, a type inference issue affecting partial type signatures. > > * A bump of the `directory` library, fixing buggy path > canonicalization behavior (#12894). Unfortunately this required a > major version bump in `directory` and minor bumps in several other > libraries. > > * #12912, where use of the `select` system call would lead to runtime > system failures with large numbers of open file handles. > > If all goes well we should have a final 8.0.2 release out shortly after > the new year. As always, let us know if you encounter trouble. Thanks to > everyone who has contributed so far! > > Happy testing, > > - Ben > > > [1] > https://ghc.haskell.org/trac/ghc/query?status=closed&milestone=8.0.2&col=id&col=summary&col=status&col=type&col=priority&col=milestone&col=component&order=priority > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From david.feuer at gmail.com Fri Dec 23 00:19:28 2016 From: david.feuer at gmail.com (David Feuer) Date: Thu, 22 Dec 2016 19:19:28 -0500 Subject: Magical function to support reflection In-Reply-To: References: Message-ID: I meant to define reify for the Tagged representation in terms of reify': reify :: forall a r . a -> (forall (s :: *) . Reifies s a => Proxy s -> r) -> r reify a f = reify' (unproxy f) a Further, I figured I'd look into reifyNat, and I came up with this: reifyNat' :: forall a r . (forall (n :: Nat) . KnownNat n => Tagged n r) -> Integer -> r reifyNat' f = reify# (Constrain (unTagged (f :: Tagged n r)) :: forall (n :: Nat) . Constrain (KnownNat n) r) On Thu, Dec 22, 2016 at 6:55 PM, David Feuer wrote: > On Thu, Dec 22, 2016 at 4:58 PM, Edward Kmett wrote: >> On Mon, Dec 12, 2016 at 1:31 PM, David Feuer wrote: >>> >>> On Dec 12, 2016 1:15 PM, "Edward Kmett" wrote: >>> >>> A few thoughts in no particular order: >>> >>> Unlike this proposal, the existing 'reify' itself as core can actually be >>> made well typed. >>> >>> >>> Can you explain this? >> >> I mean just that. If you look at the core generated by the existing 'reify' >> combinator, nothing it does is 'evil'. We're allowing it to construct a >> dictionary. That isn't unsound where core is concerned. > > So what *is* evil about my Tagged approach? Or do you just mean that > the excessive polymorphism is evil? There's no doubt that it is, but > the only ways I see to avoid it are to bake in a particular Reifies > class, which is a different kind of evil, or to come up with a way to > express the constraint that the class has exactly one method, which is > Extreme Overkill. > >> Where the surface language is concerned the uniqueness of that dictionary is >> preserved by the quantifier introducing a new type generatively in the local >> context, so the usual problems with dictionary construction are defused. 
> >>> On the other other hand, if you're going to be magic, you might as well >>> go all the way to something like: >>> >>> reify# :: (p => r) -> a -> r >>> >>> >>> How would we implement reify in terms of this variant? >> >> That I don't have the answer to. It seems like it should work though. > > I think it does. I've changed the reify# type a bit to avoid an > ambiguity I couldn't resolve. > > newtype Constrain p r = Constrain (p => r) > > reify# :: Constrain p r -> a -> r > > Using my Tagged definition of Reifies, we get > > reify' :: forall a r . (forall s . Reifies s a => Tagged s r) -> a -> r > reify' f = reify# (Constrain (unTagged (f :: Tagged s r)) :: forall s > . Constrain (Reifies s a) r) > > reify :: forall a r . a -> (forall s . Reifies s a => Proxy s -> r) -> r > reify a f = reify# (Constrain (f (Proxy :: Proxy s)) :: forall s . > Constrain (Reifies s a) r) a > > Using your proxy version, things are trickier, but I think it's > > reify :: forall a r . a -> (forall s . Reifies s a => Proxy s -> r) -> r > reify a f = (reify# (Constrain (f (Proxy :: Proxy s)) :: forall s . > Constrain (Reifies s a) r)) (const a :: forall proxy s . proxy s -> a) > > David From fumiexcel at gmail.com Fri Dec 23 01:14:44 2016 From: fumiexcel at gmail.com (Fumiaki Kinoshita) Date: Fri, 23 Dec 2016 10:14:44 +0900 Subject: [ANNOUNCE] GHC 8.0.2 release candidate 2 In-Reply-To: <87vaub7nzh.fsf@ben-laptop.smart-cactus.org> References: <87vaub7nzh.fsf@ben-laptop.smart-cactus.org> Message-ID: Hello, is there any chance to get this in the 8.0.2 release? The bug has been the obstacle to our plan of switching to GHC 8.0. Now that there is an easy fix, I'd really like to see this in the next release. https://git.haskell.org/ghc.git/commitdiff/0d213c18b6962bb65e2b3035a258dd 3f5bf454dd (addresses #12899) 2016-12-23 4:38 GMT+09:00 Ben Gamari : > > Hello everyone, > > The GHC team is happy to announce the second candiate of the > 8.0.2 release of the Glasgow Haskell Compiler. 
Source and binary > distributions are available at > > http://downloads.haskell.org/~ghc/8.0.2-rc2/ > > This is the second and likely final release candidate leading up the > 8.0.2 release. This release will fix a number of bugs found in 8.0.1 > including, > > * Interface file build determinism (#4012). > > * Compatibility with macOS Sierra and GCC compilers which compile > position-independent executables by default > > * Runtime linker fixes on Windows (see #12797) > > * A compiler bug which resulted in undefined reference errors while > compiling some packages (see #12076) > > * Compatability with systems which use the gold linker > > * A number of memory consistency bugs in the runtime system > > * A number of efficiency issues in the threaded runtime which manifest > on larger core counts and large numbers of bound threads. > > * A typechecker bug which caused some programs using > -XDefaultSignatures to be incorrectly accepted. > > * More than two-hundred other bugs. See Trac [1] for a complete > listing. > > This release candidate fixes a number of issues present in -rc1, > > * #12757, which lead to broken runtime behavior and even crashes in > the presence of primitive strings. > > * #12844, a type inference issue affecting partial type signatures. > > * A bump of the `directory` library, fixing buggy path > canonicalization behavior (#12894). Unfortunately this required a > major version bump in `directory` and minor bumps in several other > libraries. > > * #12912, where use of the `select` system call would lead to runtime > system failures with large numbers of open file handles. > > If all goes well we should have a final 8.0.2 release out shortly after > the new year. As always, let us know if you encounter trouble. Thanks to > everyone who has contributed so far! 
> > Happy testing, > > - Ben > > > [1] https://ghc.haskell.org/trac/ghc/query?status=closed& > milestone=8.0.2&col=id&col=summary&col=status&col=type& > col=priority&col=milestone&col=component&order=priority > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ekmett at gmail.com Fri Dec 23 12:46:38 2016 From: ekmett at gmail.com (Edward Kmett) Date: Fri, 23 Dec 2016 07:46:38 -0500 Subject: Magical function to support reflection In-Reply-To: References: Message-ID: I wasn't referring to Tagged itself being evil. I was referring to giving an excessively general type to reify# that can be used to generate segfaults as being evil. The existing reify combinator doesn't have that property, but can't be used to build KnownNat and KnownSymbol dictionaries. (Hence why there are specialized combinators for those in reflection.) -Edward On Thu, Dec 22, 2016 at 6:55 PM, David Feuer wrote: > On Thu, Dec 22, 2016 at 4:58 PM, Edward Kmett wrote: > > On Mon, Dec 12, 2016 at 1:31 PM, David Feuer > wrote: > >> > >> On Dec 12, 2016 1:15 PM, "Edward Kmett" wrote: > >> > >> A few thoughts in no particular order: > >> > >> Unlike this proposal, the existing 'reify' itself as core can actually > be > >> made well typed. > >> > >> > >> Can you explain this? > > > > I mean just that. If you look at the core generated by the existing > 'reify' > > combinator, nothing it does is 'evil'. We're allowing it to construct a > > dictionary. That isn't unsound where core is concerned. > > So what *is* evil about my Tagged approach? Or do you just mean that > the excessive polymorphism is evil? 
There's no doubt that it is, but > the only ways I see to avoid it are to bake in a particular Reifies > class, which is a different kind of evil, or to come up with a way to > express the constraint that the class has exactly one method, which is > Extreme Overkill. > > > Where the surface language is concerned the uniqueness of that > dictionary is > > preserved by the quantifier introducing a new type generatively in the > local > > context, so the usual problems with dictionary construction are defused. > > >> On the other other hand, if you're going to be magic, you might as well > >> go all the way to something like: > >> > >> reify# :: (p => r) -> a -> r > >> > >> > >> How would we implement reify in terms of this variant? > > > > That I don't have the answer to. It seems like it should work though. > > I think it does. I've changed the reify# type a bit to avoid an > ambiguity I couldn't resolve. > > newtype Constrain p r = Constrain (p => r) > > reify# :: Constrain p r -> a -> r > > Using my Tagged definition of Reifies, we get > > reify' :: forall a r . (forall s . Reifies s a => Tagged s r) -> a -> r > reify' f = reify# (Constrain (unTagged (f :: Tagged s r)) :: forall s > . Constrain (Reifies s a) r) > > reify :: forall a r . a -> (forall s . Reifies s a => Proxy s -> r) -> r > reify a f = reify# (Constrain (f (Proxy :: Proxy s)) :: forall s . > Constrain (Reifies s a) r) a > > Using your proxy version, things are trickier, but I think it's > > reify :: forall a r . a -> (forall s . Reifies s a => Proxy s -> r) -> r > reify a f = (reify# (Constrain (f (Proxy :: Proxy s)) :: forall s . > Constrain (Reifies s a) r)) (const a :: forall proxy s . proxy s -> a) > > David > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From harendra.kumar at gmail.com Fri Dec 23 14:55:42 2016 From: harendra.kumar at gmail.com (Harendra Kumar) Date: Fri, 23 Dec 2016 20:25:42 +0530 Subject: Competing with C in a simple loop In-Reply-To: References: Message-ID: On 23 December 2016 at 03:23, Christopher Done wrote: > > But if you scroll down the README to the 182kb file example, you see that > hexml takes 33us and xeno takes 111us. That's surprising to me because I'm > doing just a walk across a string and hexml is doing a full parse. It's > written in C, but still, 3x faster AND doing allocations and more work. > hexml, being a full parser, might fail; your program, on the other hand, unconditionally walks the bytestring. Are you sure hexml is actually completing and not aborting or short-circuiting because of a parse error or some other error? In all other data points xeno takes much less time than hexml except this one. So I suspect it could be a problem with the input, making hexml fail silently. I see that the file used for this data point has Japanese characters; maybe hexml is not able to handle those? -harendra -------------- next part -------------- An HTML attachment was scrubbed... URL: From chrisdone at gmail.com Fri Dec 23 15:08:44 2016 From: chrisdone at gmail.com (Christopher Done) Date: Fri, 23 Dec 2016 15:08:44 +0000 Subject: Competing with C in a simple loop In-Reply-To: References: Message-ID: Oh, you're correct! It's unable to parse that file! The file is a test suite file from the XML spec; I guess hexml is unable to parse this one. I'll remove it from my benchmark suite in favor of something that does parse. Cheers! On 23 December 2016 at 14:55, Harendra Kumar wrote: > > On 23 December 2016 at 03:23, Christopher Done > wrote: >> >> But if you scroll down the README to the 182kb file example, you see that >> hexml takes 33us and xeno takes 111us. That's surprising to me because I'm >> doing just a walk across a string and hexml is doing a full parse.
It's >> written in C, but still, 3x faster AND doing allocations and more work. >> > > hexml being a full parser might fail, on the other hand your program > unconditionally walks the bytestring. Are you sure hexml is actually > completing and not aborting or short-circuiting because of a parse error or > some other error? In all other data points xeno is taking much less time > than hexml except this one. So I am suspecting it could be a problem with > the input, making hexml fail silently. I see that the file used on this > data point has japanese characters, maybe hexml is not able to handle those? > > -harendra > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rae at cs.brynmawr.edu Sat Dec 24 02:46:10 2016 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Fri, 23 Dec 2016 21:46:10 -0500 Subject: I have Aphronted Phab Message-ID: Hi Austin, Ben, In trying to write a comment on Phab, I got this: Unhandled Exception ("AphrontCSRFException") You are trying to save some data to Phabricator, but the request your browser made included an incorrect token. Reload the page and try again. You may need to clear your cookies. This was a Web request. This request had an invalid CSRF token. I followed the suggestions, and even tried a different browser (one I never use), to no avail. This still may very well be my fault, but it seems something is awry somewhere. Do you have any help you can offer? Thanks! Richard -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rae at cs.brynmawr.edu Sat Dec 24 03:27:15 2016 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Fri, 23 Dec 2016 22:27:15 -0500 Subject: Trac to Phabricator (Maniphest) migration prototype In-Reply-To: References: <7a416d04-659a-4e1e-7b52-02f8e933e28f@haskus.fr> Message-ID: <84D7C33C-A16A-4625-B0C2-1FA3A973BD12@cs.brynmawr.edu> > On Dec 21, 2016, at 10:02 AM, Matthew Pickering wrote: > > Then looking at the culprits for some fun: > > (simonpj,192) > (goldfire,123) > (bgamari,116) > (thomie,102) > (nomeata,30) > (rwbarton,28) > (RyanGlScott,19) > (simonmar,18) Ha. I also use the long form comment:XX:ticket:YY sometimes, for cross-ticket and from-wiki comment referencing. These are somewhat rare, but I doubt it would be hard to preserve this form, if you're aware of it. Thanks for looking into it! Richard From rae at cs.brynmawr.edu Sat Dec 24 03:32:16 2016 From: rae at cs.brynmawr.edu (Richard Eisenberg) Date: Fri, 23 Dec 2016 22:32:16 -0500 Subject: Help needed: Restrictions of proc-notation with RebindableSyntax In-Reply-To: <20161221220945.GD22125@weber> References: <84B44086-45A5-41D8-AAC9-DCB848C1CD39@cs.brynmawr.edu> <808e9d01-6eb1-f02d-ffff-b18fec8bffd5@gmail.com> <1B3F2638-2ECA-4ABD-B098-3D7AF1A57C15@gmail.com> <20161221220945.GD22125@weber> Message-ID: To clarify my comments in this thread around desugaring: I was referring to the concrete Haskell code as written in GHC, not at all to an abstract desugaring algorithm. The implementation of arrows in GHC uses fixM, which is a nuisance. And I don't understand the code well enough to be able to untie the knot. I have a solid workaround for the time being, but it's indeed a workaround. Richard > On Dec 21, 2016, at 5:09 PM, Tom Ellis wrote: > > On Wed, Dec 21, 2016 at 01:49:33PM -0600, amindfv at gmail.com wrote: >> Additionally, Opaleye uses Arrow syntax pretty heavily iirc. 
> > If I were writing the Opaleye tutorial today (and if I rewrite it) I will > shy away from arrows and encourage users to use applicative style. There's > only one operator where applicative is not enough, 'restrict', and that can > be wrapped up as a different combinator so that no one knows they're ever > using arrows. > > Tom > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From ezyang at mit.edu Sat Dec 24 04:30:54 2016 From: ezyang at mit.edu (Edward Z. Yang) Date: Fri, 23 Dec 2016 23:30:54 -0500 Subject: I have Aphronted Phab In-Reply-To: References: Message-ID: <1482553804-sup-9492@sabre> Hi Richard, The last time this happened to me, it was because I was accessing Phabricator on http:// rather than https://. Take a look. Edward Excerpts from Richard Eisenberg's message of 2016-12-23 21:46:10 -0500: > Hi Austin, Ben, > > In trying to write a comment on Phab, I got this: > > Unhandled Exception ("AphrontCSRFException") > You are trying to save some data to Phabricator, but the request your browser made included an incorrect token. Reload the page and try again. You may need to clear your cookies. > > This was a Web request. > This request had an invalid CSRF token. > I followed the suggestions, and even tried a different browser (one I never use), to no avail. This still may very well be my fault, but it seems something is awry somewhere. Do you have any help you can offer? > > Thanks! > Richard From gracjanpolak at gmail.com Sat Dec 24 06:29:45 2016 From: gracjanpolak at gmail.com (Gracjan Polak) Date: Sat, 24 Dec 2016 07:29:45 +0100 Subject: [GHC] #13005: Mac OSX uses MAP_ANON in place of MAP_ANONYMOUS In-Reply-To: <061.b648798dd15584eecd220be23f713fc4@haskell.org> References: <046.d43a09b5f43348df6b22d776514a766b@haskell.org> <061.b648798dd15584eecd220be23f713fc4@haskell.org> Message-ID: No, `master`.
2016-12-23 23:53 GMT+01:00 GHC : > #13005: Mac OSX uses MAP_ANON in place of MAP_ANONYMOUS > -------------------------------------+---------------------- > --------------- > Reporter: gracjan | Owner: > Type: bug | Status: merge > Priority: high | Milestone: 8.0.2 > Component: Compiler | Version: 8.0.1 > (Linking) | > Resolution: | Keywords: > Operating System: MacOS X | Architecture: > Type of failure: Building GHC | Unknown/Multiple > failed | Test Case: > Blocked By: | Blocking: > Related Tickets: | Differential Rev(s): Phab:D2881 > Wiki Page: | > -------------------------------------+---------------------- > --------------- > Changes (by bgamari): > > * status: patch => merge > * milestone: => 8.0.2 > > > Comment: > > Did you really observe this on 8.0.1, gracjan? > > -- > Ticket URL: > GHC > The Glasgow Haskell Compiler > -------------- next part -------------- An HTML attachment was scrubbed... URL: From moritz at lichtzwerge.de Sat Dec 24 08:58:27 2016 From: moritz at lichtzwerge.de (Moritz Angermann) Date: Sat, 24 Dec 2016 15:58:27 +0700 Subject: I have Aphronted Phab In-Reply-To: <1482553804-sup-9492@sabre> References: <1482553804-sup-9492@sabre> Message-ID: <6D180244-3AA4-4F55-84A2-7F8864262D52@lichtzwerge.de> While we are at it, can we please permanently redirect http to https for phabricator? Logging in via http also didn't seem to work properly the last time I tried. Pretty please? Cheers, moritz Sent from my iPhone > On 24 Dec 2016, at 11:30 AM, Edward Z. Yang wrote: > > Hi Richard, > > The last time this happened to me, it was because I was accessing > Phabricator on http:// rather than https://. Take a look. > > Edward > > Excerpts from Richard Eisenberg's message of 2016-12-23 21:46:10 -0500: >> Hi Austin, Ben, >> >> In trying to write a comment on Phab, I got this: >> >> Unhandled Exception ("AphrontCSRFException") >> You are trying to save some data to Phabricator, but the request your browser made included an incorrect token.
Reload the page and try again. You may need to clear your cookies. >> >> This was a Web request. >> This request had an invalid CSRF token. >> I followed the suggestions, and even tried a different browser (one I never use), to no avail. This still may very well be my fault, but it seems something is awry somewhere. Do you have any help you can offer? >> >> Thanks! >> Richard > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From takenobu.hs at gmail.com Sun Dec 25 08:35:09 2016 From: takenobu.hs at gmail.com (Takenobu Tani) Date: Sun, 25 Dec 2016 17:35:09 +0900 Subject: many haskell's mails are detected as spam on gmail Message-ID: Hi, I'm using gmail. Recently, many haskell's mails are detected as spam on gmail. (ghc-devs, haskell-cafe, ghc-commit, ...) Does anyone know why? Do you know the workaround? Regards, Takenobu -------------- next part -------------- An HTML attachment was scrubbed... URL: From takenobu.hs at gmail.com Sun Dec 25 10:37:44 2016 From: takenobu.hs at gmail.com (Takenobu Tani) Date: Sun, 25 Dec 2016 19:37:44 +0900 Subject: [Haskell-cafe] many haskell's mails are detected as spam on gmail In-Reply-To: References: Message-ID: Hi Arian, Thank you for the information. Since at least about 11th December, detection of spam has been increasing. I'll report them after I understand it. Regards, Takenobu 2016-12-25 18:34 GMT+09:00 Arian van Putten : > If I recall correctly it's being worked on. There is a plan to harden the > haskell.org domain during the holidays by introducing DKIM and setting up > DMARC. There is a thread in haskell-cafe titled "[Haskell-cafe] Work on > mail.haskell.org beginning, please report any problems" with more info. > > On Sun, 25 Dec 2016, 09:35 Takenobu Tani, wrote: > > Hi, > > I'm using gmail. > Recently, many haskell's mails are detected as spam on gmail. > (ghc-devs, haskell-cafe, ghc-commit, ...)
> > Does anyone know why? > Do you know the workaround? > > Regards, > Takenobu > > _______________________________________________ > Haskell-Cafe mailing list > To (un)subscribe, modify options or view archives go to: > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > Only members subscribed via the mailman list are allowed to post. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnw at newartisans.com Sun Dec 25 18:20:25 2016 From: johnw at newartisans.com (John Wiegley) Date: Sun, 25 Dec 2016 10:20:25 -0800 Subject: many haskell's mails are detected as spam on gmail In-Reply-To: (Takenobu Tani's message of "Sun, 25 Dec 2016 17:35:09 +0900") References: Message-ID: >>>>> "TT" == Takenobu Tani writes: TT> I'm using gmail. TT> Recently, many haskell's mails are detected as spam on gmail. TT> (ghc-devs, haskell-cafe, ghc-commit, ...) TT> Does anyone know why? TT> Do you know the workaround? This could be due to changes I've made recently on the mail server. Can you please send the full text of some of those mails to jwiegley at gmail.com? Thanks, -- John Wiegley GPG fingerprint = 4710 CF98 AF9B 327B B80F http://newartisans.com 60E1 46C4 BD1A 7AC1 4BA2 From mail at nh2.me Tue Dec 27 10:11:31 2016 From: mail at nh2.me (Niklas Hambüchen) Date: Tue, 27 Dec 2016 11:11:31 +0100 Subject: many haskell's mails are detected as spam on gmail In-Reply-To: References: Message-ID: <03ba2967-2a3e-caf9-7937-9ebf9bf315e2@nh2.me> Despite Google's public claims to the contrary, I have found the Gmail spam filter not to work too reliably; I've had cases where it blocked important emails like "OK, here's my invoice (PDF attached)" in the middle of long email threads whose messages were otherwise let through without problem.
As a result, I disabled the Gmail spam filter completely; these instructions worked for me: http://webapps.stackexchange.com/questions/69442/how-to-disable-gmail-anti-spam-completely You may consider this too if the various technical things people are working on in the other replies don't improve the situation for you. On 25/12/16 09:35, Takenobu Tani wrote: > Hi, > > I'm using gmail. > Recently, many haskell's mails are detected as spam on gmail. > (ghc-devs, haskell-cafe, ghc-commit, ...) > > Does anyone know why? > Do you know the workaround? > > Regards, > Takenobu > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > From allbery.b at gmail.com Tue Dec 27 13:46:44 2016 From: allbery.b at gmail.com (Brandon Allbery) Date: Tue, 27 Dec 2016 08:46:44 -0500 Subject: [Haskell-cafe] many haskell's mails are detected as spam on gmail In-Reply-To: <03ba2967-2a3e-caf9-7937-9ebf9bf315e2@nh2.me> References: <03ba2967-2a3e-caf9-7937-9ebf9bf315e2@nh2.me> Message-ID: On Tue, Dec 27, 2016 at 5:11 AM, Niklas Hambüchen wrote: > Despite Google's public claims to the contrary, I have found the Gmail > spam filter not to work too reliably > I think it depends on your use case (and it's rather indicative of the core problem of spam detection that spam is hard to distinguish from real messages about e.g. attached invoices). I've had maybe 8 Haskell list messages land in my spamtrap, gradually getting rarer as I "mark as not spam" them. -- brandon s allbery kf8nh sine nomine associates allbery.b at gmail.com ballbery at sinenomine.net unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From david.feuer at gmail.com Tue Dec 27 22:29:43 2016 From: david.feuer at gmail.com (David Feuer) Date: Tue, 27 Dec 2016 17:29:43 -0500 Subject: Can the definition of alwaysSucceeds be streamlined? Message-ID: Currently, `GHC.Conc` has
alwaysSucceeds :: STM a -> STM ()
alwaysSucceeds i = do
  ( i >> retry ) `orElse` ( return () )
  checkInv i
If I understand what's going on here (which I may not), I think this should be equivalent to
alwaysSucceeds i = (i >> retry) `orElse` checkInv i
David Feuer From juhpetersen at gmail.com Wed Dec 28 01:46:39 2016 From: juhpetersen at gmail.com (Jens Petersen) Date: Wed, 28 Dec 2016 10:46:39 +0900 Subject: [ANNOUNCE] GHC 8.0.2 release candidate 2 In-Reply-To: <87vaub7nzh.fsf@ben-laptop.smart-cactus.org> References: <87vaub7nzh.fsf@ben-laptop.smart-cactus.org> Message-ID: On 23 December 2016 at 04:38, Ben Gamari wrote: > The GHC team is happy to announce the second candidate of the > 8.0.2 release of the Glasgow Haskell Compiler. > Thanks. Fedora users can install it from my Fedora Copr Repo: https://copr.fedorainfracloud.org/coprs/petersen/ghc-8.0.2/ There will be an EPEL7 build too for the final release. Jens -------------- next part -------------- An HTML attachment was scrubbed...
URL: From daniel.bennet83 at gmail.com Wed Dec 28 17:23:00 2016 From: daniel.bennet83 at gmail.com (Daniel Bennet) Date: Wed, 28 Dec 2016 11:23:00 -0600 Subject: Lightweight Concurrency Branch Message-ID: The lightweight concurrency branch is highly interesting and relevant to my interests; however, the ghc-lwc2 branch hasn't been updated in several years even though it's listed as an active branch at https://ghc.haskell.org/trac/ghc/wiki/ActiveBranches The wiki page for the work hasn't been updated in almost two years either: https://ghc.haskell.org/trac/ghc/wiki/LightweightConcurrency Relevant papers: Composable Scheduler Activations for Haskell (2014) https://timharris.uk/papers/2014-composable-tr.pdf Composable Scheduler Activations for Haskell (2016) http://kcsrk.info/papers/schedact_jfp16.pdf What remains for integrating this branch into GHC? -------------- next part -------------- An HTML attachment was scrubbed... URL:
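For readers following the reflection thread in the messages above, here is a self-contained sketch of the trick behind the existing `reify` combinator that the thread discusses — the one whose generated Core Edward describes as well typed even though the surface code uses `unsafeCoerce`. This is an illustrative sketch, not verbatim library source: it assumes GHC represents the dictionary of a single-method class as the method itself, which is precisely what the `unsafeCoerce` exploits.

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies,
             RankNTypes, ScopedTypeVariables #-}
import Data.Proxy (Proxy (..))
import Unsafe.Coerce (unsafeCoerce)

-- A single-method class: the dictionary for 'Reifies s a' is,
-- under the assumed representation, just a function 'proxy s -> a'.
class Reifies s a | s -> a where
  reflect :: proxy s -> a

-- Wrapper for the polymorphic continuation, so we can coerce the
-- whole thing to a function that takes the dictionary explicitly.
newtype Magic a r = Magic (forall s. Reifies s a => Proxy s -> r)

-- Turn a value into a fresh 'Reifies' instance for the scope of 'k':
-- 'const a' stands in for the dictionary whose 'reflect' returns 'a'.
reify :: forall a r. a -> (forall s. Reifies s a => Proxy s -> r) -> r
reify a k = unsafeCoerce (Magic k :: Magic a r) (const a) Proxy

main :: IO ()
main = print (reify (42 :: Int) (\p -> reflect p + 1))  -- prints 43
```

The quantifier `forall s` keeps each synthesized dictionary from escaping its scope, which is why this is sound at the surface; the thread's question is whether a built-in `reify#` primitive could make the `unsafeCoerce` — and its dependence on the dictionary representation — unnecessary.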