From alan.zimm at gmail.com Thu Jan 1 11:03:40 2015
From: alan.zimm at gmail.com (Alan & Kim Zimmerman)
Date: Thu, 1 Jan 2015 13:03:40 +0200
Subject: Possible issue with isBuiltInOcc_maybe
Message-ID:

I am busy checking that I can exactprint all the RdrNames produced by the
parser, and came across this

    isBuiltInOcc_maybe occ
      = case occNameString occ of
            "[]"             -> choose_ns listTyCon nilDataCon
            ":"              -> Just consDataConName
            "[::]"           -> Just parrTyConName
            "(##)"           -> choose_ns unboxedUnitTyCon unboxedUnitDataCon
            "()"             -> choose_ns unitTyCon unitDataCon
            '(':'#':',':rest -> parse_tuple UnboxedTuple 2 rest
            '(':',':rest     -> parse_tuple BoxedTuple 2 rest
            _other           -> Nothing

The above code does not allow any spaces between '[' and ']', or '[:' and
':]' (for example).

However, the parse rules DO allow spaces

    | '[' ']'   {% ams (sLL $1 $> $ listTyCon_RDR) [mos $1,mcs $2] }
    | '[:' ':]' {% ams (sLL $1 $> $ parrTyCon_RDR) [mo $1,mc $2] }

Is this a problem?

Alan
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mail at joachim-breitner.de Thu Jan 1 12:30:38 2015
From: mail at joachim-breitner.de (Joachim Breitner)
Date: Thu, 01 Jan 2015 13:30:38 +0100
Subject: cryptarithm2 +8.16%
Message-ID: <1420115438.17917.2.camel@joachim-breitner.de>

Hi,

due to #9938 ghcspeed did not measure every commit, but somewhere in
these commits cryptarithm2 regressed by 8%:

$ git log 625dd..8d62f9 --oneline
8d62f92 Update nofib submodule, unbreak cryptarithm2
9521a58 Refine test case for #9938
65e3e0b Test case for #9938
4e1e776 Skip T2276_ghci on Darwin, since stdcall is not supported.
b32c227 Fix system linker on Mac OS X
40561cd Fix `heapSizeSuggesionAuto` typo (#9934)
58ac9c8 LlvmCodeGen cross-compiling fixes (#9895)
0cc0cc8 Support pattern synonyms in GHCi (fixes #9900)
6c86635 Update validate-settings.mk
1fefb59 Update parallel submodule to 3.2.0.6 release
d6e7f5d Add export lists to some modules.
6b9e958 Update hoopl and hpc submodules
c55fefc Avoid redundant-import warning (w/o CPP)
bd01af9 Update hsc2hs submodule for de-tabbing
0899caa Use directory-style database for bootstrapping database
c0ab767 We do emit a warning for stdcall now.
1dcef98 Run T9762 only if dynamic libraries are available
9ae78b0 Copy GHC's config.guess/sub over libffi's versions
add6a30 2nd attempt to fix T9032 test-case
3e3aa92 Fix linker interaction between Template Haskell and HPC (#9762)
cc510b4 Make ghc -e fail on invalid declarations
878910e Make ghc -e not exit on valid import commands (#9905)
7a2c9dd Fixup edd233acc19d269385 (T9032 test)
edd233a Test earlier for self-import (Trac #9032)
c3394e0 Attempt to improve cleaning
679a661 A bit of refactoring to TcErrors
c407b5a Comments only
3e96d89 Add a couple of missing cases to isTcReflCo and isTcReflCo_maybe
a6f0f5a Eliminate so-called "silent superclass parameters"

My guess: Related to this:

 * It had unexpected performance costs, shown up by Trac #3064 and its
   test case. In monad-transformer code, when constructing a Monad
   dictionary you had to pass an Applicative dictionary; and to construct
   that you needed a Functor dictionary. Yet these extra dictionaries were
   often never used. (All this got much worse when we added Applicative
   as a superclass of Monad.) Test T3064 compiled *far* faster after
   silent superclasses were eliminated.

Happy new year!
Joachim

--
Joachim "nomeata" Breitner
  mail at joachim-breitner.de • http://www.joachim-breitner.de/
  Jabber: nomeata at joachim-breitner.de •
GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From george.colpitts at gmail.com Thu Jan 1 13:58:40 2015 From: george.colpitts at gmail.com (George Colpitts) Date: Thu, 1 Jan 2015 09:58:40 -0400 Subject: ANNOUNCE: GHC 7.10.1 Release Candidate 1 - feedback on Mac OS Message-ID: I built from source on Mac OS and found the following issues: - llvm , compiling with llvm (3.4.2) gives the following warnings: - $ ghc -fllvm cubeFast.hs [1 of 1] Compiling Main ( cubeFast.hs, cubeFast.o ) clang: warning: argument unused during compilation: '-fno-stack-protector' clang: warning: argument unused during compilation: '-D TABLES_NEXT_TO_CODE' clang: warning: argument unused during compilation: '-I .' clang: warning: argument unused during compilation: '-fno-common' clang: warning: argument unused during compilation: '-U __PIC__' clang: warning: argument unused during compilation: '-D __PIC__' Linking cubeFast ... - running the resulting executable crashes (compiling without -fllvm gives no warnings and executable works properly) - cat bigCube.txt | ./cubeFast > /dev/null Segmentation fault: 11 - Exception Type: EXC_BAD_ACCESS (SIGSEGV) Exception Codes: KERN_INVALID_ADDRESS at 0xfffffffd5bfd8460 - ?cabal install vector fails: - [ 5 of 19] Compiling Data.Vector.Fusion.Stream.Monadic ( Data/Vector/Fusion/Stream/Monadic.hs, dist/build/Data/Vector/Fusion/Stream/Monadic.o ) : can't load .so/.DLL for: /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.10.sdk/usr/lib/libiconv.dylib (dlopen(/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.10.sdk/usr/lib/libiconv.dylib, 5): no suitable image found. Did find: /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.10.sdk/usr/lib/libiconv.dylib: mach-o, but wrong filetype) - ?cabal install cpphs fails:? - cabal install cpphs Resolving dependencies... Configuring cpphs-1.13... Building cpphs-1.13... Failed to install cpphs-1.13 Build log ( /Users/gcolpitts/.cabal/logs/cpphs-1.13.log ): Warning: cpphs.cabal: Unknown fields: build-depends (line 5) Fields allowed in this section: name, version, cabal-version, build-type, license, license-file, license-files, copyright, maintainer, stability, homepage, package-url, bug-reports, synopsis, description, category, author, tested-with, data-files, data-dir, extra-source-files, extra-tmp-files, extra-doc-files Configuring cpphs-1.13... Building cpphs-1.13... Preprocessing library cpphs-1.13... - Language/Preprocessor/Cpphs.hs:1:1: Could not find module ?Prelude? It is a member of the hidden package ?base-4.8.0.0?. Perhaps you need to add ?base? to the build-depends in your .cabal file. Use -v to see a list of the files searched for. Language/Preprocessor/Cpphs/CppIfdef.hs:32:8: Could not find module ?Numeric? It is a member of the hidden package ?base-4.8.0.0?. Perhaps you need to add ?base? to the build-depends in your .cabal file. Use -v to see a list of the files searched for. Language/Preprocessor/Cpphs/CppIfdef.hs:33:8: Could not find module ?System.IO.Unsafe? It is a member of the hidden package ?base-4.8.0.0?. Perhaps you need to add ?base? to the build-depends in your .cabal file. Use -v to see a list of the files searched for. 
Language/Preprocessor/Cpphs/CppIfdef.hs:34:8: Could not find module ?System.IO? It is a member of the hidden package ?base-4.8.0.0?. Perhaps you need to add ?base? to the build-depends in your .cabal file. Use -v to see a list of the files searched for. Language/Preprocessor/Cpphs/MacroPass.hs:29:8: Could not find module ?Control.Monad? It is a member of the hidden package ?base-4.8.0.0?. Perhaps you need to add ?base? to the build-depends in your .cabal file. Use -v to see a list of the files searched for. Language/Preprocessor/Cpphs/MacroPass.hs:30:8: Could not find module ?System.Time? Perhaps you meant System.CPUTime (needs flag -package-key base-4.8.0.0) System.Cmd (needs flag -package-key process-1.2.1.0 at proce_ADbmNMhxdsoDn9NrOWjezu) System.Mem (needs flag -package-key base-4.8.0.0) Use -v to see a list of the files searched for. Language/Preprocessor/Cpphs/MacroPass.hs:31:8: Could not find module ?System.Locale? Use -v to see a list of the files searched for. Language/Preprocessor/Cpphs/Options.hs:22:8: Could not find module ?Data.Maybe? It is a member of the hidden package ?base-4.8.0.0?. Perhaps you need to add ?base? to the build-depends in your .cabal file. Use -v to see a list of the files searched for. Language/Preprocessor/Cpphs/ReadFirst.hs:19:8: Could not find module ?System.Directory? It is a member of the hidden package ?directory-1.2.1.1 at direc_3m6Ew9I164U5MIkATLCdb8?. Perhaps you need to add ?directory? to the build-depends in your .cabal file. Use -v to see a list of the files searched for. Language/Preprocessor/Unlit.hs:5:8: Could not find module ?Data.Char? It is a member of the hidden package ?base-4.8.0.0?. Perhaps you need to add ?base? to the build-depends in your .cabal file. Use -v to see a list of the files searched for. Language/Preprocessor/Unlit.hs:6:8: Could not find module ?Data.List? It is a member of the hidden package ?base-4.8.0.0?. Perhaps you need to add ?base? to the build-depends in your .cabal file. Use -v to see a list of the files searched for. cabal: Error: some packages failed to install: cpphs-1.13 failed during the building phase. The exception was: ExitFailure 1 ?Configuration details: - Mac OS 10.10.1 (Yosemite) - uname -a Darwin iMac27-5.local 14.0.0 Darwin Kernel Version 14.0.0: Fri Sep 19 00:26:44 PDT 2014; root:xnu-2782.1.97~2/RELEASE_X86_64 x86_64 - llvm info: - opt --version LLVM (http://llvm.org/): LLVM version 3.4.2 Optimized build with assertions. Built Oct 31 2014 (23:14:30). Default target: x86_64-apple-darwin14.0.0 Host CPU: corei7 - gcc --version gcc (Homebrew gcc 4.9.1) 4.9.1 Copyright (C) 2014 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. - ? 
/usr/bin/ghc --info [("Project name","The Glorious Glasgow Haskell Compilation System") ,("GCC extra via C opts"," -fwrapv") ,("C compiler command","/usr/bin/gcc") ,("C compiler flags"," -m64 -fno-stack-protector") ,("C compiler link flags"," -m64") ,("Haskell CPP command","/usr/bin/gcc") ,("Haskell CPP flags","-E -undef -traditional -Wno-invalid-pp-token -Wno-unicode -Wno-trigraphs") ,("ld command","/usr/bin/ld") ,("ld flags"," -arch x86_64") ,("ld supports compact unwind","YES") ,("ld supports build-id","NO") ,("ld supports filelist","YES") ,("ld is GNU ld","NO") ,("ar command","/usr/bin/ar") ,("ar flags","clqs") ,("ar supports at file","NO") ,("touch command","touch") ,("dllwrap command","/bin/false") ,("windres command","/bin/false") ,("libtool command","libtool") ,("perl command","/usr/bin/perl") ,("target os","OSDarwin") ,("target arch","ArchX86_64") ,("target word size","8") ,("target has GNU nonexec stack","False") ,("target has .ident directive","True") ,("target has subsections via symbols","True") ,("Unregisterised","NO") ,("LLVM llc command","llc") ,("LLVM opt command","opt") ,("Project version","7.8.3") ,("Booter version","7.6.3") ,("Stage","2") ,("Build platform","x86_64-apple-darwin") ,("Host platform","x86_64-apple-darwin") ,("Target platform","x86_64-apple-darwin") ,("Have interpreter","YES") ,("Object splitting supported","YES") ,("Have native code generator","YES") ,("Support SMP","YES") ,("Tables next to code","YES") ,("RTS ways","l debug thr thr_debug thr_l thr_p dyn debug_dyn thr_dyn thr_debug_dyn l_dyn thr_l_dyn") ,("Support dynamic-too","YES") ,("Support parallel --make","YES") ,("Dynamic by default","NO") ,("GHC Dynamic","YES") ,("Leading underscore","YES") ,("Debug on","False") ,("LibDir","/Library/Frameworks/GHC.framework/Versions/7.8.3-x86_64/usr/lib/ghc-7.8.3") ,("Global Package DB","/Library/Frameworks/GHC.framework/Versions/7.8.3-x86_64/usr/lib/ghc-7.8.3/package.conf.d") ] - Not sure I found the correct instructions for building from source, I used the following: - $ autoreconf $ ./configure $ make $ make install On Tue, Dec 23, 2014 at 10:36 AM, Austin Seipp wrote: > We are pleased to announce the first release candidate for GHC 7.10.1: > > https://downloads.haskell.org/~ghc/7.10.1-rc1/ > > This includes the source tarball and bindists for 64bit/32bit Linux > and Windows. Binary builds for other platforms will be available > shortly. (CentOS 6.5 binaries are not available at this time like they > were for 7.8.x). These binaries and tarballs have an accompanying > SHA256SUMS file signed by my GPG key id (0x3B58D86F). > > We plan to make the 7.10.1 release sometime in February of 2015. We > expect another RC to occur during January of 2015. > > Please test as much as possible; bugs are much cheaper if we find them > before the release! > > -- > Regards, > > Austin Seipp, Haskell Consultant > Well-Typed LLP, http://www.well-typed.com/ > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From hvriedel at gmail.com Thu Jan 1 14:23:18 2015 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Thu, 01 Jan 2015 15:23:18 +0100 Subject: ANNOUNCE: GHC 7.10.1 Release Candidate 1 - feedback on Mac OS In-Reply-To: (George Colpitts's message of "Thu, 1 Jan 2015 09:58:40 -0400") References: Message-ID: <87vbkqfull.fsf@gmail.com> On 2015-01-01 at 14:58:40 +0100, George Colpitts wrote: > I built from source on Mac OS and found the following issues: [...] > - ?cabal install cpphs fails:? > - cabal install cpphs > Resolving dependencies... > Configuring cpphs-1.13... > Building cpphs-1.13... > Failed to install cpphs-1.13 > Build log ( /Users/gcolpitts/.cabal/logs/cpphs-1.13.log ): > Warning: cpphs.cabal: Unknown fields: build-depends (line 5) > Fields allowed in this section: > name, version, cabal-version, build-type, license, license-file, > license-files, copyright, maintainer, stability, homepage, > package-url, bug-reports, synopsis, description, category, author, > tested-with, data-files, data-dir, extra-source-files, > extra-tmp-files, extra-doc-files > Configuring cpphs-1.13... > Building cpphs-1.13... > Preprocessing library cpphs-1.13... > - Language/Preprocessor/Cpphs.hs:1:1: > Could not find module ?Prelude? > It is a member of the hidden package ?base-4.8.0.0?. > Perhaps you need to add ?base? to the build-depends in your > .cabal file. > Use -v to see a list of the files searched for. [...] This is a known issue; cpphs-1.18.6 would actually work with GHC 7.10/base-4.8, but it depends on polyparse, but there isn't yet a polyparse version compatible w/ base-4.8 (due to AMP) on Hackage[1] Otoh, cpphs-1.13 is selected even though Hackage shows that it has a constraint `base <4.8`. However, that's rather a bug in Hackage, as the `.cabal` file is actually invalid, as it has the `build-depends` at the wrong level. So effectively it has no build-depends line at all, so cabal-install is led to believe that it works w/o any build-deps at all.. I did file an issue about that[3] [1]: Coincidentally, I sent Malcolm a AMP-compatibility patch for polyparse just earlier today... [2]: http://hackage.haskell.org/package/cpphs-1.13 [3]: https://github.com/haskell/hackage-server/issues/303 From malcolm.wallace at me.com Thu Jan 1 14:43:02 2015 From: malcolm.wallace at me.com (Malcolm Wallace) Date: Thu, 01 Jan 2015 14:43:02 +0000 Subject: ANNOUNCE: GHC 7.10.1 Release Candidate 1 - feedback on Mac OS In-Reply-To: References: Message-ID: <817C2B57-5168-4903-8EF7-BFC31062FE96@me.com> On 1 Jan 2015, at 13:58, George Colpitts wrote: > Configuring cpphs-1.13... > Building cpphs-1.13... > Warning: cpphs.cabal: Unknown fields: build-depends (line 5) > Could not find module ?Prelude? > It is a member of the hidden package ?base-4.8.0.0?. > Perhaps you need to add ?base? to the build-depends in your .cabal file. The two statements "unknown field build-depends" and "add package to build-depends" seem rather contradictory. How can this be fixed? Regards, Malcolm From george.colpitts at gmail.com Thu Jan 1 17:05:06 2015 From: george.colpitts at gmail.com (George Colpitts) Date: Thu, 1 Jan 2015 13:05:06 -0400 Subject: ANNOUNCE: GHC 7.10.1 Release Candidate 1 - cpphs issue Message-ID: Herbert Thanks for all the helpful, quick responses, yes this now works! 
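For anyone following this thread later: the underlying breakage is that
base-4.8.0.0 (shipped with GHC 7.10) makes Applicative a superclass of
Monad, which is why polyparse needed an AMP-compatibility patch before
cpphs could build again. Below is a minimal, self-contained sketch of that
kind of fix; the type P is made up for illustration and is not polyparse's
actual code.

    import Control.Monad (ap, liftM)

    -- Hypothetical state-threading type standing in for a real parser monad.
    newtype P s a = P { runP :: s -> (a, s) }

    -- Pre-AMP code often defined only the Monad instance; with base-4.8
    -- these two instances are required as well, and can be written
    -- mechanically in terms of the Monad instance.
    instance Functor (P s) where
      fmap = liftM

    instance Applicative (P s) where
      pure x = P (\s -> (x, s))
      (<*>)  = ap

    instance Monad (P s) where
      return = pure
      P m >>= k = P (\s -> case m s of (a, s') -> runP (k a) s')

    main :: IO ()
    main = print (fst (runP (fmap (+1) (pure (41 :: Int))) "ignored"))

Instances written this way keep the package compiling with both GHC 7.8
and GHC 7.10.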
Regards
George

On Thu, Jan 1, 2015 at 11:07 AM, Herbert Valerio Riedel wrote:
>
> FYI, Malcolm uploaded polyparse-1.11 about half an hour ago; so if you
> retry 'cabal install cpphs' it should just work w/ GHC 7.10.1RC
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From george.colpitts at gmail.com Thu Jan 1 17:08:44 2015
From: george.colpitts at gmail.com (George Colpitts)
Date: Thu, 1 Jan 2015 13:08:44 -0400
Subject: ANNOUNCE: GHC 7.10.1 Release Candidate 1 - problem with latest cabal-install
Message-ID:

$ cabal update
Downloading the latest package list from hackage.haskell.org
Note: there is a new version of cabal-install available.
To upgrade, run: cabal install cabal-install
bash-3.2$ cabal install -j3 cabal-install
...
Resolving dependencies...
cabal: Could not resolve dependencies:
trying: cabal-install-1.20.0.6 (user goal)
trying: base-4.8.0.0/installed-779... (dependency of cabal-install-1.20.0.6)
next goal: process (dependency of cabal-install-1.20.0.6)
rejecting: process-1.2.1.0/installed-2db... (conflict: unix==2.7.1.0, process
=> unix==2.7.1.0/installed-4ae...)
trying: process-1.2.1.0
next goal: directory (dependency of cabal-install-1.20.0.6)
rejecting: directory-1.2.1.1/installed-b08...
(conflict: directory => > time==1.5.0.1/installed-c23..., cabal-install => time>=1.1 && <1.5) > rejecting: directory-1.2.1.0 (conflict: base==4.8.0.0/installed-779..., > directory => base>=4.5 && <4.8) > rejecting: directory-1.2.0.1, 1.2.0.0 (conflict: > base==4.8.0.0/installed-779..., directory => base>=4.2 && <4.7) > rejecting: directory-1.1.0.2 (conflict: base==4.8.0.0/installed-779..., > directory => base>=4.2 && <4.6) > rejecting: directory-1.1.0.1 (conflict: base==4.8.0.0/installed-779..., > directory => base>=4.2 && <4.5) > rejecting: directory-1.1.0.0 (conflict: base==4.8.0.0/installed-779..., > directory => base>=4.2 && <4.4) > rejecting: directory-1.0.1.2, 1.0.1.1, 1.0.1.0, 1.0.0.3, 1.0.0.0 (conflict: > process => directory>=1.1 && <1.3) > Dependency tree exhaustively searched. > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From george.colpitts at gmail.com Thu Jan 1 18:00:20 2015 From: george.colpitts at gmail.com (George Colpitts) Date: Thu, 1 Jan 2015 14:00:20 -0400 Subject: ANNOUNCE: GHC 7.10.1 Release Candidate 1 - problem with latest cabal-install In-Reply-To: References: Message-ID: Thanks, there seems to be dependency issues: cabal install --allow-newer=base -j3 cabal-install Resolving dependencies... In order, the following would be installed: deepseq-1.3.0.2 (latest: 1.4.0.0) (new version) bytestring-0.10.4.1 (new version) containers-0.5.6.2 (reinstall) changes: deepseq-1.4.0.0 -> 1.3.0.2 pretty-1.1.2.0 (new version) text-1.2.0.3 (reinstall) changes: bytestring-0.10.6.0 -> 0.10.4.1, deepseq-1.4.0.0 -> 1.3.0.2 parsec-3.1.7 (reinstall) changes: bytestring-0.10.6.0 -> 0.10.4.1 network-uri-2.6.0.1 (new package) time-1.4.2 (latest: 1.5.0.1) (new version) random-1.1 (reinstall) changes: time-1.5.0.1 -> 1.4.2 unix-2.7.1.0 (reinstall) changes: bytestring-0.10.6.0 -> 0.10.4.1, time-1.5.0.1 -> 1.4.2 directory-1.2.1.0 (new version) network-2.6.0.2 (new package) HTTP-4000.2.19 (new package) process-1.2.1.0 (reinstall) changes: deepseq-1.4.0.0 -> 1.3.0.2, directory-1.2.1.1 -> 1.2.1.0 Cabal-1.20.0.3 (new version) zlib-0.5.4.2 (new package) cabal-install-1.20.0.6 (new package) cabal: The following packages are likely to be broken by the reinstalls: semigroups-0.16.0.1 void-0.7 contravariant-1.2.0.1 semigroupoids-4.2 bifunctors-4.2 comonad-4.2.2 parallel-3.2.0.6 hscolour-1.20.3 hpc-0.6.0.2 ghc-7.10.0.20141222 hoopl-3.10.0.2 hastache-0.6.1 haskeline-0.7.2.0 cereal-0.4.1.0 monad-par-extras-0.3.3 binary-0.7.2.3 bin-package-db-0.0.0.0 Cabal-1.22.0.0 attoparsec-0.12.1.2 abstract-deque-0.3 Glob-0.7.5 scientific-0.3.3.3 polyparse-1.11 cpphs-1.18.6 haskell-src-exts-1.16.0.1 hashable-1.2.3.1 unordered-containers-0.2.5.1 blaze-builder-0.3.3.4 MonadRandom-0.3.0.1 extra-1.0 cmdargs-0.10.12 directory-1.2.1.1 ansi-terminal-0.6.2.1 ansi-wl-pprint-0.6.7.1 Use --force-reinstalls if you want to install anyway. On Thu, Jan 1, 2015 at 1:34 PM, Johan Tibell wrote: > Try > > cabal install --allow-newer=base -j3 cabal-install > > Once GHC 7.10 is out we might make another Cabal 1.20 release to bump the > upper bound on the base dependency if 1.20 is indeed compatible with the > latest base. > > On Thu, Jan 1, 2015 at 12:08 PM, George Colpitts < > george.colpitts at gmail.com> wrote: > >> >> >> ?$ ? 
>> cabal update >> Downloading the latest package list from hackage.haskell.org >> Note: *there is a new version of cabal-install available.* >> To upgrade, run: cabal install cabal-install >> bash-3.2$ *cabal install -j3 cabal-install * >> *?...?* >> >> >> *Resolving dependencies...cabal: Could not resolve dependencies:* >> trying: cabal-install-1.20.0.6 (user goal) >> trying: base-4.8.0.0/installed-779... (dependency of >> cabal-install-1.20.0.6) >> next goal: process (dependency of cabal-install-1.20.0.6) >> rejecting: process-1.2.1.0/installed-2db... (conflict: unix==2.7.1.0, >> process >> => unix==2.7.1.0/installed-4ae...) >> trying: process-1.2.1.0 >> next goal: directory (dependency of cabal-install-1.20.0.6) >> rejecting: directory-1.2.1.1/installed-b08... (conflict: directory => >> time==1.5.0.1/installed-c23..., cabal-install => time>=1.1 && <1.5) >> rejecting: directory-1.2.1.0 (conflict: base==4.8.0.0/installed-779..., >> directory => base>=4.5 && <4.8) >> rejecting: directory-1.2.0.1, 1.2.0.0 (conflict: >> base==4.8.0.0/installed-779..., directory => base>=4.2 && <4.7) >> rejecting: directory-1.1.0.2 (conflict: base==4.8.0.0/installed-779..., >> directory => base>=4.2 && <4.6) >> rejecting: directory-1.1.0.1 (conflict: base==4.8.0.0/installed-779..., >> directory => base>=4.2 && <4.5) >> rejecting: directory-1.1.0.0 (conflict: base==4.8.0.0/installed-779..., >> directory => base>=4.2 && <4.4) >> rejecting: directory-1.0.1.2, 1.0.1.1, 1.0.1.0, 1.0.0.3, 1.0.0.0 >> (conflict: >> process => directory>=1.1 && <1.3) >> Dependency tree exhaustively searched. >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From martin.e.foster at gmail.com Thu Jan 1 18:01:53 2015 From: martin.e.foster at gmail.com (Martin Foster) Date: Thu, 1 Jan 2015 18:01:53 +0000 Subject: Windows build gotchas Message-ID: Hello all, I've been spending some of my winter break trying my hand at compiling GHC, with a mind to hopefully contributing down the line. I've got it working, but I ran into a few things along the way that I figure might be worth fixing and/or documenting. In the approximate order I encountered them: - The first pacman mirror on the list bundled with MSYS2 is down, with the result that every download pacman makes takes ~10sec longer than it should. It downloads a lot, so that really adds up - but it's easy to fix, just "pacman -Sy pacman-mirrors" before doing anything else with it. Is that worth mentioning on the wiki? I was thinking a line on https://ghc.haskell.org/trac/ghc/wiki/Building/Preparation/Windows could be helpful. - That page mentions "If you see errors related to fork(), try closing and reopening the shell" - I've determined that you can reliably avoid that problem by following the instructions at http://sourceforge.net/p/msys2/wiki/MSYS2%20installation/#iii-updating-packages, ie by running "pacman --needed -S bash pacman msys2-runtime", then closing & re-opening the MSYS shell, before you tell pacman to install the GHC prerequisite packages. - A minor point: I found it helpful to include "man-db" in the list of packages to install - without it, "git help" breaks down with " failed to exec 'man'". 
- I note "./sync-all --help" says, under "Flags", that "--windows also clones the ghc-tarballs repository (enabled by default on Windows)", and I've confirmed that default behaviour experimentally - but https://ghc.haskell.org/trac/ghc/wiki/Building/GettingTheSources tells you to manually clone ghc-tarballs when on Windows. Is that line on the wiki obsolete, or am I overlooking something? - And finally, the big one: cabal and/or ghc-pkg put some files outside the MSYS root directory, and caused me no end of trouble in doing so... I made a bit of a mess at one point, and tried to fix it by starting over completely from scratch. I expected uninstalling & reinstalling MSYS to achieve this (it deletes its root directory when you uninstall it), but that left me with a huge pile of errors when I tried to run "cabal install -j --prefix=/usr/local alex happy", of the form "Could not find module `...': There are files missing in the `...' package". I noticed that the cabal output made reference to "C:\Users\Martin\AppData\Roaming\cabal\", so tried moving that out of the way, but it only made the problem worse. I did figure it out eventually: in addition to that directory, "%APPDATA%\cabal", there were also files left over in "%APPDATA%\ghc". Once I removed that directory as well, things started working again - but it took me a lot of time & frustration to get there. I'm not entirely sure, but I think the copy of Cabal I already had from installing the Platform may also have been storing files in those directories, in which case this process completely mangled them - which isn't great. It seems to me that, ideally, the "build GHC inside MSYS" procedure would keep itself entirely inside the MSYS directory structure: if it were wholly self-contained, you'd know where everything is, and it couldn't break anything outside. As far as I can tell, the only breach is those two directories courtesy of Cabal, so I didn't think it would be too difficult - but none of the things I've tried (the --package-db cabal flag, a custom cabal --config-file, setting the GHC_PACKAGE_PATH environment variable, maybe some others I've forgotten) had the desired effect. Is it possible? Is it even a good idea? If that's just how it has to be, I feel like there should be an obvious note somewhere for the sake of the next person to trip over it. I'd be happy to amend the wiki for any/all of the first four points, if people think it'd be appropriate; I'm not sure at all what to do about the last one. Any thoughts? - Martin -------------- next part -------------- An HTML attachment was scrubbed... URL: From the.dead.shall.rise at gmail.com Thu Jan 1 18:15:13 2015 From: the.dead.shall.rise at gmail.com (Mikhail Glushenkov) Date: Thu, 1 Jan 2015 19:15:13 +0100 Subject: ANNOUNCE: GHC 7.10.1 Release Candidate 1 - problem with latest cabal-install In-Reply-To: References: Message-ID: Hi, On 1 January 2015 at 19:00, George Colpitts wrote: > Thanks, there seems to be dependency issues: Try also adding '--allow-newer=bytestring,deepseq'. From george.colpitts at gmail.com Thu Jan 1 18:27:43 2015 From: george.colpitts at gmail.com (George Colpitts) Date: Thu, 1 Jan 2015 14:27:43 -0400 Subject: ANNOUNCE: GHC 7.10.1 Release Candidate 1 - problem with latest cabal-install In-Reply-To: References: Message-ID: Thanks but that doesn't seem to work either: cabal install --allow-newer=base --allow-newer=bytestring,deepseq -j3 cabal-install Resolving dependencies... 
cabal: Could not resolve dependencies: trying: cabal-install-1.20.0.6 (user goal) trying: base-4.8.0.0/installed-779... (dependency of cabal-install-1.20.0.6) next goal: process (dependency of cabal-install-1.20.0.6) rejecting: process-1.2.1.0/installed-2db... (conflict: unix==2.7.1.0, process => unix==2.7.1.0/installed-4ae...) trying: process-1.2.1.0 next goal: directory (dependency of cabal-install-1.20.0.6) rejecting: directory-1.2.1.1/installed-b08... (conflict: directory => time==1.5.0.1/installed-c23..., cabal-install => time>=1.1 && <1.5) rejecting: directory-1.2.1.0 (conflict: base==4.8.0.0/installed-779..., directory => base>=4.5 && <4.8) rejecting: directory-1.2.0.1, 1.2.0.0 (conflict: base==4.8.0.0/installed-779..., directory => base>=4.2 && <4.7) rejecting: directory-1.1.0.2 (conflict: base==4.8.0.0/installed-779..., directory => base>=4.2 && <4.6) rejecting: directory-1.1.0.1 (conflict: base==4.8.0.0/installed-779..., directory => base>=4.2 && <4.5) rejecting: directory-1.1.0.0 (conflict: base==4.8.0.0/installed-779..., directory => base>=4.2 && <4.4) rejecting: directory-1.0.1.2, 1.0.1.1, 1.0.1.0, 1.0.0.3, 1.0.0.0 (conflict: process => directory>=1.1 && <1.3) Dependency tree exhaustively searched. On Thu, Jan 1, 2015 at 2:15 PM, Mikhail Glushenkov < the.dead.shall.rise at gmail.com> wrote: > Hi, > > On 1 January 2015 at 19:00, George Colpitts > wrote: > > Thanks, there seems to be dependency issues: > > Try also adding '--allow-newer=bytestring,deepseq'. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From george.colpitts at gmail.com Thu Jan 1 18:34:49 2015 From: george.colpitts at gmail.com (George Colpitts) Date: Thu, 1 Jan 2015 14:34:49 -0400 Subject: ANNOUNCE: GHC 7.10.1 Release Candidate 1 - problem with latest cabal-install In-Reply-To: References: Message-ID: following solves dependency problems, added a few more packages, thanks! cabal install --allow-newer=base,bytestring,deepseq,unix,process,time,random -j3 cabal-install On Thu, Jan 1, 2015 at 2:27 PM, George Colpitts wrote: > Thanks but that doesn't seem to work either: > > cabal install --allow-newer=base --allow-newer=bytestring,deepseq -j3 > cabal-install > Resolving dependencies... > cabal: Could not resolve dependencies: > trying: cabal-install-1.20.0.6 (user goal) > trying: base-4.8.0.0/installed-779... (dependency of > cabal-install-1.20.0.6) > next goal: process (dependency of cabal-install-1.20.0.6) > rejecting: process-1.2.1.0/installed-2db... (conflict: unix==2.7.1.0, > process > => unix==2.7.1.0/installed-4ae...) > trying: process-1.2.1.0 > next goal: directory (dependency of cabal-install-1.20.0.6) > rejecting: directory-1.2.1.1/installed-b08... (conflict: directory => > time==1.5.0.1/installed-c23..., cabal-install => time>=1.1 && <1.5) > rejecting: directory-1.2.1.0 (conflict: base==4.8.0.0/installed-779..., > directory => base>=4.5 && <4.8) > rejecting: directory-1.2.0.1, 1.2.0.0 (conflict: > base==4.8.0.0/installed-779..., directory => base>=4.2 && <4.7) > rejecting: directory-1.1.0.2 (conflict: base==4.8.0.0/installed-779..., > directory => base>=4.2 && <4.6) > rejecting: directory-1.1.0.1 (conflict: base==4.8.0.0/installed-779..., > directory => base>=4.2 && <4.5) > rejecting: directory-1.1.0.0 (conflict: base==4.8.0.0/installed-779..., > directory => base>=4.2 && <4.4) > rejecting: directory-1.0.1.2, 1.0.1.1, 1.0.1.0, 1.0.0.3, 1.0.0.0 (conflict: > process => directory>=1.1 && <1.3) > Dependency tree exhaustively searched. 
> > On Thu, Jan 1, 2015 at 2:15 PM, Mikhail Glushenkov < > the.dead.shall.rise at gmail.com> wrote: > >> Hi, >> >> On 1 January 2015 at 19:00, George Colpitts >> wrote: >> > Thanks, there seems to be dependency issues: >> >> Try also adding '--allow-newer=bytestring,deepseq'. >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hvriedel at gmail.com Thu Jan 1 18:42:33 2015 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Thu, 01 Jan 2015 19:42:33 +0100 Subject: Windows build gotchas In-Reply-To: (Martin Foster's message of "Thu, 1 Jan 2015 18:01:53 +0000") References: Message-ID: <87y4pme412.fsf@gmail.com> Hello Martin, Here's just some minor additional context information... On 2015-01-01 at 19:01:53 +0100, Martin Foster wrote: [...] > - I note "./sync-all --help" says, under "Flags", that "--windows also > clones the ghc-tarballs repository (enabled by default on Windows)", and > I've confirmed that default behaviour experimentally - but > https://ghc.haskell.org/trac/ghc/wiki/Building/GettingTheSources tells > you to manually clone ghc-tarballs when on Windows. Is that line on the > wiki obsolete, or am I overlooking something? Somewhat related: when https://phabricator.haskell.org/D339 is landed, we can finally forget about having to clone that objectionable ghc-tarballs repo... [...] > I noticed that the cabal output made reference to > "C:\Users\Martin\AppData\Roaming\cabal\", so tried moving that out of the > way, but it only made the problem worse. I did figure it out eventually: in > addition to that directory, "%APPDATA%\cabal", there were also files left > over in "%APPDATA%\ghc". Once I removed that directory as well, things > started working again - but it took me a lot of time & frustration to get > there. That's btw because Cabal/GHC uses `getAppUserDataDirectory "cabal"` and `getAppUserDataDirectory "ghc"` respectively... Cheers, hvr From george.colpitts at gmail.com Thu Jan 1 18:42:10 2015 From: george.colpitts at gmail.com (George Colpitts) Date: Thu, 1 Jan 2015 14:42:10 -0400 Subject: ANNOUNCE: GHC 7.10.1 Release Candidate 1 - problem with latest cabal-install In-Reply-To: References: Message-ID: however still fails to install but now due to problems with cabal itself [76 of 76] Compiling Main ( /var/folders/9b/rh4y2gy92hgdb6ktv4df1jv00000gn/T/Cabal-1.20.0.3-62215/Cabal-1.20.0.3/dist/setup/setup.hs, /var/folders/9b/rh4y2gy92hgdb6ktv4df1jv00000gn/T/Cabal-1.20.0.3-62215/Cabal-1.20.0.3/dist/setup/Main.o ) Linking /var/folders/9b/rh4y2gy92hgdb6ktv4df1jv00000gn/T/Cabal-1.20.0.3-62215/Cabal-1.20.0.3/dist/setup/setup ... Configuring Cabal-1.20.0.3... Building Cabal-1.20.0.3... Preprocessing library Cabal-1.20.0.3... on the commandline: Warning: -package-name is deprecated: Use -this-package-key instead ghc: ghc no longer supports single-file style package databases (dist/package.conf.inplace) use 'ghc-pkg init' to create the database with the correct format. Updating documentation index /Users/gcolpitts/Library/Haskell/share/doc/index.html cabal: Error: some packages failed to install: Cabal-1.20.0.3 failed during the building phase. The exception was: ExitFailure 1 cabal-install-1.20.0.6 depends on Cabal-1.20.0.3 which failed to install. On Thu, Jan 1, 2015 at 2:34 PM, George Colpitts wrote: > following solves dependency problems, added a few more packages, thanks! 
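As an aside on the %APPDATA%\cabal and %APPDATA%\ghc directories discussed
in the Windows-build messages above: the getAppUserDataDirectory calls
Herbert mentions come from System.Directory, and a small stand-alone
sketch (not taken from any of these emails) shows where the per-user Cabal
and GHC state lives on a given machine, under %APPDATA% on Windows (e.g.
C:\Users\<user>\AppData\Roaming\cabal) and as dot-directories under $HOME
elsewhere.

    import System.Directory (getAppUserDataDirectory)

    -- Print the per-user directories that Cabal and GHC use for their
    -- package databases and configuration.
    main :: IO ()
    main = do
      cabalDir <- getAppUserDataDirectory "cabal"
      ghcDir   <- getAppUserDataDirectory "ghc"
      putStrLn ("cabal per-user state: " ++ cabalDir)
      putStrLn ("ghc per-user state:   " ++ ghcDir)

Wiping a broken setup means clearing both of these directories together,
which matches what Martin ran into.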
> > cabal install > --allow-newer=base,bytestring,deepseq,unix,process,time,random -j3 > cabal-install > > > On Thu, Jan 1, 2015 at 2:27 PM, George Colpitts > wrote: > >> Thanks but that doesn't seem to work either: >> >> cabal install --allow-newer=base --allow-newer=bytestring,deepseq -j3 >> cabal-install >> Resolving dependencies... >> cabal: Could not resolve dependencies: >> trying: cabal-install-1.20.0.6 (user goal) >> trying: base-4.8.0.0/installed-779... (dependency of >> cabal-install-1.20.0.6) >> next goal: process (dependency of cabal-install-1.20.0.6) >> rejecting: process-1.2.1.0/installed-2db... (conflict: unix==2.7.1.0, >> process >> => unix==2.7.1.0/installed-4ae...) >> trying: process-1.2.1.0 >> next goal: directory (dependency of cabal-install-1.20.0.6) >> rejecting: directory-1.2.1.1/installed-b08... (conflict: directory => >> time==1.5.0.1/installed-c23..., cabal-install => time>=1.1 && <1.5) >> rejecting: directory-1.2.1.0 (conflict: base==4.8.0.0/installed-779..., >> directory => base>=4.5 && <4.8) >> rejecting: directory-1.2.0.1, 1.2.0.0 (conflict: >> base==4.8.0.0/installed-779..., directory => base>=4.2 && <4.7) >> rejecting: directory-1.1.0.2 (conflict: base==4.8.0.0/installed-779..., >> directory => base>=4.2 && <4.6) >> rejecting: directory-1.1.0.1 (conflict: base==4.8.0.0/installed-779..., >> directory => base>=4.2 && <4.5) >> rejecting: directory-1.1.0.0 (conflict: base==4.8.0.0/installed-779..., >> directory => base>=4.2 && <4.4) >> rejecting: directory-1.0.1.2, 1.0.1.1, 1.0.1.0, 1.0.0.3, 1.0.0.0 >> (conflict: >> process => directory>=1.1 && <1.3) >> Dependency tree exhaustively searched. >> >> On Thu, Jan 1, 2015 at 2:15 PM, Mikhail Glushenkov < >> the.dead.shall.rise at gmail.com> wrote: >> >>> Hi, >>> >>> On 1 January 2015 at 19:00, George Colpitts >>> wrote: >>> > Thanks, there seems to be dependency issues: >>> >>> Try also adding '--allow-newer=bytestring,deepseq'. >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ezyang at mit.edu Thu Jan 1 18:54:47 2015 From: ezyang at mit.edu (Edward Z. Yang) Date: Thu, 01 Jan 2015 13:54:47 -0500 Subject: ANNOUNCE: GHC 7.10.1 Release Candidate 1 - problem with latest cabal-install In-Reply-To: References: Message-ID: <1420138452-sup-5875@sabre> If you still have your old GHC around, it will be much better to compile the newest cabal-install using the *old GHC*, and then use that copy to bootstrap a copy of the newest cabal-install. Edward Excerpts from George Colpitts's message of 2015-01-01 12:08:44 -0500: > ?$ ? > cabal update > Downloading the latest package list from hackage.haskell.org > Note: *there is a new version of cabal-install available.* > To upgrade, run: cabal install cabal-install > bash-3.2$ *cabal install -j3 cabal-install * > *?...?* > > > *Resolving dependencies...cabal: Could not resolve dependencies:* > trying: cabal-install-1.20.0.6 (user goal) > trying: base-4.8.0.0/installed-779... (dependency of cabal-install-1.20.0.6) > next goal: process (dependency of cabal-install-1.20.0.6) > rejecting: process-1.2.1.0/installed-2db... (conflict: unix==2.7.1.0, > process > => unix==2.7.1.0/installed-4ae...) > trying: process-1.2.1.0 > next goal: directory (dependency of cabal-install-1.20.0.6) > rejecting: directory-1.2.1.1/installed-b08... 
(conflict: directory => > time==1.5.0.1/installed-c23..., cabal-install => time>=1.1 && <1.5) > rejecting: directory-1.2.1.0 (conflict: base==4.8.0.0/installed-779..., > directory => base>=4.5 && <4.8) > rejecting: directory-1.2.0.1, 1.2.0.0 (conflict: > base==4.8.0.0/installed-779..., directory => base>=4.2 && <4.7) > rejecting: directory-1.1.0.2 (conflict: base==4.8.0.0/installed-779..., > directory => base>=4.2 && <4.6) > rejecting: directory-1.1.0.1 (conflict: base==4.8.0.0/installed-779..., > directory => base>=4.2 && <4.5) > rejecting: directory-1.1.0.0 (conflict: base==4.8.0.0/installed-779..., > directory => base>=4.2 && <4.4) > rejecting: directory-1.0.1.2, 1.0.1.1, 1.0.1.0, 1.0.0.3, 1.0.0.0 (conflict: > process => directory>=1.1 && <1.3) > Dependency tree exhaustively searched. From george.colpitts at gmail.com Thu Jan 1 19:23:50 2015 From: george.colpitts at gmail.com (George Colpitts) Date: Thu, 1 Jan 2015 15:23:50 -0400 Subject: ANNOUNCE: GHC 7.10.1 Release Candidate 1 - problem with latest cabal-install In-Reply-To: <1420138452-sup-5875@sabre> References: <1420138452-sup-5875@sabre> Message-ID: I still have 7.8.3 but it doesn't seem to want to build the latest cabal: ghc --version The Glorious Glasgow Haskell Compilation System, version 7.8.3 bash-3.2$ cabal install cabal-install Resolving dependencies... Configuring cabal-install-1.20.0.6... Building cabal-install-1.20.0.6... Installed cabal-install-1.20.0.6 Updating documentation index /Users/gcolpitts/Library/Haskell/share/doc/index.html On Thu, Jan 1, 2015 at 2:54 PM, Edward Z. Yang wrote: > If you still have your old GHC around, it will be much better to > compile the newest cabal-install using the *old GHC*, and then > use that copy to bootstrap a copy of the newest cabal-install. > > Edward > > Excerpts from George Colpitts's message of 2015-01-01 12:08:44 -0500: > > ?$ ? > > cabal update > > Downloading the latest package list from hackage.haskell.org > > Note: *there is a new version of cabal-install available.* > > To upgrade, run: cabal install cabal-install > > bash-3.2$ *cabal install -j3 cabal-install * > > *?...?* > > > > > > *Resolving dependencies...cabal: Could not resolve dependencies:* > > trying: cabal-install-1.20.0.6 (user goal) > > trying: base-4.8.0.0/installed-779... (dependency of > cabal-install-1.20.0.6) > > next goal: process (dependency of cabal-install-1.20.0.6) > > rejecting: process-1.2.1.0/installed-2db... (conflict: unix==2.7.1.0, > > process > > => unix==2.7.1.0/installed-4ae...) > > trying: process-1.2.1.0 > > next goal: directory (dependency of cabal-install-1.20.0.6) > > rejecting: directory-1.2.1.1/installed-b08... (conflict: directory => > > time==1.5.0.1/installed-c23..., cabal-install => time>=1.1 && <1.5) > > rejecting: directory-1.2.1.0 (conflict: base==4.8.0.0/installed-779..., > > directory => base>=4.5 && <4.8) > > rejecting: directory-1.2.0.1, 1.2.0.0 (conflict: > > base==4.8.0.0/installed-779..., directory => base>=4.2 && <4.7) > > rejecting: directory-1.1.0.2 (conflict: base==4.8.0.0/installed-779..., > > directory => base>=4.2 && <4.6) > > rejecting: directory-1.1.0.1 (conflict: base==4.8.0.0/installed-779..., > > directory => base>=4.2 && <4.5) > > rejecting: directory-1.1.0.0 (conflict: base==4.8.0.0/installed-779..., > > directory => base>=4.2 && <4.4) > > rejecting: directory-1.0.1.2, 1.0.1.1, 1.0.1.0, 1.0.0.3, 1.0.0.0 > (conflict: > > process => directory>=1.1 && <1.3) > > Dependency tree exhaustively searched. 
> -------------- next part -------------- An HTML attachment was scrubbed... URL: From ezyang at mit.edu Thu Jan 1 19:37:08 2015 From: ezyang at mit.edu (Edward Z. Yang) Date: Thu, 01 Jan 2015 14:37:08 -0500 Subject: ANNOUNCE: GHC 7.10.1 Release Candidate 1 - problem with latest cabal-install In-Reply-To: References: <1420138452-sup-5875@sabre> Message-ID: <1420141005-sup-3185@sabre> Oh, because Cabal HQ hasn't cut a release yet. Try installing out of Git. https://github.com/haskell/cabal/ Edward Excerpts from George Colpitts's message of 2015-01-01 14:23:50 -0500: > I still have 7.8.3 but it doesn't seem to want to build the latest cabal: > > ghc --version > The Glorious Glasgow Haskell Compilation System, version 7.8.3 > bash-3.2$ cabal install cabal-install > Resolving dependencies... > Configuring cabal-install-1.20.0.6... > Building cabal-install-1.20.0.6... > Installed cabal-install-1.20.0.6 > Updating documentation index > /Users/gcolpitts/Library/Haskell/share/doc/index.html > > On Thu, Jan 1, 2015 at 2:54 PM, Edward Z. Yang wrote: > > > If you still have your old GHC around, it will be much better to > > compile the newest cabal-install using the *old GHC*, and then > > use that copy to bootstrap a copy of the newest cabal-install. > > > > Edward > > > > Excerpts from George Colpitts's message of 2015-01-01 12:08:44 -0500: > > > ?$ ? > > > cabal update > > > Downloading the latest package list from hackage.haskell.org > > > Note: *there is a new version of cabal-install available.* > > > To upgrade, run: cabal install cabal-install > > > bash-3.2$ *cabal install -j3 cabal-install * > > > *?...?* > > > > > > > > > *Resolving dependencies...cabal: Could not resolve dependencies:* > > > trying: cabal-install-1.20.0.6 (user goal) > > > trying: base-4.8.0.0/installed-779... (dependency of > > cabal-install-1.20.0.6) > > > next goal: process (dependency of cabal-install-1.20.0.6) > > > rejecting: process-1.2.1.0/installed-2db... (conflict: unix==2.7.1.0, > > > process > > > => unix==2.7.1.0/installed-4ae...) > > > trying: process-1.2.1.0 > > > next goal: directory (dependency of cabal-install-1.20.0.6) > > > rejecting: directory-1.2.1.1/installed-b08... (conflict: directory => > > > time==1.5.0.1/installed-c23..., cabal-install => time>=1.1 && <1.5) > > > rejecting: directory-1.2.1.0 (conflict: base==4.8.0.0/installed-779..., > > > directory => base>=4.5 && <4.8) > > > rejecting: directory-1.2.0.1, 1.2.0.0 (conflict: > > > base==4.8.0.0/installed-779..., directory => base>=4.2 && <4.7) > > > rejecting: directory-1.1.0.2 (conflict: base==4.8.0.0/installed-779..., > > > directory => base>=4.2 && <4.6) > > > rejecting: directory-1.1.0.1 (conflict: base==4.8.0.0/installed-779..., > > > directory => base>=4.2 && <4.5) > > > rejecting: directory-1.1.0.0 (conflict: base==4.8.0.0/installed-779..., > > > directory => base>=4.2 && <4.4) > > > rejecting: directory-1.0.1.2, 1.0.1.1, 1.0.1.0, 1.0.0.3, 1.0.0.0 > > (conflict: > > > process => directory>=1.1 && <1.3) > > > Dependency tree exhaustively searched. 
> > From alan.zimm at gmail.com Thu Jan 1 20:16:50 2015 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Thu, 1 Jan 2015 22:16:50 +0200 Subject: Possible issue with isBuiltInOcc_maybe In-Reply-To: References: Message-ID: Never mind, I see it gets the required RdrName, e.g.listTyCon_RDR Alan On Thu, Jan 1, 2015 at 1:03 PM, Alan & Kim Zimmerman wrote: > I am busy checking that I can exactprint all the RdrNames produced by the > parser, and came across this > > isBuiltInOcc_maybe occ > = case occNameString occ of > "[]" -> choose_ns listTyCon nilDataCon > ":" -> Just consDataConName > "[::]" -> Just parrTyConName > "(##)" -> choose_ns unboxedUnitTyCon unboxedUnitDataCon > "()" -> choose_ns unitTyCon unitDataCon > '(':'#':',':rest -> parse_tuple UnboxedTuple 2 rest > '(':',':rest -> parse_tuple BoxedTuple 2 rest > _other -> Nothing > > The above code does not allow any spaces between '[' and ']', or '[:' and > ':]' (for example) > > However, the parse rules DO allow spaces > > | '[' ']' {% ams (sLL $1 $> $ listTyCon_RDR) [mos > $1,mcs $2] } > | '[:' ':]' {% ams (sLL $1 $> $ parrTyCon_RDR) [mo > $1,mc $2] } > > Is this a problem? > > Alan > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Thu Jan 1 20:34:14 2015 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 1 Jan 2015 20:34:14 +0000 Subject: Haddock error Message-ID: <618BE556AADD624C9C918AA5D5911BEF56293640@DB3PRD3001MB020.064d.mgd.msft.net> Folks I'm getting this Haddock error (see below) from a clean build on Windows. Does it ring any bells for anyone? Anyone have any idea how to fix? My build isn't exactly HEAD but I'd be very surprised if my changes are the cause. Thanks Simon "C:/code/HEAD/inplace/bin/haddock" --odir="libraries/ghc-prim/dist-install/doc/html/ghc-prim" --no-tmp-comp-dir --dump-interface=libraries/ghc-prim/dist-install/doc/html/ghc-prim/ghc-prim.haddock --html --hoogle --title="ghc-prim-0.3.1.0: GHC primitives" --prologue="libraries/ghc-prim/dist-install/haddock-prologue.txt" --optghc=-hisuf --optghc=hi --optghc=-osuf --optghc=o --optghc=-hcsuf --optghc=hc --optghc=-static --optghc=-H32m --optghc=-O --optghc=-Werror --optghc=-Wall --optghc=-H64m --optghc=-O0 --optghc=-this-package-key --optghc=ghcpr_FgrV6cgh2JHBlbcx1OSlwt --optghc=-hide-all-packages --optghc=-i --optghc=-ilibraries/ghc-prim/. --optghc=-ilibraries/ghc-prim/dist-install/build --optghc=-ilibraries/ghc-prim/dist-install/build/autogen --optghc=-Ilibraries/ghc-prim/dist-install/build --optghc=-Ilibraries/ghc-prim/dist-install/build/autogen --optghc=-Ilibraries/ghc-prim/. 
--optghc=-optP-include --optghc=-optPlibraries/ghc-prim/dist-install/build/autogen/cabal_macros.h --optghc=-package-key --optghc=rts --optghc=-this-package-key --optghc=ghc-prim --optghc=-XHaskell2010 --optghc=-O2 --optghc=-O --optghc=-dcore-lint --optghc=-fno-warn-deprecated-flags --optghc=-fno-warn-tabs --optghc=-Wwarn --optghc=-no-user-package-db --optghc=-rtsopts --optghc=-fno-warn-trustworthy-safe --optghc=-odir --optghc=libraries/ghc-prim/dist-install/build --optghc=-hidir --optghc=libraries/ghc-prim/dist-install/build --optghc=-stubdir --optghc=libraries/ghc-prim/dist-install/build libraries/ghc-prim/./GHC/CString.hs libraries/ghc-prim/./GHC/Classes.hs libraries/ghc-prim/./GHC/Debug.hs libraries/ghc-prim/./GHC/IntWord64.hs libraries/ghc-prim/./GHC/Magic.hs libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs libraries/ghc-prim/./GHC/Tuple.hs libraries/ghc-prim/./GHC/Types.hs libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs +RTS -tlibraries/ghc-prim/dist-install/doc/html/ghc-prim/ghc-prim.haddock.t --machine-readable Haddock coverage: Warning: Couldn't find .haddock for export Int64# Warning: Couldn't find .haddock for export Word64# 3% ( 1 / 38) in 'GHC.IntWord64' Missing documentation for: Int64# Word64# eqWord64# (libraries/ghc-prim/./GHC/IntWord64.hs:29) neWord64# (libraries/ghc-prim/./GHC/IntWord64.hs:30) ltWord64# (libraries/ghc-prim/./GHC/IntWord64.hs:31) leWord64# (libraries/ghc-prim/./GHC/IntWord64.hs:32) gtWord64# (libraries/ghc-prim/./GHC/IntWord64.hs:33) geWord64# (libraries/ghc-prim/./GHC/IntWord64.hs:34) eqInt64# (libraries/ghc-prim/./GHC/IntWord64.hs:36) neInt64# (libraries/ghc-prim/./GHC/IntWord64.hs:37) ltInt64# (libraries/ghc-prim/./GHC/IntWord64.hs:38) leInt64# (libraries/ghc-prim/./GHC/IntWord64.hs:39) gtInt64# (libraries/ghc-prim/./GHC/IntWord64.hs:40) geInt64# (libraries/ghc-prim/./GHC/IntWord64.hs:41) quotInt64# (libraries/ghc-prim/./GHC/IntWord64.hs:42) remInt64# (libraries/ghc-prim/./GHC/IntWord64.hs:43) plusInt64# (libraries/ghc-prim/./GHC/IntWord64.hs:45) minusInt64# (libraries/ghc-prim/./GHC/IntWord64.hs:46) timesInt64# (libraries/ghc-prim/./GHC/IntWord64.hs:47) negateInt64# (libraries/ghc-prim/./GHC/IntWord64.hs:48) quotWord64# (libraries/ghc-prim/./GHC/IntWord64.hs:49) remWord64# (libraries/ghc-prim/./GHC/IntWord64.hs:50) and64# (libraries/ghc-prim/./GHC/IntWord64.hs:52) or64# (libraries/ghc-prim/./GHC/IntWord64.hs:53) xor64# (libraries/ghc-prim/./GHC/IntWord64.hs:54) not64# (libraries/ghc-prim/./GHC/IntWord64.hs:55) uncheckedShiftL64# (libraries/ghc-prim/./GHC/IntWord64.hs:57) uncheckedShiftRL64# (libraries/ghc-prim/./GHC/IntWord64.hs:58) uncheckedIShiftL64# (libraries/ghc-prim/./GHC/IntWord64.hs:59) uncheckedIShiftRA64# (libraries/ghc-prim/./GHC/IntWord64.hs:60) uncheckedIShiftRL64# (libraries/ghc-prim/./GHC/IntWord64.hs:61) int64ToWord64# (libraries/ghc-prim/./GHC/IntWord64.hs:63) word64ToInt64# (libraries/ghc-prim/./GHC/IntWord64.hs:64) intToInt64# (libraries/ghc-prim/./GHC/IntWord64.hs:65) int64ToInt# (libraries/ghc-prim/./GHC/IntWord64.hs:66) wordToWord64# (libraries/ghc-prim/./GHC/IntWord64.hs:67) word64ToWord# (libraries/ghc-prim/./GHC/IntWord64.hs:68) 3% ( 2 / 63) in 'GHC.Tuple' Missing documentation for: (,) (libraries/ghc-prim/./GHC/Tuple.hs:26) (,,) (libraries/ghc-prim/./GHC/Tuple.hs:27) (,,,) (libraries/ghc-prim/./GHC/Tuple.hs:28) (,,,,) (libraries/ghc-prim/./GHC/Tuple.hs:29) (,,,,,) (libraries/ghc-prim/./GHC/Tuple.hs:30) (,,,,,,) (libraries/ghc-prim/./GHC/Tuple.hs:31) (,,,,,,,) (libraries/ghc-prim/./GHC/Tuple.hs:32) 
(,,,,,,,,) (libraries/ghc-prim/./GHC/Tuple.hs:33) (,,,,,,,,,) (libraries/ghc-prim/./GHC/Tuple.hs:34) (,,,,,,,,,,) (libraries/ghc-prim/./GHC/Tuple.hs:35) (,,,,,,,,,,,) (libraries/ghc-prim/./GHC/Tuple.hs:36) (,,,,,,,,,,,,) (libraries/ghc-prim/./GHC/Tuple.hs:37) (,,,,,,,,,,,,,) (libraries/ghc-prim/./GHC/Tuple.hs:38) (,,,,,,,,,,,,,,) (libraries/ghc-prim/./GHC/Tuple.hs:39) (,,,,,,,,,,,,,,,) (libraries/ghc-prim/./GHC/Tuple.hs:40) (,,,,,,,,,,,,,,,,) (libraries/ghc-prim/./GHC/Tuple.hs:41) (,,,,,,,,,,,,,,,,,) (libraries/ghc-prim/./GHC/Tuple.hs:43) (,,,,,,,,,,,,,,,,,,) (libraries/ghc-prim/./GHC/Tuple.hs:45) (,,,,,,,,,,,,,,,,,,,) (libraries/ghc-prim/./GHC/Tuple.hs:47) (,,,,,,,,,,,,,,,,,,,,) (libraries/ghc-prim/./GHC/Tuple.hs:49) (,,,,,,,,,,,,,,,,,,,,,) (libraries/ghc-prim/./GHC/Tuple.hs:51) (,,,,,,,,,,,,,,,,,,,,,,) (libraries/ghc-prim/./GHC/Tuple.hs:53) (,,,,,,,,,,,,,,,,,,,,,,,) (libraries/ghc-prim/./GHC/Tuple.hs:55) (,,,,,,,,,,,,,,,,,,,,,,,,) (libraries/ghc-prim/./GHC/Tuple.hs:57) (,,,,,,,,,,,,,,,,,,,,,,,,,) (libraries/ghc-prim/./GHC/Tuple.hs:59) (,,,,,,,,,,,,,,,,,,,,,,,,,,) (libraries/ghc-prim/./GHC/Tuple.hs:61) (,,,,,,,,,,,,,,,,,,,,,,,,,,,) (libraries/ghc-prim/./GHC/Tuple.hs:63) (,,,,,,,,,,,,,,,,,,,,,,,,,,,,) (libraries/ghc-prim/./GHC/Tuple.hs:65) (,,,,,,,,,,,,,,,,,,,,,,,,,,,,,) (libraries/ghc-prim/./GHC/Tuple.hs:67) (,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,) (libraries/ghc-prim/./GHC/Tuple.hs:69) (,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,) (libraries/ghc-prim/./GHC/Tuple.hs:71) (,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,) (libraries/ghc-prim/./GHC/Tuple.hs:73) (,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,) (libraries/ghc-prim/./GHC/Tuple.hs:75) (,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,) (libraries/ghc-prim/./GHC/Tuple.hs:77) (,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,) (libraries/ghc-prim/./GHC/Tuple.hs:79) (,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,) (libraries/ghc-prim/./GHC/Tuple.hs:81) (,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,) (libraries/ghc-prim/./GHC/Tuple.hs:83) (,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,) (libraries/ghc-prim/./GHC/Tuple.hs:85) (,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,) (libraries/ghc-prim/./GHC/Tuple.hs:87) (,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,) (libraries/ghc-prim/./GHC/Tuple.hs:89) (,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,) (libraries/ghc-prim/./GHC/Tuple.hs:91) (,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,) (libraries/ghc-prim/./GHC/Tuple.hs:93) (,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,) (libraries/ghc-prim/./GHC/Tuple.hs:95) (,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,) (libraries/ghc-prim/./GHC/Tuple.hs:97) (,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,) (libraries/ghc-prim/./GHC/Tuple.hs:99) (,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,) (libraries/ghc-prim/./GHC/Tuple.hs:101) (,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,) (libraries/ghc-prim/./GHC/Tuple.hs:103) (,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,) (libraries/ghc-prim/./GHC/Tuple.hs:105) (,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,) (libraries/ghc-prim/./GHC/Tuple.hs:107) (,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,) (libraries/ghc-prim/./GHC/Tuple.hs:109) (,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,) (libraries/ghc-prim/./GHC/Tuple.hs:111) (,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,) (libraries/ghc-prim/./GHC/Tuple.hs:113) (,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,) (libraries/ghc-prim/./GHC/Tuple.hs:115) (,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,) (libraries/ghc-prim/./GHC/Tuple.hs:117) (,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,) (libraries/ghc-prim/./GHC/Tuple.hs:119) 
(,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,) (libraries/ghc-prim/./GHC/Tuple.hs:121) (,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,) (libraries/ghc-prim/./GHC/Tuple.hs:123) (,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,) (libraries/ghc-prim/./GHC/Tuple.hs:125) (,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,) (libraries/ghc-prim/./GHC/Tuple.hs:127) (,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,) (libraries/ghc-prim/./GHC/Tuple.hs:129) (,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,) (libraries/ghc-prim/./GHC/Tuple.hs:131) 0% ( 0 /409) in 'GHC.PrimopWrappers' Missing documentation for: Module header gtChar# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:7) geChar# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:10) eqChar# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:13) neChar# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:16) ltChar# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:19) leChar# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:22) ord# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:25) +# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:28) -# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:31) *# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:34) mulIntMayOflo# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:37) quotInt# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:40) remInt# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:43) quotRemInt# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:46) andI# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:49) orI# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:52) xorI# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:55) notI# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:58) negateInt# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:61) addIntC# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:64) subIntC# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:67) ># (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:70) >=# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:73) ==# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:76) /=# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:79) <# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:82) <=# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:85) chr# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:88) int2Word# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:91) int2Float# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:94) int2Double# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:97) word2Float# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:100) word2Double# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:103) uncheckedIShiftL# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:106) uncheckedIShiftRA# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:109) uncheckedIShiftRL# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:112) plusWord# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:115) plusWord2# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:118) minusWord# 
(libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:121) timesWord# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:124) timesWord2# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:127) quotWord# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:130) remWord# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:133) quotRemWord# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:136) quotRemWord2# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:139) and# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:142) or# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:145) xor# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:148) not# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:151) uncheckedShiftL# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:154) uncheckedShiftRL# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:157) word2Int# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:160) gtWord# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:163) geWord# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:166) eqWord# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:169) neWord# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:172) ltWord# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:175) leWord# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:178) popCnt8# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:181) popCnt16# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:184) popCnt32# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:187) popCnt64# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:190) popCnt# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:193) clz8# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:196) clz16# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:199) clz32# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:202) clz64# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:205) clz# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:208) ctz8# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:211) ctz16# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:214) ctz32# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:217) ctz64# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:220) ctz# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:223) byteSwap16# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:226) byteSwap32# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:229) byteSwap64# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:232) byteSwap# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:235) narrow8Int# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:238) narrow16Int# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:241) narrow32Int# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:244) narrow8Word# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:247) narrow16Word# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:250) narrow32Word# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:253) >## (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:256) >=## (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:259) ==## 
(libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:262) /=## (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:265) <## (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:268) <=## (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:271) +## (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:274) -## (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:277) *## (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:280) /## (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:283) negateDouble# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:286) double2Int# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:289) double2Float# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:292) expDouble# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:295) logDouble# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:298) sqrtDouble# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:301) sinDouble# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:304) cosDouble# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:307) tanDouble# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:310) asinDouble# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:313) acosDouble# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:316) atanDouble# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:319) sinhDouble# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:322) coshDouble# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:325) tanhDouble# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:328) **## (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:331) decodeDouble_2Int# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:334) decodeDouble_Int64# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:337) gtFloat# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:340) geFloat# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:343) eqFloat# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:346) neFloat# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:349) ltFloat# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:352) leFloat# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:355) plusFloat# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:358) minusFloat# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:361) timesFloat# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:364) divideFloat# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:367) negateFloat# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:370) float2Int# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:373) expFloat# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:376) logFloat# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:379) sqrtFloat# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:382) sinFloat# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:385) cosFloat# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:388) tanFloat# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:391) asinFloat# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:394) acosFloat# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:397) atanFloat# 
(libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:400) sinhFloat# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:403) coshFloat# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:406) tanhFloat# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:409) powerFloat# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:412) float2Double# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:415) decodeFloat_Int# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:418) newArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:421) sameMutableArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:424) readArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:427) writeArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:430) sizeofArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:433) sizeofMutableArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:436) indexArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:439) unsafeFreezeArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:442) unsafeThawArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:445) copyArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:448) copyMutableArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:451) cloneArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:454) cloneMutableArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:457) freezeArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:460) thawArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:463) casArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:466) newSmallArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:469) sameSmallMutableArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:472) readSmallArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:475) writeSmallArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:478) sizeofSmallArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:481) sizeofSmallMutableArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:484) indexSmallArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:487) unsafeFreezeSmallArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:490) unsafeThawSmallArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:493) copySmallArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:496) copySmallMutableArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:499) cloneSmallArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:502) cloneSmallMutableArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:505) freezeSmallArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:508) thawSmallArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:511) casSmallArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:514) newByteArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:517) newPinnedByteArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:520) newAlignedPinnedByteArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:523) byteArrayContents# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:526) sameMutableByteArray# 
(libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:529) shrinkMutableByteArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:532) resizeMutableByteArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:535) unsafeFreezeByteArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:538) sizeofByteArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:541) sizeofMutableByteArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:544) indexCharArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:547) indexWideCharArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:550) indexIntArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:553) indexWordArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:556) indexAddrArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:559) indexFloatArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:562) indexDoubleArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:565) indexStablePtrArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:568) indexInt8Array# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:571) indexInt16Array# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:574) indexInt32Array# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:577) indexInt64Array# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:580) indexWord8Array# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:583) indexWord16Array# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:586) indexWord32Array# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:589) indexWord64Array# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:592) readCharArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:595) readWideCharArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:598) readIntArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:601) readWordArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:604) readAddrArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:607) readFloatArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:610) readDoubleArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:613) readStablePtrArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:616) readInt8Array# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:619) readInt16Array# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:622) readInt32Array# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:625) readInt64Array# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:628) readWord8Array# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:631) readWord16Array# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:634) readWord32Array# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:637) readWord64Array# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:640) writeCharArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:643) writeWideCharArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:646) writeIntArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:649) writeWordArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:652) writeAddrArray# 
(libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:655) writeFloatArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:658) writeDoubleArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:661) writeStablePtrArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:664) writeInt8Array# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:667) writeInt16Array# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:670) writeInt32Array# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:673) writeInt64Array# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:676) writeWord8Array# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:679) writeWord16Array# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:682) writeWord32Array# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:685) writeWord64Array# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:688) copyByteArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:691) copyMutableByteArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:694) copyByteArrayToAddr# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:697) copyMutableByteArrayToAddr# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:700) copyAddrToByteArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:703) setByteArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:706) atomicReadIntArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:709) atomicWriteIntArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:712) casIntArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:715) fetchAddIntArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:718) fetchSubIntArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:721) fetchAndIntArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:724) fetchNandIntArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:727) fetchOrIntArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:730) fetchXorIntArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:733) newArrayArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:736) sameMutableArrayArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:739) unsafeFreezeArrayArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:742) sizeofArrayArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:745) sizeofMutableArrayArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:748) indexByteArrayArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:751) indexArrayArrayArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:754) readByteArrayArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:757) readMutableByteArrayArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:760) readArrayArrayArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:763) readMutableArrayArrayArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:766) writeByteArrayArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:769) writeMutableByteArrayArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:772) writeArrayArrayArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:775) writeMutableArrayArrayArray# 
(libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:778) copyArrayArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:781) copyMutableArrayArray# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:784) plusAddr# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:787) minusAddr# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:790) remAddr# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:793) addr2Int# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:796) int2Addr# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:799) gtAddr# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:802) geAddr# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:805) eqAddr# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:808) neAddr# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:811) ltAddr# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:814) leAddr# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:817) indexCharOffAddr# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:820) indexWideCharOffAddr# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:823) indexIntOffAddr# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:826) indexWordOffAddr# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:829) indexAddrOffAddr# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:832) indexFloatOffAddr# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:835) indexDoubleOffAddr# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:838) indexStablePtrOffAddr# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:841) indexInt8OffAddr# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:844) indexInt16OffAddr# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:847) indexInt32OffAddr# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:850) indexInt64OffAddr# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:853) indexWord8OffAddr# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:856) indexWord16OffAddr# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:859) indexWord32OffAddr# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:862) indexWord64OffAddr# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:865) readCharOffAddr# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:868) readWideCharOffAddr# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:871) readIntOffAddr# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:874) readWordOffAddr# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:877) readAddrOffAddr# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:880) readFloatOffAddr# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:883) readDoubleOffAddr# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:886) readStablePtrOffAddr# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:889) readInt8OffAddr# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:892) readInt16OffAddr# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:895) readInt32OffAddr# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:898) readInt64OffAddr# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:901) readWord8OffAddr# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:904) readWord16OffAddr# 
(libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:907) readWord32OffAddr# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:910) readWord64OffAddr# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:913) writeCharOffAddr# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:916) writeWideCharOffAddr# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:919) writeIntOffAddr# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:922) writeWordOffAddr# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:925) writeAddrOffAddr# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:928) writeFloatOffAddr# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:931) writeDoubleOffAddr# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:934) writeStablePtrOffAddr# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:937) writeInt8OffAddr# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:940) writeInt16OffAddr# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:943) writeInt32OffAddr# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:946) writeInt64OffAddr# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:949) writeWord8OffAddr# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:952) writeWord16OffAddr# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:955) writeWord32OffAddr# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:958) writeWord64OffAddr# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:961) newMutVar# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:964) readMutVar# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:967) writeMutVar# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:970) sameMutVar# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:973) atomicModifyMutVar# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:976) casMutVar# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:979) catch# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:982) raise# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:985) raiseIO# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:988) maskAsyncExceptions# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:991) maskUninterruptible# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:994) unmaskAsyncExceptions# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:997) getMaskingState# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1000) atomically# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1003) retry# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1006) catchRetry# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1009) catchSTM# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1012) check# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1015) newTVar# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1018) readTVar# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1021) readTVarIO# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1024) writeTVar# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1027) sameTVar# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1030) newMVar# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1033) takeMVar# 
(libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1036) tryTakeMVar# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1039) putMVar# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1042) tryPutMVar# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1045) readMVar# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1048) tryReadMVar# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1051) sameMVar# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1054) isEmptyMVar# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1057) delay# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1060) waitRead# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1063) waitWrite# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1066) asyncRead# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1069) asyncWrite# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1072) asyncDoProc# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1075) fork# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1078) forkOn# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1081) killThread# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1084) yield# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1087) myThreadId# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1090) labelThread# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1093) isCurrentThreadBound# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1096) noDuplicate# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1099) threadStatus# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1102) mkWeak# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1105) mkWeakNoFinalizer# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1108) addCFinalizerToWeak# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1111) deRefWeak# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1114) finalizeWeak# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1117) touch# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1120) makeStablePtr# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1123) deRefStablePtr# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1126) eqStablePtr# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1129) makeStableName# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1132) eqStableName# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1135) stableNameToInt# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1138) reallyUnsafePtrEquality# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1141) spark# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1144) getSpark# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1147) numSparks# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1150) dataToTag# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1153) addrToAny# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1156) mkApUpd0# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1159) newBCO# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1162) unpackClosure# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1165) getApStackVal# 
(libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1168) getCCSOf# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1171) getCurrentCCS# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1174) traceEvent# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1177) traceMarker# (lib libraries\ghc-prim\dist-install\build\autogen\GHC\Prim.hs:3335:11: Redundant constraint: Coercible a b In the type signature for: coerce :: Coercible a b => a -> b raries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1180) prefetchByteArray3# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1183) prefetchMutableByteArray3# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1186) prefetchAddr3# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1189) prefetchValue3# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1192) prefetchByteArray2# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1195) prefetchMutableByteArray2# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1198) prefetchAddr2# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1201) prefetchValue2# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1204) prefetchByteArray1# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1207) prefetchMutableByteArray1# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1210) prefetchAddr1# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1213) prefetchValue1# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1216) prefetchByteArray0# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1219) prefetchMutableByteArray0# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1222) prefetchAddr0# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1225) prefetchValue0# (libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs:1228) 100% ( 4 / 4) in 'GHC.Magic' 83% ( 10 / 12) in 'GHC.Types' Missing documentation for: Bool (libraries/ghc-prim/./GHC/Types.hs:35) Ordering (libraries/ghc-prim/./GHC/Types.hs:68) 73% (868 /1186) in 'GHC.Prim' Missing documentation for: Char# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1447) gtChar# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1449) geChar# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1452) eqChar# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1455) neChar# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1458) ltChar# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1461) leChar# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1464) ord# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1467) Int# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1470) +# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1473) -# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1477) andI# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1525) orI# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1528) xorI# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1531) notI# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1534) negateInt# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1537) ># (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1559) >=# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1563) ==# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1567) /=# 
(libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1571) <# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1575) <=# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1579) chr# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1582) int2Word# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1585) int2Float# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1588) int2Double# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1591) word2Float# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1594) word2Double# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1597) Word# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1618) plusWord# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1620) plusWord2# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1623) minusWord# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1626) timesWord# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1629) timesWord2# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1632) quotWord# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1635) remWord# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1638) quotRemWord# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1641) quotRemWord2# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1644) and# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1647) or# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1650) xor# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1653) not# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1656) word2Int# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1671) gtWord# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1674) geWord# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1677) eqWord# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1680) neWord# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1683) ltWord# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1686) leWord# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1689) narrow8Int# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1787) narrow16Int# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1790) narrow32Int# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1793) narrow8Word# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1796) narrow16Word# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1799) narrow32Word# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1802) Int64# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1805) Word64# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1807) Double# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1809) >## (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1812) >=## (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1816) ==## (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1820) /=## (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1824) <## (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1828) <=## (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1832) +## (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1836) -## (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1840) *## (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1844) /## 
(libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1848) negateDouble# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1851) double2Float# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1861) expDouble# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1864) logDouble# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1867) sqrtDouble# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1870) sinDouble# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1873) cosDouble# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1876) tanDouble# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1879) asinDouble# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1882) acosDouble# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1885) atanDouble# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1888) sinhDouble# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1891) coshDouble# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1894) tanhDouble# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1897) Float# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1918) gtFloat# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1920) geFloat# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1923) eqFloat# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1926) neFloat# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1929) ltFloat# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1932) leFloat# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1935) plusFloat# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1938) minusFloat# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1941) timesFloat# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1944) divideFloat# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1947) negateFloat# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1950) expFloat# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1960) logFloat# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1963) sqrtFloat# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1966) sinFloat# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1969) cosFloat# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1972) tanFloat# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1975) asinFloat# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1978) acosFloat# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1981) atanFloat# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1984) sinhFloat# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1987) coshFloat# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1990) tanhFloat# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1993) powerFloat# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1996) float2Double# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:1999) Array# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2008) MutableArray# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2010) sameMutableArray# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2019) SmallArray# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2116) SmallMutableArray# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2118) sameSmallMutableArray# 
(libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2127) ByteArray# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2224) MutableByteArray# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2226) sameMutableByteArray# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2249) indexIntArray# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2299) indexWordArray# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2302) indexAddrArray# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2305) indexFloatArray# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2308) indexDoubleArray# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2311) indexStablePtrArray# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2314) readAddrArray# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2377) readFloatArray# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2380) readDoubleArray# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2383) readStablePtrArray# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2386) readInt8Array# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2389) readInt16Array# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2392) readInt32Array# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2395) readInt64Array# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2398) readWord8Array# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2401) readWord16Array# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2404) readWord32Array# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2407) readWord64Array# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2410) writeIntArray# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2423) writeWordArray# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2426) writeAddrArray# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2429) writeFloatArray# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2432) writeDoubleArray# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2435) writeStablePtrArray# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2438) writeInt8Array# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2441) writeInt16Array# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2444) writeInt32Array# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2447) writeInt64Array# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2450) writeWord8Array# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2453) writeWord16Array# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2456) writeWord32Array# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2459) writeWord64Array# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2462) ArrayArray# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2573) MutableArrayArray# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2575) sameMutableArrayArray# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2584) indexByteArrayArray# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2602) indexArrayArrayArray# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2605) readByteArrayArray# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2608) readMutableByteArrayArray# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2611) readArrayArrayArray# 
(libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2614) readMutableArrayArrayArray# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2617) writeByteArrayArray# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2620) writeMutableByteArrayArray# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2623) writeArrayArrayArray# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2626) writeMutableArrayArrayArray# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2629) plusAddr# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2656) gtAddr# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2681) geAddr# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2684) eqAddr# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2687) neAddr# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2690) ltAddr# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2693) leAddr# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2696) indexIntOffAddr# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2709) indexWordOffAddr# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2712) indexAddrOffAddr# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2715) indexFloatOffAddr# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2718) indexDoubleOffAddr# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2721) indexStablePtrOffAddr# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2724) indexInt8OffAddr# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2727) indexInt16OffAddr# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2730) indexInt32OffAddr# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2733) indexInt64OffAddr# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2736) indexWord8OffAddr# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2739) indexWord16OffAddr# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2742) indexWord32OffAddr# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2745) indexWord64OffAddr# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2748) readIntOffAddr# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2761) readWordOffAddr# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2764) readAddrOffAddr# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2767) readFloatOffAddr# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2770) readDoubleOffAddr# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2773) readStablePtrOffAddr# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2776) readInt8OffAddr# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2779) readInt16OffAddr# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2782) readInt32OffAddr# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2785) readInt64OffAddr# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2788) readWord8OffAddr# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2791) readWord16OffAddr# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2794) readWord32OffAddr# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2797) readWord64OffAddr# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2800) writeCharOffAddr# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2803) writeWideCharOffAddr# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2806) 
writeIntOffAddr# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2809) writeWordOffAddr# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2812) writeAddrOffAddr# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2815) writeFloatOffAddr# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2818) writeDoubleOffAddr# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2821) writeStablePtrOffAddr# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2824) writeInt8OffAddr# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2827) writeInt16OffAddr# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2830) writeInt32OffAddr# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2833) writeInt64OffAddr# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2836) writeWord8OffAddr# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2839) writeWord16OffAddr# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2842) writeWord32OffAddr# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2845) writeWord64OffAddr# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2848) sameMutVar# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2870) atomicModifyMutVar# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2873) casMutVar# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2876) catch# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2879) raise# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2882) raiseIO# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2885) maskAsyncExceptions# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2888) maskUninterruptible# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2891) unmaskAsyncExceptions# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2894) getMaskingState# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2897) TVar# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2900) atomically# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2902) retry# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2905) catchRetry# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2908) catchSTM# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2911) check# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2914) sameTVar# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2937) sameMVar# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:2988) fork# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3046) forkOn# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3049) killThread# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3052) yield# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3055) myThreadId# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3058) labelThread# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3061) isCurrentThreadBound# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3064) noDuplicate# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3067) threadStatus# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3070) Weak# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3073) mkWeak# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3075) mkWeakNoFinalizer# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3078) deRefWeak# 
(libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3091) finalizeWeak# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3094) touch# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3097) StablePtr# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3100) StableName# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3102) makeStablePtr# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3104) deRefStablePtr# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3107) eqStablePtr# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3110) makeStableName# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3113) eqStableName# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3116) stableNameToInt# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3119) reallyUnsafePtrEquality# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3122) par# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3125) spark# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3128) seq# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3131) getSpark# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3134) parGlobal# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3142) parLocal# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3145) parAt# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3148) parAtAbs# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3151) parAtRel# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3154) parAtForNow# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3157) dataToTag# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3160) tagToEnum# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3163) mkApUpd0# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3175) newBCO# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3178) unpackClosure# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3181) getApStackVal# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3184) getC:\code\HEAD\libraries\base\dist-install\build\GHC\Base.hi Declaration for $fOrdMaybe: attempting to use module 'GHC.Classes' (libraries/ghc-prim/./GHC/Classes.hs) which is not loaded CCSOf# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3187) Int8X16# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3338) Int16X8# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3340) Int32X4# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3342) Int64X2# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3344) Int8X32# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3346) Int16X16# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3348) Int32X8# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3350) Int64X4# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3352) Int8X64# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3354) Int16X32# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3356) Int32X16# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3358) Int64X8# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3360) Word8X16# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3362) Word16X8# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3364) Word32X4# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3366) Word64X2# 
(libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3368) Word8X32# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3370) Word16X16# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3372) Word32X8# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3374) Word64X4# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3376) Word8X64# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3378) Word16X32# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3380) Word32X16# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3382) Word64X8# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3384) FloatX4# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3386) DoubleX2# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3388) FloatX8# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3390) DoubleX4# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3392) FloatX16# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3394) DoubleX8# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3396) prefetchByteArray3# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:6608) prefetchMutableByteArray3# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:6611) prefetchAddr3# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:6614) prefetchValue3# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:6617) prefetchByteArray2# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:6620) prefetchMutableByteArray2# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:6623) prefetchAddr2# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:6626) prefetchValue2# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:6629) prefetchByteArray1# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:6632) prefetchMutableByteArray1# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:6635) prefetchAddr1# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:6638) prefetchValue1# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:6641) prefetchByteArray0# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:6644) prefetchMutableByteArray0# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:6647) prefetchAddr0# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:6650) prefetchValue0# (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:6653) Cannot continue after interface file error libraries/ghc-prim/ghc.mk:4: recipe for target 'libraries/ghc-prim/dist-install/doc/html/ghc-prim/ghc-prim.haddock' failed make[1]: *** [libraries/ghc-prim/dist-install/doc/html/ghc-prim/ghc-prim.haddock] Error 1 Makefile:71: recipe for target 'all' failed make: *** [all] Error 2 HEAD (master)$ -------------- next part -------------- An HTML attachment was scrubbed... URL: From george.colpitts at gmail.com Thu Jan 1 20:57:59 2015 From: george.colpitts at gmail.com (George Colpitts) Date: Thu, 1 Jan 2015 16:57:59 -0400 Subject: ANNOUNCE: GHC 7.10.1 Release Candidate 1 - problem with latest cabal-install In-Reply-To: <1420141005-sup-3185@sabre> References: <1420138452-sup-5875@sabre> <1420141005-sup-3185@sabre> Message-ID: Thanks, I seem to have got that to work On Thu, Jan 1, 2015 at 3:37 PM, Edward Z. Yang wrote: > Oh, because Cabal HQ hasn't cut a release yet. > > Try installing out of Git. 
https://github.com/haskell/cabal/ > > Edward > > Excerpts from George Colpitts's message of 2015-01-01 14:23:50 -0500: > > I still have 7.8.3 but it doesn't seem to want to build the latest cabal: > > > > ghc --version > > The Glorious Glasgow Haskell Compilation System, version 7.8.3 > > bash-3.2$ cabal install cabal-install > > Resolving dependencies... > > Configuring cabal-install-1.20.0.6... > > Building cabal-install-1.20.0.6... > > Installed cabal-install-1.20.0.6 > > Updating documentation index > > /Users/gcolpitts/Library/Haskell/share/doc/index.html > > > > On Thu, Jan 1, 2015 at 2:54 PM, Edward Z. Yang wrote: > > > > > If you still have your old GHC around, it will be much better to > > > compile the newest cabal-install using the *old GHC*, and then > > > use that copy to bootstrap a copy of the newest cabal-install. > > > > > > Edward > > > > > > Excerpts from George Colpitts's message of 2015-01-01 12:08:44 -0500: > > > > ?$ ? > > > > cabal update > > > > Downloading the latest package list from hackage.haskell.org > > > > Note: *there is a new version of cabal-install available.* > > > > To upgrade, run: cabal install cabal-install > > > > bash-3.2$ *cabal install -j3 cabal-install * > > > > *?...?* > > > > > > > > > > > > *Resolving dependencies...cabal: Could not resolve dependencies:* > > > > trying: cabal-install-1.20.0.6 (user goal) > > > > trying: base-4.8.0.0/installed-779... (dependency of > > > cabal-install-1.20.0.6) > > > > next goal: process (dependency of cabal-install-1.20.0.6) > > > > rejecting: process-1.2.1.0/installed-2db... (conflict: unix==2.7.1.0, > > > > process > > > > => unix==2.7.1.0/installed-4ae...) > > > > trying: process-1.2.1.0 > > > > next goal: directory (dependency of cabal-install-1.20.0.6) > > > > rejecting: directory-1.2.1.1/installed-b08... (conflict: directory => > > > > time==1.5.0.1/installed-c23..., cabal-install => time>=1.1 && <1.5) > > > > rejecting: directory-1.2.1.0 (conflict: base== > 4.8.0.0/installed-779..., > > > > directory => base>=4.5 && <4.8) > > > > rejecting: directory-1.2.0.1, 1.2.0.0 (conflict: > > > > base==4.8.0.0/installed-779..., directory => base>=4.2 && <4.7) > > > > rejecting: directory-1.1.0.2 (conflict: base== > 4.8.0.0/installed-779..., > > > > directory => base>=4.2 && <4.6) > > > > rejecting: directory-1.1.0.1 (conflict: base== > 4.8.0.0/installed-779..., > > > > directory => base>=4.2 && <4.5) > > > > rejecting: directory-1.1.0.0 (conflict: base== > 4.8.0.0/installed-779..., > > > > directory => base>=4.2 && <4.4) > > > > rejecting: directory-1.0.1.2, 1.0.1.1, 1.0.1.0, 1.0.0.3, 1.0.0.0 > > > (conflict: > > > > process => directory>=1.1 && <1.3) > > > > Dependency tree exhaustively searched. > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fuuzetsu at fuuzetsu.co.uk Fri Jan 2 04:08:48 2015 From: fuuzetsu at fuuzetsu.co.uk (Mateusz Kowalczyk) Date: Fri, 02 Jan 2015 04:08:48 +0000 Subject: Haddock error In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF56293640@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF56293640@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <54A619D0.30704@fuuzetsu.co.uk> On 01/01/2015 08:34 PM, Simon Peyton Jones wrote: > Folks I'm getting this Haddock error (see below) from a clean build > on Windows. Does it ring any bells for anyone? Anyone have any idea > how to fix? My build isn't exactly HEAD but I'd be very surprised if > my changes are the cause. 
Thanks Simon > > "C:/code/HEAD/inplace/bin/haddock" > --odir="libraries/ghc-prim/dist-install/doc/html/ghc-prim" > --no-tmp-comp-dir > --dump-interface=libraries/ghc-prim/dist-install/doc/html/ghc-prim/ghc-prim.haddock > --html --hoogle --title="ghc-prim-0.3.1.0: GHC primitives" > --prologue="libraries/ghc-prim/dist-install/haddock-prologue.txt" > --optghc=-hisuf --optghc=hi --optghc=-osuf --optghc=o --optghc=-hcsuf > --optghc=hc --optghc=-static --optghc=-H32m --optghc=-O > --optghc=-Werror --optghc=-Wall --optghc=-H64m --optghc=-O0 > --optghc=-this-package-key --optghc=ghcpr_FgrV6cgh2JHBlbcx1OSlwt > --optghc=-hide-all-packages --optghc=-i > --optghc=-ilibraries/ghc-prim/. > --optghc=-ilibraries/ghc-prim/dist-install/build > --optghc=-ilibraries/ghc-prim/dist-install/build/autogen > --optghc=-Ilibraries/ghc-prim/dist-install/build > --optghc=-Ilibraries/ghc-prim/dist-install/build/autogen > --optghc=-Ilibraries/ghc-prim/. --optghc=-optP-include > --optghc=-optPlibraries/ghc-prim/dist-install/build/autogen/cabal_macros.h > --optghc=-package-key --optghc=rts --optghc=-this-package-key > --optghc=ghc-prim --optghc=-XHaskell2010 --optghc=-O2 --optghc=-O > --optghc=-dcore-lint --optghc=-fno-warn-deprecated-flags > --optghc=-fno-warn-tabs --optghc=-Wwarn --optghc=-no-user-package-db > --optghc=-rtsopts --optghc=-fno-warn-trustworthy-safe --optghc=-odir > --optghc=libraries/ghc-prim/dist-install/build --optghc=-hidir > --optghc=libraries/ghc-prim/dist-install/build --optghc=-stubdir > --optghc=libraries/ghc-prim/dist-install/build > libraries/ghc-prim/./GHC/CString.hs > libraries/ghc-prim/./GHC/Classes.hs > libraries/ghc-prim/./GHC/Debug.hs > libraries/ghc-prim/./GHC/IntWord64.hs > libraries/ghc-prim/./GHC/Magic.hs > libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs > libraries/ghc-prim/./GHC/Tuple.hs libraries/ghc-prim/./GHC/Types.hs > libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs +RTS > -tlibraries/ghc-prim/dist-install/doc/html/ghc-prim/ghc-prim.haddock.t > --machine-readable > > [snip] > > Declaration for $fOrdMaybe: attempting to use module 'GHC.Classes' > (libraries/ghc-prim/./GHC/Classes.hs) which is not loaded CCSOf# > (libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs:3187) > > [snip] > > Cannot continue after interface file error > libraries/ghc-prim/ghc.mk:4: recipe for target > 'libraries/ghc-prim/dist-install/doc/html/ghc-prim/ghc-prim.haddock' > failed make[1]: *** > [libraries/ghc-prim/dist-install/doc/html/ghc-prim/ghc-prim.haddock] > Error 1 Makefile:71: recipe for target 'all' failed make: *** [all] > Error 2 HEAD (master)$ > Hi Simon, In InterfaceFile.hs in Haddock there is binaryInterfaceVersion :: Word16 #if (__GLASGOW_HASKELL__ >= 711) && (__GLASGOW_HASKELL__ < 713) binaryInterfaceVersion = 27 ? Try bumping this to 28 and if it works then then you may want to commit the change. By the way I see that there is a lot of output from Haddock now as I made it print locations of missing documentation by default. Maybe --no-print-missing-docs should be passed in for GHC stuff. -- Mateusz K. 
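For anyone applying that bump by hand, the following is a self-contained sketch rather than Haddock's actual module: the module name and the #else branch are invented so the snippet compiles on its own, and the only substantive change is the guarded value going from 27 to 28 (the real number is whatever the Haddock maintainers settle on).

{-# LANGUAGE CPP #-}
-- Hypothetical stand-in for the constant in Haddock's InterfaceFile.hs quoted above.
module InterfaceVersionSketch (binaryInterfaceVersion) where

import Data.Word (Word16)

-- Version stamp written into (and checked against) .haddock interface files.
binaryInterfaceVersion :: Word16
#if (__GLASGOW_HASKELL__ >= 711) && (__GLASGOW_HASKELL__ < 713)
binaryInterfaceVersion = 28   -- bumped from 27 for interfaces produced with GHC 7.11/7.12
#else
binaryInterfaceVersion = 27   -- illustrative fallback so this sketch compiles elsewhere
#endif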
From george.colpitts at gmail.com Fri Jan 2 13:12:34 2015 From: george.colpitts at gmail.com (George Colpitts) Date: Fri, 2 Jan 2015 09:12:34 -0400 Subject: ANNOUNCE: GHC 7.10.1 Release Candidate 1 - feedback on Mac OS In-Reply-To: References: Message-ID: Only problem remaining is compiling with -fllvm and running resulting executable Other problems below have now been solved: - cpphs - new version resolves problem - cabal install vector - upgrade to gcc (Homebrew gcc 4.9.2_1) 4.9.2 solves problem On Thu, Jan 1, 2015 at 9:58 AM, George Colpitts wrote: > I built from source on Mac OS and found the following issues: > > > - llvm , compiling with llvm (3.4.2) gives the following warnings: > - $ ghc -fllvm cubeFast.hs > [1 of 1] Compiling Main ( cubeFast.hs, cubeFast.o ) > clang: warning: argument unused during compilation: > '-fno-stack-protector' > clang: warning: argument unused during compilation: '-D > TABLES_NEXT_TO_CODE' > clang: warning: argument unused during compilation: '-I .' > clang: warning: argument unused during compilation: '-fno-common' > clang: warning: argument unused during compilation: '-U __PIC__' > clang: warning: argument unused during compilation: '-D __PIC__' > Linking cubeFast ... > - running the resulting executable crashes (compiling without > -fllvm gives no warnings and executable works properly) > - cat bigCube.txt | ./cubeFast > /dev/null > Segmentation fault: 11 > - Exception Type: EXC_BAD_ACCESS (SIGSEGV) > Exception Codes: KERN_INVALID_ADDRESS at 0xfffffffd5bfd8460 > > > - ?cabal install vector fails: > - [ 5 of 19] Compiling Data.Vector.Fusion.Stream.Monadic ( > Data/Vector/Fusion/Stream/Monadic.hs, > dist/build/Data/Vector/Fusion/Stream/Monadic.o ) > : can't load .so/.DLL for: > /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.10.sdk/usr/lib/libiconv.dylib > (dlopen(/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.10.sdk/usr/lib/libiconv.dylib, > 5): no suitable image found. Did find: > > /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.10.sdk/usr/lib/libiconv.dylib: > mach-o, but wrong filetype) > - ?cabal install cpphs fails:? > - cabal install cpphs > Resolving dependencies... > Configuring cpphs-1.13... > Building cpphs-1.13... > Failed to install cpphs-1.13 > Build log ( /Users/gcolpitts/.cabal/logs/cpphs-1.13.log ): > Warning: cpphs.cabal: Unknown fields: build-depends (line 5) > Fields allowed in this section: > name, version, cabal-version, build-type, license, license-file, > license-files, copyright, maintainer, stability, homepage, > package-url, bug-reports, synopsis, description, category, author, > tested-with, data-files, data-dir, extra-source-files, > extra-tmp-files, extra-doc-files > Configuring cpphs-1.13... > Building cpphs-1.13... > Preprocessing library cpphs-1.13... > - Language/Preprocessor/Cpphs.hs:1:1: > Could not find module ?Prelude? > It is a member of the hidden package ?base-4.8.0.0?. > Perhaps you need to add ?base? to the build-depends in your > .cabal file. > Use -v to see a list of the files searched for. > > Language/Preprocessor/Cpphs/CppIfdef.hs:32:8: > Could not find module ?Numeric? > It is a member of the hidden package ?base-4.8.0.0?. > Perhaps you need to add ?base? to the build-depends in your > .cabal file. > Use -v to see a list of the files searched for. > > Language/Preprocessor/Cpphs/CppIfdef.hs:33:8: > Could not find module ?System.IO.Unsafe? 
> It is a member of the hidden package ?base-4.8.0.0?. > Perhaps you need to add ?base? to the build-depends in your > .cabal file. > Use -v to see a list of the files searched for. > > Language/Preprocessor/Cpphs/CppIfdef.hs:34:8: > Could not find module ?System.IO? > It is a member of the hidden package ?base-4.8.0.0?. > Perhaps you need to add ?base? to the build-depends in your > .cabal file. > Use -v to see a list of the files searched for. > > Language/Preprocessor/Cpphs/MacroPass.hs:29:8: > Could not find module ?Control.Monad? > It is a member of the hidden package ?base-4.8.0.0?. > Perhaps you need to add ?base? to the build-depends in your > .cabal file. > Use -v to see a list of the files searched for. > > Language/Preprocessor/Cpphs/MacroPass.hs:30:8: > Could not find module ?System.Time? > Perhaps you meant > System.CPUTime (needs flag -package-key base-4.8.0.0) > System.Cmd (needs flag -package-key > process-1.2.1.0 at proce_ADbmNMhxdsoDn9NrOWjezu) > System.Mem (needs flag -package-key base-4.8.0.0) > Use -v to see a list of the files searched for. > > Language/Preprocessor/Cpphs/MacroPass.hs:31:8: > Could not find module ?System.Locale? > Use -v to see a list of the files searched for. > > Language/Preprocessor/Cpphs/Options.hs:22:8: > Could not find module ?Data.Maybe? > It is a member of the hidden package ?base-4.8.0.0?. > Perhaps you need to add ?base? to the build-depends in your > .cabal file. > Use -v to see a list of the files searched for. > > Language/Preprocessor/Cpphs/ReadFirst.hs:19:8: > Could not find module ?System.Directory? > It is a member of the hidden package > ?directory-1.2.1.1 at direc_3m6Ew9I164U5MIkATLCdb8?. > Perhaps you need to add ?directory? to the build-depends in > your .cabal file. > Use -v to see a list of the files searched for. > > Language/Preprocessor/Unlit.hs:5:8: > Could not find module ?Data.Char? > It is a member of the hidden package ?base-4.8.0.0?. > Perhaps you need to add ?base? to the build-depends in your > .cabal file. > Use -v to see a list of the files searched for. > > Language/Preprocessor/Unlit.hs:6:8: > Could not find module ?Data.List? > It is a member of the hidden package ?base-4.8.0.0?. > Perhaps you need to add ?base? to the build-depends in your > .cabal file. > Use -v to see a list of the files searched for. > cabal: Error: some packages failed to install: > cpphs-1.13 failed during the building phase. The exception was: > ExitFailure 1 > > ?Configuration details: > > > - Mac OS 10.10.1 (Yosemite) > - uname -a > Darwin iMac27-5.local 14.0.0 Darwin Kernel Version 14.0.0: Fri Sep 19 > 00:26:44 PDT 2014; root:xnu-2782.1.97~2/RELEASE_X86_64 x86_64 > - llvm info: > - opt --version > LLVM (http://llvm.org/): > LLVM version 3.4.2 > Optimized build with assertions. > Built Oct 31 2014 (23:14:30). > Default target: x86_64-apple-darwin14.0.0 > Host CPU: corei7 > - gcc --version > gcc (Homebrew gcc 4.9.1) 4.9.1 > Copyright (C) 2014 Free Software Foundation, Inc. > This is free software; see the source for copying conditions. There > is NO > warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR > PURPOSE. > - ? 
/usr/bin/ghc --info > [("Project name","The Glorious Glasgow Haskell Compilation System") > ,("GCC extra via C opts"," -fwrapv") > ,("C compiler command","/usr/bin/gcc") > ,("C compiler flags"," -m64 -fno-stack-protector") > ,("C compiler link flags"," -m64") > ,("Haskell CPP command","/usr/bin/gcc") > ,("Haskell CPP flags","-E -undef -traditional -Wno-invalid-pp-token > -Wno-unicode -Wno-trigraphs") > ,("ld command","/usr/bin/ld") > ,("ld flags"," -arch x86_64") > ,("ld supports compact unwind","YES") > ,("ld supports build-id","NO") > ,("ld supports filelist","YES") > ,("ld is GNU ld","NO") > ,("ar command","/usr/bin/ar") > ,("ar flags","clqs") > ,("ar supports at file","NO") > ,("touch command","touch") > ,("dllwrap command","/bin/false") > ,("windres command","/bin/false") > ,("libtool command","libtool") > ,("perl command","/usr/bin/perl") > ,("target os","OSDarwin") > ,("target arch","ArchX86_64") > ,("target word size","8") > ,("target has GNU nonexec stack","False") > ,("target has .ident directive","True") > ,("target has subsections via symbols","True") > ,("Unregisterised","NO") > ,("LLVM llc command","llc") > ,("LLVM opt command","opt") > ,("Project version","7.8.3") > ,("Booter version","7.6.3") > ,("Stage","2") > ,("Build platform","x86_64-apple-darwin") > ,("Host platform","x86_64-apple-darwin") > ,("Target platform","x86_64-apple-darwin") > ,("Have interpreter","YES") > ,("Object splitting supported","YES") > ,("Have native code generator","YES") > ,("Support SMP","YES") > ,("Tables next to code","YES") > ,("RTS ways","l debug thr thr_debug thr_l thr_p dyn debug_dyn thr_dyn > thr_debug_dyn l_dyn thr_l_dyn") > ,("Support dynamic-too","YES") > ,("Support parallel --make","YES") > ,("Dynamic by default","NO") > ,("GHC Dynamic","YES") > ,("Leading underscore","YES") > ,("Debug on","False") > > ,("LibDir","/Library/Frameworks/GHC.framework/Versions/7.8.3-x86_64/usr/lib/ghc-7.8.3") > ,("Global Package > DB","/Library/Frameworks/GHC.framework/Versions/7.8.3-x86_64/usr/lib/ghc-7.8.3/package.conf.d") > ] > - Not sure I found the correct instructions for building from source, > I used the following: > - > > $ autoreconf > $ ./configure > $ make > $ make install > > > > > On Tue, Dec 23, 2014 at 10:36 AM, Austin Seipp > wrote: > >> We are pleased to announce the first release candidate for GHC 7.10.1: >> >> https://downloads.haskell.org/~ghc/7.10.1-rc1/ >> >> This includes the source tarball and bindists for 64bit/32bit Linux >> and Windows. Binary builds for other platforms will be available >> shortly. (CentOS 6.5 binaries are not available at this time like they >> were for 7.8.x). These binaries and tarballs have an accompanying >> SHA256SUMS file signed by my GPG key id (0x3B58D86F). >> >> We plan to make the 7.10.1 release sometime in February of 2015. We >> expect another RC to occur during January of 2015. >> >> Please test as much as possible; bugs are much cheaper if we find them >> before the release! >> >> -- >> Regards, >> >> Austin Seipp, Haskell Consultant >> Well-Typed LLP, http://www.well-typed.com/ >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ezyang at mit.edu Fri Jan 2 21:57:48 2015 From: ezyang at mit.edu (Edward Z. 
Yang) Date: Fri, 02 Jan 2015 13:57:48 -0800 Subject: Compiling nofib-analyse In-Reply-To: <1419984556-sup-9043@sabre> References: <618BE556AADD624C9C918AA5D5911BEF5628F05C@DB3PRD3001MB020.064d.mgd.msft.net> <1419984556-sup-9043@sabre> Message-ID: <1420235805-sup-7421@sabre> Attached is a patch which axes HTML support in nofib-analyse. I would Phab it but I don't know how to do that for submodules. Maybe we should drop the LaTeX support too! Edward Excerpts from Edward Z. Yang's message of 2014-12-30 19:10:56 -0500: > Pretty sure it's the aptly named 'html'. > https://hackage.haskell.org/package/html > > I can't remember the last time I used the HTML reporting > capability, so I'd be happy about removing it. > > Edward > > Excerpts from Simon Peyton Jones's message of 2014-12-30 07:03:17 -0500: > > When building nofib-analyse (in nofib), I get > > > > /home/simonpj/local/bin/ghc -O -cpp --make Main -o nofib-analyse > > > > Main.hs:14:18: > > Could not find module 'Text.Html' > > Use -v to see a list of the files searched for. > > > > There are rather a lot of HTML packages. Which one is needed? Does this mean that it's essential to install this package before running nofib, a manual step? > > > > I wonder if it'd be better to remove the HTML dependency, or make it optional? > > > > Simon -------------- next part -------------- A non-text attachment was scrubbed... Name: 0001-Remove-HTML-generation-from-nofib-analyse-dropping-h.patch Type: application/octet-stream Size: 11134 bytes Desc: not available URL: From johan.tibell at gmail.com Fri Jan 2 23:18:55 2015 From: johan.tibell at gmail.com (Johan Tibell) Date: Fri, 2 Jan 2015 18:18:55 -0500 Subject: Shipping core libraries with debug symbols Message-ID: Hi! We are now able to generate DWARF debug info, by passing -g to GHC. This will allow for better debugging (e.g. using GDB) and profiling (e.g. using Linux perf events). To make this feature more user-accessible we need to ship debug info for the core libraries (and perhaps the RTS). The reason we need to ship debug info is that it's difficult, or impossible in the case of base, for the user to rebuild these libraries. The question is, how do we do this well? I don't think our "way" solution works very well. It causes us to recompile too much and GHC doesn't know which "ways" have been built or not. I believe other compilers, e.g. GCC, ship debug symbols in separate files ( https://packages.debian.org/sid/libc-dbg) that e.g. GDB can then look up. -- Johan -------------- next part -------------- An HTML attachment was scrubbed... URL: From allbery.b at gmail.com Sat Jan 3 00:26:32 2015 From: allbery.b at gmail.com (Brandon Allbery) Date: Fri, 2 Jan 2015 19:26:32 -0500 Subject: Shipping core libraries with debug symbols In-Reply-To: References: Message-ID: On Fri, Jan 2, 2015 at 6:18 PM, Johan Tibell wrote: > I believe other compilers, e.g. GCC, ship debug symbols in separate files ( > https://packages.debian.org/sid/libc-dbg > > ) that e.g. GDB can then look up. > Lookaside debugging information is (a) a Linux-ism, although possibly also included in mingw --- but not OS X or the *BSDs (b) on RPM-based systems at least, is split out of objects into separate files, and thence into debug packages, by the standard RPM support macros before the standard strip step (I expect debuild does something similar on Debian-ish systems).
-- brandon s allbery kf8nh sine nomine associates allbery.b at gmail.com ballbery at sinenomine.net unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From johan.tibell at gmail.com Sat Jan 3 00:54:32 2015 From: johan.tibell at gmail.com (Johan Tibell) Date: Fri, 2 Jan 2015 19:54:32 -0500 Subject: Shipping core libraries with debug symbols In-Reply-To: References: Message-ID: Brandon, If we just built GHC with debug symbols enabled, everything should just work from a packaging perspective? On Fri, Jan 2, 2015 at 7:26 PM, Brandon Allbery wrote: > On Fri, Jan 2, 2015 at 6:18 PM, Johan Tibell > wrote: > >> I believe other compilers, e.g. GCC, ship debug symbols in separate files >> (https://packages.debian.org/sid/libc-dbg >> >> ) that e.g. GDB can then look up. >> > > Lookaside debugging information is (a) a Linux-ism, although possibly also > included in mingw --- but not OS X or the *BSDs (b) on RPM-based systems at > least, is split out of objects into separate files, and thence into debug > packages, by the standard RPM support macros before the standard strip step > (I expect debuild does something similar on Debian-ish systems). > > -- > brandon s allbery kf8nh sine nomine > associates > allbery.b at gmail.com > ballbery at sinenomine.net > unix, openafs, kerberos, infrastructure, xmonad > http://sinenomine.net > -------------- next part -------------- An HTML attachment was scrubbed... URL: From allbery.b at gmail.com Sat Jan 3 00:59:30 2015 From: allbery.b at gmail.com (Brandon Allbery) Date: Fri, 2 Jan 2015 19:59:30 -0500 Subject: Shipping core libraries with debug symbols In-Reply-To: References: Message-ID: On Fri, Jan 2, 2015 at 7:54 PM, Johan Tibell wrote: > If we just built GHC with debug symbols enabled, everything should just > work from a packaging perspective? > On most RPM systems, at least (I get debuginfo packages for local RPM builds, with nothing special in the specs files). Someone else would have to comment on Debian's build system, although I expect that it is similarly automated. -- brandon s allbery kf8nh sine nomine associates allbery.b at gmail.com ballbery at sinenomine.net unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From johan.tibell at gmail.com Sat Jan 3 13:05:48 2015 From: johan.tibell at gmail.com (Johan Tibell) Date: Sat, 3 Jan 2015 08:05:48 -0500 Subject: Cabal 1.22 RC ready to test In-Reply-To: References: Message-ID: On Sat, Jan 3, 2015 at 2:36 AM, Mikhail Glushenkov < the.dead.shall.rise at gmail.com> wrote: > Hi, > > On 3 January 2015 at 03:36, Johan Tibell wrote: > > If I don't hear anything the next few days I will make the release. > > The test suite doesn't compile with GHC HEAD on Travis. > The error (https://travis-ci.org/haskell/cabal/jobs/45758614) is quite perplexing: [62 of 77] Compiling Distribution.Client.Config ( Distribution/Client/Config.hs, dist/dist-sandbox-8940a882/build/cabal/cabal-tmp/Distribution/Client/Config.o ) Distribution/Client/Config.hs:56:12: Module ?Distribution.Simple.Compiler? does not export ?DebugInfoLevel(..)? cabal: Error: some packages failed to install: cabal-install-1.22.0.0 failed during the building phase. The exception was: ExitFailure 1 Distribution.Simple.Compiler most definitely does export DebugInfoLevel, otherwise it wouldn't compile with the other GHC versions. 
Does GHC do something special with Cabal nowadays when it's no longer tied to GHC? -------------- next part -------------- An HTML attachment was scrubbed... URL: From scpmw at leeds.ac.uk Sat Jan 3 18:33:42 2015 From: scpmw at leeds.ac.uk (Peter Wortmann) Date: Sat, 03 Jan 2015 19:33:42 +0100 Subject: Shipping core libraries with debug symbols In-Reply-To: References: Message-ID: The debian package seems to simply put un-stripped libraries into a special path (/usr/lib/debug/...). This should be relatively straight-forward to implement. Note though that from a look at the RPM infrastructure, they have a tool in there (dwarfread) which actually parses through DWARF information and updates paths, so there is possibly more going on here. On the other hand, supporting -gsplit-dwarf seems to be a different mechanism, called Fission[1]. I haven't looked too much at the implementation yet, but to me it looks like it means generating copies of debug sections (such as .debug-line.dwo) which will then be extracted using "objcopy --extract-dwo". This might take a bit more work to implement, both on DWARF generation code as well as infrastructure. Interestingly enough, doing this kind of splitting will actually buy us next to nothing - with Fission both .debug_line and .debug_frame would remain in the binary unchanged, so all we'd export would be some fairly inconsequential data from .debug_info. In contrast to other programming languages, we just don't have that much debug information in the first place. Well, at least not yet. Greetings, Peter [1] https://gcc.gnu.org/wiki/DebugFission On 03/01/2015 00:18, Johan Tibell wrote: > Hi! > > We are now able to generate DWARF debug info, by passing -g to GHC. This > will allow for better debugging (e.g. using GDB) and profiling (e.g. > using Linux perf events). To make this feature more user accessible we > need to ship debug info for the core libraries (and perhaps the RTS). > The reason we need to ship debug info is that it's difficult, or > impossible in the case of base, for the user to rebuild these > libraries.The question is, how do we do this well? I don't think our > "way" solution works very well. It causes us to recompile too much and > GHC doesn't know which "ways" have been built or not. > > I believe other compilers, e.g. GCC, ship debug symbols in separate > files (https://packages.debian.org/sid/libc-dbg) that e.g. GDB can then > look up. > > -- Johan > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > From johan.tibell at gmail.com Sat Jan 3 20:17:02 2015 From: johan.tibell at gmail.com (Johan Tibell) Date: Sat, 3 Jan 2015 15:17:02 -0500 Subject: Cabal 1.22 RC ready to test In-Reply-To: References: Message-ID: It might be as simple as bumping the Cabal submodule in GHC to match the upstream 1.22 branch. On Sat, Jan 3, 2015 at 8:46 AM, Jake Wheat wrote: > > On 3 January 2015 at 15:05, Johan Tibell wrote: > >> The error (https://travis-ci.org/haskell/cabal/jobs/45758614) is quite >> perplexing: >> >> [62 of 77] Compiling Distribution.Client.Config ( >> Distribution/Client/Config.hs, >> dist/dist-sandbox-8940a882/build/cabal/cabal-tmp/Distribution/Client/Config.o >> ) >> Distribution/Client/Config.hs:56:12: >> Module >> ?Distribution.Simple.Compiler? >> does not export >> ?DebugInfoLevel(..)? >> cabal: Error: some packages failed to install: >> cabal-install-1.22.0.0 failed during the building phase. 
The exception >> was: >> ExitFailure 1 >> >> Distribution.Simple.Compiler most definitely does export DebugInfoLevel, >> otherwise it wouldn't compile with the other GHC versions. >> >> Does GHC do something special with Cabal nowadays when it's no longer >> tied to GHC? >> >> Is it because the Cabal-1.22.0.0 bundled with ghc is now different to the > Cabal-1.22.0.0 in github? I get the same error compiling the master branch > cabal-install with ghc-7.10.0-20141222. > > I think one solution is to increase the version number of Cabal in github > (for 1.22 and master branches), and to make the latest cabal-install depend > on this (i.e. Cabal>=1.22.0.1) since cabal-install 1.22 in github no longer > works with the 'released' snapshot Cabal-1.22.0.0 in ghc. This fixes the > error for ghc-7.10.0-20141222. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From martin.e.foster at gmail.com Sat Jan 3 20:20:04 2015 From: martin.e.foster at gmail.com (Martin Foster) Date: Sat, 3 Jan 2015 20:20:04 +0000 Subject: Windows build gotchas In-Reply-To: <87y4pme412.fsf@gmail.com> References: <87y4pme412.fsf@gmail.com> Message-ID: On Thu, Jan 1, 2015 at 6:42 PM, Herbert Valerio Riedel wrote: > > > I noticed that the cabal output made reference to > > "C:\Users\Martin\AppData\Roaming\cabal\", so tried moving that out of the > > way, but it only made the problem worse. I did figure it out eventually: > in > > addition to that directory, "%APPDATA%\cabal", there were also files left > > over in "%APPDATA%\ghc". Once I removed that directory as well, things > > started working again - but it took me a lot of time & frustration to get > > there. > > That's btw because Cabal/GHC uses `getAppUserDataDirectory "cabal"` and > `getAppUserDataDirectory "ghc"` respectively... > > Hrm. It seems the behaviour of getAppUserDataDirectory is decided at compile-time, by way of `#if defined(mingw32_HOST_OS)`: http://hackage.haskell.org/package/directory/docs/src/System-Directory.html#getAppUserDataDirectory So, given the "build GHC on Windows" procedure involves downloading Cabal as a binary, it could only be changed by using a different binary. And much as I might be able to argue it'd make sense for getAppUserDataDirectory to use $HOME instead of %APPDATA% when in MSYS, changing it would be a backwards-compatibility-breaking change to System.Directory. Well, I suppose, theoretically, it could do something like... use the value of some special environment variable if it's set at runtime, otherwise retain the existing behaviour. But that doesn't feel like it'd be good practice to me - too hacky. So I guess that'll be staying as it is, then. Good to know why it's like that, though - it makes a lot more sense now. Thanks for the pointer! - Martin -------------- next part -------------- An HTML attachment was scrubbed... URL: From johan.tibell at gmail.com Sat Jan 3 20:22:41 2015 From: johan.tibell at gmail.com (Johan Tibell) Date: Sat, 3 Jan 2015 15:22:41 -0500 Subject: Shipping core libraries with debug symbols In-Reply-To: References: Message-ID: How much debug info (as a percentage) do we currently generate? Could we just keep it in there in the release? On Sat, Jan 3, 2015 at 1:33 PM, Peter Wortmann wrote: > > > The debian package seems to simply put un-stripped libraries into a > special path (/usr/lib/debug/...). This should be relatively > straight-forward to implement. 
Note though that from a look at the RPM > infrastructure, they have a tool in there (dwarfread) which actually parses > through DWARF information and updates paths, so there is possibly more > going on here. > > On the other hand, supporting -gsplit-dwarf seems to be a different > mechanism, called Fission[1]. I haven't looked too much at the > implementation yet, but to me it looks like it means generating copies of > debug sections (such as .debug-line.dwo) which will then be extracted using > "objcopy --extract-dwo". This might take a bit more work to implement, both > on DWARF generation code as well as infrastructure. > > Interestingly enough, doing this kind of splitting will actually buy us > next to nothing - with Fission both .debug_line and .debug_frame would > remain in the binary unchanged, so all we'd export would be some fairly > inconsequential data from .debug_info. In contrast to other programming > languages, we just don't have that much debug information in the first > place. Well, at least not yet. > > Greetings, > Peter > > [1] https://gcc.gnu.org/wiki/DebugFission > > > > On 03/01/2015 00:18, Johan Tibell wrote: > >> Hi! >> >> We are now able to generate DWARF debug info, by passing -g to GHC. This >> will allow for better debugging (e.g. using GDB) and profiling (e.g. >> using Linux perf events). To make this feature more user accessible we >> need to ship debug info for the core libraries (and perhaps the RTS). >> The reason we need to ship debug info is that it's difficult, or >> impossible in the case of base, for the user to rebuild these >> libraries.The question is, how do we do this well? I don't think our >> "way" solution works very well. It causes us to recompile too much and >> GHC doesn't know which "ways" have been built or not. >> >> I believe other compilers, e.g. GCC, ship debug symbols in separate >> files (https://packages.debian.org/sid/libc-dbg) that e.g. GDB can then >> look up. >> >> -- Johan >> >> >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> >> > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From slyich at gmail.com Sat Jan 3 20:52:54 2015 From: slyich at gmail.com (Sergei Trofimovich) Date: Sat, 3 Jan 2015 20:52:54 +0000 Subject: Bash completion in GHC 7.10 In-Reply-To: References: Message-ID: <20150103205254.0ba49d7a@sf> On Wed, 10 Dec 2014 16:43:57 +0400 Lennart Kolmodin wrote: > Hi everybody! > > TL;DL GHC 7.10 will have better bash completion, try it out! I'd like your > help to verify the categorisation of DynFlags into ghc / ghci / shared or > hidden flags. Thank you! On the way to users :) https://github.com/gentoo-haskell/gentoo-haskell/commit/d6f63341693063e60168bbddffb0806621696689 sf / # ghc --print-li /usr/lib64/ghc-7.10.0.20141222 sf / # ghci --print-li ghc: panic! (the 'impossible' happened) (GHC version 7.10.0.20141222 for x86_64-unknown-linux): ghc: panic! (the 'impossible' happened) (GHC version 7.10.0.20141222 for x86_64-unknown-linux): v_unsafeGlobalDynFlags: not initialised Please report this as a GHC bug: http://www.haskell.org/ghc/reportabug Please report this as a GHC bug: http://www.haskell.org/ghc/reportabug Absolutely not your fault, just makes such things more discoverable :] Thanks again! 
-- Sergei -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 181 bytes Desc: not available URL: From mail at joachim-breitner.de Sat Jan 3 20:53:12 2015 From: mail at joachim-breitner.de (Joachim Breitner) Date: Sat, 03 Jan 2015 21:53:12 +0100 Subject: cryptarithm2 +8.16% In-Reply-To: <1420115438.17917.2.camel@joachim-breitner.de> References: <1420115438.17917.2.camel@joachim-breitner.de> Message-ID: <1420318392.4074.2.camel@joachim-breitner.de> Hi, On Thursday, 01.01.2015, 13:30 +0100, Joachim Breitner wrote: > due to #9938 ghcspeed did not measure every commit but somewhere in > these commits, cryptarithm2 regressed by 8%: it seems that d8d003185a4bca1a1ebbadb5111118ef37bbc83a (When solving one Given from another, use the depth to control which way round) has solved this. Greetings, Joachim -- Joachim “nomeata” Breitner mail at joachim-breitner.de • http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de • GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From simonpj at microsoft.com Sat Jan 3 22:03:48 2015 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Sat, 3 Jan 2015 22:03:48 +0000 Subject: cryptarithm2 +8.16% In-Reply-To: <1420318392.4074.2.camel@joachim-breitner.de> References: <1420115438.17917.2.camel@joachim-breitner.de> <1420318392.4074.2.camel@joachim-breitner.de> Message-ID: <618BE556AADD624C9C918AA5D5911BEF56294AF4@DB3PRD3001MB020.064d.mgd.msft.net> That's interesting. You mean the 8% regression has gone away? S | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Joachim | Breitner | Sent: 03 January 2015 20:53 | To: ghc-devs at haskell.org | Subject: Re: cryptarithm2 +8.16% | | Hi, | | On Thursday, 01.01.2015, 13:30 +0100, Joachim Breitner wrote: | > due to #9938 ghcspeed did not measure every commit but somewhere in | > these commits, cryptarithm2 regressed by 8%: | | it seems that d8d003185a4bca1a1ebbadb5111118ef37bbc83a (When solving one | Given from another, use the depth to control which way round) has solved | this. | | | Greetings, | Joachim | | -- | Joachim “nomeata” Breitner | mail at joachim-breitner.de • http://www.joachim-breitner.de/ | Jabber: nomeata at joachim-breitner.de • GPG-Key: 0xF0FBF51F | Debian Developer: nomeata at debian.org From alan.zimm at gmail.com Sat Jan 3 22:20:04 2015 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Sun, 4 Jan 2015 00:20:04 +0200 Subject: Cabal 1.22 RC ready to test In-Reply-To: References: Message-ID: I tried to build https://github.com/ghc/packages-Cabal earlier today and got the same error, but it went fine with the version at https://github.com/haskell/cabal On Sat, Jan 3, 2015 at 10:17 PM, Johan Tibell wrote: > It might be as simple as bumping the Cabal submodule in GHC to match the > upstream 1.22 branch.
> > On Sat, Jan 3, 2015 at 8:46 AM, Jake Wheat > wrote: > >> >> On 3 January 2015 at 15:05, Johan Tibell wrote: >> >>> The error (https://travis-ci.org/haskell/cabal/jobs/45758614) is quite >>> perplexing: >>> >>> [62 of 77] Compiling Distribution.Client.Config ( >>> Distribution/Client/Config.hs, >>> dist/dist-sandbox-8940a882/build/cabal/cabal-tmp/Distribution/Client/Config.o >>> ) >>> Distribution/Client/Config.hs:56:12: >>> Module >>> ?Distribution.Simple.Compiler? >>> does not export >>> ?DebugInfoLevel(..)? >>> cabal: Error: some packages failed to install: >>> cabal-install-1.22.0.0 failed during the building phase. The exception >>> was: >>> ExitFailure 1 >>> >>> Distribution.Simple.Compiler most definitely does export DebugInfoLevel, >>> otherwise it wouldn't compile with the other GHC versions. >>> >>> Does GHC do something special with Cabal nowadays when it's no longer >>> tied to GHC? >>> >>> Is it because the Cabal-1.22.0.0 bundled with ghc is now different to >> the Cabal-1.22.0.0 in github? I get the same error compiling the master >> branch cabal-install with ghc-7.10.0-20141222. >> >> I think one solution is to increase the version number of Cabal in github >> (for 1.22 and master branches), and to make the latest cabal-install depend >> on this (i.e. Cabal>=1.22.0.1) since cabal-install 1.22 in github no longer >> works with the 'released' snapshot Cabal-1.22.0.0 in ghc. This fixes the >> error for ghc-7.10.0-20141222. >> >> > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johan.tibell at gmail.com Sat Jan 3 22:46:46 2015 From: johan.tibell at gmail.com (Johan Tibell) Date: Sat, 3 Jan 2015 17:46:46 -0500 Subject: Cabal 1.22 RC ready to test In-Reply-To: References: Message-ID: I'm pretty sure we just need to update the submodule. On Sat, Jan 3, 2015 at 5:20 PM, Alan & Kim Zimmerman wrote: > I tried to build https://github.com/ghc/packages-Cabal earlier today and > got the same error, but it went fine with the version at > https://github.com/haskell/cabal > > > > On Sat, Jan 3, 2015 at 10:17 PM, Johan Tibell > wrote: > >> It might be as simple as bumping the Cabal submodule in GHC to match the >> upstream 1.22 branch. >> >> On Sat, Jan 3, 2015 at 8:46 AM, Jake Wheat >> wrote: >> >>> >>> On 3 January 2015 at 15:05, Johan Tibell wrote: >>> >>>> The error (https://travis-ci.org/haskell/cabal/jobs/45758614) is quite >>>> perplexing: >>>> >>>> [62 of 77] Compiling Distribution.Client.Config ( >>>> Distribution/Client/Config.hs, >>>> dist/dist-sandbox-8940a882/build/cabal/cabal-tmp/Distribution/Client/Config.o >>>> ) >>>> Distribution/Client/Config.hs:56:12: >>>> Module >>>> ?Distribution.Simple.Compiler? >>>> does not export >>>> ?DebugInfoLevel(..)? >>>> cabal: Error: some packages failed to install: >>>> cabal-install-1.22.0.0 failed during the building phase. The exception >>>> was: >>>> ExitFailure 1 >>>> >>>> Distribution.Simple.Compiler most definitely does export >>>> DebugInfoLevel, otherwise it wouldn't compile with the other GHC versions. >>>> >>>> Does GHC do something special with Cabal nowadays when it's no longer >>>> tied to GHC? >>>> >>>> Is it because the Cabal-1.22.0.0 bundled with ghc is now different to >>> the Cabal-1.22.0.0 in github? I get the same error compiling the master >>> branch cabal-install with ghc-7.10.0-20141222. 
>>> >>> I think one solution is to increase the version number of Cabal in >>> github (for 1.22 and master branches), and to make the latest cabal-install >>> depend on this (i.e. Cabal>=1.22.0.1) since cabal-install 1.22 in github no >>> longer works with the 'released' snapshot Cabal-1.22.0.0 in ghc. This fixes >>> the error for ghc-7.10.0-20141222. >>> >>> >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail at joachim-breitner.de Sat Jan 3 23:58:15 2015 From: mail at joachim-breitner.de (Joachim Breitner) Date: Sun, 04 Jan 2015 00:58:15 +0100 Subject: cryptarithm2 +8.16% In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF56294AF4@DB3PRD3001MB020.064d.mgd.msft.net> References: <1420115438.17917.2.camel@joachim-breitner.de> <1420318392.4074.2.camel@joachim-breitner.de> <618BE556AADD624C9C918AA5D5911BEF56294AF4@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <1420329495.5479.2.camel@joachim-breitner.de> Hi, Am Samstag, den 03.01.2015, 22:03 +0000 schrieb Simon Peyton Jones: > That's interesting. You mean the 8% regression has gone away? exactly. It is quite obvious when you look at the graph: http://ghcspeed-nomeata.rhcloud.com/timeline/?ben=nofib/allocs/cryptarithm2&env=1&equid=on Greetings, Joachim -- Joachim ?nomeata? Breitner mail at joachim-breitner.de ? http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From saihemanth at gmail.com Sun Jan 4 06:23:40 2015 From: saihemanth at gmail.com (Hemanth Kapila) Date: Sun, 4 Jan 2015 11:53:40 +0530 Subject: linker error on OSX (symbol not found "_iconv") Message-ID: Hi, On OSX Yosemite I am facing the following build failure while building from the master (please find the complete error at the bottom of the email): > ld: couldn't dlopen() /usr/lib/libdtrace.dylib: dlopen(/usr/lib/libdtrace.dylib, 1): Symbol not found: _iconv > Referenced from: /usr/lib/libmecabra.dylib > Expected in: /opt/local/lib/libiconv.2.dylib > in /usr/lib/libmecabra.dylib for architecture x86_64 > collect2: error: ld returned 1 exit status Can someone kindly point me what am doing wrong? FWIW, with the same configuration options, source distribution of ghc-7.8.4 from https://www.haskell.org/ghc/download_ghc_7_8_4 goes through successfully. I am using gcc (MacPorts gcc49 4.9.2_1) 4.9.2 Thanks, Hemanth The complete error message: ===--- building phase 0 /Library/Developer/CommandLineTools/usr/bin/make -r --no-print-directory -f ghc.mk phase=0 phase_0_builds make[1]: Nothing to be done for `phase_0_builds'. ===--- building phase 1 /Library/Developer/CommandLineTools/usr/bin/make -r --no-print-directory -f ghc.mk phase=1 phase_1_builds make[1]: Nothing to be done for `phase_1_builds'. 
===--- building final phase /Library/Developer/CommandLineTools/usr/bin/make -r --no-print-directory -f ghc.mk phase=final all "rm" -f rts/dist/build/libHSrts-ghc7.11.20150103.dylib "inplace/bin/ghc-stage1" -this-package-key rts -shared -dynamic -dynload deploy -no-auto-link-packages -Lrts/dist/build -lffi -optl-Wl,-rpath -optl-Wl, at loader_path `cat rts/dist/libs.depend` rts/dist/build/Adjustor.dyn_o rts/dist/build/Arena.dyn_o rts/dist/build/Capability.dyn_o rts/dist/build/CheckUnload.dyn_o rts/dist/build/ClosureFlags.dyn_o rts/dist/build/Disassembler.dyn_o rts/dist/build/FileLock.dyn_o rts/dist/build/Globals.dyn_o rts/dist/build/Hash.dyn_o rts/dist/build/Hpc.dyn_o rts/dist/build/HsFFI.dyn_o rts/dist/build/Inlines.dyn_o rts/dist/build/Interpreter.dyn_o rts/dist/build/LdvProfile.dyn_o rts/dist/build/Linker.dyn_o rts/dist/build/Messages.dyn_o rts/dist/build/OldARMAtomic.dyn_o rts/dist/build/Papi.dyn_o rts/dist/build/Printer.dyn_o rts/dist/build/ProfHeap.dyn_o rts/dist/build/Profiling.dyn_o rts/dist/build/Proftimer.dyn_o rts/dist/build/RaiseAsync.dyn_o rts/dist/build/RetainerProfile.dyn_o rts/dist/build/RetainerSet.dyn_o rts/dist/build/RtsAPI.dyn_o rts/dist/build/RtsDllMain.dyn_o rts/dist/build/RtsFlags.dyn_o rts/dist/build/RtsMain.dyn_o rts/dist/build/RtsMessages.dyn_o rts/dist/build/RtsStartup.dyn_o rts/dist/build/RtsUtils.dyn_o rts/dist/build/STM.dyn_o rts/dist/build/Schedule.dyn_o rts/dist/build/Sparks.dyn_o rts/dist/build/Stable.dyn_o rts/dist/build/StaticPtrTable.dyn_o rts/dist/build/Stats.dyn_o rts/dist/build/StgCRun.dyn_o rts/dist/build/StgPrimFloat.dyn_o rts/dist/build/Task.dyn_o rts/dist/build/ThreadLabels.dyn_o rts/dist/build/ThreadPaused.dyn_o rts/dist/build/Threads.dyn_o rts/dist/build/Ticky.dyn_o rts/dist/build/Timer.dyn_o rts/dist/build/Trace.dyn_o rts/dist/build/WSDeque.dyn_o rts/dist/build/Weak.dyn_o rts/dist/build/hooks/FlagDefaults.dyn_o rts/dist/build/hooks/MallocFail.dyn_o rts/dist/build/hooks/OnExit.dyn_o rts/dist/build/hooks/OutOfHeap.dyn_o rts/dist/build/hooks/StackOverflow.dyn_o rts/dist/build/sm/BlockAlloc.dyn_o rts/dist/build/sm/Compact.dyn_o rts/dist/build/sm/Evac.dyn_o rts/dist/build/sm/GC.dyn_o rts/dist/build/sm/GCAux.dyn_o rts/dist/build/sm/GCUtils.dyn_o rts/dist/build/sm/MBlock.dyn_o rts/dist/build/sm/MarkWeak.dyn_o rts/dist/build/sm/Sanity.dyn_o rts/dist/build/sm/Scav.dyn_o rts/dist/build/sm/Storage.dyn_o rts/dist/build/sm/Sweep.dyn_o rts/dist/build/eventlog/EventLog.dyn_o rts/dist/build/posix/GetEnv.dyn_o rts/dist/build/posix/GetTime.dyn_o rts/dist/build/posix/Itimer.dyn_o rts/dist/build/posix/OSMem.dyn_o rts/dist/build/posix/OSThreads.dyn_o rts/dist/build/posix/Select.dyn_o rts/dist/build/posix/Signals.dyn_o rts/dist/build/posix/TTY.dyn_o rts/dist/build/Apply.dyn_o rts/dist/build/Exception.dyn_o rts/dist/build/HeapStackCheck.dyn_o rts/dist/build/PrimOps.dyn_o rts/dist/build/StgMiscClosures.dyn_o rts/dist/build/StgStartup.dyn_o rts/dist/build/StgStdThunks.dyn_o rts/dist/build/Updates.dyn_o rts/dist/build/AutoApply.dyn_o -optl-m64 -fPIC -dynamic -H64m -O0 -fasm -Iincludes -Iincludes/dist -Iincludes/dist-derivedconstants/header -Iincludes/dist-ghcconstants/header -Irts -Irts/dist/build -DCOMPILING_RTS -this-package-key rts -dcmm-lint -DDTRACE -i -irts -irts/dist/build -irts/dist/build/autogen -Irts/dist/build -Irts/dist/build/autogen -O2 -fno-use-rpaths -o rts/dist/build/libHSrts-ghc7.11.20150103.dylib ld: couldn't dlopen() /usr/lib/libdtrace.dylib: dlopen(/usr/lib/libdtrace.dylib, 1): Symbol not found: _iconv Referenced from: /usr/lib/libmecabra.dylib 
Expected in: /opt/local/lib/libiconv.2.dylib in /usr/lib/libmecabra.dylib for architecture x86_64 collect2: error: ld returned 1 exit status make[1]: *** [rts/dist/build/libHSrts-ghc7.11.20150103.dylib] Error 1 make: *** [all] Error 2 -------------- next part -------------- An HTML attachment was scrubbed... URL: From hvriedel at gmail.com Sun Jan 4 08:22:28 2015 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Sun, 04 Jan 2015 09:22:28 +0100 Subject: GHC 7.4.2 on Ubuntu Trusty In-Reply-To: <1420357849-sup-8899@sabre> (Edward Z. Yang's message of "Sat, 03 Jan 2015 23:54:58 -0800") References: <20141022105441.GA14512@machine> <87a94n808i.fsf@gmail.com> <1414533995-sup-9843@sabre> <1420357849-sup-8899@sabre> Message-ID: <87iognrm4b.fsf@gmail.com> Hello Edward, On 2015-01-04 at 08:54:58 +0100, Edward Z. Yang wrote: [...] > There are also some changes to hoopl, transformers and hpc (mostly > because their bootstrap libraries.) ...what kind of changes specifically? One thing that needs to be considered is that we'd have to upstream changes to transformers (it's not under GHC HQ's direct control) for a transformers point(?) release ... and we'd need that as we can't release any source-tarball that contains libraries (which get installed into the pkg-db) that don't match their upstream version on Hackage. Cheers, hvr From allbery.b at gmail.com Sun Jan 4 14:15:16 2015 From: allbery.b at gmail.com (Brandon Allbery) Date: Sun, 4 Jan 2015 09:15:16 -0500 Subject: linker error on OSX (symbol not found "_iconv") In-Reply-To: References: Message-ID: On Sun, Jan 4, 2015 at 1:23 AM, Hemanth Kapila wrote: > > ld: couldn't dlopen() /usr/lib/libdtrace.dylib: > dlopen(/usr/lib/libdtrace.dylib, 1): Symbol not found: _iconv > > Referenced from: /usr/lib/libmecabra.dylib > > Expected in: /opt/local/lib/libiconv.2.dylib > > in /usr/lib/libmecabra.dylib for architecture x86_64 > > collect2: error: ld returned 1 exit status > You are mixing Apple and MacPorts libraries. (The same will happen with Homebrew but it'll be using /usr/local/lib/libiconv.2.dylib.) Possibly you also have DYLD_LIBRARY_PATH set, which will compound the problem; *please* do not do this. You are not on Linux where setting LD_LIBRARY_PATH is common and relatively safe, DYLD_LIBRARY_PATH will break things. The iconv libraries contain static data which is not compatible between versions, leading to core dumps unless something is done to force a link time error. Both MacPorts and Homebrew rename symbols in iconv to force this error. Since you are building ghc, either you have forced it to use MacPorts libraries when it otherwise wouldn't (see above re DYLD_LIBRARY_PATH) or you at some point copied MacPorts libraries into system library paths (OS X bakes full paths into object files and dylibs. This also means such libraries cannot be used on a system without MacPorts installed without at minimum using install_name_tool to change the baked-in paths). -- brandon s allbery kf8nh sine nomine associates allbery.b at gmail.com ballbery at sinenomine.net unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From saihemanth at gmail.com Sun Jan 4 16:06:50 2015 From: saihemanth at gmail.com (Hemanth Kapila) Date: Sun, 4 Jan 2015 21:36:50 +0530 Subject: linker error on OSX (symbol not found "_iconv") In-Reply-To: References: Message-ID: Hi, Thanks for the reply.
I understand that a discrepancy between the MacPorts and system libraries is causing this issue, but I am not using the environment variable DYLD_LIBRARY_PATH. Moreover, I can build ghc-7.8.4 from source with the same configuration options; I repeated that build because I thought it likely I would see the same issue there as well. I am not able to figure out the exact dependency issue here - apparently, libHSrts cannot be built with the system version of libiconv (configure step fails), while at the same time "ghc-stage1" relies on some system tool that needs an older version of libiconv. Is that a fair picture of the problem? I wondered why this does not occur for ghc-7.8.4 distributed sources. I thought something had been inadvertently modified in the compiler/main folder since the release of 7.8.4 and was causing this issue. That's why I posted it over here. However, by the looks of it, it is just me. Thanks again, Hemanth On Sun, Jan 4, 2015 at 7:45 PM, Brandon Allbery wrote: > On Sun, Jan 4, 2015 at 1:23 AM, Hemanth Kapila > wrote: > >> > ld: couldn't dlopen() /usr/lib/libdtrace.dylib: >> dlopen(/usr/lib/libdtrace.dylib, 1): Symbol not found: _iconv >> > Referenced from: /usr/lib/libmecabra.dylib >> > Expected in: /opt/local/lib/libiconv.2.dylib >> > in /usr/lib/libmecabra.dylib for architecture x86_64 >> > collect2: error: ld returned 1 exit status >> > > You are mixing Apple and MacPorts libraries. (The same will happen with > Homebrew but it'll be using /usr/local/lib/libiconv.2.dylib.) Possibly you > also have DYLD_LIBRARY_PATH set, which will compound the problem; *please* > do not do this. You are not on Linux where setting LD_LIBRARY_PATH is > common and relatively safe, DYLD_LIBRARY_PATH will break things. > > The iconv libraries contain static data which is not compatible between > versions, leading to core dumps unless something is done to force a link > time error. Both MacPorts and Homebrew rename symbols in iconv to force > this error. > > Since you are building ghc, either you have forced it to use MacPorts > libraries when it otherwise wouldn't (see above re DYLD_LIBRARY_PATH) or > you at some point copied MacPorts libraries into system library paths (OS X > bakes full paths into object files and dylibs. This also means such > libraries cannot be used on a system without MacPorts installed without at > minimum using install_name_tool to change the baked-in paths). > > -- > brandon s allbery kf8nh sine nomine > associates > allbery.b at gmail.com > ballbery at sinenomine.net > unix, openafs, kerberos, infrastructure, xmonad > http://sinenomine.net > -- I drink I am thunk. -------------- next part -------------- An HTML attachment was scrubbed... URL: From allbery.b at gmail.com Sun Jan 4 16:14:15 2015 From: allbery.b at gmail.com (Brandon Allbery) Date: Sun, 4 Jan 2015 11:14:15 -0500 Subject: linker error on OSX (symbol not found "_iconv") In-Reply-To: References: Message-ID: On Sun, Jan 4, 2015 at 11:06 AM, Hemanth Kapila wrote: > I am not able to figure out the exact dependency issue here - apparently, > libHSrts cannot be built with the system version of libiconv (configure > step fails), while at the same time "ghc-stage1" relies on some system tool > that needs an older version of libiconv. > Is that a fair picture of the problem? I wondered why this does not occur > for ghc-7.8.4 distributed sources. > So presumably ghc HEAD requires a newer iconv now, presumably for better encoding handling.
Many things do, which is why both MacPorts and Homebrew include the newer one (and then must hack around incompatibility) instead of sticking to Apple's. -- brandon s allbery kf8nh sine nomine associates allbery.b at gmail.com ballbery at sinenomine.net unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From ezyang at mit.edu Sun Jan 4 17:31:34 2015 From: ezyang at mit.edu (Edward Z. Yang) Date: Sun, 04 Jan 2015 09:31:34 -0800 Subject: GHC 7.4.2 on Ubuntu Trusty In-Reply-To: <87iognrm4b.fsf@gmail.com> References: <20141022105441.GA14512@machine> <87a94n808i.fsf@gmail.com> <1414533995-sup-9843@sabre> <1420357849-sup-8899@sabre> <87iognrm4b.fsf@gmail.com> Message-ID: <1420392485-sup-5443@sabre> For transformers, I needed: diff --git a/Control/Monad/Trans/Error.hs b/Control/Monad/Trans/Error.hs index 0158a8a..0dea478 100644 --- a/Control/Monad/Trans/Error.hs +++ b/Control/Monad/Trans/Error.hs @@ -57,6 +57,10 @@ instance MonadPlus IO where mzero = ioError (userError "mzero") m `mplus` n = m `catchIOError` \_ -> n +instance Alternative IO where + empty = mzero + (<|>) = mplus + #if !(MIN_VERSION_base(4,4,0)) -- exported by System.IO.Error from base-4.4 catchIOError :: IO a -> (IOError -> IO a) -> IO a For hpc, I needed: Build-Depends: - base >= 4.4.1 && < 4.8, + base >= 4.4.1 && < 4.9, containers >= 0.4.1 && < 0.6, directory >= 1.1 && < 1.3, - time >= 1.2 && < 1.5 + time >= 1.2 && < 1.6 For hoopl, I needed: - Build-Depends: base >= 4.3 && < 4.8 + Build-Depends: base >= 4.3 && < 4.9 For the latter two, I think this should be a perfectly acceptable point release. For transformers, we could also just ifdef the Alternative into the GHC sources. Edward Excerpts from Herbert Valerio Riedel's message of 2015-01-04 00:22:28 -0800: > Hello Edward, > > On 2015-01-04 at 08:54:58 +0100, Edward Z. Yang wrote: > > [...] > > > There are also some changes to hoopl, transformers and hpc (mostly > > because their bootstrap libraries.) > > ...what kind of changes specifically? > > Once thing that needs to be considered is that we'd require to upstream > changes to transformers (it's not under GHC HQ's direct control) for a > transformers point(?) release ... and we'd need that as we can't release > any source-tarball that contains libraries (which get installed into the > pkg-db) that don't match their upstream version on Hackage. > > Cheers, > hvr From scpmw at leeds.ac.uk Sun Jan 4 22:48:02 2015 From: scpmw at leeds.ac.uk (Peter Wortmann) Date: Sun, 04 Jan 2015 23:48:02 +0100 Subject: Shipping core libraries with debug symbols In-Reply-To: References: Message-ID: Okay, I ran a little experiment - here's the size of the debug sections that Fission would keep (for base library): .debug_abbrev: 8932 - 0.06% .debug_line: 374134 - 2.6% .debug_frame: 671200 - 4.5% Not that much. On the other hand, .debug_info is a significant contributor: .debug_info(full): 4527391 - 30% Here's what this contains: All procs get a corresponding DWARF entry, and we declare all Cmm blocks as "lexical blocks". The latter isn't actually required right now - to my knowledge, GDB simply ignores it, while LLDB shows it as "inlined" routines. In either case, it just shows yet more GHC-generated names, so it's really only useful for profiling tools that know Cmm block names. 
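To make the submodule layout being described here concrete, a short illustrative transcript follows; the push URL is a placeholder, and none of this lifts the server-side hook restriction that rejected the push above - that still needs a remote and branch the pusher is allowed to write to.

# The submodule's .git is a file pointing at the real git dir under the top-level .git/modules/
$ cat libraries/parallel/.git
gitdir: ../../.git/modules/libraries/parallel

# So its remote configuration lives there, not in a libraries/parallel/.git/config
$ git -C libraries/parallel config --get remote.origin.url

# A separate push URL can be set without touching the fetch URL (placeholder URL shown)
$ git -C libraries/parallel config remote.origin.pushurl ssh://git@example.org/packages/parallel.git

# Verify: 'git remote -v' now lists distinct (fetch) and (push) entries
$ git -C libraries/parallel remote -v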
So here's what we get if we strip out block information: .debug_info(!block): 1688410 - 11% This eliminates a good chunk of information, and might therefore be a good idea for "-g1" at minimum. If we want this as default for 7.10, this would make the total overhead about 18%. Acceptable? I can supply a patch if needed. Just for comparison - for Fission we'd strip proc records as well, which would cause even more extreme savings: .debug_info(!proc): 36081 - 0.2% At this point the overhead would be just about 7% - but without doing Fission properly this would most certainly affect debuggers. Greetings, Peter On 03/01/2015 21:22, Johan Tibell wrote: > How much debug info (as a percentage) do we currently generate? Could we just keep it in there in the release? From johan.tibell at gmail.com Sun Jan 4 23:59:29 2015 From: johan.tibell at gmail.com (Johan Tibell) Date: Sun, 4 Jan 2015 18:59:29 -0500 Subject: Shipping core libraries with debug symbols In-Reply-To: References: Message-ID: What about keeping exactly what -g1 keeps for gcc (i.e. functions, external variables, and line number tables)? On Sun, Jan 4, 2015 at 5:48 PM, Peter Wortmann wrote: > > > Okay, I ran a little experiment - here's the size of the debug sections > that Fission would keep (for base library): > > .debug_abbrev: 8932 - 0.06% > .debug_line: 374134 - 2.6% > .debug_frame: 671200 - 4.5% > > Not that much. On the other hand, .debug_info is a significant contributor: > > .debug_info(full): 4527391 - 30% > > Here's what this contains: All procs get a corresponding DWARF entry, and > we declare all Cmm blocks as "lexical blocks". The latter isn't actually > required right now - to my knowledge, GDB simply ignores it, while LLDB > shows it as "inlined" routines. In either case, it just shows yet more > GHC-generated names, so it's really only useful for profiling tools that > know Cmm block names. > > So here's what we get if we strip out block information: > > .debug_info(!block): 1688410 - 11% > > This eliminates a good chunk of information, and might therefore be a good > idea for "-g1" at minimum. If we want this as default for 7.10, this would > make the total overhead about 18%. Acceptable? I can supply a patch if > needed. > > Just for comparison - for Fission we'd strip proc records as well, which > would cause even more extreme savings: > > .debug_info(!proc): 36081 - 0.2% > > At this point the overhead would be just about 7% - but without doing > Fission properly this would most certainly affect debuggers. > > Greetings, > Peter > > On 03/01/2015 21:22, Johan Tibell wrote: > > How much debug info (as a percentage) do we currently generate? Could we > just keep it in there in the release? > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Mon Jan 5 17:37:38 2015 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 5 Jan 2015 17:37:38 +0000 Subject: Compiling nofib-analyse In-Reply-To: <1420235805-sup-7421@sabre> References: <618BE556AADD624C9C918AA5D5911BEF5628F05C@DB3PRD3001MB020.064d.mgd.msft.net> <1419984556-sup-9043@sabre> <1420235805-sup-7421@sabre> Message-ID: <618BE556AADD624C9C918AA5D5911BEF56296225@DB3PRD3001MB020.064d.mgd.msft.net> Let's do it! Austin, would you like to commit this? Reason: simplifies build system, in exchange for dropping a feature that no one uses. 
Simon | -----Original Message----- | From: Edward Z. Yang [mailto:ezyang at mit.edu] | Sent: 02 January 2015 21:58 | To: Simon Peyton Jones; ghc-devs at haskell.org | Subject: Re: Compiling nofib-analyse | | Attached is a patch which axes HTML support in nofib-analyse. | I would Phab it but I don't know how to do that for submodules. | | Maybe we should drop the LaTeX support too! | | Edward | | Excerpts from Edward Z. Yang's message of 2014-12-30 19:10:56 -0500: | > Pretty sure it's the aptly named 'html'. | > https://hackage.haskell.org/package/html | > | > I can't remember the last time I used the HTML reporting capability, | > so I'd be happy about removing it. | > | > Edward | > | > Excerpts from Simon Peyton Jones's message of 2014-12-30 07:03:17 - | 0500: | > > When building nofib-analyse (in nofib), I get | > > | > > /home/simonpj/local/bin/ghc -O -cpp --make Main -o nofib-analyse | > > | > > Main.hs:14:18: | > > Could not find module 'Text.Html' | > > Use -v to see a list of the files searched for. | > > | > > There are rather a lot of HTML packages. Which one is needed? | Does this meant that it's essential to install this package before | running nofib, a manual step? | > > | > > I wonder if it'd be better to remove the HTML dependency, or make | it optional? | > > | > > Simon From ezyang at mit.edu Mon Jan 5 21:21:22 2015 From: ezyang at mit.edu (Edward Z. Yang) Date: Mon, 05 Jan 2015 13:21:22 -0800 Subject: Compiling nofib-analyse In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF56296225@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF5628F05C@DB3PRD3001MB020.064d.mgd.msft.net> <1419984556-sup-9043@sabre> <1420235805-sup-7421@sabre> <618BE556AADD624C9C918AA5D5911BEF56296225@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <1420492851-sup-5765@sabre> I went ahead and pushed it. Edward Excerpts from Simon Peyton Jones's message of 2015-01-05 09:37:38 -0800: > Let's do it! Austin, would you like to commit this? > > Reason: simplifies build system, in exchange for dropping a feature that no one uses. > > Simon > > | -----Original Message----- > | From: Edward Z. Yang [mailto:ezyang at mit.edu] > | Sent: 02 January 2015 21:58 > | To: Simon Peyton Jones; ghc-devs at haskell.org > | Subject: Re: Compiling nofib-analyse > | > | Attached is a patch which axes HTML support in nofib-analyse. > | I would Phab it but I don't know how to do that for submodules. > | > | Maybe we should drop the LaTeX support too! > | > | Edward > | > | Excerpts from Edward Z. Yang's message of 2014-12-30 19:10:56 -0500: > | > Pretty sure it's the aptly named 'html'. > | > https://hackage.haskell.org/package/html > | > > | > I can't remember the last time I used the HTML reporting capability, > | > so I'd be happy about removing it. > | > > | > Edward > | > > | > Excerpts from Simon Peyton Jones's message of 2014-12-30 07:03:17 - > | 0500: > | > > When building nofib-analyse (in nofib), I get > | > > > | > > /home/simonpj/local/bin/ghc -O -cpp --make Main -o nofib-analyse > | > > > | > > Main.hs:14:18: > | > > Could not find module 'Text.Html' > | > > Use -v to see a list of the files searched for. > | > > > | > > There are rather a lot of HTML packages. Which one is needed? > | Does this meant that it's essential to install this package before > | running nofib, a manual step? > | > > > | > > I wonder if it'd be better to remove the HTML dependency, or make > | it optional? 
> | > > > | > > Simon From simonpj at microsoft.com Tue Jan 6 09:59:44 2015 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 6 Jan 2015 09:59:44 +0000 Subject: Updating submodules Message-ID: <618BE556AADD624C9C918AA5D5911BEF56296CF2@DB3PRD3001MB020.064d.mgd.msft.net> Herbert, or anyone, I'm very confused about the workflow for updating submodules. I want to update several (to remove redundant constraints from contexts) which are maintained by GHC HQ. But for libraries/parallel I find: * There is no .git/config in libraries/parallel. (Whereas there is for another submodule, libraries/hoopl.) * There is, however, a .git file which points to .git/modules/libraries/parallel * In .git/modules/libraries/parallel/config, I see a url of https://git.haskell.org/packages/parallel.git. But I can't push to this URL. * That matches the url in https://ghc.haskell.org/trac/ghc/wiki/Repositories, but contradicts the url in 'packages', which says ssh://git at github.com/haskell/parallel.git * I don't understand what URL should be expected for submodules with "-" in the 'upstream url' column of the 'packages' file. It says "-" means 'this is a submodule', but parallel is certainly a submodule and doesn't have "-". But so is hoopl, which does have "-". I tried a minimal change of adding pushurl = ssh://git at git.haskell.org/packages/hoopl.git to .git/modules/libraries/parallel/config. But when I tried to push I got simonpj at cam-05-unx:~/code/HEAD-2/libraries/parallel$ git push Counting objects: 7, done. Delta compression using up to 32 threads. Compressing objects: 100% (4/4), done. Writing objects: 100% (4/4), 410 bytes, done. Total 4 (delta 3), reused 0 (delta 0) remote: W refs/heads/master packages/parallel simonpj DENIED by refs/.* remote: error: hook declined to update refs/heads/master To ssh://git at git.haskell.org/packages/parallel.git ! [remote rejected] HEAD -> master (hook declined) error: failed to push some refs to 'ssh://git at git.haskell.org/packages/parallel.git' So I'm thoroughly stuck. I can't push my main patch until I push the submodule patches. What do I do? And would it be possible to update the wiki pages to make this clear? Especially * https://ghc.haskell.org/trac/ghc/wiki/Repositories * https://ghc.haskell.org/trac/ghc/wiki/WorkingConventions/Git/Submodules Thanks Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From hvriedel at gmail.com Tue Jan 6 10:22:16 2015 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Tue, 06 Jan 2015 11:22:16 +0100 Subject: Updating submodules In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF56296CF2@DB3PRD3001MB020.064d.mgd.msft.net> (Simon Peyton Jones's message of "Tue, 6 Jan 2015 09:59:44 +0000") References: <618BE556AADD624C9C918AA5D5911BEF56296CF2@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <87h9w4us2v.fsf@gmail.com> Hello Simon, On 2015-01-06 at 10:59:44 +0100, Simon Peyton Jones wrote: > I'm very confused about the workflow for updating submodules. I want > to update several (to remove redundant constraints from contexts) > which are maintained by GHC HQ. But for libraries/parallel I find: > > > * There is no .git/config in libraries/parallel. (Whereas > there is for another submodule, libraries/hoopl.) > * There is, however, a .git file which points to .git/modules/libraries/parallel That's most likely because libraries/hoopl wasn't created via `git submodule` but rather inherited from a Git checkout where libraries/hoopl was an decoupled (not yet submodule) sub-repo... 
In any case, if you manage Git remotes (while in libraries/hoopl) via the `git remote` command, Git takes care of following the "symlinked" .git folder... > * In .git/modules/libraries/parallel/config, I see a url of > https://git.haskell.org/packages/parallel.git. But I can't push to > this URL. yes, that's our mirrored copy of github.com/haskell/parallel/ > * That matches the url in > https://ghc.haskell.org/trac/ghc/wiki/Repositories, but contradicts > the url in 'packages', which says > ssh://git at github.com/haskell/parallel.git yes, that's exactly the upstream URL you're supposed to push to... (and since it's a ssh:// protocl url, it means you should have push-rights there) > * I don't understand what URL should be expected for submodules with > "-" in the 'upstream url' column of the 'packages' file. It says "-" > means 'this is a submodule', but parallel is certainly a submodule and > doesn't have "-". The comment there is probably a bit misleading; "-" in the "upstreamurl" field just means that the official upstream repo is at git.haskell.org, and you should use the usual ssh://git.haskell.org/... URL for pushing... > But so is hoopl, which does have "-". > I tried a minimal change of adding > pushurl = ssh://git at git.haskell.org/packages/hoopl.git are you confusing 'hoopl' with 'parallel' here? hoopl's upstream is in fact at git.haskell.org, but parallel lives at github.com/haskell/parallel ... > to .git/modules/libraries/parallel/config. But when I tried to push I got > simonpj at cam-05-unx:~/code/HEAD-2/libraries/parallel$ git push > Counting objects: 7, done. > Delta compression using up to 32 threads. > Compressing objects: 100% (4/4), done. > Writing objects: 100% (4/4), 410 bytes, done. > Total 4 (delta 3), reused 0 (delta 0) > remote: W refs/heads/master packages/parallel simonpj DENIED by refs/.* > remote: error: hook declined to update refs/heads/master > To ssh://git at git.haskell.org/packages/parallel.git > ! [remote rejected] HEAD -> master (hook declined) > error: failed to push some refs to 'ssh://git at git.haskell.org/packages/parallel.git' > > So I'm thoroughly stuck. I can't push my main patch until I push the submodule patches. What do I do? > And would it be possible to update the wiki pages to make this clear? Especially > > * https://ghc.haskell.org/trac/ghc/wiki/Repositories > > * https://ghc.haskell.org/trac/ghc/wiki/WorkingConventions/Git/Submodules > > Thanks > Simon > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs -- "Elegance is not optional" -- Richard O'Keefe From simonpj at microsoft.com Tue Jan 6 10:48:50 2015 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 6 Jan 2015 10:48:50 +0000 Subject: Updating submodules In-Reply-To: <87h9w4us2v.fsf@gmail.com> References: <618BE556AADD624C9C918AA5D5911BEF56296CF2@DB3PRD3001MB020.064d.mgd.msft.net> <87h9w4us2v.fsf@gmail.com> Message-ID: <618BE556AADD624C9C918AA5D5911BEF56296DFB@DB3PRD3001MB020.064d.mgd.msft.net> | > * There is no .git/config in libraries/parallel. (Whereas | > there is for another submodule, libraries/hoopl.) | | > * There is, however, a .git file which points to | .git/modules/libraries/parallel | | That's most likely because libraries/hoopl wasn't created via `git | submodule` but rather inherited from a Git checkout where | libraries/hoopl was an decoupled (not yet submodule) sub-repo... Yes, that's plausible. So the hoopl one is wrong, and the parallel one is right. 
But how do I fix hoopl? (Short of blowing away the whole repository, which I can't do because it has lots of commits in it.) | In any case, if you manage Git remotes (while in libraries/hoopl) via | the `git remote` command, Git takes care of following the "symlinked" | .git folder... OK. But in this case what do I do? | > * In .git/modules/libraries/parallel/config, I see a url of | > https://git.haskell.org/packages/parallel.git. But I can't push to | > this URL. | | yes, that's our mirrored copy of github.com/haskell/parallel/ | | > * That matches the url in | > https://ghc.haskell.org/trac/ghc/wiki/Repositories, but contradicts | > the url in 'packages', which says | | > ssh://git at github.com/haskell/parallel.git | | yes, that's exactly the upstream URL you're supposed to push to... | (and since it's a ssh:// protocl url, it means you should have push- | rights there) So * I *push* to ssh://git at github.com/haskell/parallel.git * I *pull* from https://git.haskell.org/packages/parallel.git Is that right? Then again, how can I get the right URLs in the right place? | The comment there is probably a bit misleading; | | "-" in the "upstreamurl" field just means that the official upstream | repo is at git.haskell.org, and you should use the usual | ssh://git.haskell.org/... URL for pushing... OK, so they are *ALL* sub-modules, and "-" is just shorthand for a particular URL. Would it be possible to fix the comment? Simon From simonpj at microsoft.com Tue Jan 6 12:49:15 2015 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 6 Jan 2015 12:49:15 +0000 Subject: Updating submodules In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF56296DFB@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF56296CF2@DB3PRD3001MB020.064d.mgd.msft.net> <87h9w4us2v.fsf@gmail.com> <618BE556AADD624C9C918AA5D5911BEF56296DFB@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <618BE556AADD624C9C918AA5D5911BEF56297099@DB3PRD3001MB020.064d.mgd.msft.net> Following a chat with Herbert, I've updated https://ghc.haskell.org/trac/ghc/wiki/Repositories Please check/proof-read Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of | Simon Peyton Jones | Sent: 06 January 2015 10:49 | To: Herbert Valerio Riedel | Cc: ghc-devs at haskell.org | Subject: RE: Updating submodules | | | > * There is no .git/config in libraries/parallel. | (Whereas | | > there is for another submodule, libraries/hoopl.) | | | | > * There is, however, a .git file which points to | | .git/modules/libraries/parallel | | | | That's most likely because libraries/hoopl wasn't created via `git | | submodule` but rather inherited from a Git checkout where | | libraries/hoopl was an decoupled (not yet submodule) sub-repo... | | Yes, that's plausible. So the hoopl one is wrong, and the parallel | one is right. But how do I fix hoopl? (Short of blowing away the | whole repository, which I can't do because it has lots of commits in | it.) | | | In any case, if you manage Git remotes (while in libraries/hoopl) | via | | the `git remote` command, Git takes care of following the | "symlinked" | | .git folder... | | OK. But in this case what do I do? | | | > * In .git/modules/libraries/parallel/config, I see a url of > | | https://git.haskell.org/packages/parallel.git. But I can't push to | > | | this URL. 
| | | | yes, that's our mirrored copy of github.com/haskell/parallel/ | | | | > * That matches the url in | | > https://ghc.haskell.org/trac/ghc/wiki/Repositories, but | contradicts | | > the url in 'packages', which says | | | | > ssh://git at github.com/haskell/parallel.git | | | | yes, that's exactly the upstream URL you're supposed to push to... | | (and since it's a ssh:// protocl url, it means you should have | push- | | rights there) | | So | | * I *push* to ssh://git at github.com/haskell/parallel.git | * I *pull* from https://git.haskell.org/packages/parallel.git | | Is that right? Then again, how can I get the right URLs in the right | place? | | | | The comment there is probably a bit misleading; | | | | "-" in the "upstreamurl" field just means that the official | upstream | | repo is at git.haskell.org, and you should use the usual | | ssh://git.haskell.org/... URL for pushing... | | OK, so they are *ALL* sub-modules, and "-" is just shorthand for a | particular URL. Would it be possible to fix the comment? | | Simon | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | http://www.haskell.org/mailman/listinfo/ghc-devs From rrnewton at gmail.com Tue Jan 6 16:28:17 2015 From: rrnewton at gmail.com (Ryan Newton) Date: Tue, 06 Jan 2015 16:28:17 +0000 Subject: Updating submodules References: <618BE556AADD624C9C918AA5D5911BEF56296CF2@DB3PRD3001MB020.064d.mgd.msft.net> <87h9w4us2v.fsf@gmail.com> <618BE556AADD624C9C918AA5D5911BEF56296DFB@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF56297099@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: Has everyone seen the git man page generator ;-)? Hilarious. http://git-man-page-generator.lokaltog.net/ On Tue Jan 06 2015 at 7:49:30 AM Simon Peyton Jones wrote: > Following a chat with Herbert, I've updated > https://ghc.haskell.org/trac/ghc/wiki/Repositories > > Please check/proof-read > > Simon > > | -----Original Message----- > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of > | Simon Peyton Jones > | Sent: 06 January 2015 10:49 > | To: Herbert Valerio Riedel > | Cc: ghc-devs at haskell.org > | Subject: RE: Updating submodules > | > | | > * There is no .git/config in libraries/parallel. > | (Whereas > | | > there is for another submodule, libraries/hoopl.) > | | > | | > * There is, however, a .git file which points to > | | .git/modules/libraries/parallel > | | > | | That's most likely because libraries/hoopl wasn't created via `git > | | submodule` but rather inherited from a Git checkout where > | | libraries/hoopl was an decoupled (not yet submodule) sub-repo... > | > | Yes, that's plausible. So the hoopl one is wrong, and the parallel > | one is right. But how do I fix hoopl? (Short of blowing away the > | whole repository, which I can't do because it has lots of commits in > | it.) > | > | | In any case, if you manage Git remotes (while in libraries/hoopl) > | via > | | the `git remote` command, Git takes care of following the > | "symlinked" > | | .git folder... > | > | OK. But in this case what do I do? > | > | | > * In .git/modules/libraries/parallel/config, I see a url of > > | | https://git.haskell.org/packages/parallel.git. But I can't push to > | > > | | this URL. 
> | | > | | yes, that's our mirrored copy of github.com/haskell/parallel/ > | | > | | > * That matches the url in > | | > https://ghc.haskell.org/trac/ghc/wiki/Repositories, but > | contradicts > | | > the url in 'packages', which says > | | > | | > ssh://git at github.com/haskell/parallel.git > | | > | | yes, that's exactly the upstream URL you're supposed to push to... > | | (and since it's a ssh:// protocol url, it means you should have > | push- > | | rights there) > | > | So > | > | * I *push* to ssh://git at github.com/haskell/parallel.git > | * I *pull* from https://git.haskell.org/packages/parallel.git > | > | Is that right? Then again, how can I get the right URLs in the right > | place? > | > | > | | The comment there is probably a bit misleading; > | | > | | "-" in the "upstreamurl" field just means that the official > | upstream > | | repo is at git.haskell.org, and you should use the usual > | | ssh://git.haskell.org/... URL for pushing... > | > | OK, so they are *ALL* sub-modules, and "-" is just shorthand for a > | particular URL. Would it be possible to fix the comment? > | > | Simon > | _______________________________________________ > | ghc-devs mailing list > | ghc-devs at haskell.org > | http://www.haskell.org/mailman/listinfo/ghc-devs > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From allbery.b at gmail.com Tue Jan 6 17:42:15 2015 From: allbery.b at gmail.com (Brandon Allbery) Date: Tue, 6 Jan 2015 12:42:15 -0500 Subject: Updating submodules In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF56296CF2@DB3PRD3001MB020.064d.mgd.msft.net> <87h9w4us2v.fsf@gmail.com> <618BE556AADD624C9C918AA5D5911BEF56296DFB@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF56297099@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: On Tue, Jan 6, 2015 at 11:28 AM, Ryan Newton wrote: > Has everyone seen the git man page generator ;-)? Hilarious. > http://git-man-page-generator.lokaltog.net/ I still want the git version of http://thedoomthatcametopuppet.tumblr.com/ :p -- brandon s allbery kf8nh sine nomine associates allbery.b at gmail.com ballbery at sinenomine.net unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From alan.zimm at gmail.com Tue Jan 6 20:57:00 2015 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Tue, 6 Jan 2015 22:57:00 +0200 Subject: API Annotations status report Message-ID: A quick status report on API Annotations. I have managed to integrate the API Annotations into the ghc-7.10 branch of HaRe [1], with a test command to simply roundtrip the source. So the command ghc-hare roundtrip filename.hs will produce a file 'filename.refactored.hs' which should be the same as the original, except it does not preserve trailing whitespace or tabs. This makes use of the updated ghc-mod for 7.10 [2], and ghc-exactprint [3]. However, this only works for the ghc-7.10 branch with D538 [4] applied. Regards Alan [1] https://github.com/alanz/HaRe/tree/ghc-7.10 [2] https://github.com/DanielG/ghc-mod [3] https://github.com/alanz/ghc-exactprint/tree/wip [4] https://phabricator.haskell.org/D538 -------------- next part -------------- An HTML attachment was scrubbed...
URL: From simonpj at microsoft.com Tue Jan 6 22:33:44 2015 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 6 Jan 2015 22:33:44 +0000 Subject: breakages due to redundant constraint removals? In-Reply-To: <87vbkjtusg.fsf@gmail.com> References: <87zj9vtvn9.fsf@gmail.com> <618BE556AADD624C9C918AA5D5911BEF56298F2C@DB3PRD3001MB020.064d.mgd.msft.net> <87vbkjtusg.fsf@gmail.com> Message-ID: <618BE556AADD624C9C918AA5D5911BEF56298FF5@DB3PRD3001MB020.064d.mgd.msft.net> | ..ah, I see, so we'll need some CPP to retain compatiblity with stable | GHCs (as parallel and deepseq have -- before your commit -- been | compatible with all stable GHC 7.x releases)... yes, I suppose so. Or, I suppose, we could revert my changes to parallel and deepseq, and add -fno-warn-redundant-constraints at the top (I suppose *that* would need cpp). And then remember in n years time to take it out. Because of the n years time issue I'm inclined to the former solution, because at least it's clear: "if compiling with GHC <= 7.10, use this signature, else that one". Simon | | | | > | > Simon | > | > | -----Original Message----- | > | From: Herbert Valerio Riedel [mailto:hvriedel at gmail.com] | > | Sent: 06 January 2015 22:03 | > | To: Simon Peyton Jones | > | Subject: breakages due to redundant constraint removals? | > | | > | Hello Simon, | > | | > | I just noticed that your recent commit to deepseq, had a devastating | > | effect: | > | | > | https://travis-ci.org/haskell/deepseq/builds/46063392 | > | | > | | > | similiarly for parallel: | > | | > | https://travis-ci.org/haskell/parallel/builds/46062982 | > | | > | | > | ...did you notice that as well? | > | | > | Cheers, | > | hvr | | -- | "Elegance is not optional" -- Richard O'Keefe From simonpj at microsoft.com Tue Jan 6 22:47:04 2015 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 6 Jan 2015 22:47:04 +0000 Subject: Updating submodules In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF56296CF2@DB3PRD3001MB020.064d.mgd.msft.net> <87h9w4us2v.fsf@gmail.com> <618BE556AADD624C9C918AA5D5911BEF56296DFB@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF56297099@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <618BE556AADD624C9C918AA5D5911BEF56299068@DB3PRD3001MB020.064d.mgd.msft.net> Now everything is clear. I wish someone had pointed me at this site earlier. No longer do I need to pore through endless Learn You a Git for Great Happiness tutorials. I can just generate a new man page whenever I feel like it. Perfect. Simon From: Ryan Newton [mailto:rrnewton at gmail.com] Sent: 06 January 2015 16:28 To: Simon Peyton Jones; Herbert Valerio Riedel Cc: ghc-devs at haskell.org Subject: Re: Updating submodules Has everyone seen the git man page generator ;-)? Hilarious. http://git-man-page-generator.lokaltog.net/ On Tue Jan 06 2015 at 7:49:30 AM Simon Peyton Jones > wrote: Following a chat with Herbert, I've updated https://ghc.haskell.org/trac/ghc/wiki/Repositories Please check/proof-read Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of | Simon Peyton Jones | Sent: 06 January 2015 10:49 | To: Herbert Valerio Riedel | Cc: ghc-devs at haskell.org | Subject: RE: Updating submodules | | | > * There is no .git/config in libraries/parallel. | (Whereas | | > there is for another submodule, libraries/hoopl.) 
| | | | > * There is, however, a .git file which points to | | .git/modules/libraries/parallel | | | | That's most likely because libraries/hoopl wasn't created via `git | | submodule` but rather inherited from a Git checkout where | | libraries/hoopl was an decoupled (not yet submodule) sub-repo... | | Yes, that's plausible. So the hoopl one is wrong, and the parallel | one is right. But how do I fix hoopl? (Short of blowing away the | whole repository, which I can't do because it has lots of commits in | it.) | | | In any case, if you manage Git remotes (while in libraries/hoopl) | via | | the `git remote` command, Git takes care of following the | "symlinked" | | .git folder... | | OK. But in this case what do I do? | | | > * In .git/modules/libraries/parallel/config, I see a url of > | | https://git.haskell.org/packages/parallel.git. But I can't push to | > | | this URL. | | | | yes, that's our mirrored copy of github.com/haskell/parallel/ | | | | > * That matches the url in | | > https://ghc.haskell.org/trac/ghc/wiki/Repositories, but | contradicts | | > the url in 'packages', which says | | | | > ssh://git at github.com/haskell/parallel.git | | | | yes, that's exactly the upstream URL you're supposed to push to... | | (and since it's a ssh:// protocl url, it means you should have | push- | | rights there) | | So | | * I *push* to ssh://git at github.com/haskell/parallel.git | * I *pull* from https://git.haskell.org/packages/parallel.git | | Is that right? Then again, how can I get the right URLs in the right | place? | | | | The comment there is probably a bit misleading; | | | | "-" in the "upstreamurl" field just means that the official | upstream | | repo is at git.haskell.org, and you should use the usual | | ssh://git.haskell.org/... URL for pushing... | | OK, so they are *ALL* sub-modules, and "-" is just shorthand for a | particular URL. Would it be possible to fix the comment? | | Simon | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | http://www.haskell.org/mailman/listinfo/ghc-devs _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://www.haskell.org/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail at joachim-breitner.de Tue Jan 6 23:20:32 2015 From: mail at joachim-breitner.de (Joachim Breitner) Date: Wed, 07 Jan 2015 00:20:32 +0100 Subject: New performance dashboard front-end Message-ID: <1420586432.17696.17.camel@joachim-breitner.de> Hi, over the holidays I?ve been working on a new and custom-made? front-end for our performance data, mainly to work around limitations of codespeed when it comes to understanding git, but also to add other features that I happen to want. I put a (not automatically updated) preview on http://deb.haskell.org/speed/ The code is not yet online, and there are some features missing (most notably: Working with multiple builders, the per-benchmark-graphs.) I?d like to have this hosted somewhere properly, so this is a request for the infrastructure team. I am a big fan of static content, so the system is a (shake-driven) batch process that generates a bunch of .json file. These can be served statically, together with the completely static index.html and a few javascript libraries. So it?s all very CDN-friendly. 
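For concreteness, here is a rough, hypothetical sketch of the kind of batch step involved (this is not the actual code; the Result type, field names and file layout are invented purely for illustration):

import Data.List (intercalate)

-- One measured value for one benchmark at one commit (invented shape).
data Result = Result { benchName :: String, value :: Double }

-- Hand-rolled JSON so the output is just a plain static file.
resultsToJson :: String -> [Result] -> String
resultsToJson commit rs =
  "{\"commit\":\"" ++ commit ++ "\",\"results\":["
  ++ intercalate "," [ "{\"name\":\"" ++ benchName r
                       ++ "\",\"value\":" ++ show (value r) ++ "}"
                     | r <- rs ]
  ++ "]}"

-- One .json file per commit, written into a directory that is served
-- verbatim by the web server (or a CDN).
writeCommitJson :: FilePath -> String -> [Result] -> IO ()
writeCommitJson dir commit rs =
  writeFile (dir ++ "/" ++ commit ++ ".json") (resultsToJson commit rs)

The front-end then only ever fetches these static files; no server-side code runs per request.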
So all I need is * shell access to some machine, preferably with ghc installed * some disk space (actually quite a bit due to all the build logs, although that could be reduced by gzipping or reading them directly from the git repo?) * a new virtual host or a subdirectory of http://ghc.haskell.org/ where I can deploy my files to. * the possibility to either run a cronjob to poll for new logs, or maybe (later) some more sophisticated trigger. Would that be possible? Greetings, Joachim ? It is still a generic display of "values per git commit", and I hope I can keep it that way ? maybe other projects can use it as well. ? currently at https://github.com/nomeata/ghc-speed-logs If the above becomes official, this probably also should move to git.haskell.org. The repo is 250M, but 7,2G checked out. I plan to make my code read the logs directly from the repo, and link to the cgit web interface to show the logs, so that it never has to be checked out. -- Joachim ?nomeata? Breitner mail at joachim-breitner.de ? http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From ezyang at mit.edu Tue Jan 6 23:27:21 2015 From: ezyang at mit.edu (Edward Z. Yang) Date: Tue, 06 Jan 2015 15:27:21 -0800 Subject: T9938 and T9939 failing Message-ID: <1420586802-sup-9199@sabre> Here's what I get, on a clean validate. =====> T9939(normal) 2555 of 4389 [0, 0, 0] cd ./typecheck/should_compile && '/home/hs01/ezyang/ghc-validate/inplace/bin/ghc-stage2' -fforc e-recomp -dcore-lint -dcmm-lint -dno-debug-output -no-user-package-db -rtsopts -fno-warn-tabs - fno-ghci-history -c T9939.hs -fno-warn-incomplete-patterns >T9939.comp.stderr 2>&1 Actual stderr output differs from expected: --- ./typecheck/should_compile/T9939.stderr 2015-01-06 13:51:47.089376284 -0800 +++ ./typecheck/should_compile/T9939.comp.stderr 2015-01-06 15:24:39.923007911 -0800 @@ -1 +1,18 @@ \ No newline at end of file +T9939.hs:5:7: + Redundant constraint: Eq a + In the type signature for: f1 :: (Eq a, Ord a) => a -> a -> Bool + +T9939.hs:9:7: + Redundant constraint: Eq a + In the type signature for: f2 :: (Eq a, Ord a) => a -> a -> Bool + +T9939.hs:13:7: + Redundant constraint: Eq b + In the type signature for: + f3 :: (Eq a, a ~ b, Eq b) => a -> b -> Bool + +T9939.hs:20:7: + Redundant constraint: Eq b + In the type signature for: + f4 :: (Eq a, Eq b) => a -> b -> Equal a b -> Bool *** unexpected failure for T9939(normal) =====> T9938(normal) 3401 of 4389 [0, 1, 0] cd ./driver && $MAKE -s --no-print-directory T9938 T9938.run.stdout 2>T9938.run. stderr Wrong exit code (expected 0 , actual 2 ) Stdout: Makefile:592: recipe for target 'T9938' failed Stderr: T9938.o: In function `r3Ho_info': (.text+0x52): undefined reference to `transzuH9c1w14lEUN3zzdWCTsn8jG_ControlziMonadziTransziSta teziLazzy_zdwzdcp1Alternative_info' collect2: error: ld returned 1 exit status make[3]: *** [T9938] Error 1 *** unexpected failure for T9938(normal) Edward From mikolaj at well-typed.com Wed Jan 7 07:19:04 2015 From: mikolaj at well-typed.com (Mikolaj Konarski) Date: Wed, 7 Jan 2015 08:19:04 +0100 Subject: T9938 and T9939 failing In-Reply-To: References: <1420586802-sup-9199@sabre> Message-ID: This looks like the result of the new cool patch by SPJ that detects redundant constraints. 
IIRC, it was supposed to be added to -Wall, but disabled for validate, at least for packages out of our control. -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Wed Jan 7 09:54:57 2015 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 7 Jan 2015 09:54:57 +0000 Subject: T9938 and T9939 failing In-Reply-To: <1420586802-sup-9199@sabre> References: <1420586802-sup-9199@sabre> Message-ID: <618BE556AADD624C9C918AA5D5911BEF562996A1@DB3PRD3001MB020.064d.mgd.msft.net> Ah, I must have forgotten to add the stderr file. Would someone care to do that; if not I'll get to it this afternoon S | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Edward | Z. Yang | Sent: 06 January 2015 23:27 | To: ghc-devs | Subject: T9938 and T9939 failing | | Here's what I get, on a clean validate. | | =====> T9939(normal) 2555 of 4389 [0, 0, 0] | cd ./typecheck/should_compile && '/home/hs01/ezyang/ghc- | validate/inplace/bin/ghc-stage2' -fforc | e-recomp -dcore-lint -dcmm-lint -dno-debug-output -no-user-package-db - | rtsopts -fno-warn-tabs - | fno-ghci-history -c T9939.hs -fno-warn-incomplete-patterns | >T9939.comp.stderr 2>&1 | Actual stderr output differs from expected: | --- ./typecheck/should_compile/T9939.stderr 2015-01-06 | 13:51:47.089376284 -0800 | +++ ./typecheck/should_compile/T9939.comp.stderr 2015-01-06 | 15:24:39.923007911 -0800 | @@ -1 +1,18 @@ | | \ No newline at end of file | +T9939.hs:5:7: | + Redundant constraint: Eq a | + In the type signature for: f1 :: (Eq a, Ord a) => a -> a -> Bool | + | +T9939.hs:9:7: | + Redundant constraint: Eq a | + In the type signature for: f2 :: (Eq a, Ord a) => a -> a -> Bool | + | +T9939.hs:13:7: | + Redundant constraint: Eq b | + In the type signature for: | + f3 :: (Eq a, a ~ b, Eq b) => a -> b -> Bool | + | +T9939.hs:20:7: | + Redundant constraint: Eq b | + In the type signature for: | + f4 :: (Eq a, Eq b) => a -> b -> Equal a b -> Bool | *** unexpected failure for T9939(normal) | =====> T9938(normal) 3401 of 4389 [0, 1, 0] | cd ./driver && $MAKE -s --no-print-directory T9938 T9938.run.stdout 2>T9938.run. | stderr | Wrong exit code (expected 0 , actual 2 ) | Stdout: | Makefile:592: recipe for target 'T9938' failed | | Stderr: | T9938.o: In function `r3Ho_info': | (.text+0x52): undefined reference to | `transzuH9c1w14lEUN3zzdWCTsn8jG_ControlziMonadziTransziSta | teziLazzy_zdwzdcp1Alternative_info' | collect2: error: ld returned 1 exit status | make[3]: *** [T9938] Error 1 | | *** unexpected failure for T9938(normal) | | Edward | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | http://www.haskell.org/mailman/listinfo/ghc-devs From simonpj at microsoft.com Wed Jan 7 09:54:58 2015 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 7 Jan 2015 09:54:58 +0000 Subject: New performance dashboard front-end In-Reply-To: <1420586432.17696.17.camel@joachim-breitner.de> References: <1420586432.17696.17.camel@joachim-breitner.de> Message-ID: <618BE556AADD624C9C918AA5D5911BEF562996B2@DB3PRD3001MB020.064d.mgd.msft.net> Sounds amazing, thank you! Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Joachim | Breitner | Sent: 06 January 2015 23:21 | To: ghc-devs at haskell.org | Subject: New performance dashboard front-end | | Hi, | | over the holidays I?ve been working on a new and custom-made? 
front-end | for our performance data, mainly to work around limitations of codespeed | when it comes to understanding git, but also to add other features that | I happen to want. | | I put a (not automatically updated) preview on | http://deb.haskell.org/speed/ | The code is not yet online, and there are some features missing (most | notably: Working with multiple builders, the per-benchmark-graphs.) | | I?d like to have this hosted somewhere properly, so this is a request | for the infrastructure team. | | I am a big fan of static content, so the system is a (shake-driven) | batch process that generates a bunch of .json file. These can be served | statically, together with the completely static index.html and a few | javascript libraries. So it?s all very CDN-friendly. | | So all I need is | * shell access to some machine, preferably with ghc installed | * some disk space (actually quite a bit due to all the build logs, | although that could be reduced by gzipping or reading them directly | from the git repo?) | * a new virtual host or a subdirectory of http://ghc.haskell.org/ | where I can deploy my files to. | * the possibility to either run a cronjob to poll for new logs, or | maybe (later) some more sophisticated trigger. | | Would that be possible? | | Greetings, | Joachim | | ? It is still a generic display of "values per git commit", and I hope | I can keep it that way ? maybe other projects can use it as well. | | ? currently at https://github.com/nomeata/ghc-speed-logs | If the above becomes official, this probably also should move to | git.haskell.org. The repo is 250M, but 7,2G checked out. I plan to | make my code read the logs directly from the repo, and link to the | cgit web interface to show the logs, so that it never has to be | checked out. | | -- | Joachim ?nomeata? Breitner | mail at joachim-breitner.de ? http://www.joachim-breitner.de/ | Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F | Debian Developer: nomeata at debian.org From jan.stolarek at p.lodz.pl Wed Jan 7 10:41:22 2015 From: jan.stolarek at p.lodz.pl (Jan Stolarek) Date: Wed, 7 Jan 2015 11:41:22 +0100 Subject: Bash completion in GHC 7.10 In-Reply-To: <20150103205254.0ba49d7a@sf> References: <20150103205254.0ba49d7a@sf> Message-ID: <201501071141.22657.jan.stolarek@p.lodz.pl> Reported this as #9963. Janek Dnia sobota, 3 stycznia 2015, Sergei Trofimovich napisa?: > On Wed, 10 Dec 2014 16:43:57 +0400 > > Lennart Kolmodin wrote: > > Hi everybody! > > > > TL;DL GHC 7.10 will have better bash completion, try it out! I'd like > > your help to verify the categorisation of DynFlags into ghc / ghci / > > shared or hidden flags. > > Thank you! On the way to users :) > https://github.com/gentoo-haskell/gentoo-haskell/commit/d6f63341693063e6016 >8bbddffb0806621696689 > > sf / # ghc --print-li > /usr/lib64/ghc-7.10.0.20141222 > > sf / # ghci --print-li > ghc: panic! (the 'impossible' happened) > (GHC version 7.10.0.20141222 for x86_64-unknown-linux): > ghc: panic! (the 'impossible' happened) > (GHC version 7.10.0.20141222 for x86_64-unknown-linux): > v_unsafeGlobalDynFlags: not initialised > Please report this as a GHC bug: http://www.haskell.org/ghc/reportabug > Please report this as a GHC bug: http://www.haskell.org/ghc/reportabug > > Absolutely not your fault, just makes such things more discoverable :] > Thanks again! 
From simonpj at microsoft.com Wed Jan 7 14:53:56 2015 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 7 Jan 2015 14:53:56 +0000 Subject: [Diffusion] [Build Failed] rGHC471891cb774a: Mark T9938 as expect_broken again In-Reply-To: <20150107144627.72575.28465@phabricator.haskell.org> References: <20150107144627.72575.28465@phabricator.haskell.org> Message-ID: <618BE556AADD624C9C918AA5D5911BEF5629A0C1@DB3PRD3001MB020.064d.mgd.msft.net> Phab says perf/compiler T3294 [stat not good enough] (normal) but it's not failing for me. Simon | -----Original Message----- | From: noreply at phabricator.haskell.org | [mailto:noreply at phabricator.haskell.org] | Sent: 07 January 2015 14:50 | To: Simon Peyton Jones | Subject: [Diffusion] [Build Failed] rGHC471891cb774a: Mark T9938 as | expect_broken again | | Harbormaster failed to build B2865: rGHC471891cb774a: Mark T9938 as | expect_broken again! | | USERS | simonpj (Author) | GHC - Testsuite (Auditor) | | COMMIT | https://phabricator.haskell.org/rGHC471891cb774a | | EMAIL PREFERENCES | https://phabricator.haskell.org/settings/panel/emailpreferences/ | | To: simonpj, GHC - Testsuite From simonpj at microsoft.com Wed Jan 7 15:19:15 2015 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 7 Jan 2015 15:19:15 +0000 Subject: Redundant constraints Message-ID: <618BE556AADD624C9C918AA5D5911BEF5629A2DA@DB3PRD3001MB020.064d.mgd.msft.net> Friends I've pushed a big patch that adds -fwarn-redundant-constraints (on by default). It tells you when a constraint in a signature is unnecessary, e.g. f :: Ord a => a -> a -> Bool f x y = True I think I have done all the necessary library updates etc, so everything should build fine. Four libraries which we don't maintain have such warnings (MANY of them in transformers) so I'm ccing the maintainers: o containers o haskeline o transformers o binary Enjoy! Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From johan.tibell at gmail.com Wed Jan 7 15:27:21 2015 From: johan.tibell at gmail.com (Johan Tibell) Date: Wed, 7 Jan 2015 16:27:21 +0100 Subject: Redundant constraints In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF5629A2DA@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF5629A2DA@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: I think this probably makes sense, especially since you can silence the warning when you intend to add an unnecessary constraint. I had one thought though: consider an abstract data type with functions that operates over it. I might want to require e.g Ord in the definition of a function so I have freedom to change my implementation later, even though the current implementation doesn't need Ord. Think of it as separating specification and implementation. An example is 'nub'. I initially might implement it as a O(n^2) algorithm using only Eq, but I might want to leave the door open to using Ord to create something better, without later having to break backwards compatibility. On Wed, Jan 7, 2015 at 4:19 PM, Simon Peyton Jones wrote: > Friends > > I?ve pushed a big patch that adds ?fwarn-redundant-constraints (on by > default). It tells you when a constraint in a signature is unnecessary, > e.g. > > f :: Ord a => a -> a -> Bool > > f x y = True > > I think I have done all the necessary library updates etc, so everything > should build fine. 
> > Four libraries which we don?t maintain have such warnings (MANY of them in > transformers) so I?m ccing the maintainers: > > o containers > > o haskeline > > o transformers > > o binary > > > > Enjoy! > > > > Simon > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Wed Jan 7 15:40:49 2015 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 7 Jan 2015 15:40:49 +0000 Subject: Redundant constraints In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF5629A2DA@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <618BE556AADD624C9C918AA5D5911BEF5629A496@DB3PRD3001MB020.064d.mgd.msft.net> I had one thought though: consider an abstract data type with functions that operates over it. I might want to require e.g Ord in the definition of a function so I have freedom to change my implementation later, even though the current implementation doesn't need Ord. Think of it as separating specification and implementation. An example is 'nub'. I initially might implement it as a O(n^2) algorithm using only Eq, but I might want to leave the door open to using Ord to create something better, without later having to break backwards compatibility. Yes, a per-function way to suppress the warning might be useful. But I have not implemented that. At the moment it?s just per-module. Simon From: Johan Tibell [mailto:johan.tibell at gmail.com] Sent: 07 January 2015 15:27 To: Simon Peyton Jones Cc: ghc-devs at haskell.org; Bill Mitchell (Bill.Mitchell at hq.bcs.org.uk); Milan Straka; Ross Paterson Subject: Re: Redundant constraints I think this probably makes sense, especially since you can silence the warning when you intend to add an unnecessary constraint. I had one thought though: consider an abstract data type with functions that operates over it. I might want to require e.g Ord in the definition of a function so I have freedom to change my implementation later, even though the current implementation doesn't need Ord. Think of it as separating specification and implementation. An example is 'nub'. I initially might implement it as a O(n^2) algorithm using only Eq, but I might want to leave the door open to using Ord to create something better, without later having to break backwards compatibility. On Wed, Jan 7, 2015 at 4:19 PM, Simon Peyton Jones > wrote: Friends I?ve pushed a big patch that adds ?fwarn-redundant-constraints (on by default). It tells you when a constraint in a signature is unnecessary, e.g. f :: Ord a => a -> a -> Bool f x y = True I think I have done all the necessary library updates etc, so everything should build fine. Four libraries which we don?t maintain have such warnings (MANY of them in transformers) so I?m ccing the maintainers: o containers o haskeline o transformers o binary Enjoy! Simon _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://www.haskell.org/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Wed Jan 7 16:12:01 2015 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 7 Jan 2015 16:12:01 +0000 Subject: Cabal update fails Message-ID: <618BE556AADD624C9C918AA5D5911BEF5629A533@DB3PRD3001MB020.064d.mgd.msft.net> What does this mean (on Linux)? 
Thanks, Simon bash$ cabal update Downloading the latest package list from hackage.haskell.org cabal: : resource vanished -------------- next part -------------- An HTML attachment was scrubbed... URL: From hvriedel at gmail.com Wed Jan 7 16:17:02 2015 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Wed, 07 Jan 2015 17:17:02 +0100 Subject: warning-suppression granularity (was: Redundant constraints) In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF5629A496@DB3PRD3001MB020.064d.mgd.msft.net> (Simon Peyton Jones's message of "Wed, 7 Jan 2015 15:40:49 +0000") References: <618BE556AADD624C9C918AA5D5911BEF5629A2DA@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF5629A496@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <8761ci7egx.fsf_-_@gmail.com> On 2015-01-07 at 16:40:49 +0100, Simon Peyton Jones wrote: [...] > Yes, a per-function way to suppress the warning might be useful. But > I have not implemented that. At the moment it's just per-module. Btw, there are a couple of other warnings, I have been wishing to have a way to disable them on a per-entity basis... any chance for a general syntax to suppress warnings on a more granular level than per-module? Cheers, hvr From hvriedel at gmail.com Wed Jan 7 16:17:54 2015 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Wed, 07 Jan 2015 17:17:54 +0100 Subject: Cabal update fails In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF5629A533@DB3PRD3001MB020.064d.mgd.msft.net> (Simon Peyton Jones's message of "Wed, 7 Jan 2015 16:12:01 +0000") References: <618BE556AADD624C9C918AA5D5911BEF5629A533@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <871tn67efh.fsf@gmail.com> On 2015-01-07 at 17:12:01 +0100, Simon Peyton Jones wrote: > What does this mean (on Linux)? > > Thanks, Simon > > bash$ cabal update > Downloading the latest package list from hackage.haskell.org > cabal: : resource vanished Simply a network error while communicating w/ hackage... does `cabal update -v3` give any useful indication? From simonpj at microsoft.com Wed Jan 7 16:20:43 2015 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 7 Jan 2015 16:20:43 +0000 Subject: Cabal update fails In-Reply-To: <871tn67efh.fsf@gmail.com> References: <618BE556AADD624C9C918AA5D5911BEF5629A533@DB3PRD3001MB020.064d.mgd.msft.net> <871tn67efh.fsf@gmail.com> Message-ID: <618BE556AADD624C9C918AA5D5911BEF5629A5BD@DB3PRD3001MB020.064d.mgd.msft.net> cabal update -v3 worked. Maybe it was just transitory. Thanks! | -----Original Message----- | From: Herbert Valerio Riedel [mailto:hvriedel at gmail.com] | Sent: 07 January 2015 16:18 | To: Simon Peyton Jones | Cc: ghc-devs at haskell.org | Subject: Re: Cabal update fails | | On 2015-01-07 at 17:12:01 +0100, Simon Peyton Jones wrote: | > What does this mean (on Linux)? | > | > Thanks, Simon | > | > bash$ cabal update | > Downloading the latest package list from hackage.haskell.org | > cabal: : resource vanished | | Simply a network error while communicating w/ hackage... does `cabal | update -v3` give any useful indication?
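To make the status quo on that warning-suppression question concrete: today the only lever is per-module, via an OPTIONS_GHC pragma. A small, hedged example (the flag name is the one mentioned earlier in the thread; the function itself is made up):

{-# OPTIONS_GHC -fno-warn-redundant-constraints #-}
module Example where

-- Eq is a superclass of Ord, so the Eq constraint is redundant and the
-- new check would normally warn about it; the pragma above silences the
-- warning, but only with whole-module granularity, which is exactly the
-- limitation being discussed.
f :: (Eq a, Ord a) => a -> a -> Bool
f x y = x >= y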
From simonpj at microsoft.com Wed Jan 7 16:22:07 2015 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 7 Jan 2015 16:22:07 +0000 Subject: warning-suppression granularity (was: Redundant constraints) In-Reply-To: <8761ci7egx.fsf_-_@gmail.com> References: <618BE556AADD624C9C918AA5D5911BEF5629A2DA@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF5629A496@DB3PRD3001MB020.064d.mgd.msft.net> <8761ci7egx.fsf_-_@gmail.com> Message-ID: <618BE556AADD624C9C918AA5D5911BEF5629A5DC@DB3PRD3001MB020.064d.mgd.msft.net> | Btw, there are a couple of other warnings, I have been wishing to have | a way to disable them on a per-entity basis... any chance for for a | general syntax to suppress warnings on a more granular level than per- | module? I think that would be a fine idea. But we'd need some kind of concrete syntax. And it ought to work for types and classes as well as functions... From david.feuer at gmail.com Wed Jan 7 19:12:55 2015 From: david.feuer at gmail.com (David Feuer) Date: Wed, 7 Jan 2015 14:12:55 -0500 Subject: seq#: do we actually need it as a primitive? Message-ID: I've read about the inlining issues surrounding Control.Exception.evaluate that seem to have prompted the creation of seq#, but I'm still missing something. Isn't seq# a s the same as let !a' = a in (# s, a' #) ? David From austin at well-typed.com Wed Jan 7 19:53:38 2015 From: austin at well-typed.com (Austin Seipp) Date: Wed, 7 Jan 2015 13:53:38 -0600 Subject: New performance dashboard front-end In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF562996B2@DB3PRD3001MB020.064d.mgd.msft.net> References: <1420586432.17696.17.camel@joachim-breitner.de> <618BE556AADD624C9C918AA5D5911BEF562996B2@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: Joachim, This looks awesome! Yes, we can set up you up with a small server soon to do all this and keep it running in public. FWIW, I'd rather not make a virtual proxy at ghc.haskell.org point to your backend server, by routing '/speed' out to another server. What about a new domain - https://speed.ghc.haskell.org ? This falls in line with the new buildbot naming conventions we voted on a few weeks ago, it's trivial to add (only a DNS entry), and is nicer IMO. On Wed, Jan 7, 2015 at 3:54 AM, Simon Peyton Jones wrote: > Sounds amazing, thank you! > > Simon > > | -----Original Message----- > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Joachim > | Breitner > | Sent: 06 January 2015 23:21 > | To: ghc-devs at haskell.org > | Subject: New performance dashboard front-end > | > | Hi, > | > | over the holidays I?ve been working on a new and custom-made? front-end > | for our performance data, mainly to work around limitations of codespeed > | when it comes to understanding git, but also to add other features that > | I happen to want. > | > | I put a (not automatically updated) preview on > | http://deb.haskell.org/speed/ > | The code is not yet online, and there are some features missing (most > | notably: Working with multiple builders, the per-benchmark-graphs.) > | > | I?d like to have this hosted somewhere properly, so this is a request > | for the infrastructure team. > | > | I am a big fan of static content, so the system is a (shake-driven) > | batch process that generates a bunch of .json file. These can be served > | statically, together with the completely static index.html and a few > | javascript libraries. So it?s all very CDN-friendly. 
> | > | So all I need is > | * shell access to some machine, preferably with ghc installed > | * some disk space (actually quite a bit due to all the build logs, > | although that could be reduced by gzipping or reading them directly > | from the git repo?) > | * a new virtual host or a subdirectory of http://ghc.haskell.org/ > | where I can deploy my files to. > | * the possibility to either run a cronjob to poll for new logs, or > | maybe (later) some more sophisticated trigger. > | > | Would that be possible? > | > | Greetings, > | Joachim > | > | ? It is still a generic display of "values per git commit", and I hope > | I can keep it that way ? maybe other projects can use it as well. > | > | ? currently at https://github.com/nomeata/ghc-speed-logs > | If the above becomes official, this probably also should move to > | git.haskell.org. The repo is 250M, but 7,2G checked out. I plan to > | make my code read the logs directly from the repo, and link to the > | cgit web interface to show the logs, so that it never has to be > | checked out. > | > | -- > | Joachim ?nomeata? Breitner > | mail at joachim-breitner.de ? http://www.joachim-breitner.de/ > | Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F > | Debian Developer: nomeata at debian.org > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From austin at well-typed.com Wed Jan 7 19:58:29 2015 From: austin at well-typed.com (Austin Seipp) Date: Wed, 7 Jan 2015 13:58:29 -0600 Subject: Random holiday fun: Vote for buildbot naming conventions! In-Reply-To: References: Message-ID: Hi all, I closed the polls on this a few days ago, and the results are in. By popular vote, the candidates from first to third place are: 1) Famous logicians and Computer Scientists - 21 out of 40 votes. 2) Boring and descriptive names - 19 out of 40 votes. 3) NSA Surveillance Tools - 15 out of 40 votes. So it looks like our bots will be paying homage to our predecessors and will be named after them. :) Thanks to the 40 people who voted! On Thu, Dec 18, 2014 at 11:47 AM, Carter Schonwald wrote: > dont forget you can vote for more than one! > > On Thu, Dec 18, 2014 at 12:08 AM, Austin Seipp > wrote: >> >> Hi *, >> >> Everyone has been working hard getting things ready for the branch/RC >> later this week - and that's really appreciated! As always, GHC >> wouldn't be what it is without you. >> >> But it's the holidays - that's stressful for some, and real time >> consuming for others. So to keep things light and a little >> interesting, and if you have a minute as an experiment, I'd like to >> ask you all something: >> >> What naming conventions should we use for GHC buildbots? We've been >> adding more bots recently, and we've just gotten hold of some new >> hardware this week. >> >> Currently, GHC buildbots don't have any reserved name or identifier, >> or DNS entries. I'd like to change that - we mostly refer to them by >> IP, but this is annoying A) to remember and B) to tell other people. >> We'll likely begin to lease out these machines to developers so they >> can test and debug - meaning they'll be mentioned more. >> >> I'd like to propose a theme for naming buildbots - this theme would be >> used to populate DNS entries under the *.ghc.haskell.org domain. >> >> The question is: what naming convention do we use? >> >> So I created a poll for this. 
You can see that poll and vote for your >> favorite options on Phabricator: >> >> - https://phabricator.haskell.org/V3 >> >> It's an approval vote rather than a plurality; so feel free to select >> multiple choices. The winner with the most votes will get selected. >> >> Note: the selection of options is relatively random and pre-selected; >> there is an upper limit on the number of choices - 10 max - so I >> merely picked some categories I thought would work and be generic >> enough. >> >> I imagine this vote will be open for about a week or so. I'd like it >> if developers could vote on their favorites, or simply leave comments >> on the vote for further suggestion - we could institute a vote with >> better names. >> >> Thanks all - and be sure to have happy holidays. >> >> P.S I did not know RFC 1178 existed before today. Seems like there's >> one for everything... >> >> -- >> Regards, >> >> Austin Seipp, Haskell Consultant >> Well-Typed LLP, http://www.well-typed.com/ >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From mgsloan at gmail.com Thu Jan 8 00:05:32 2015 From: mgsloan at gmail.com (Michael Sloan) Date: Wed, 7 Jan 2015 16:05:32 -0800 Subject: Redundant constraints In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF5629A2DA@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: One option for avoiding the warning without runtime overhead would be to do something like this: ? f :: Ord a => a -> a -> Bool f x y = True where _ = x < y On Wed, Jan 7, 2015 at 7:27 AM, Johan Tibell wrote: > I think this probably makes sense, especially since you can silence the > warning when you intend to add an unnecessary constraint. > > I had one thought though: consider an abstract data type with functions that > operates over it. I might want to require e.g Ord in the definition of a > function so I have freedom to change my implementation later, even though > the current implementation doesn't need Ord. Think of it as separating > specification and implementation. An example is 'nub'. I initially might > implement it as a O(n^2) algorithm using only Eq, but I might want to leave > the door open to using Ord to create something better, without later having > to break backwards compatibility. > > On Wed, Jan 7, 2015 at 4:19 PM, Simon Peyton Jones > wrote: >> >> Friends >> >> I?ve pushed a big patch that adds ?fwarn-redundant-constraints (on by >> default). It tells you when a constraint in a signature is unnecessary, >> e.g. >> >> f :: Ord a => a -> a -> Bool >> >> f x y = True >> >> I think I have done all the necessary library updates etc, so everything >> should build fine. >> >> Four libraries which we don?t maintain have such warnings (MANY of them in >> transformers) so I?m ccing the maintainers: >> >> o containers >> >> o haskeline >> >> o transformers >> >> o binary >> >> >> >> Enjoy! 
>> >> >> >> Simon >> >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > From austin at well-typed.com Thu Jan 8 01:19:48 2015 From: austin at well-typed.com (Austin Seipp) Date: Wed, 7 Jan 2015 19:19:48 -0600 Subject: GHC Weekly News - 2015/01/07 Message-ID: Hi *, it's time for another GHC Weekly News! This week's edition will actually be covering the last two/three weeks; your editor has missed the past few editions due to Holiday madness (and also some relaxation, which is not madness). It's also our first news posting in 2015! So let's get going without any further delay! GHC HQ met this week after the Christmas break; some of our notes include: - Austin Seipp announced the GHC 7.8.4 release on behalf of the GHC development team. https://www.haskell.org/pipermail/haskell/2014-December/024395.html - Austin Seipp ''also'' announced the GHC 7.10.1 RC on behalf of the GHC team, as well. https://www.haskell.org/pipermail/ghc-devs/2014-December/007781.html - Since Austin is back, he'll be spending some time finishing up all the remaining binary distributions for GHC 7.8.4 and GHC 7.10.1 RC1 (mostly, FreeBSD and OS X builds). - We've found that 7.10.1 RC1 is working surprisingly well for users so far; to help users accomodate the changes, Herbert has conveniently written a migration guide for users for their most common problems when upgrading to 7.10.1: https://ghc.haskell.org/trac/ghc/wiki/Migration/7.10 - We're aiming to release the 2nd Release Candidate for GHC 7.10.1 on January 19th. We're hoping this will be the last RC, with 7.10.1 final popping up in the middle of February. - GHC HQ may tentatively be working to release **another** GHC 7.8 release, but only for a specific purpose: to allow it to compile with 7.10.1. This will make it significantly easier for users to compile old GHCs (perhaps on newer platforms). However, we're not yet 100% decided on this, and we will likely only do a 'very minor release' of the source tarball, should this be the case. Thanks to Edward Yang for helping with this. - For future GHC releases on Windows, we're looking into adopting Neil Mitchell's new binary distribution of GHC, which is a nice installer that includes Cabal, MSYS and GHC. This should significantly lower the burden for Windows users to use GHC and update, ship or create packages. While we're not 100% sure we'll be able to have it ready for 7.10.1, it looks promising. Thanks Neil! (For more info, read Neil's blog post here: http://neilmitchell.blogspot.co.at/2014/12/beta-testing-windows-minimal-ghc.html ) There's also been some movement and chatter on the mailing lists, as usual. - GHC 7.10 is coming close to a final release, planned in February; to help keep track of everything, users and developers are suggested to look at the GHC 7.10.1 status page as a source of truth from GHC HQ: https://ghc.haskell.org/trac/ghc/wiki/Status/GHC-7.10.1 - Jan Stolark is currently working on injective type families for GHC, but ran into a snag with Template Haskell while trying to understand GHC's `DsMeta` module. Richard chimed in to help: https://www.haskell.org/pipermail/ghc-devs/2014-December/007719.html - Austin Seipp opened a fun vote: what naming convention should we use for GHC buildbots? 
After posting the vote before the holidays, the results are in: GHC's buildbots will take their names from famous logicians and computer scientists: https://www.haskell.org/pipermail/ghc-devs/2014-December/007723.html - Carter Schonwald asked a simple question: are pattern synonyms usable in GHCi? The answer is 'no', but it seems Gergo is on the case to remedy that soon enough: https://www.haskell.org/pipermail/ghc-devs/2014-December/007724.html - Anton Dessiatov has a question about GHC's heap profiler information, but unfortunately his question has lingered. Can any GHC/Haskell hackers out there help him out? https://www.haskell.org/pipermail/ghc-devs/2014-December/007748.html - Joachim Breitner made an exciting announcement: he's working on a new performance dashboard for GHC, so we can more easily track and look at performance results over time. The current prototype looks great, and Joachim and Austin are working together to make this an official piece of GHC's infrastructure: https://www.haskell.org/pipermail/ghc-devs/2015-January/007885.html - Over the holiday, Simon went and implemented a nice new feature for GHC: detection of redundant constraints. This means if you mention `Ord` in a type signature, but actually use nothing which requires that constraint, GHC can properly warn about it. This will be going into 7.12: https://www.haskell.org/pipermail/ghc-devs/2015-January/007892.html - Now that GHC 7.10 will feature support for DWARF based debugging information, Johan Tibell opened a very obvious discussion thread: what should we do about shipping GHC and its libraries with debug support? Peter chimed in with some notes - hopefully this will all be sorted out in time for 7.10.1 proper: https://www.haskell.org/pipermail/ghc-devs/2015-January/007851.html Closed tickets the past few weeks include: #8984, #9880, #9732, #9783, #9575, #9860, #9316, #9845, #9913, #9909, #8650, #9881, #9919, #9732, #9783, #9915, #9914, #9751, #9744, #9879, #9876, #9032, #7473, #9764, #9067, #9852, #9847, #9891, #8909, #9954, #9508, #9586, and #9939. -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From johan.tibell at gmail.com Thu Jan 8 07:36:21 2015 From: johan.tibell at gmail.com (Johan Tibell) Date: Thu, 8 Jan 2015 08:36:21 +0100 Subject: Clarification of HsBang and isBanged Message-ID: HsBang is defined as: -- HsBang describes what the *programmer* wrote -- This info is retained in the DataCon.dcStrictMarks field data HsBang = HsUserBang -- The user's source-code request (Maybe Bool) -- Just True {-# UNPACK #-} -- Just False {-# NOUNPACK #-} -- Nothing no pragma Bool -- True <=> '!' specified | HsNoBang -- Lazy field -- HsUserBang Nothing False means the same as HsNoBang | HsUnpack -- Definite commitment: this field is strict and unboxed (Maybe Coercion) -- co :: arg-ty ~ product-ty | HsStrict -- Definite commitment: this field is strict but not unboxed This data type is a bit unclear to me: * What are the reasons for the following constructor overlaps? * `HsNoBang` and `HsUserBang Nothing False` * `HsStrict` and `HsUserBang Nothing True` * `HsUnpack mb_co` and `HsUserBang (Just True) True` * Why is there a coercion in `HsUnpack` but not in `HsUserBang (Just True) True`? * Is there a difference in what the user wrote in the case of HsUserBang and HsNoBang/HsUnpack/HsStrict e.g are the latter three generated by the compiler as opposed to being written by the user (the function documentation notwithstanding)? 
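To keep the cases straight, here is a small sketch of the source-level forms that HsUserBang seems to describe, going purely by the comments above. The type T below is made up, and the per-field notes are my reading of those comments rather than anything checked against the compiler:

    data T = MkT
               {-# UNPACK #-}   !Int    -- HsUserBang (Just True)  True
               {-# NOUNPACK #-} !Bool   -- HsUserBang (Just False) True
               !Char                    -- HsUserBang Nothing      True
               String                   -- HsUserBang Nothing      False
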
A very related function is isBanged: isBanged :: HsBang -> Bool isBanged HsNoBang = False isBanged (HsUserBang Nothing bang) = bang isBanged _ = True What's the meaning of this function? Is it intended to communicate what the user wrote or whether result of what the user wrote results in a strict function? Context: I'm adding a new StrictData language pragma [1] that makes fields strict by default and a '~' annotation of fields to reverse the default behavior. My intention is to change HsBang like so: - Bool -- True <=> '!' specified + (Maybe Bool) -- True <=> '!' specified, False <=> '~' + -- specified, Nothing <=> unspecified 1. https://ghc.haskell.org/trac/ghc/wiki/StrictPragma -- Johan -------------- next part -------------- An HTML attachment was scrubbed... URL: From ezyang at mit.edu Thu Jan 8 08:00:52 2015 From: ezyang at mit.edu (Edward Z. Yang) Date: Thu, 08 Jan 2015 00:00:52 -0800 Subject: seq#: do we actually need it as a primitive? In-Reply-To: References: Message-ID: <1420704021-sup-5888@sabre> For posterity, the answer is no, and it is explained in this comment: https://ghc.haskell.org/trac/ghc/ticket/5129#comment:2 Edward Excerpts from David Feuer's message of 2015-01-07 11:12:55 -0800: > I've read about the inlining issues surrounding > Control.Exception.evaluate that seem to have prompted the creation of > seq#, but I'm still missing something. Isn't seq# a s the same as > let !a' = a in (# s, a' #) ? > > David From johan.tibell at gmail.com Thu Jan 8 08:15:17 2015 From: johan.tibell at gmail.com (Johan Tibell) Date: Thu, 8 Jan 2015 09:15:17 +0100 Subject: Clarification of HsBang and isBanged In-Reply-To: References: Message-ID: I also note that the definition of isBanged is confusing: isBanged :: HsBang -> Bool isBanged HsNoBang = False isBanged (HsUserBang Nothing bang) = bang isBanged _ = True Why is `HsUserBang (Just False) False`, corresponding to a NOUNPACK annotations with a missing "!", considered "banged"? On Thu, Jan 8, 2015 at 8:36 AM, Johan Tibell wrote: > HsBang is defined as: > > -- HsBang describes what the *programmer* wrote > -- This info is retained in the DataCon.dcStrictMarks field > data HsBang > = HsUserBang -- The user's source-code request > (Maybe Bool) -- Just True {-# UNPACK #-} > -- Just False {-# NOUNPACK #-} > -- Nothing no pragma > Bool -- True <=> '!' specified > > | HsNoBang -- Lazy field > -- HsUserBang Nothing False means the same > as HsNoBang > > | HsUnpack -- Definite commitment: this field is strict > and unboxed > (Maybe Coercion) -- co :: arg-ty ~ product-ty > > | HsStrict -- Definite commitment: this field is strict > but not unboxed > > This data type is a bit unclear to me: > > * What are the reasons for the following constructor overlaps? > * `HsNoBang` and `HsUserBang Nothing False` > * `HsStrict` and `HsUserBang Nothing True` > * `HsUnpack mb_co` and `HsUserBang (Just True) True` > > * Why is there a coercion in `HsUnpack` but not in `HsUserBang (Just True) > True`? > > * Is there a difference in what the user wrote in the case of HsUserBang > and HsNoBang/HsUnpack/HsStrict e.g are the latter three generated by the > compiler as opposed to being written by the user (the function > documentation notwithstanding)? > > A very related function is isBanged: > > isBanged :: HsBang -> Bool > isBanged HsNoBang = False > isBanged (HsUserBang Nothing bang) = bang > isBanged _ = True > > What's the meaning of this function? 
Is it intended to communicate what > the user wrote or whether result of what the user wrote results in a strict > function? > > Context: I'm adding a new StrictData language pragma [1] that makes fields > strict by default and a '~' annotation of fields to reverse the default > behavior. My intention is to change HsBang like so: > > - Bool -- True <=> '!' specified > + (Maybe Bool) -- True <=> '!' specified, False <=> '~' > + -- specified, Nothing <=> unspecified > > 1. https://ghc.haskell.org/trac/ghc/wiki/StrictPragma > > -- Johan > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alan.zimm at gmail.com Thu Jan 8 08:22:04 2015 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Thu, 8 Jan 2015 10:22:04 +0200 Subject: Clarification of HsBang and isBanged In-Reply-To: References: Message-ID: I know there was a bug in the parser related to setting the HsBang value, it could be that this whole area has just not received solid scrutiny before now. Alan On Thu, Jan 8, 2015 at 10:15 AM, Johan Tibell wrote: > I also note that the definition of isBanged is confusing: > > isBanged :: HsBang -> Bool > isBanged HsNoBang = False > isBanged (HsUserBang Nothing bang) = bang > isBanged _ = True > > Why is `HsUserBang (Just False) False`, corresponding to a NOUNPACK > annotations with a missing "!", considered "banged"? > > On Thu, Jan 8, 2015 at 8:36 AM, Johan Tibell > wrote: > >> HsBang is defined as: >> >> -- HsBang describes what the *programmer* wrote >> -- This info is retained in the DataCon.dcStrictMarks field >> data HsBang >> = HsUserBang -- The user's source-code request >> (Maybe Bool) -- Just True {-# UNPACK #-} >> -- Just False {-# NOUNPACK #-} >> -- Nothing no pragma >> Bool -- True <=> '!' specified >> >> | HsNoBang -- Lazy field >> -- HsUserBang Nothing False means the same >> as HsNoBang >> >> | HsUnpack -- Definite commitment: this field is >> strict and unboxed >> (Maybe Coercion) -- co :: arg-ty ~ product-ty >> >> | HsStrict -- Definite commitment: this field is >> strict but not unboxed >> >> This data type is a bit unclear to me: >> >> * What are the reasons for the following constructor overlaps? >> * `HsNoBang` and `HsUserBang Nothing False` >> * `HsStrict` and `HsUserBang Nothing True` >> * `HsUnpack mb_co` and `HsUserBang (Just True) True` >> >> * Why is there a coercion in `HsUnpack` but not in `HsUserBang (Just >> True) True`? >> >> * Is there a difference in what the user wrote in the case of HsUserBang >> and HsNoBang/HsUnpack/HsStrict e.g are the latter three generated by the >> compiler as opposed to being written by the user (the function >> documentation notwithstanding)? >> >> A very related function is isBanged: >> >> isBanged :: HsBang -> Bool >> isBanged HsNoBang = False >> isBanged (HsUserBang Nothing bang) = bang >> isBanged _ = True >> >> What's the meaning of this function? Is it intended to communicate what >> the user wrote or whether result of what the user wrote results in a strict >> function? >> >> Context: I'm adding a new StrictData language pragma [1] that makes >> fields strict by default and a '~' annotation of fields to reverse the >> default behavior. My intention is to change HsBang like so: >> >> - Bool -- True <=> '!' specified >> + (Maybe Bool) -- True <=> '!' specified, False <=> '~' >> + -- specified, Nothing <=> unspecified >> >> 1. 
https://ghc.haskell.org/trac/ghc/wiki/StrictPragma >> >> -- Johan >> > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Thu Jan 8 09:32:24 2015 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 8 Jan 2015 09:32:24 +0000 Subject: Redundant constraints In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF5629A2DA@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <618BE556AADD624C9C918AA5D5911BEF5629B07D@DB3PRD3001MB020.064d.mgd.msft.net> Ha ha ha. Very good, yes! | -----Original Message----- | From: Michael Sloan [mailto:mgsloan at gmail.com] | Sent: 08 January 2015 00:06 | To: Johan Tibell | Cc: Simon Peyton Jones; Milan Straka; Bill Mitchell | (Bill.Mitchell at hq.bcs.org.uk); Ross Paterson; ghc-devs at haskell.org | Subject: Re: Redundant constraints | | One option for avoiding the warning without runtime overhead would be | to do something like this: | | f :: Ord a => a -> a -> Bool | f x y = True | where | _ = x < y | | | On Wed, Jan 7, 2015 at 7:27 AM, Johan Tibell | wrote: | > I think this probably makes sense, especially since you can silence | > the warning when you intend to add an unnecessary constraint. | > | > I had one thought though: consider an abstract data type with | > functions that operates over it. I might want to require e.g Ord in | > the definition of a function so I have freedom to change my | > implementation later, even though the current implementation doesn't | > need Ord. Think of it as separating specification and | implementation. | > An example is 'nub'. I initially might implement it as a O(n^2) | > algorithm using only Eq, but I might want to leave the door open to | > using Ord to create something better, without later having to break | backwards compatibility. | > | > On Wed, Jan 7, 2015 at 4:19 PM, Simon Peyton Jones | > | > wrote: | >> | >> Friends | >> | >> I?ve pushed a big patch that adds ?fwarn-redundant-constraints (on | by | >> default). It tells you when a constraint in a signature is | >> unnecessary, e.g. | >> | >> f :: Ord a => a -> a -> Bool | >> | >> f x y = True | >> | >> I think I have done all the necessary library updates etc, so | >> everything should build fine. | >> | >> Four libraries which we don?t maintain have such warnings (MANY of | >> them in | >> transformers) so I?m ccing the maintainers: | >> | >> o containers | >> | >> o haskeline | >> | >> o transformers | >> | >> o binary | >> | >> | >> | >> Enjoy! | >> | >> | >> | >> Simon | >> | >> | >> _______________________________________________ | >> ghc-devs mailing list | >> ghc-devs at haskell.org | >> http://www.haskell.org/mailman/listinfo/ghc-devs | >> | > | > | > _______________________________________________ | > ghc-devs mailing list | > ghc-devs at haskell.org | > http://www.haskell.org/mailman/listinfo/ghc-devs | > From johan.tibell at gmail.com Thu Jan 8 10:01:13 2015 From: johan.tibell at gmail.com (Johan Tibell) Date: Thu, 8 Jan 2015 11:01:13 +0100 Subject: Clarification of HsBang and isBanged In-Reply-To: References: Message-ID: >From looking at the code a bit more I'm pretty sure that only HsUserBang corresponds to what the user wrote and the remaining constructors are used to note the actual decision we made (e.g. are we going to unpack). Is that correct Simon PJ? 
If that is the case, why isn't this information split over two data types (which would make functions over HsBang simpler)? On Thu, Jan 8, 2015 at 9:22 AM, Alan & Kim Zimmerman wrote: > I know there was a bug in the parser related to setting the HsBang value, > it could be that this whole area has just not received solid scrutiny > before now. > > Alan > > On Thu, Jan 8, 2015 at 10:15 AM, Johan Tibell > wrote: > >> I also note that the definition of isBanged is confusing: >> >> isBanged :: HsBang -> Bool >> isBanged HsNoBang = False >> isBanged (HsUserBang Nothing bang) = bang >> isBanged _ = True >> >> Why is `HsUserBang (Just False) False`, corresponding to a NOUNPACK >> annotations with a missing "!", considered "banged"? >> >> On Thu, Jan 8, 2015 at 8:36 AM, Johan Tibell >> wrote: >> >>> HsBang is defined as: >>> >>> -- HsBang describes what the *programmer* wrote >>> -- This info is retained in the DataCon.dcStrictMarks field >>> data HsBang >>> = HsUserBang -- The user's source-code request >>> (Maybe Bool) -- Just True {-# UNPACK #-} >>> -- Just False {-# NOUNPACK #-} >>> -- Nothing no pragma >>> Bool -- True <=> '!' specified >>> >>> | HsNoBang -- Lazy field >>> -- HsUserBang Nothing False means the same >>> as HsNoBang >>> >>> | HsUnpack -- Definite commitment: this field is >>> strict and unboxed >>> (Maybe Coercion) -- co :: arg-ty ~ product-ty >>> >>> | HsStrict -- Definite commitment: this field is >>> strict but not unboxed >>> >>> This data type is a bit unclear to me: >>> >>> * What are the reasons for the following constructor overlaps? >>> * `HsNoBang` and `HsUserBang Nothing False` >>> * `HsStrict` and `HsUserBang Nothing True` >>> * `HsUnpack mb_co` and `HsUserBang (Just True) True` >>> >>> * Why is there a coercion in `HsUnpack` but not in `HsUserBang (Just >>> True) True`? >>> >>> * Is there a difference in what the user wrote in the case of HsUserBang >>> and HsNoBang/HsUnpack/HsStrict e.g are the latter three generated by the >>> compiler as opposed to being written by the user (the function >>> documentation notwithstanding)? >>> >>> A very related function is isBanged: >>> >>> isBanged :: HsBang -> Bool >>> isBanged HsNoBang = False >>> isBanged (HsUserBang Nothing bang) = bang >>> isBanged _ = True >>> >>> What's the meaning of this function? Is it intended to communicate what >>> the user wrote or whether result of what the user wrote results in a strict >>> function? >>> >>> Context: I'm adding a new StrictData language pragma [1] that makes >>> fields strict by default and a '~' annotation of fields to reverse the >>> default behavior. My intention is to change HsBang like so: >>> >>> - Bool -- True <=> '!' specified >>> + (Maybe Bool) -- True <=> '!' specified, False <=> '~' >>> + -- specified, Nothing <=> unspecified >>> >>> 1. https://ghc.haskell.org/trac/ghc/wiki/StrictPragma >>> >>> -- Johan >>> >> >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From roma at ro-che.info Thu Jan 8 13:42:11 2015 From: roma at ro-che.info (Roman Cheplyaka) Date: Thu, 08 Jan 2015 15:42:11 +0200 Subject: seq#: do we actually need it as a primitive? In-Reply-To: <1420704021-sup-5888@sabre> References: <1420704021-sup-5888@sabre> Message-ID: <54AE8933.7030806@ro-che.info> On 08/01/15 10:00, Edward Z. 
Yang wrote: > For posterity, the answer is no, and it is explained in this comment: > https://ghc.haskell.org/trac/ghc/ticket/5129#comment:2 Thanks, this is helpful. So we have three potential implementations for evaluate: (1) \x -> return $! x (2) \x -> (return $! x) >>= return (3) implemented using seq# (1) and (2) are supposed to be equivalent (by the monad law), but are not in reality, since in (2) evaluate x is always a value. The documentation for 'evaluate' talks about the difference between (1) and (2). Furthermore, it suggests that (2) is a valid implementation. (1) is buggy, as explained in #5129 linked above. However, it doesn't say anything about (2). Would (2) still suffer from #5129? In that case, the docs should be fixed. Also, where can I find the 'instance Monad IO' as understood by GHC? grep didn't find one. Roman From roma at ro-che.info Thu Jan 8 13:47:29 2015 From: roma at ro-che.info (Roman Cheplyaka) Date: Thu, 08 Jan 2015 15:47:29 +0200 Subject: seq#: do we actually need it as a primitive? In-Reply-To: <54AE8933.7030806@ro-che.info> References: <1420704021-sup-5888@sabre> <54AE8933.7030806@ro-che.info> Message-ID: <54AE8A71.6020004@ro-che.info> On 08/01/15 15:42, Roman Cheplyaka wrote: > Also, where can I find the 'instance Monad IO' as understood by GHC? > grep didn't find one. Found it; it's in libraries/base/GHC/Base.hs. There are two spaces after "instance"; that's why I didn't find it the first time. Roman From david.feuer at gmail.com Thu Jan 8 13:47:42 2015 From: david.feuer at gmail.com (David Feuer) Date: Thu, 8 Jan 2015 08:47:42 -0500 Subject: seq#: do we actually need it as a primitive? In-Reply-To: <54AE8933.7030806@ro-che.info> References: <1420704021-sup-5888@sabre> <54AE8933.7030806@ro-che.info> Message-ID: On Thu, Jan 8, 2015 at 8:42 AM, Roman Cheplyaka wrote: > Also, where can I find the 'instance Monad IO' as understood by GHC? > grep didn't find one. It's in GHC.Base. From simonpj at microsoft.com Thu Jan 8 15:05:05 2015 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 8 Jan 2015 15:05:05 +0000 Subject: seq#: do we actually need it as a primitive? In-Reply-To: <54AE8933.7030806@ro-che.info> References: <1420704021-sup-5888@sabre> <54AE8933.7030806@ro-che.info> Message-ID: <618BE556AADD624C9C918AA5D5911BEF5629B5ED@DB3PRD3001MB020.064d.mgd.msft.net> No (2) would not suffer from #5129. Think of type IO a = State# -> (State#, a) return x = \s -> (s, x) (>>=) m k s = case m s of (s, r) -> k r s (it's a newtype actually, but this will do here). (2) says = \x -> (return $! x) >>= return = \x. \s. case return $! x s of (s1, r) -> return r s1 = \x\s. x `seq` case (s,x) of (s1, r) -> return r s1 = \x\s. x `seq` (s,x) which is fine. Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of | Roman Cheplyaka | Sent: 08 January 2015 13:42 | To: Edward Z. Yang; David Feuer | Cc: ghc-devs | Subject: Re: seq#: do we actually need it as a primitive? | | On 08/01/15 10:00, Edward Z. Yang wrote: | > For posterity, the answer is no, and it is explained in this | comment: | > https://ghc.haskell.org/trac/ghc/ticket/5129#comment:2 | | Thanks, this is helpful. | | So we have three potential implementations for evaluate: | | (1) \x -> return $! x | (2) \x -> (return $! x) >>= return | (3) implemented using seq# | | (1) and (2) are supposed to be equivalent (by the monad law), but are | not in reality, since in (2) evaluate x is always a value. 
| | The documentation for 'evaluate' talks about the difference between | (1) and (2). Furthermore, it suggests that (2) is a valid | implementation. | | (1) is buggy, as explained in #5129 linked above. However, it doesn't | say anything about (2). | | Would (2) still suffer from #5129? In that case, the docs should be | fixed. | | Also, where can I find the 'instance Monad IO' as understood by GHC? | grep didn't find one. | | Roman | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | http://www.haskell.org/mailman/listinfo/ghc-devs From simonpj at microsoft.com Thu Jan 8 15:09:52 2015 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 8 Jan 2015 15:09:52 +0000 Subject: Clarification of HsBang and isBanged In-Reply-To: References: Message-ID: <618BE556AADD624C9C918AA5D5911BEF5629B640@DB3PRD3001MB020.064d.mgd.msft.net> I?m glad you are getting back to strictness. Good questions. I?ve pushed (or will as soon as I have validated) a patch that adds type synonyms, updates comments (some of which were indeed misleading), and changes a few names for clarity and consistency. I hope that answers all your questions. Except these: ? Why is there a coercion in `HsUnpack` but not in `HsUserBang (Just True) True`? Because the former is implementation generated but the latter is source code specified. ? Why isn't this information split over two data types. Because there?s a bit of overlap. See comments with HsSrcBang Simon From: Johan Tibell [mailto:johan.tibell at gmail.com] Sent: 08 January 2015 07:36 To: ghc-devs at haskell.org Cc: Simon Peyton Jones Subject: Clarification of HsBang and isBanged HsBang is defined as: -- HsBang describes what the *programmer* wrote -- This info is retained in the DataCon.dcStrictMarks field data HsBang = HsUserBang -- The user's source-code request (Maybe Bool) -- Just True {-# UNPACK #-} -- Just False {-# NOUNPACK #-} -- Nothing no pragma Bool -- True <=> '!' specified | HsNoBang -- Lazy field -- HsUserBang Nothing False means the same as HsNoBang | HsUnpack -- Definite commitment: this field is strict and unboxed (Maybe Coercion) -- co :: arg-ty ~ product-ty | HsStrict -- Definite commitment: this field is strict but not unboxed This data type is a bit unclear to me: * What are the reasons for the following constructor overlaps? * `HsNoBang` and `HsUserBang Nothing False` * `HsStrict` and `HsUserBang Nothing True` * `HsUnpack mb_co` and `HsUserBang (Just True) True` * Why is there a coercion in `HsUnpack` but not in `HsUserBang (Just True) True`? * Is there a difference in what the user wrote in the case of HsUserBang and HsNoBang/HsUnpack/HsStrict e.g are the latter three generated by the compiler as opposed to being written by the user (the function documentation notwithstanding)? A very related function is isBanged: isBanged :: HsBang -> Bool isBanged HsNoBang = False isBanged (HsUserBang Nothing bang) = bang isBanged _ = True What's the meaning of this function? Is it intended to communicate what the user wrote or whether result of what the user wrote results in a strict function? Context: I'm adding a new StrictData language pragma [1] that makes fields strict by default and a '~' annotation of fields to reverse the default behavior. My intention is to change HsBang like so: - Bool -- True <=> '!' specified + (Maybe Bool) -- True <=> '!' specified, False <=> '~' + -- specified, Nothing <=> unspecified 1. 
https://ghc.haskell.org/trac/ghc/wiki/StrictPragma -- Johan -------------- next part -------------- An HTML attachment was scrubbed... URL: From roma at ro-che.info Thu Jan 8 16:26:12 2015 From: roma at ro-che.info (Roman Cheplyaka) Date: Thu, 08 Jan 2015 18:26:12 +0200 Subject: seq#: do we actually need it as a primitive? In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF5629B5ED@DB3PRD3001MB020.064d.mgd.msft.net> References: <1420704021-sup-5888@sabre> <54AE8933.7030806@ro-che.info> <618BE556AADD624C9C918AA5D5911BEF5629B5ED@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <54AEAFA4.1090302@ro-che.info> Then why was the primop introduced? On 08/01/15 17:05, Simon Peyton Jones wrote: > No (2) would not suffer from #5129. Think of > > type IO a = State# -> (State#, a) > return x = \s -> (s, x) > (>>=) m k s = case m s of (s, r) -> k r s > > (it's a newtype actually, but this will do here). > > (2) says > > = \x -> (return $! x) >>= return > = \x. \s. case return $! x s of (s1, r) -> return r s1 > = \x\s. x `seq` case (s,x) of (s1, r) -> return r s1 > = \x\s. x `seq` (s,x) > > which is fine. > > Simon > > > | -----Original Message----- > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of > | Roman Cheplyaka > | Sent: 08 January 2015 13:42 > | To: Edward Z. Yang; David Feuer > | Cc: ghc-devs > | Subject: Re: seq#: do we actually need it as a primitive? > | > | On 08/01/15 10:00, Edward Z. Yang wrote: > | > For posterity, the answer is no, and it is explained in this > | comment: > | > https://ghc.haskell.org/trac/ghc/ticket/5129#comment:2 > | > | Thanks, this is helpful. > | > | So we have three potential implementations for evaluate: > | > | (1) \x -> return $! x > | (2) \x -> (return $! x) >>= return > | (3) implemented using seq# > | > | (1) and (2) are supposed to be equivalent (by the monad law), but are > | not in reality, since in (2) evaluate x is always a value. > | > | The documentation for 'evaluate' talks about the difference between > | (1) and (2). Furthermore, it suggests that (2) is a valid > | implementation. > | > | (1) is buggy, as explained in #5129 linked above. However, it doesn't > | say anything about (2). > | > | Would (2) still suffer from #5129? In that case, the docs should be > | fixed. > | > | Also, where can I find the 'instance Monad IO' as understood by GHC? > | grep didn't find one. > | > | Roman > | _______________________________________________ > | ghc-devs mailing list > | ghc-devs at haskell.org > | http://www.haskell.org/mailman/listinfo/ghc-devs > From david.macek.0 at gmail.com Thu Jan 8 17:00:35 2015 From: david.macek.0 at gmail.com (David Macek) Date: Thu, 08 Jan 2015 18:00:35 +0100 Subject: Windows build gotchas In-Reply-To: References: Message-ID: <54AEB7B3.5000602@gmail.com> On 1. 1. 2015 19:01, Martin Foster wrote: > Hello all, > > I've been spending some of my winter break trying my hand at compiling GHC, with a mind to hopefully contributing down the line. > > I've got it working, but I ran into a few things along the way that I figure might be worth fixing and/or documenting. In the approximate order I encountered them: > > * The first pacman mirror on the list bundled with MSYS2 is down, with the result that every download pacman makes takes ~10sec longer than it should. It downloads a lot, so that really adds up - but it's easy to fix, just "pacman -Sy pacman-mirrors" before doing anything else with it. Is that worth mentioning on the wiki? 
I was thinking a line on https://ghc.haskell.org/trac/ghc/wiki/Building/Preparation/Windows could be helpful. This is an unfortunate, but temporary situation. The next MSYS2 installer will come with updated mirror lists. I don't know what's the policy on including this kind of information on the wiki. > * That page mentions "If you see errors related to fork(), try closing and reopening the shell" - I've determined that you can reliably avoid that problem by following the instructions at http://sourceforge.net/p/msys2/wiki/MSYS2%20installation/#iii-updating-packages, ie by running "pacman --needed -S bash pacman msys2-runtime", then closing & re-opening the MSYS shell, before you tell pacman to install the GHC prerequisite packages. This may be true for the GHC guide, but AFAIK if you decide to install other packages, you may still encounter fork errors. The installation process is taken care of by updating bash, pacman and the runtime separately, but subsequent invocations of MSYS2 programs could still fail due to newly installed MSYS2 libraries (if any). > * And finally, the big one: cabal and/or ghc-pkg put some files outside the MSYS root directory, and caused me no end of trouble in doing so... > > I made a bit of a mess at one point, and tried to fix it by starting over completely from scratch. I expected uninstalling & reinstalling MSYS to achieve this (it deletes its root directory when you uninstall it), but that left me with a huge pile of errors when I tried to run "cabal install -j --prefix=/usr/local alex happy", of the form "Could not find module `...': There are files missing in the `...' package". > > I noticed that the cabal output made reference to "C:\Users\Martin\AppData\Roaming\cabal\", so tried moving that out of the way, but it only made the problem worse. I did figure it out eventually: in addition to that directory, "%APPDATA%\cabal", there were also files left over in "%APPDATA%\ghc". Once I removed that directory as well, things started working again - but it took me a lot of time & frustration to get there. > > I'm not entirely sure, but I think the copy of Cabal I already had from installing the Platform may also have been storing files in those directories, in which case this process completely mangled them - which isn't great. > > It seems to me that, ideally, the "build GHC inside MSYS" procedure would keep itself entirely inside the MSYS directory structure: if it were wholly self-contained, you'd know where everything is, and it couldn't break anything outside. As far as I can tell, the only breach is those two directories courtesy of Cabal, so I didn't think it would be too difficult - but none of the things I've tried (the --package-db cabal flag, a custom cabal --config-file, setting the GHC_PACKAGE_PATH environment variable, maybe some others I've forgotten) had the desired effect. Is it possible? Is it even a good idea? > > If that's just how it has to be, I feel like there should be an obvious note somewhere for the sake of the next person to trip over it. I had problems with this also, so I definitely support mentioning these two on the wiki. If we ever get to having a ghc package for MSYS2, it will use $HOME instead of $APPDATA, but that won't actually help with the problem of MSYS2 re-install not cleaning everything the build left behind. -- David Macek -------------- next part -------------- A non-text attachment was scrubbed... 
Name: smime.p7s Type: application/pkcs7-signature Size: 4234 bytes Desc: S/MIME Cryptographic Signature URL: From R.Paterson at city.ac.uk Thu Jan 8 17:25:38 2015 From: R.Paterson at city.ac.uk (Ross Paterson) Date: Thu, 8 Jan 2015 17:25:38 +0000 Subject: Redundant constraints In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF5629A2DA@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF5629A2DA@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <20150108172537.GA25675@city.ac.uk> On Wed, Jan 07, 2015 at 03:19:15PM +0000, Simon Peyton Jones wrote: > I?ve pushed a big patch that adds ?fwarn-redundant-constraints (on by > default). It tells you when a constraint in a signature is unnecessary, e.g. > > f :: Ord a => a -> a -> Bool > > f x y = True > > I think I have done all the necessary library updates etc, so everything > should build fine. > > Four libraries which we don?t maintain have such warnings (MANY of them in > transformers) so I?m ccing the maintainers: I've fixed some of the warnings in transformers, but there are still 14 of them, triggered by Applicative becoming a superclass of Monad. I can't get rid of those because the package has to build with old GHCs when bootstrapping the compiler. On Wed, Jan 07, 2015 at 04:27:21PM +0100, Johan Tibell wrote: > I had one thought though: consider an abstract data type with functions > that operates over it. I might want to require e.g Ord in the definition > of a function so I have freedom to change my implementation later, > even though the current implementation doesn't need Ord. Think of it > as separating specification and implementation. I think some of the changes already made are of this sort, exposing details of the GHC implementation, e.g. the changes to the public interface of Array and Ratio. For example, it's probably reasonable to remove the Ix constraint from Data.Array.bounds, but the portable reference implementation of Data.Array.elems requires Ix, even though the GHC implementation doesn't. Similarly a portable implementation of the Functor instance for Array i requires Ix, but the GHC implementation doesn't. 
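To make the elems point concrete: a portable implementation can only reach the elements by enumerating the indices, and that enumeration is exactly where Ix comes in. A rough sketch (mine, not the actual base source):

    import Data.Array

    -- Enumerate the index range and look each element up; 'range'
    -- needs Ix, so the constraint is not redundant here, even though
    -- GHC's own implementation walks the flat element store directly
    -- and never uses the indices.
    elemsPortable :: Ix i => Array i e -> [e]
    elemsPortable arr = [ arr ! i | i <- range (bounds arr) ]
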
From mail at joachim-breitner.de Thu Jan 8 19:58:30 2015 From: mail at joachim-breitner.de (Joachim Breitner) Date: Thu, 08 Jan 2015 20:58:30 +0100 Subject: linker_unload In-Reply-To: <548021A2.4050707@gmail.com> References: <618BE556AADD624C9C918AA5D5911BEF3F3D93CE@DB3PRD3001MB020.064d.mgd.msft.net> <5473063F.5080101@gmail.com> <87d28cpxsj.fsf@gmail.com> <1417595155.2162.2.camel@joachim-breitner.de> <618BE556AADD624C9C918AA5D5911BEF3F3F1301@DB3PRD3001MB020.064d.mgd.msft.net> <877fy9kpdx.fsf@gmail.com> <547F0EAD.1070100@gmail.com> <1417613121.2162.8.camel@joachim-breitner.de> <548021A2.4050707@gmail.com> Message-ID: <1420747110.32037.4.camel@joachim-breitner.de> Dear Simon, Am Donnerstag, den 04.12.2014, 08:56 +0000 schrieb Simon Marlow: > > no, does not help, as -lgmp is already passed to gcc by ghc: > > > > > > /usr/bin/gcc -fno-stack-protector -DTABLES_NEXT_TO_CODE -o > > linker_unload linker_unload.o > > -L/data1/ghc-builder/ghc-master/libraries/base/dist-install/build -L/data1/ghc-builder/ghc-master/libraries/integer-gmp2/dist-install/build -L/data1/ghc-builder/ghc-master/libraries/ghc-prim/dist-install/build -L/data1/ghc-builder/ghc-master/rts/dist/build /tmp/ghc26637_1/ghc26637_4.o /tmp/ghc26637_1/ghc26637_6.o -Wl,-u,ghczmprim_GHCziTypes_Izh_static_info -Wl,-u,ghczmprim_GHCziTypes_Czh_static_info -Wl,-u,ghczmprim_GHCziTypes_Fzh_static_info -Wl,-u,ghczmprim_GHCziTypes_Dzh_static_info -Wl,-u,base_GHCziPtr_Ptr_static_info -Wl,-u,ghczmprim_GHCziTypes_Wzh_static_info -Wl,-u,base_GHCziInt_I8zh_static_info -Wl,-u,base_GHCziInt_I16zh_static_info -Wl,-u,base_GHCziInt_I32zh_static_info -Wl,-u,base_GHCziInt_I64zh_static_info -Wl,-u,base_GHCziWord_W8zh_static_info -Wl,-u,base_GHCziWord_W16zh_static_info -Wl,-u,base_GHCziWord_W32zh_static_info -Wl,-u,base_GHCziWord_W64zh_static_info -Wl,-u,base_GHCziStable_StablePtr_static_info -Wl,-u,ghczmprim_GHCziTypes_Izh_con_info -Wl,-u,ghczmp > rim_GHCziTypes_Czh_con_info -Wl,-u,ghczmprim_GHCziTypes_Fzh_con_info -Wl,-u,ghczmprim_GHCziTypes_Dzh_con_info -Wl,-u,base_GHCziPtr_Ptr_con_info -Wl,-u,base_GHCziPtr_FunPtr_con_info -Wl,-u,base_GHCziStable_StablePtr_con_info -Wl,-u,ghczmprim_GHCziTypes_False_closure -Wl,-u,ghczmprim_GHCziTypes_True_closure -Wl,-u,base_GHCziPack_unpackCString_closure -Wl,-u,base_GHCziIOziException_stackOverflow_closure -Wl,-u,base_GHCziIOziException_heapOverflow_closure -Wl,-u,base_ControlziExceptionziBase_nonTermination_closure -Wl,-u,base_GHCziIOziException_blockedIndefinitelyOnMVar_closure -Wl,-u,base_GHCziIOziException_blockedIndefinitelyOnSTM_closure -Wl,-u,base_GHCziIOziException_allocationLimitExceeded_closure -Wl,-u,base_ControlziExceptionziBase_nestedAtomically_closure -Wl,-u,base_GHCziEventziThread_blockedOnBadFD_closure -Wl,-u,base_GHCziWeak_runFinalizzerBatch_closure -Wl,-u,base_GHCziTopHandler_flushStdHandles_closure -Wl,-u,base_GHCziTopHandler_runIO_closure -Wl,-u,base_GHCziTopHandler_run > NonIO_closure -Wl,-u,base_GHCziConcziIO_ensureIOManagerIsRunning_closure -Wl,-u,base_GHCziConcziIO_ioManagerCapabilitiesChanged_closure -Wl,-u,base_GHCziConcziSync_runSparks_closure -Wl,-u,base_GHCziConcziSignal_runHandlers_closure -lHSbase_469rOtLAqwTGFEOGWxSUiQ -lHSinteg_21cuTlnn00eFNd4GMrxOMi -lHSghcpr_FgrV6cgh2JHBlbcx1OSlwt -lHSrts_debug -lCffi > > -lgmp -lm -lrt -ldl '-Wl,--hash-size=31' > > -Wl,--reduce-memory-overheads > > > > but due to --no-needed, and linker_unload indeed not requiring any > > symbols from gmp, the linker does not link it. > > Ok, I was afraid of that. 
The test needs to be fixed to explicitly > dlopen("libgmp"). I'll take a look at it today. this is still one of the test suite failures showing up at the performance builders. Would you mind having a look? Thanks, Joachim -- Joachim ?nomeata? Breitner mail at joachim-breitner.de ? http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From kolmodin at gmail.com Thu Jan 8 20:26:27 2015 From: kolmodin at gmail.com (Lennart Kolmodin) Date: Fri, 9 Jan 2015 00:26:27 +0400 Subject: Redundant constraints In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF5629A2DA@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF5629A2DA@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: 2015-01-07 18:19 GMT+03:00 Simon Peyton Jones : > Friends > > I?ve pushed a big patch that adds ?fwarn-redundant-constraints (on by > default). It tells you when a constraint in a signature is unnecessary, > e.g. > > f :: Ord a => a -> a -> Bool > > f x y = True > > I think I have done all the necessary library updates etc, so everything > should build fine. > > Four libraries which we don?t maintain have such warnings (MANY of them in > transformers) so I?m ccing the maintainers: > > o containers > > o haskeline > > o transformers > > o binary > I'd like to update binary to not have any unnecessary constraints. I couldn't find any though. commit c409b6f30373535b6eed92e55d4695688d32be9e removes unnecessary constraints from ghc maintained libraries, and silences the redundant-constraints warnings from the other libraries containers, haskeline and transformers. I couldn't find anything related to binary though, nor any warnings in the build log. If there are any, please let me know, or file a bug at http://github.com/kolmodin/binary Thanks! Lennart > Enjoy! > > > > Simon > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alan.zimm at gmail.com Thu Jan 8 21:45:40 2015 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Thu, 8 Jan 2015 23:45:40 +0200 Subject: warn-redundant-constraints present as errors Message-ID: This is a great feature, here is some feedback My syntax highlighter in emacs expects warnings to have the word "warning" in them. So for the two warnings reported below, the first is highlighted as an error, and the second as a warning Language/Haskell/Refact/Utils/TypeUtils.hs:3036:17: Redundant constraint: SYB.Data t In the type signature for: duplicateDecl :: SYB.Data t => [GHC.LHsBind GHC.Name] -> t -> GHC.Name -> GHC.Name -> RefactGhc [GHC.LHsBind GHC.Name] Language/Haskell/Refact/Utils/TypeUtils.hs:3045:7: Warning: Defined but not used: ?toks This is in a ghci session, and the file loads without problems, so it is indeed a warning. Can we perhaps add the word "Warning" to the output for Redundant constraints? I also had a situation where it asked me to remove a whole lot of constraints from different functions, I did them in batches, so did not remove them all from the file at once, and at some point I had to add at least one of them back, albeit based on an error message. Regards Alan -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From scpmw at leeds.ac.uk Thu Jan 8 22:51:11 2015 From: scpmw at leeds.ac.uk (Peter Wortmann) Date: Thu, 08 Jan 2015 23:51:11 +0100 Subject: Shipping core libraries with debug symbols In-Reply-To: References: Message-ID: (sorry for late answer) Yes, that's pretty much what this would boil down to. The patch is trivial: https://github.com/scpmw/ghc/commit/29acc#diff-1 I think this is a good idea anyways. We can always re-introduce the data for higher -g levels. Greetings, Peter On 05/01/2015 00:59, Johan Tibell wrote: > What about keeping exactly what -g1 keeps for gcc (i.e. functions, > external variables, and line number tables)? > > On Sun, Jan 4, 2015 at 5:48 PM, Peter Wortmann > wrote: > > > > Okay, I ran a little experiment - here's the size of the debug > sections that Fission would keep (for base library): > > .debug_abbrev: 8932 - 0.06% > .debug_line: 374134 - 2.6% > .debug_frame: 671200 - 4.5% > > Not that much. On the other hand, .debug_info is a significant > contributor: > > .debug_info(full): 4527391 - 30% > > Here's what this contains: All procs get a corresponding DWARF > entry, and we declare all Cmm blocks as "lexical blocks". The latter > isn't actually required right now - to my knowledge, GDB simply > ignores it, while LLDB shows it as "inlined" routines. In either > case, it just shows yet more GHC-generated names, so it's really > only useful for profiling tools that know Cmm block names. > > So here's what we get if we strip out block information: > > .debug_info(!block): 1688410 - 11% > > This eliminates a good chunk of information, and might therefore be > a good idea for "-g1" at minimum. If we want this as default for > 7.10, this would make the total overhead about 18%. Acceptable? I > can supply a patch if needed. > > Just for comparison - for Fission we'd strip proc records as well, > which would cause even more extreme savings: > > .debug_info(!proc): 36081 - 0.2% > > At this point the overhead would be just about 7% - but without > doing Fission properly this would most certainly affect debuggers. > > Greetings, > Peter > > On 03/01/2015 21:22, Johan Tibell wrote: > > How much debug info (as a percentage) do we currently generate? Could we just keep it in there in the release? > > _________________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/__mailman/listinfo/ghc-devs > > > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > From simonpj at microsoft.com Thu Jan 8 23:14:40 2015 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 8 Jan 2015 23:14:40 +0000 Subject: Redundant constraints In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF5629A2DA@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <618BE556AADD624C9C918AA5D5911BEF5629BD40@DB3PRD3001MB020.064d.mgd.msft.net> I was wrong about binary, sorry. It was just the other three Simon From: Lennart Kolmodin [mailto:kolmodin at gmail.com] Sent: 08 January 2015 20:26 To: Simon Peyton Jones Cc: ghc-devs at haskell.org; Milan Straka; Bill Mitchell (Bill.Mitchell at hq.bcs.org.uk); Judah Jacobson; Ross Paterson Subject: Re: Redundant constraints 2015-01-07 18:19 GMT+03:00 Simon Peyton Jones >: Friends I?ve pushed a big patch that adds ?fwarn-redundant-constraints (on by default). It tells you when a constraint in a signature is unnecessary, e.g. 
f :: Ord a => a -> a -> Bool f x y = True I think I have done all the necessary library updates etc, so everything should build fine. Four libraries which we don?t maintain have such warnings (MANY of them in transformers) so I?m ccing the maintainers: o containers o haskeline o transformers o binary I'd like to update binary to not have any unnecessary constraints. I couldn't find any though. commit c409b6f30373535b6eed92e55d4695688d32be9e removes unnecessary constraints from ghc maintained libraries, and silences the redundant-constraints warnings from the other libraries containers, haskeline and transformers. I couldn't find anything related to binary though, nor any warnings in the build log. If there are any, please let me know, or file a bug at http://github.com/kolmodin/binary Thanks! Lennart Enjoy! Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From johan.tibell at gmail.com Fri Jan 9 07:46:59 2015 From: johan.tibell at gmail.com (Johan Tibell) Date: Fri, 9 Jan 2015 08:46:59 +0100 Subject: Shipping core libraries with debug symbols In-Reply-To: References: Message-ID: We should merge this fix to the 7.10 branch. On Jan 8, 2015 11:52 PM, "Peter Wortmann" wrote: > > (sorry for late answer) > > Yes, that's pretty much what this would boil down to. The patch is trivial: > > https://github.com/scpmw/ghc/commit/29acc#diff-1 > > I think this is a good idea anyways. We can always re-introduce the data > for higher -g levels. > > Greetings, > Peter > > > On 05/01/2015 00:59, Johan Tibell wrote: > >> What about keeping exactly what -g1 keeps for gcc (i.e. functions, >> external variables, and line number tables)? >> >> On Sun, Jan 4, 2015 at 5:48 PM, Peter Wortmann > > wrote: >> >> >> >> Okay, I ran a little experiment - here's the size of the debug >> sections that Fission would keep (for base library): >> >> .debug_abbrev: 8932 - 0.06% >> .debug_line: 374134 - 2.6% >> .debug_frame: 671200 - 4.5% >> >> Not that much. On the other hand, .debug_info is a significant >> contributor: >> >> .debug_info(full): 4527391 - 30% >> >> Here's what this contains: All procs get a corresponding DWARF >> entry, and we declare all Cmm blocks as "lexical blocks". The latter >> isn't actually required right now - to my knowledge, GDB simply >> ignores it, while LLDB shows it as "inlined" routines. In either >> case, it just shows yet more GHC-generated names, so it's really >> only useful for profiling tools that know Cmm block names. >> >> So here's what we get if we strip out block information: >> >> .debug_info(!block): 1688410 - 11% >> >> This eliminates a good chunk of information, and might therefore be >> a good idea for "-g1" at minimum. If we want this as default for >> 7.10, this would make the total overhead about 18%. Acceptable? I >> can supply a patch if needed. >> >> Just for comparison - for Fission we'd strip proc records as well, >> which would cause even more extreme savings: >> >> .debug_info(!proc): 36081 - 0.2% >> >> At this point the overhead would be just about 7% - but without >> doing Fission properly this would most certainly affect debuggers. >> >> Greetings, >> Peter >> >> On 03/01/2015 21:22, Johan Tibell wrote: >> > How much debug info (as a percentage) do we currently generate? >> Could we just keep it in there in the release? 
>> >> _________________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/__mailman/listinfo/ghc-devs >> >> >> >> >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> >> > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From k-bx at k-bx.com Fri Jan 9 09:18:30 2015 From: k-bx at k-bx.com (Konstantine Rybnikov) Date: Fri, 9 Jan 2015 11:18:30 +0200 Subject: warn-redundant-constraints present as errors In-Reply-To: References: Message-ID: On a slightly unrelated note I should say it would be great to have errors contain word "Error:". This is especially nice to have because when you build with "-j" your error that stops compilation gets lost somewhere in the middle of many warnings (which my projects have, unfortunately). On Thu, Jan 8, 2015 at 11:45 PM, Alan & Kim Zimmerman wrote: > This is a great feature, here is some feedback > > My syntax highlighter in emacs expects warnings to have the word "warning" > in them. > > So for the two warnings reported below, the first is highlighted as an > error, and the second as a warning > > > Language/Haskell/Refact/Utils/TypeUtils.hs:3036:17: > Redundant constraint: SYB.Data t > In the type signature for: > duplicateDecl :: SYB.Data t => > [GHC.LHsBind GHC.Name] > -> t -> GHC.Name -> GHC.Name -> RefactGhc > [GHC.LHsBind GHC.Name] > > Language/Haskell/Refact/Utils/TypeUtils.hs:3045:7: Warning: > Defined but not used: ?toks > > > This is in a ghci session, and the file loads without problems, so it is > indeed a warning. > > Can we perhaps add the word "Warning" to the output for Redundant > constraints? > > I also had a situation where it asked me to remove a whole lot of > constraints from different functions, I did them in batches, so did not > remove them all from the file at once, and at some point I had to add at > least one of them back, albeit based on an error message. > > > Regards > Alan > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jan.stolarek at p.lodz.pl Fri Jan 9 09:22:37 2015 From: jan.stolarek at p.lodz.pl (Jan Stolarek) Date: Fri, 9 Jan 2015 10:22:37 +0100 Subject: Deprecating functions Message-ID: <201501091022.37679.jan.stolarek@p.lodz.pl> I want to deprecate some Template Haskell functions in GHC 7.12 and then remove them in GHC 7.14 (or whatever version that comes after 7.12). Do we have any workflow for remembering these kind of things? I was thinking about adding a conditional error: #if __GLASGOW_HASKELL__ > 712 #error Remove functions foo bar from TH #endif Is this a good way of doing this? Or should I do it differently? Janek From simonpj at microsoft.com Fri Jan 9 09:39:50 2015 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 9 Jan 2015 09:39:50 +0000 Subject: warn-redundant-constraints present as errors In-Reply-To: References: Message-ID: <618BE556AADD624C9C918AA5D5911BEF5629C106@DB3PRD3001MB020.064d.mgd.msft.net> Alan?s point is a bug ? I will fix. Konstantine?s point is reasonable. 
we could easily say Language/Haskell/Refact/Utils/TypeUtils.hs:3045:7: Error: blah blah (the bit in red is the new bit) But I?m not sure that everyone else would want that. If a consensus forms it would be easy to excecute I suppose there could be yet another flag to control it (!) Simon From: Konstantine Rybnikov [mailto:k-bx at k-bx.com] Sent: 09 January 2015 09:19 To: Alan & Kim Zimmerman Cc: ghc-devs at haskell.org; Simon Peyton Jones Subject: Re: warn-redundant-constraints present as errors On a slightly unrelated note I should say it would be great to have errors contain word "Error:". This is especially nice to have because when you build with "-j" your error that stops compilation gets lost somewhere in the middle of many warnings (which my projects have, unfortunately). On Thu, Jan 8, 2015 at 11:45 PM, Alan & Kim Zimmerman > wrote: This is a great feature, here is some feedback My syntax highlighter in emacs expects warnings to have the word "warning" in them. So for the two warnings reported below, the first is highlighted as an error, and the second as a warning Language/Haskell/Refact/Utils/TypeUtils.hs:3036:17: Redundant constraint: SYB.Data t In the type signature for: duplicateDecl :: SYB.Data t => [GHC.LHsBind GHC.Name] -> t -> GHC.Name -> GHC.Name -> RefactGhc [GHC.LHsBind GHC.Name] Language/Haskell/Refact/Utils/TypeUtils.hs:3045:7: Warning: Defined but not used: ?toks This is in a ghci session, and the file loads without problems, so it is indeed a warning. Can we perhaps add the word "Warning" to the output for Redundant constraints? I also had a situation where it asked me to remove a whole lot of constraints from different functions, I did them in batches, so did not remove them all from the file at once, and at some point I had to add at least one of them back, albeit based on an error message. Regards Alan _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://www.haskell.org/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From johan.tibell at gmail.com Fri Jan 9 09:54:03 2015 From: johan.tibell at gmail.com (Johan Tibell) Date: Fri, 9 Jan 2015 10:54:03 +0100 Subject: Deprecating functions In-Reply-To: <201501091022.37679.jan.stolarek@p.lodz.pl> References: <201501091022.37679.jan.stolarek@p.lodz.pl> Message-ID: We could file a tracking bug against the 7.14 milestone. Just curious, is there a way to keep these functions for backwards compat in 7.14 or is that unfeasible? On Fri, Jan 9, 2015 at 10:22 AM, Jan Stolarek wrote: > I want to deprecate some Template Haskell functions in GHC 7.12 and then > remove them in GHC 7.14 > (or whatever version that comes after 7.12). Do we have any workflow for > remembering these kind > of things? I was thinking about adding a conditional error: > > #if __GLASGOW_HASKELL__ > 712 > #error Remove functions foo bar from TH > #endif > > Is this a good way of doing this? Or should I do it differently? > > Janek > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From johan.tibell at gmail.com Fri Jan 9 09:56:57 2015 From: johan.tibell at gmail.com (Johan Tibell) Date: Fri, 9 Jan 2015 10:56:57 +0100 Subject: warn-redundant-constraints present as errors In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF5629C106@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF5629C106@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: I think using the words error and warning makes sense. For example, this is how Clang (LLVM) does it: format-strings.c:91:13: warning: '.*' specified field precision is missing a matching 'int' argument printf("%.*d"); ^ t.c:7:39: error: invalid operands to binary expression ('int' and 'struct A') return y + func(y ? ((SomeA.X + 40) + SomeA) / 42 + SomeA.X : SomeA.X); ~~~~~~~~~~~~~~ ^ ~~~~~ (Also note how lovely it is to have a caret pointing at the error.) On Fri, Jan 9, 2015 at 10:39 AM, Simon Peyton Jones wrote: > Alan?s point is a bug ? I will fix. > > > > Konstantine?s point is reasonable. we could easily say > > > > Language/Haskell/Refact/Utils/TypeUtils.hs:3045:7: Error: > blah blah > > > > (the bit in red is the new bit) > > But I?m not sure that everyone else would want that. If a consensus > forms it would be easy to excecute > > > > I suppose there could be yet another flag to control it (!) > > > > Simon > > > > *From:* Konstantine Rybnikov [mailto:k-bx at k-bx.com] > *Sent:* 09 January 2015 09:19 > *To:* Alan & Kim Zimmerman > *Cc:* ghc-devs at haskell.org; Simon Peyton Jones > *Subject:* Re: warn-redundant-constraints present as errors > > > > On a slightly unrelated note I should say it would be great to have errors > contain word "Error:". This is especially nice to have because when you > build with "-j" your error that stops compilation gets lost somewhere in > the middle of many warnings (which my projects have, unfortunately). > > > > On Thu, Jan 8, 2015 at 11:45 PM, Alan & Kim Zimmerman > wrote: > > This is a great feature, here is some feedback > > My syntax highlighter in emacs expects warnings to have the word "warning" > in them. > > So for the two warnings reported below, the first is highlighted as an > error, and the second as a warning > > > Language/Haskell/Refact/Utils/TypeUtils.hs:3036:17: > Redundant constraint: SYB.Data t > In the type signature for: > duplicateDecl :: SYB.Data t => > [GHC.LHsBind GHC.Name] > -> t -> GHC.Name -> GHC.Name -> RefactGhc > [GHC.LHsBind GHC.Name] > > Language/Haskell/Refact/Utils/TypeUtils.hs:3045:7: Warning: > Defined but not used: ?toks > > This is in a ghci session, and the file loads without problems, so it > is indeed a warning. > > Can we perhaps add the word "Warning" to the output for Redundant > constraints? > > I also had a situation where it asked me to remove a whole lot of > constraints from different functions, I did them in batches, so did not > remove them all from the file at once, and at some point I had to add at > least one of them back, albeit based on an error message. > > > > Regards > > Alan > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jan.stolarek at p.lodz.pl Fri Jan 9 10:02:04 2015 From: jan.stolarek at p.lodz.pl (Jan Stolarek) Date: Fri, 9 Jan 2015 11:02:04 +0100 Subject: Deprecating functions In-Reply-To: References: <201501091022.37679.jan.stolarek@p.lodz.pl> Message-ID: <201501091102.04230.jan.stolarek@p.lodz.pl> > We could file a tracking bug against the 7.14 milestone. I was considering that but we don't have 7.14 milestone yet. > Just curious, is there a way to keep these functions for backwards compat > in 7.14 or is that unfeasible? They could stay, technically that's not a problem. But I'm adding new functions that can do the same thing (and more), so we have redundancy. Janek > > On Fri, Jan 9, 2015 at 10:22 AM, Jan Stolarek > > wrote: > > I want to deprecate some Template Haskell functions in GHC 7.12 and then > > remove them in GHC 7.14 > > (or whatever version that comes after 7.12). Do we have any workflow for > > remembering these kind > > of things? I was thinking about adding a conditional error: > > > > #if __GLASGOW_HASKELL__ > 712 > > #error Remove functions foo bar from TH > > #endif > > > > Is this a good way of doing this? Or should I do it differently? > > > > Janek > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://www.haskell.org/mailman/listinfo/ghc-devs From roma at ro-che.info Fri Jan 9 10:09:07 2015 From: roma at ro-che.info (Roman Cheplyaka) Date: Fri, 09 Jan 2015 12:09:07 +0200 Subject: Deprecating functions In-Reply-To: <201501091102.04230.jan.stolarek@p.lodz.pl> References: <201501091022.37679.jan.stolarek@p.lodz.pl> <201501091102.04230.jan.stolarek@p.lodz.pl> Message-ID: <54AFA8C3.2040500@ro-che.info> On 09/01/15 12:02, Jan Stolarek wrote: >> We could file a tracking bug against the 7.14 milestone. > I was considering that but we don't have 7.14 milestone yet. > >> Just curious, is there a way to keep these functions for backwards compat >> in 7.14 or is that unfeasible? > They could stay, technically that's not a problem. But I'm adding new functions that can do the > same thing (and more), so we have redundancy. Can you hide them in the haddock but leave in the module, so that we don't break existing code? Roman From johan.tibell at gmail.com Fri Jan 9 10:12:57 2015 From: johan.tibell at gmail.com (Johan Tibell) Date: Fri, 9 Jan 2015 11:12:57 +0100 Subject: Deprecating functions In-Reply-To: <54AFA8C3.2040500@ro-che.info> References: <201501091022.37679.jan.stolarek@p.lodz.pl> <201501091102.04230.jan.stolarek@p.lodz.pl> <54AFA8C3.2040500@ro-che.info> Message-ID: On Fri, Jan 9, 2015 at 11:09 AM, Roman Cheplyaka wrote: > On 09/01/15 12:02, Jan Stolarek wrote: > >> We could file a tracking bug against the 7.14 milestone. > > I was considering that but we don't have 7.14 milestone yet. > > > >> Just curious, is there a way to keep these functions for backwards > compat > >> in 7.14 or is that unfeasible? > > They could stay, technically that's not a problem. But I'm adding new > functions that can do the > > same thing (and more), so we have redundancy. > > Can you hide them in the haddock but leave in the module, so that we > don't break existing code? > I agree. You'll get rid of the redundancy in the library by removing it but you're users will have to live with #if MIN_VERSION_template_haskell(X,Y,X) -- new way #else -- old way #endif for 3+ years (which is typically how many GHC versions popular libraries try to support). -------------- next part -------------- An HTML attachment was scrubbed... 
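One concrete instance of the CPP Johan is describing: when template-haskell 2.10 (shipped with GHC 7.10) turned Pred into a plain Type, libraries that build constraints had to carry a shim of roughly this shape. A sketch only, not a recommendation:

{-# LANGUAGE CPP, TemplateHaskell #-}
module THCompat (showPred) where

import Language.Haskell.TH

-- Build the constraint (Show a) for a given type variable name.
showPred :: Name -> Pred
#if MIN_VERSION_template_haskell(2,10,0)
showPred a = AppT (ConT ''Show) (VarT a)   -- new way: Pred is now a synonym for Type
#else
showPred a = ClassP ''Show [VarT a]        -- old way: dedicated ClassP constructor
#endif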
URL: From hvriedel at gmail.com Fri Jan 9 10:13:21 2015 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Fri, 09 Jan 2015 11:13:21 +0100 Subject: Deprecating functions In-Reply-To: <54AFA8C3.2040500@ro-che.info> (Roman Cheplyaka's message of "Fri, 09 Jan 2015 12:09:07 +0200") References: <201501091022.37679.jan.stolarek@p.lodz.pl> <201501091102.04230.jan.stolarek@p.lodz.pl> <54AFA8C3.2040500@ro-che.info> Message-ID: <87sifkb6ta.fsf@gmail.com> On 2015-01-09 at 11:09:07 +0100, Roman Cheplyaka wrote: [...] >>> Just curious, is there a way to keep these functions for backwards compat >>> in 7.14 or is that unfeasible? >> They could stay, technically that's not a problem. But I'm adding new functions that can do the >> same thing (and more), so we have redundancy. > > Can you hide them in the haddock but leave in the module, so that we > don't break existing code? Why hide them? DEPRECATEd entities have the deprecation-message shown in discouraging red letters (including any hyperlinks to their replacements) in the generated Haddock documentation... From jan.stolarek at p.lodz.pl Fri Jan 9 10:15:29 2015 From: jan.stolarek at p.lodz.pl (Jan Stolarek) Date: Fri, 9 Jan 2015 11:15:29 +0100 Subject: Deprecating functions In-Reply-To: <54AFA8C3.2040500@ro-che.info> References: <201501091022.37679.jan.stolarek@p.lodz.pl> <201501091102.04230.jan.stolarek@p.lodz.pl> <54AFA8C3.2040500@ro-che.info> Message-ID: <201501091115.29683.jan.stolarek@p.lodz.pl> > Can you hide them in the haddock but leave in the module, so that we > don't break existing code? Not sure. These functions are in Language.Haskell.TH.Lib module and functions there are not haddockified at all. I initially thought this module is internal but Richard told me that people are actually using functions from that module and there were complaints from the users when he removed some functions from there. Janek From jan.stolarek at p.lodz.pl Fri Jan 9 10:18:02 2015 From: jan.stolarek at p.lodz.pl (Jan Stolarek) Date: Fri, 9 Jan 2015 11:18:02 +0100 Subject: Deprecating functions In-Reply-To: References: <201501091022.37679.jan.stolarek@p.lodz.pl> <54AFA8C3.2040500@ro-che.info> Message-ID: <201501091118.02634.jan.stolarek@p.lodz.pl> > I agree. You'll get rid of the redundancy in the library by removing it but > you're users will have to live with (...) That's why I want to deprecate them first and give users one release cycle to switch to new functions. I assumed that's enough but we could make this two or three release cycles. The reall question is how to remember that we should remove this at some point? Janek From hvriedel at gmail.com Fri Jan 9 10:25:19 2015 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Fri, 09 Jan 2015 11:25:19 +0100 Subject: Deprecating functions In-Reply-To: <201501091118.02634.jan.stolarek@p.lodz.pl> (Jan Stolarek's message of "Fri, 9 Jan 2015 11:18:02 +0100") References: <201501091022.37679.jan.stolarek@p.lodz.pl> <54AFA8C3.2040500@ro-che.info> <201501091118.02634.jan.stolarek@p.lodz.pl> Message-ID: <87oaq8b69c.fsf@gmail.com> On 2015-01-09 at 11:18:02 +0100, Jan Stolarek wrote: > The reall > question is how to remember that we should remove this at some point? This affects all exposed libraries; I think it's enough to simply make this part of the release-procedure at some point in the release-cycle, to actively scan all DEPRECATIONs, and decide for each whether to kill them or let them live for another cycle. 
It simplifies things though, if it's obvious when a deprecation was declared so one doesn't have to `git blame` for it. Many deprecations already have a comment attached like "deprecated in GHC x.y" From johan.tibell at gmail.com Fri Jan 9 10:28:21 2015 From: johan.tibell at gmail.com (Johan Tibell) Date: Fri, 9 Jan 2015 11:28:21 +0100 Subject: Deprecating functions In-Reply-To: <87sifkb6ta.fsf@gmail.com> References: <201501091022.37679.jan.stolarek@p.lodz.pl> <201501091102.04230.jan.stolarek@p.lodz.pl> <54AFA8C3.2040500@ro-che.info> <87sifkb6ta.fsf@gmail.com> Message-ID: On Fri, Jan 9, 2015 at 11:13 AM, Herbert Valerio Riedel wrote: > Why hide them? DEPRECATEd entities have the deprecation-message shown in > discouraging red letters (including any hyperlinks to their > replacements) in the generated Haddock documentation... > I think Java's (!) policy for deprecation is good: Deprecation is (mostly) for functions that are error prone or otherwise dangerous. Unless the cost of keeping the function is large, removing functions should be avoided. The docs can point to the newer functions, but DEPRECATION pragmas will just add noise to users' compiles. -------------- next part -------------- An HTML attachment was scrubbed... URL: From johan.tibell at gmail.com Fri Jan 9 10:29:55 2015 From: johan.tibell at gmail.com (Johan Tibell) Date: Fri, 9 Jan 2015 11:29:55 +0100 Subject: Deprecating functions In-Reply-To: <201501091118.02634.jan.stolarek@p.lodz.pl> References: <201501091022.37679.jan.stolarek@p.lodz.pl> <54AFA8C3.2040500@ro-che.info> <201501091118.02634.jan.stolarek@p.lodz.pl> Message-ID: On Fri, Jan 9, 2015 at 11:18 AM, Jan Stolarek wrote: > > I agree. You'll get rid of the redundancy in the library by removing it > but > > you're users will have to live with (...) > That's why I want to deprecate them first and give users one release cycle > to switch to new > functions. I assumed that's enough but we could make this two or three > release cycles. The reall > question is how to remember that we should remove this at some point? > If we want to avoid the CPP we need warning to be in major version X if that's when the old function is deprecated and the new one is added and the actual removal in X+2. At that point I'd just consider keeping the function and avoid the hassle. :) -------------- next part -------------- An HTML attachment was scrubbed... URL: From jan.stolarek at p.lodz.pl Fri Jan 9 10:37:00 2015 From: jan.stolarek at p.lodz.pl (Jan Stolarek) Date: Fri, 9 Jan 2015 11:37:00 +0100 Subject: Deprecating functions In-Reply-To: References: <201501091022.37679.jan.stolarek@p.lodz.pl> <87sifkb6ta.fsf@gmail.com> Message-ID: <201501091137.01128.jan.stolarek@p.lodz.pl> > I think Java's (!) policy for deprecation is good I think it's not. It keeps the library code a mess and many times I have seen users using functions that have been deprecated for years just because it's easier to suppress a warning than change the code. I don't want Haskell to go down that path and I'm strongly in favour of removing these functions. Especially that we're talking about internal TH module - I'll be surprised if there are more than 10 users. 
Janek From roma at ro-che.info Fri Jan 9 10:37:39 2015 From: roma at ro-che.info (Roman Cheplyaka) Date: Fri, 09 Jan 2015 12:37:39 +0200 Subject: Deprecating functions In-Reply-To: <87sifkb6ta.fsf@gmail.com> References: <201501091022.37679.jan.stolarek@p.lodz.pl> <201501091102.04230.jan.stolarek@p.lodz.pl> <54AFA8C3.2040500@ro-che.info> <87sifkb6ta.fsf@gmail.com> Message-ID: <54AFAF73.4040402@ro-che.info> On 09/01/15 12:13, Herbert Valerio Riedel wrote: > On 2015-01-09 at 11:09:07 +0100, Roman Cheplyaka wrote: > [...] >>>> Just curious, is there a way to keep these functions for backwards compat >>>> in 7.14 or is that unfeasible? >>> They could stay, technically that's not a problem. But I'm adding new functions that can do the >>> same thing (and more), so we have redundancy. >> >> Can you hide them in the haddock but leave in the module, so that we >> don't break existing code? > > Why hide them? DEPRECATEd entities have the deprecation-message shown in > discouraging red letters (including any hyperlinks to their > replacements) in the generated Haddock documentation... I'll rephrase your question: why show them, if they are not supposed to be used? My point is that hiding from the haddocks is more or less equivalent to removing the function altogether for new users, while avoiding the penalty for the existing users, who have to update their code sooner or later for no good reason. Roman From simonpj at microsoft.com Fri Jan 9 10:40:03 2015 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 9 Jan 2015 10:40:03 +0000 Subject: seq#: do we actually need it as a primitive? References: <1420704021-sup-5888@sabre> <54AE8933.7030806@ro-che.info> Message-ID: <618BE556AADD624C9C918AA5D5911BEF5629C301@DB3PRD3001MB020.064d.mgd.msft.net> I was wrong. See https://ghc.haskell.org/trac/ghc/ticket/5129#comment:17 which I have just added. Simon | -----Original Message----- | From: Simon Peyton Jones | Sent: 08 January 2015 15:05 | To: 'Roman Cheplyaka'; Edward Z. Yang; David Feuer | Cc: ghc-devs | Subject: RE: seq#: do we actually need it as a primitive? | | No (2) would not suffer from #5129. Think of | | type IO a = State# -> (State#, a) | return x = \s -> (s, x) | (>>=) m k s = case m s of (s, r) -> k r s | | (it's a newtype actually, but this will do here). | | (2) says | | = \x -> (return $! x) >>= return | = \x. \s. case return $! x s of (s1, r) -> return r s1 | = \x\s. x `seq` case (s,x) of (s1, r) -> return r s1 | = \x\s. x `seq` (s,x) | | which is fine. From johan.tibell at gmail.com Fri Jan 9 10:43:41 2015 From: johan.tibell at gmail.com (Johan Tibell) Date: Fri, 9 Jan 2015 11:43:41 +0100 Subject: Deprecating functions In-Reply-To: <201501091137.01128.jan.stolarek@p.lodz.pl> References: <201501091022.37679.jan.stolarek@p.lodz.pl> <87sifkb6ta.fsf@gmail.com> <201501091137.01128.jan.stolarek@p.lodz.pl> Message-ID: On Fri, Jan 9, 2015 at 11:37 AM, Jan Stolarek wrote: > > I think Java's (!) policy for deprecation is good > I think it's not. It keeps the library code a mess and many times I have > seen users using > functions that have been deprecated for years just because it's easier to > suppress a warning than > change the code. I don't want Haskell to go down that path and I'm > strongly in favour of removing > these functions. Especially that we're talking about internal TH module - > I'll be surprised if > there are more than 10 users. It also keeps Java having users. 
;) More seriously, we who maintain the core libraries spend too much time dealing with breakages due to continuously moving libraries when we could spend time on building upwards to make Haskell a better platform for building applications. *In practice* our code is worse because of these continuous breakages (as it's full with hard to maintain CPP), not better. -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Fri Jan 9 10:47:22 2015 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 9 Jan 2015 10:47:22 +0000 Subject: Redundant constraints In-Reply-To: <20150108172537.GA25675@city.ac.uk> References: <618BE556AADD624C9C918AA5D5911BEF5629A2DA@DB3PRD3001MB020.064d.mgd.msft.net> <20150108172537.GA25675@city.ac.uk> Message-ID: <618BE556AADD624C9C918AA5D5911BEF5629C38F@DB3PRD3001MB020.064d.mgd.msft.net> | I've fixed some of the warnings in transformers, but there are still | 14 of them, triggered by Applicative becoming a superclass of Monad. In GHC's own source code I did this #if __GLASGOW_HASKELL__ < 710 -- Pre-AMP change runGhcT :: (ExceptionMonad m, Functor m) => #else runGhcT :: (ExceptionMonad m) => #endif | I think some of the changes already made are of this sort, exposing | details of the GHC implementation, e.g. the changes to the public | interface of Array and Ratio. For example, it's probably reasonable | to remove the Ix constraint from Data.Array.bounds, but the portable | reference implementation of Data.Array.elems requires Ix, even though | the GHC implementation doesn't. Similarly a portable implementation | of the Functor instance for Array i requires Ix, but the GHC | implementation doesn't. Fair enough. If the Core Libraries Committee wants to add some of these constraints back in, that's fine with me. They just need a comment to explain. (We have a trick, now in the user manual, for how to add a redundant constraint without triggering a complaint.) Simon From simonpj at microsoft.com Fri Jan 9 11:08:05 2015 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 9 Jan 2015 11:08:05 +0000 Subject: warn-redundant-constraints present as errors In-Reply-To: References: Message-ID: <618BE556AADD624C9C918AA5D5911BEF5629C4E4@DB3PRD3001MB020.064d.mgd.msft.net> I?ve fixed this From: Alan & Kim Zimmerman [mailto:alan.zimm at gmail.com] Sent: 08 January 2015 21:46 To: ghc-devs at haskell.org; Simon Peyton Jones Subject: warn-redundant-constraints present as errors This is a great feature, here is some feedback My syntax highlighter in emacs expects warnings to have the word "warning" in them. So for the two warnings reported below, the first is highlighted as an error, and the second as a warning Language/Haskell/Refact/Utils/TypeUtils.hs:3036:17: Redundant constraint: SYB.Data t In the type signature for: duplicateDecl :: SYB.Data t => [GHC.LHsBind GHC.Name] -> t -> GHC.Name -> GHC.Name -> RefactGhc [GHC.LHsBind GHC.Name] Language/Haskell/Refact/Utils/TypeUtils.hs:3045:7: Warning: Defined but not used: ?toks This is in a ghci session, and the file loads without problems, so it is indeed a warning. Can we perhaps add the word "Warning" to the output for Redundant constraints? I also had a situation where it asked me to remove a whole lot of constraints from different functions, I did them in batches, so did not remove them all from the file at once, and at some point I had to add at least one of them back, albeit based on an error message. 
Regards Alan -------------- next part -------------- An HTML attachment was scrubbed... URL: From alan.zimm at gmail.com Fri Jan 9 11:21:49 2015 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Fri, 9 Jan 2015 13:21:49 +0200 Subject: warn-redundant-constraints present as errors In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF5629C4E4@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF5629C4E4@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: Thanks. I've found a case where it warns of a redundant constraint, but if I remove the constraint I get an error saying the constraint is required -------------------------------------------- import qualified GHC as GHC import qualified Data.Generics as SYB duplicateDecl :: (SYB.Data t) => -- **** The constraint being warned against ******* [GHC.LHsBind GHC.Name] -- ^ The declaration list ->t -- ^ Any signatures are in here ->GHC.Name -- ^ The identifier whose definition is to be duplicated ->GHC.Name -- ^ The new name (possibly qualified) ->IO [GHC.LHsBind GHC.Name] -- ^ The result duplicateDecl decls sigs n newFunName = do let sspan = undefined newSpan <- case typeSig of [] -> return sspan _ -> do let Just sspanSig = getSrcSpan typeSig toksSig <- getToksForSpan sspanSig let [(GHC.L sspanSig' _)] = typeSig return sspanSig' undefined where typeSig = definingSigsNames [n] sigs -- |Find those type signatures for the specified GHC.Names. definingSigsNames :: (SYB.Data t) => [GHC.Name] -- ^ The specified identifiers. ->t -- ^ A collection of declarations. ->[GHC.LSig GHC.Name] -- ^ The result. definingSigsNames pns ds = def ds where def = undefined getSrcSpan = undefined getToksForSpan = undefined -------------------------------------------- On Fri, Jan 9, 2015 at 1:08 PM, Simon Peyton Jones wrote: > I?ve fixed this > > > > *From:* Alan & Kim Zimmerman [mailto:alan.zimm at gmail.com] > *Sent:* 08 January 2015 21:46 > *To:* ghc-devs at haskell.org; Simon Peyton Jones > *Subject:* warn-redundant-constraints present as errors > > > > This is a great feature, here is some feedback > > My syntax highlighter in emacs expects warnings to have the word "warning" > in them. > > So for the two warnings reported below, the first is highlighted as an > error, and the second as a warning > > > Language/Haskell/Refact/Utils/TypeUtils.hs:3036:17: > Redundant constraint: SYB.Data t > In the type signature for: > duplicateDecl :: SYB.Data t => > [GHC.LHsBind GHC.Name] > -> t -> GHC.Name -> GHC.Name -> RefactGhc > [GHC.LHsBind GHC.Name] > > Language/Haskell/Refact/Utils/TypeUtils.hs:3045:7: Warning: > Defined but not used: ?toks > > This is in a ghci session, and the file loads without problems, so it > is indeed a warning. > > Can we perhaps add the word "Warning" to the output for Redundant > constraints? > > I also had a situation where it asked me to remove a whole lot of > constraints from different functions, I did them in batches, so did not > remove them all from the file at once, and at some point I had to add at > least one of them back, albeit based on an error message. > > > > Regards > > Alan > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From simonpj at microsoft.com Fri Jan 9 11:48:27 2015 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 9 Jan 2015 11:48:27 +0000 Subject: warn-redundant-constraints present as errors In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF5629C4E4@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <618BE556AADD624C9C918AA5D5911BEF5629C61D@DB3PRD3001MB020.064d.mgd.msft.net> If you remove the constraint from duplicateDecl, then I get Redundant constraint: SYB.Data t In the type signature for: definingSigsNames :: SYB.Data t => [GHC.Name] -> t -> [GHC.LSig GHC.Name] which is 100% correct: defininingSigssNames doesn?t use its SYB.Data t constraint Simon From: Alan & Kim Zimmerman [mailto:alan.zimm at gmail.com] Sent: 09 January 2015 11:22 To: Simon Peyton Jones Cc: ghc-devs at haskell.org Subject: Re: warn-redundant-constraints present as errors Thanks. I've found a case where it warns of a redundant constraint, but if I remove the constraint I get an error saying the constraint is required -------------------------------------------- import qualified GHC as GHC import qualified Data.Generics as SYB duplicateDecl :: (SYB.Data t) => -- **** The constraint being warned against ******* [GHC.LHsBind GHC.Name] -- ^ The declaration list ->t -- ^ Any signatures are in here ->GHC.Name -- ^ The identifier whose definition is to be duplicated ->GHC.Name -- ^ The new name (possibly qualified) ->IO [GHC.LHsBind GHC.Name] -- ^ The result duplicateDecl decls sigs n newFunName = do let sspan = undefined newSpan <- case typeSig of [] -> return sspan _ -> do let Just sspanSig = getSrcSpan typeSig toksSig <- getToksForSpan sspanSig let [(GHC.L sspanSig' _)] = typeSig return sspanSig' undefined where typeSig = definingSigsNames [n] sigs -- |Find those type signatures for the specified GHC.Names. definingSigsNames :: (SYB.Data t) => [GHC.Name] -- ^ The specified identifiers. ->t -- ^ A collection of declarations. ->[GHC.LSig GHC.Name] -- ^ The result. definingSigsNames pns ds = def ds where def = undefined getSrcSpan = undefined getToksForSpan = undefined -------------------------------------------- On Fri, Jan 9, 2015 at 1:08 PM, Simon Peyton Jones > wrote: I?ve fixed this From: Alan & Kim Zimmerman [mailto:alan.zimm at gmail.com] Sent: 08 January 2015 21:46 To: ghc-devs at haskell.org; Simon Peyton Jones Subject: warn-redundant-constraints present as errors This is a great feature, here is some feedback My syntax highlighter in emacs expects warnings to have the word "warning" in them. So for the two warnings reported below, the first is highlighted as an error, and the second as a warning Language/Haskell/Refact/Utils/TypeUtils.hs:3036:17: Redundant constraint: SYB.Data t In the type signature for: duplicateDecl :: SYB.Data t => [GHC.LHsBind GHC.Name] -> t -> GHC.Name -> GHC.Name -> RefactGhc [GHC.LHsBind GHC.Name] Language/Haskell/Refact/Utils/TypeUtils.hs:3045:7: Warning: Defined but not used: ?toks This is in a ghci session, and the file loads without problems, so it is indeed a warning. Can we perhaps add the word "Warning" to the output for Redundant constraints? I also had a situation where it asked me to remove a whole lot of constraints from different functions, I did them in batches, so did not remove them all from the file at once, and at some point I had to add at least one of them back, albeit based on an error message. Regards Alan -------------- next part -------------- An HTML attachment was scrubbed... 
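To distil what Simon is pointing out, in a self-contained form (made-up names, nothing to do with the HaRe modules): the warning attaches to whichever signature's own body fails to use the constraint, so stubbing a callee out with undefined moves the redundancy around rather than contradicting it.

{-# OPTIONS_GHC -fwarn-redundant-constraints #-}
module Distilled where

-- caller's Ord constraint is genuinely needed: it is passed on to helper.
caller :: Ord a => [a] -> [a]
caller xs = helper xs

-- With a real body that compares elements, helper's Ord is used: no warning.
helper :: Ord a => [a] -> [a]
helper = filter (\x -> x >= x)

-- If helper is instead stubbed out as 'helper = undefined' (as in the reduced
-- test case), helper's own Ord constraint becomes the redundant one, while
-- caller's is still required.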
URL: From alan.zimm at gmail.com Fri Jan 9 11:53:32 2015 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Fri, 9 Jan 2015 13:53:32 +0200 Subject: warn-redundant-constraints present as errors In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF5629C61D@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF5629C4E4@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF5629C61D@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: In the original definingSigsNames requires the constraint, I left that out to simplify the example, as the movement of the warning to an error still happens. Original definingSigsNames ------------------ -- |Find those type signatures for the specified GHC.Names. definingSigsNames :: (SYB.Data t) => [GHC.Name] -- ^ The specified identifiers. ->t -- ^ A collection of declarations. ->[GHC.LSig GHC.Name] -- ^ The result. definingSigsNames pns ds = def ds where def decl = SYB.everythingStaged SYB.Renamer (++) [] ([] `SYB.mkQ` inSig) decl where inSig :: (GHC.LSig GHC.Name) -> [GHC.LSig GHC.Name] inSig (GHC.L l (GHC.TypeSig ns t p)) | defines' ns /= [] = [(GHC.L l (GHC.TypeSig (defines' ns) t p))] inSig _ = [] defines' (p::[GHC.Located GHC.Name]) = filter (\(GHC.L _ n) -> n `elem` pns) p ---------------------- On Fri, Jan 9, 2015 at 1:48 PM, Simon Peyton Jones wrote: > If you remove the constraint from duplicateDecl, then I get > > > > Redundant constraint: SYB.Data t > > In the type signature for: > > definingSigsNames :: SYB.Data t => > > [GHC.Name] -> t -> [GHC.LSig GHC.Name] > > > > which is 100% correct: defininingSigssNames doesn?t use its SYB.Data t > constraint > > > > Simon > > > > *From:* Alan & Kim Zimmerman [mailto:alan.zimm at gmail.com] > *Sent:* 09 January 2015 11:22 > *To:* Simon Peyton Jones > *Cc:* ghc-devs at haskell.org > *Subject:* Re: warn-redundant-constraints present as errors > > > > Thanks. > > I've found a case where it warns of a redundant constraint, but if I > remove the constraint I get an error saying the constraint is required > > -------------------------------------------- > import qualified GHC as GHC > > import qualified Data.Generics as SYB > > duplicateDecl :: (SYB.Data t) => -- **** The constraint being warned > against ******* > [GHC.LHsBind GHC.Name] -- ^ The declaration list > ->t -- ^ Any signatures are in here > ->GHC.Name -- ^ The identifier whose definition is to be > duplicated > ->GHC.Name -- ^ The new name (possibly qualified) > ->IO [GHC.LHsBind GHC.Name] -- ^ The result > duplicateDecl decls sigs n newFunName > = do > let sspan = undefined > newSpan <- case typeSig of > [] -> return sspan > _ -> do > let Just sspanSig = getSrcSpan typeSig > toksSig <- getToksForSpan sspanSig > > let [(GHC.L sspanSig' _)] = typeSig > > return sspanSig' > > undefined > where > typeSig = definingSigsNames [n] sigs > > -- |Find those type signatures for the specified GHC.Names. > definingSigsNames :: (SYB.Data t) => > [GHC.Name] -- ^ The specified identifiers. > ->t -- ^ A collection of declarations. > ->[GHC.LSig GHC.Name] -- ^ The result. 
> definingSigsNames pns ds = def ds > where def = undefined > > getSrcSpan = undefined > getToksForSpan = undefined > > -------------------------------------------- > > > > On Fri, Jan 9, 2015 at 1:08 PM, Simon Peyton Jones > wrote: > > I?ve fixed this > > > > *From:* Alan & Kim Zimmerman [mailto:alan.zimm at gmail.com] > *Sent:* 08 January 2015 21:46 > *To:* ghc-devs at haskell.org; Simon Peyton Jones > *Subject:* warn-redundant-constraints present as errors > > > > This is a great feature, here is some feedback > > My syntax highlighter in emacs expects warnings to have the word "warning" > in them. > > So for the two warnings reported below, the first is highlighted as an > error, and the second as a warning > > > Language/Haskell/Refact/Utils/TypeUtils.hs:3036:17: > Redundant constraint: SYB.Data t > In the type signature for: > duplicateDecl :: SYB.Data t => > [GHC.LHsBind GHC.Name] > -> t -> GHC.Name -> GHC.Name -> RefactGhc > [GHC.LHsBind GHC.Name] > > Language/Haskell/Refact/Utils/TypeUtils.hs:3045:7: Warning: > Defined but not used: ?toks > > This is in a ghci session, and the file loads without problems, so it is > indeed a warning. > > Can we perhaps add the word "Warning" to the output for Redundant > constraints? > > I also had a situation where it asked me to remove a whole lot of > constraints from different functions, I did them in batches, so did not > remove them all from the file at once, and at some point I had to add at > least one of them back, albeit based on an error message. > > > > Regards > > Alan > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Fri Jan 9 12:18:39 2015 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 9 Jan 2015 12:18:39 +0000 Subject: warn-redundant-constraints present as errors In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF5629C4E4@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF5629C61D@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <618BE556AADD624C9C918AA5D5911BEF5629C706@DB3PRD3001MB020.064d.mgd.msft.net> Now I get Foo1.hs:39:8: Not in scope: ?SYB.everythingStaged? Foo1.hs:39:29: Not in scope: data constructor ?SYB.Renamer? Do you think you could open a ticket with a reproducible test case? That would be helpful Simon From: Alan & Kim Zimmerman [mailto:alan.zimm at gmail.com] Sent: 09 January 2015 11:54 To: Simon Peyton Jones Cc: ghc-devs at haskell.org Subject: Re: warn-redundant-constraints present as errors In the original definingSigsNames requires the constraint, I left that out to simplify the example, as the movement of the warning to an error still happens. Original definingSigsNames ------------------ -- |Find those type signatures for the specified GHC.Names. definingSigsNames :: (SYB.Data t) => [GHC.Name] -- ^ The specified identifiers. ->t -- ^ A collection of declarations. ->[GHC.LSig GHC.Name] -- ^ The result. 
definingSigsNames pns ds = def ds where def decl = SYB.everythingStaged SYB.Renamer (++) [] ([] `SYB.mkQ` inSig) decl where inSig :: (GHC.LSig GHC.Name) -> [GHC.LSig GHC.Name] inSig (GHC.L l (GHC.TypeSig ns t p)) | defines' ns /= [] = [(GHC.L l (GHC.TypeSig (defines' ns) t p))] inSig _ = [] defines' (p::[GHC.Located GHC.Name]) = filter (\(GHC.L _ n) -> n `elem` pns) p ---------------------- On Fri, Jan 9, 2015 at 1:48 PM, Simon Peyton Jones > wrote: If you remove the constraint from duplicateDecl, then I get Redundant constraint: SYB.Data t In the type signature for: definingSigsNames :: SYB.Data t => [GHC.Name] -> t -> [GHC.LSig GHC.Name] which is 100% correct: defininingSigssNames doesn?t use its SYB.Data t constraint Simon From: Alan & Kim Zimmerman [mailto:alan.zimm at gmail.com] Sent: 09 January 2015 11:22 To: Simon Peyton Jones Cc: ghc-devs at haskell.org Subject: Re: warn-redundant-constraints present as errors Thanks. I've found a case where it warns of a redundant constraint, but if I remove the constraint I get an error saying the constraint is required -------------------------------------------- import qualified GHC as GHC import qualified Data.Generics as SYB duplicateDecl :: (SYB.Data t) => -- **** The constraint being warned against ******* [GHC.LHsBind GHC.Name] -- ^ The declaration list ->t -- ^ Any signatures are in here ->GHC.Name -- ^ The identifier whose definition is to be duplicated ->GHC.Name -- ^ The new name (possibly qualified) ->IO [GHC.LHsBind GHC.Name] -- ^ The result duplicateDecl decls sigs n newFunName = do let sspan = undefined newSpan <- case typeSig of [] -> return sspan _ -> do let Just sspanSig = getSrcSpan typeSig toksSig <- getToksForSpan sspanSig let [(GHC.L sspanSig' _)] = typeSig return sspanSig' undefined where typeSig = definingSigsNames [n] sigs -- |Find those type signatures for the specified GHC.Names. definingSigsNames :: (SYB.Data t) => [GHC.Name] -- ^ The specified identifiers. ->t -- ^ A collection of declarations. ->[GHC.LSig GHC.Name] -- ^ The result. definingSigsNames pns ds = def ds where def = undefined getSrcSpan = undefined getToksForSpan = undefined -------------------------------------------- On Fri, Jan 9, 2015 at 1:08 PM, Simon Peyton Jones > wrote: I?ve fixed this From: Alan & Kim Zimmerman [mailto:alan.zimm at gmail.com] Sent: 08 January 2015 21:46 To: ghc-devs at haskell.org; Simon Peyton Jones Subject: warn-redundant-constraints present as errors This is a great feature, here is some feedback My syntax highlighter in emacs expects warnings to have the word "warning" in them. So for the two warnings reported below, the first is highlighted as an error, and the second as a warning Language/Haskell/Refact/Utils/TypeUtils.hs:3036:17: Redundant constraint: SYB.Data t In the type signature for: duplicateDecl :: SYB.Data t => [GHC.LHsBind GHC.Name] -> t -> GHC.Name -> GHC.Name -> RefactGhc [GHC.LHsBind GHC.Name] Language/Haskell/Refact/Utils/TypeUtils.hs:3045:7: Warning: Defined but not used: ?toks This is in a ghci session, and the file loads without problems, so it is indeed a warning. Can we perhaps add the word "Warning" to the output for Redundant constraints? I also had a situation where it asked me to remove a whole lot of constraints from different functions, I did them in batches, so did not remove them all from the file at once, and at some point I had to add at least one of them back, albeit based on an error message. Regards Alan -------------- next part -------------- An HTML attachment was scrubbed... 
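A side note on the "Not in scope" errors Simon reports above when building the cut-down file: everythingStaged and the Renamer stage constructor do not come from plain syb's Data.Generics but from the GHC-specific SYB helpers - from memory the GHC.SYB.Utils module of the ghc-syb-utils package, so treat the exact module name as an assumption:

import qualified Data.Generics as SYB   -- Data, mkQ, ...
import qualified GHC.SYB.Utils as SYB   -- everythingStaged, Stage(Renamer)  (assumed module name)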
URL: From alan.zimm at gmail.com Fri Jan 9 13:50:04 2015 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Fri, 9 Jan 2015 15:50:04 +0200 Subject: warn-redundant-constraints present as errors In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF5629C706@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF5629C4E4@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF5629C61D@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF5629C706@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: See https://ghc.haskell.org/trac/ghc/ticket/9973, my original file did not in fact exhibit the bug. On Fri, Jan 9, 2015 at 2:18 PM, Simon Peyton Jones wrote: > Now I get > > Foo1.hs:39:8: Not in scope: ?SYB.everythingStaged? > > Foo1.hs:39:29: Not in scope: data constructor ?SYB.Renamer? > > > > Do you think you could open a ticket with a reproducible test case? That > would be helpful > > > Simon > > > > *From:* Alan & Kim Zimmerman [mailto:alan.zimm at gmail.com] > *Sent:* 09 January 2015 11:54 > > *To:* Simon Peyton Jones > *Cc:* ghc-devs at haskell.org > *Subject:* Re: warn-redundant-constraints present as errors > > > > In the original definingSigsNames requires the constraint, I left that out > to simplify the example, as the movement of the warning to an error still > happens. > > Original definingSigsNames > > ------------------ > -- |Find those type signatures for the specified GHC.Names. > definingSigsNames :: (SYB.Data t) => > [GHC.Name] -- ^ The specified identifiers. > ->t -- ^ A collection of declarations. > ->[GHC.LSig GHC.Name] -- ^ The result. > definingSigsNames pns ds = def ds > where > def decl > = SYB.everythingStaged SYB.Renamer (++) [] ([] `SYB.mkQ` inSig) decl > where > inSig :: (GHC.LSig GHC.Name) -> [GHC.LSig GHC.Name] > inSig (GHC.L l (GHC.TypeSig ns t p)) > | defines' ns /= [] = [(GHC.L l (GHC.TypeSig (defines' ns) t p))] > inSig _ = [] > > defines' (p::[GHC.Located GHC.Name]) > = filter (\(GHC.L _ n) -> n `elem` pns) p > ---------------------- > > > > On Fri, Jan 9, 2015 at 1:48 PM, Simon Peyton Jones > wrote: > > If you remove the constraint from duplicateDecl, then I get > > > > Redundant constraint: SYB.Data t > > In the type signature for: > > definingSigsNames :: SYB.Data t => > > [GHC.Name] -> t -> [GHC.LSig GHC.Name] > > > > which is 100% correct: defininingSigssNames doesn?t use its SYB.Data t > constraint > > > > Simon > > > > *From:* Alan & Kim Zimmerman [mailto:alan.zimm at gmail.com] > *Sent:* 09 January 2015 11:22 > *To:* Simon Peyton Jones > *Cc:* ghc-devs at haskell.org > *Subject:* Re: warn-redundant-constraints present as errors > > > > Thanks. 
> > I've found a case where it warns of a redundant constraint, but if I > remove the constraint I get an error saying the constraint is required > > -------------------------------------------- > import qualified GHC as GHC > > import qualified Data.Generics as SYB > > duplicateDecl :: (SYB.Data t) => -- **** The constraint being warned > against ******* > [GHC.LHsBind GHC.Name] -- ^ The declaration list > ->t -- ^ Any signatures are in here > ->GHC.Name -- ^ The identifier whose definition is to be > duplicated > ->GHC.Name -- ^ The new name (possibly qualified) > ->IO [GHC.LHsBind GHC.Name] -- ^ The result > duplicateDecl decls sigs n newFunName > = do > let sspan = undefined > newSpan <- case typeSig of > [] -> return sspan > _ -> do > let Just sspanSig = getSrcSpan typeSig > toksSig <- getToksForSpan sspanSig > > let [(GHC.L sspanSig' _)] = typeSig > > return sspanSig' > > undefined > where > typeSig = definingSigsNames [n] sigs > > -- |Find those type signatures for the specified GHC.Names. > definingSigsNames :: (SYB.Data t) => > [GHC.Name] -- ^ The specified identifiers. > ->t -- ^ A collection of declarations. > ->[GHC.LSig GHC.Name] -- ^ The result. > definingSigsNames pns ds = def ds > where def = undefined > > getSrcSpan = undefined > getToksForSpan = undefined > > -------------------------------------------- > > > > On Fri, Jan 9, 2015 at 1:08 PM, Simon Peyton Jones > wrote: > > I?ve fixed this > > > > *From:* Alan & Kim Zimmerman [mailto:alan.zimm at gmail.com] > *Sent:* 08 January 2015 21:46 > *To:* ghc-devs at haskell.org; Simon Peyton Jones > *Subject:* warn-redundant-constraints present as errors > > > > This is a great feature, here is some feedback > > My syntax highlighter in emacs expects warnings to have the word "warning" > in them. > > So for the two warnings reported below, the first is highlighted as an > error, and the second as a warning > > > Language/Haskell/Refact/Utils/TypeUtils.hs:3036:17: > Redundant constraint: SYB.Data t > In the type signature for: > duplicateDecl :: SYB.Data t => > [GHC.LHsBind GHC.Name] > -> t -> GHC.Name -> GHC.Name -> RefactGhc > [GHC.LHsBind GHC.Name] > > Language/Haskell/Refact/Utils/TypeUtils.hs:3045:7: Warning: > Defined but not used: ?toks > > This is in a ghci session, and the file loads without problems, so it is > indeed a warning. > > Can we perhaps add the word "Warning" to the output for Redundant > constraints? > > I also had a situation where it asked me to remove a whole lot of > constraints from different functions, I did them in batches, so did not > remove them all from the file at once, and at some point I had to add at > least one of them back, albeit based on an error message. > > > > Regards > > Alan > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eir at cis.upenn.edu Fri Jan 9 14:18:47 2015 From: eir at cis.upenn.edu (Richard Eisenberg) Date: Fri, 9 Jan 2015 09:18:47 -0500 Subject: Deprecating functions In-Reply-To: <201501091137.01128.jan.stolarek@p.lodz.pl> References: <201501091022.37679.jan.stolarek@p.lodz.pl> <87sifkb6ta.fsf@gmail.com> <201501091137.01128.jan.stolarek@p.lodz.pl> Message-ID: <3A263929-121E-4C81-A759-B084806D0B10@cis.upenn.edu> On Jan 9, 2015, at 5:37 AM, Jan Stolarek wrote: > Especially that we're talking about internal TH module - I'll be surprised if > there are more than 10 users. As I understand it, TH.Lib is not an internal module. 
Though I, personally, have never found the functions there to suit my needs as a user, I think the functions exported from there are the go-to place for lots of people using TH. For example, Ollie Charles's recent blog post on TH (https://ocharles.org.uk/blog/guest-posts/2014-12-22-template-haskell.html), written by Sean Westfall, uses functions exported from TH.Lib. I'm rather ambivalent on the deprecate vs. remove vs. hide vs. leave alone debate, but I do think we should treat TH.Lib as a fully public module as we're debating. Richard From jwlato at gmail.com Fri Jan 9 15:57:36 2015 From: jwlato at gmail.com (John Lato) Date: Fri, 09 Jan 2015 15:57:36 +0000 Subject: Deprecating functions References: <201501091022.37679.jan.stolarek@p.lodz.pl> <87sifkb6ta.fsf@gmail.com> <201501091137.01128.jan.stolarek@p.lodz.pl> <3A263929-121E-4C81-A759-B084806D0B10@cis.upenn.edu> Message-ID: I agree with Johan. I do think it makes sense to remove deprecated/replaced functions, but only after N+2 cycles. On 06:18, Fri, Jan 9, 2015 Richard Eisenberg wrote: > > On Jan 9, 2015, at 5:37 AM, Jan Stolarek wrote: > > > Especially that we're talking about internal TH module - I'll be > surprised if > > there are more than 10 users. > > As I understand it, TH.Lib is not an internal module. Though I, > personally, have never found the functions there to suit my needs as a > user, I think the functions exported from there are the go-to place for > lots of people using TH. For example, Ollie Charles's recent blog post on > TH (https://ocharles.org.uk/blog/guest-posts/2014-12-22- > template-haskell.html), written by Sean Westfall, uses functions exported > from TH.Lib. > > I'm rather ambivalent on the deprecate vs. remove vs. hide vs. leave alone > debate, but I do think we should treat TH.Lib as a fully public module as > we're debating. > > Richard > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marlowsd at gmail.com Fri Jan 9 16:11:48 2015 From: marlowsd at gmail.com (Simon Marlow) Date: Fri, 09 Jan 2015 16:11:48 +0000 Subject: Shipping core libraries with debug symbols In-Reply-To: References: Message-ID: <54AFFDC4.3010005@gmail.com> I've been building the RTS with debug symbols for our internal GHC build at FB, because it makes investigating problems a lot easier. I should probably upstream this patch. Shipping libraries with debug symbols should be fine, as long as they can be stripped - Peter, does stripping remove everything that -g creates? Cheers, Simon On 02/01/2015 23:18, Johan Tibell wrote: > Hi! > > We are now able to generate DWARF debug info, by passing -g to GHC. This > will allow for better debugging (e.g. using GDB) and profiling (e.g. > using Linux perf events). To make this feature more user accessible we > need to ship debug info for the core libraries (and perhaps the RTS). > The reason we need to ship debug info is that it's difficult, or > impossible in the case of base, for the user to rebuild these > libraries.The question is, how do we do this well? I don't think our > "way" solution works very well. It causes us to recompile too much and > GHC doesn't know which "ways" have been built or not. > > I believe other compilers, e.g. GCC, ship debug symbols in separate > files (https://packages.debian.org/sid/libc-dbg) that e.g. GDB can then > look up. 
> > -- Johan > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > From spam at scientician.net Fri Jan 9 16:24:57 2015 From: spam at scientician.net (Bardur Arantsson) Date: Fri, 09 Jan 2015 17:24:57 +0100 Subject: Deprecating functions In-Reply-To: <87oaq8b69c.fsf@gmail.com> References: <201501091022.37679.jan.stolarek@p.lodz.pl> <54AFA8C3.2040500@ro-che.info> <201501091118.02634.jan.stolarek@p.lodz.pl> <87oaq8b69c.fsf@gmail.com> Message-ID: On 2015-01-09 11:25, Herbert Valerio Riedel wrote: > On 2015-01-09 at 11:18:02 +0100, Jan Stolarek wrote: > >> The reall >> question is how to remember that we should remove this at some point? > > This affects all exposed libraries; I think it's enough to simply make > this part of the release-procedure at some point in the release-cycle, > to actively scan all DEPRECATIONs, and decide for each whether to kill > them or let them live for another cycle. > > It simplifies things though, if it's obvious when a deprecation was > declared so one doesn't have to `git blame` for it. Many deprecations > already have a comment attached like "deprecated in GHC x.y" > I think Google's Guava library for Java does a great job at this. In the documentation is says something like: *Deprecated*: Use xxx instead. This class is scheduled for removal in June 2016. Then one just needs to add a "Remove all scheduled deprecations" to the do-a-release checklist. From johan.tibell at gmail.com Fri Jan 9 16:34:39 2015 From: johan.tibell at gmail.com (Johan Tibell) Date: Fri, 9 Jan 2015 17:34:39 +0100 Subject: Shipping core libraries with debug symbols In-Reply-To: <54AFFDC4.3010005@gmail.com> References: <54AFFDC4.3010005@gmail.com> Message-ID: Could we get this for 7.10 so our debug info story is more "well-rounded"? On Fri, Jan 9, 2015 at 5:11 PM, Simon Marlow wrote: > I've been building the RTS with debug symbols for our internal GHC build > at FB, because it makes investigating problems a lot easier. I should > probably upstream this patch. > > Shipping libraries with debug symbols should be fine, as long as they can > be stripped - Peter, does stripping remove everything that -g creates? > > Cheers, > Simon > > > On 02/01/2015 23:18, Johan Tibell wrote: > >> Hi! >> >> We are now able to generate DWARF debug info, by passing -g to GHC. This >> will allow for better debugging (e.g. using GDB) and profiling (e.g. >> using Linux perf events). To make this feature more user accessible we >> need to ship debug info for the core libraries (and perhaps the RTS). >> The reason we need to ship debug info is that it's difficult, or >> impossible in the case of base, for the user to rebuild these >> libraries.The question is, how do we do this well? I don't think our >> "way" solution works very well. It causes us to recompile too much and >> GHC doesn't know which "ways" have been built or not. >> >> I believe other compilers, e.g. GCC, ship debug symbols in separate >> files (https://packages.debian.org/sid/libc-dbg) that e.g. GDB can then >> look up. >> >> -- Johan >> >> >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> >> -------------- next part -------------- An HTML attachment was scrubbed... 
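For reference, the "separate files" scheme Johan mentions is something GNU binutils already supports for any shared object; roughly the following (the library name is purely illustrative) is how distributions produce their -dbg packages, and the strip step is exactly the "can they be stripped" question:

$ objcopy --only-keep-debug libHSfoo.so libHSfoo.so.debug    # split the DWARF out
$ strip --strip-debug libHSfoo.so                            # drop what -g added
$ objcopy --add-gnu-debuglink=libHSfoo.so.debug libHSfoo.so  # let gdb find it again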
URL: From scpmw at leeds.ac.uk Fri Jan 9 16:51:09 2015 From: scpmw at leeds.ac.uk (Peter Wortmann) Date: Fri, 09 Jan 2015 17:51:09 +0100 Subject: Shipping core libraries with debug symbols In-Reply-To: <54AFFDC4.3010005@gmail.com> References: <54AFFDC4.3010005@gmail.com> Message-ID: Yes - strip will catch everything. Greetings, Peter On 09/01/2015 17:11, Simon Marlow wrote: > I've been building the RTS with debug symbols for our internal GHC build > at FB, because it makes investigating problems a lot easier. I should > probably upstream this patch. > > Shipping libraries with debug symbols should be fine, as long as they > can be stripped - Peter, does stripping remove everything that -g creates? > > Cheers, > Simon > > On 02/01/2015 23:18, Johan Tibell wrote: >> Hi! >> >> We are now able to generate DWARF debug info, by passing -g to GHC. This >> will allow for better debugging (e.g. using GDB) and profiling (e.g. >> using Linux perf events). To make this feature more user accessible we >> need to ship debug info for the core libraries (and perhaps the RTS). >> The reason we need to ship debug info is that it's difficult, or >> impossible in the case of base, for the user to rebuild these >> libraries.The question is, how do we do this well? I don't think our >> "way" solution works very well. It causes us to recompile too much and >> GHC doesn't know which "ways" have been built or not. >> >> I believe other compilers, e.g. GCC, ship debug symbols in separate >> files (https://packages.debian.org/sid/libc-dbg) that e.g. GDB can then >> look up. >> >> -- Johan >> >> >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> From simonpj at microsoft.com Fri Jan 9 17:14:42 2015 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 9 Jan 2015 17:14:42 +0000 Subject: perf T3294 Message-ID: <618BE556AADD624C9C918AA5D5911BEF5629CED4@DB3PRD3001MB020.064d.mgd.msft.net> perf/compiler T3294 [stat not good enough] (normal) This test is failing on Phab all the time now, but it doesn't fail on my machine, so I can't see what's wrong. Could someone look? Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.feuer at gmail.com Fri Jan 9 21:49:29 2015 From: david.feuer at gmail.com (David Feuer) Date: Fri, 9 Jan 2015 16:49:29 -0500 Subject: Milestones Message-ID: I took the liberty of pushing back the milestones for a few tickets that looked unlikely to be acted on for 7.10.1, based on a combination of severity, recent activity, and perceived intrusiveness. If anyone objects, please move them back. #9314: Each object file in a static archive file (.a) is loaded into its own mmap()ed page #8440: Get rid of the remaining static flags #8634: Relax functional dependency coherence check ("liberal coverage condition") David From eric at seidel.io Sat Jan 10 06:15:48 2015 From: eric at seidel.io (Eric Seidel) Date: Fri, 09 Jan 2015 22:15:48 -0800 Subject: How to get notifications when a build fails? Message-ID: <1420870548.3109074.212107389.77AEA507@webmail.messagingengine.com> Hi devs, Is there any way to make Phab email you when a revision you've submitted fails to build? I looked through the settings but couldn't find anything that seems to fit the bill. Thanks!
Eric From alan.zimm at gmail.com Sat Jan 10 15:39:03 2015 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Sat, 10 Jan 2015 17:39:03 +0200 Subject: Possible unpackFS problem Message-ID: While doing further round-trip testing, I came across the following issue Original source module Deprecation {-# Deprecated ["This is a module \"deprecation\"", "multi-line"] #-} ( foo ) where Pretty-printed AST via ppr (L {examples/Deprecation.hs:(3,1)-(4,30)} (DeprecatedTxt (L {examples/Deprecation.hs:3:1-14} "{-# Deprecated") [ (L {examples/Deprecation.hs:3:17-50} {FastString: "This is a module \"deprecation\""}), (L {examples/Deprecation.hs:4:14-25} {FastString: "multi-line"})]))) output where the FastString is converted to a string via unpackFS module Deprecation {-# Deprecated ["This is a module "deprecation"", "multi-line"] #-} ( foo ) where So, the ppr (via Pretty.ftext) is able to reproduce the escape characters in the original string, but unpackFS does not. Is this a problem for anyone else? Regards Alan -------------- next part -------------- An HTML attachment was scrubbed... URL: From alan.zimm at gmail.com Sat Jan 10 16:03:17 2015 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Sat, 10 Jan 2015 18:03:17 +0200 Subject: Possible unpackFS problem In-Reply-To: References: Message-ID: On further digging I see that Pretty.ftext eventually uses unpackFS, but outputs it via hPutStr. So no problem in GHC. Alan On Sat, Jan 10, 2015 at 5:39 PM, Alan & Kim Zimmerman wrote: > While doing further round-trip testing, I came across the following issue > > Original source > > module Deprecation > {-# Deprecated ["This is a module \"deprecation\"", > "multi-line"] #-} > ( foo ) > where > > Pretty-printed AST via ppr > > (L {examples/Deprecation.hs:(3,1)-(4,30)} > (DeprecatedTxt > (L {examples/Deprecation.hs:3:1-14} "{-# Deprecated") > [ > (L {examples/Deprecation.hs:3:17-50} {FastString: "This is a module > \"deprecation\""}), > (L {examples/Deprecation.hs:4:14-25} {FastString: "multi-line"})]))) > > output where the FastString is converted to a string via unpackFS > > module Deprecation > {-# Deprecated ["This is a module "deprecation"", > "multi-line"] #-} > ( foo ) > where > > So, the ppr (via Pretty.ftext) is able to reproduce the escape characters > in the original string, but unpackFS does not. > > Is this a problem for anyone else? > > Regards > Alan >
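A tiny illustration of the distinction Alan lands on here: the FastString holds the raw characters, and the backslashes only reappear when the unpacked String is shown, not when it is written out directly (which is what Pretty.ftext ultimately does via hPutStr). A sketch:

main :: IO ()
main = do
  let s = "This is a module \"deprecation\""   -- what unpackFS hands back
  putStrLn s          -- prints: This is a module "deprecation"
  putStrLn (show s)   -- prints: "This is a module \"deprecation\""

So the AST dump shows escapes because it shows the string, while the exact-printer sees the raw contents.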
Over to you, Austin Simon | -----Original Message----- | From: Milan Straka [mailto:fox at ucw.cz] | Sent: 10 January 2015 13:45 | To: Simon Peyton Jones | Cc: ghc-devs at haskell.org; Johan Tibell | Subject: Re: Redundant constraints | | Hi Simon and all, | | > -----Original message----- | > From: Simon Peyton Jones | > Sent: 7 Jan 2015, 15:19 | > | > Friends | > I've pushed a big patch that adds -fwarn-redundant-constraints (on by | default). It tells you when a constraint in a signature is unnecessary, | e.g. | > f :: Ord a => a -> a -> Bool | > f x y = True | > I think I have done all the necessary library updates etc, so | everything should build fine. | > Four libraries which we don't maintain have such warnings (MANY of them | in transformers) so I'm ccing the maintainers: | > | > o containers | | thanks, a neat feature :) | | Current master of containers has no more unnecessary constraints, | although I had to use the | where _ = ... | trick once -- we have two versions of Data.Sequence.fromArray function, | and the GHC version did not need the (Ix *) constraint, but the standard | Haskell version does. | | Do you want me to release a bugfix version of containers? | | Cheers, | Milan From djsamperi at gmail.com Sun Jan 11 06:07:18 2015 From: djsamperi at gmail.com (Dominick Samperi) Date: Sun, 11 Jan 2015 01:07:18 -0500 Subject: Build failure under Fedora 21 Message-ID: Hello, I'm trying to build ghc-7.8 branch under Fedora 21 and I get the failure diagnostics appended below. The problem seems to be that the build process cannot satisfy deepseq >=1.2 && <1.4, but I explicitly installed deepseq-1.3.0.2, and this did not help (same error?). Under Fedora 21 Haskell Platform contains ghc-7.6.3, so this is the boot compiler. The ghc binary for centos65 installs provided I define /usr/lib64/libgmp.so.3 as a symbolic link to /usr/lib64/libgmp.so.10, a risky move. Any tips would be much appreciated. Thanks, Dominick ===--- building phase 0 make -r --no-print-directory -f ghc.mk phase=0 phase_0_builds make[1]: Nothing to be done for 'phase_0_builds'. ===--- building phase 1 make -r --no-print-directory -f ghc.mk phase=1 phase_1_builds "inplace/bin/ghc-cabal" check libraries/containers 'ghc-options: -O2' is rarely needed. Check that it is giving a real benefit and not just imposing longer compile times on your users. "inplace/bin/ghc-cabal" configure libraries/containers dist-install "" --with-ghc="/home/dsamperi/install/git/ghc/inplace/bin/ghc-stage1" --with-ghc-pkg="/home/dsamperi/install/git/ghc/inplace/bin/ghc-pkg" --disable-library-for-ghci --enable-library-vanilla --disable-library-profiling --enable-shared --configure-option=CFLAGS=" -fno-stack-protector " --configure-option=LDFLAGS=" " --configure-option=CPPFLAGS=" " --gcc-options=" -fno-stack-protector " --with-gcc="/usr/bin/gcc" --with-ld="/usr/bin/ld" --configure-option=--with-cc="/usr/bin/gcc" --with-ar="/usr/bin/ar" --with-ranlib="/usr/bin/ranlib" --with-alex="/home/dsamperi/.cabal/bin/alex" --with-happy="/home/dsamperi/.cabal/bin/happy" Configuring containers-0.5.5.1... 
ghc-cabal: At least the following dependencies are missing: deepseq >=1.2 && <1.4 libraries/containers/ghc.mk:4: recipe for target 'libraries/containers/dist-install/package-data.mk' failed make[1]: *** [libraries/containers/dist-install/package-data.mk] Error 1 Makefile:71: recipe for target 'all' failed make: *** [all] Error 2 From djsamperi at gmail.com Sun Jan 11 06:32:19 2015 From: djsamperi at gmail.com (Dominick Samperi) Date: Sun, 11 Jan 2015 01:32:19 -0500 Subject: Build failure under Fedora 21 In-Reply-To: References: Message-ID: Here is a little more context...if I use the centos65 binary and try to install pandoc, it fails on the install of the vector package, because the shared library libHSprimitive-0.5.4.0.so is not found (the package primitive-0.5.4.0 contains libHSprimitive-0.5.4.0.a, no .so file?). Here is the diagnostic: Resolving dependencies... Configuring vector-0.10.12.2... Building vector-0.10.12.2... Preprocessing library vector-0.10.12.2... [ 1 of 19] Compiling Data.Vector.Storable.Internal ( Data/Vector/Storable/Internal.hs, dist/build/Data/Vector/Storable/Internal.o ) [ 2 of 19] Compiling Data.Vector.Fusion.Util ( Data/Vector/Fusion/Util.hs, dist/build/Data/Vector/Fusion/Util.o ) [ 3 of 19] Compiling Data.Vector.Fusion.Stream.Size ( Data/Vector/Fusion/Stream/Size.hs, dist/build/Data/Vector/Fusion/Stream/Size.o ) Data/Vector/Fusion/Stream/Size.hs:25:10: Warning: No explicit implementation for '*', 'abs', and 'signum' In the instance declaration for 'Num Size' [ 4 of 19] Compiling Data.Vector.Internal.Check ( Data/Vector/Internal/Check.hs, dist/build/Data/Vector/Internal/Check.o ) [ 5 of 19] Compiling Data.Vector.Fusion.Stream.Monadic ( Data/Vector/Fusion/Stream/Monadic.hs, dist/build/Data/Vector/Fusion/Stream/Monadic.o ) Loading package ghc-prim ... linking ... done. Loading package integer-gmp ... linking ... done. Loading package base ... linking ... done. Loading package primitive-0.5.4.0 ... : can't load .so/.DLL for: libHSprimitive-0.5.4.0.so (libHSprimitive-0.5.4.0.so: cannot open shared object file: No such file or directory) Failed to install vector-0.10.12.2 cabal: Error: some packages failed to install: vector-0.10.12.2 failed during the building phase. The exception was: ExitFailure 1 On Sun, Jan 11, 2015 at 1:07 AM, Dominick Samperi wrote: > Hello, > > I'm trying to build ghc-7.8 branch under Fedora 21 and I get the > failure diagnostics appended below. The problem seems to be that the > build process cannot satisfy deepseq >=1.2 && <1.4, but I explicitly > installed deepseq-1.3.0.2, and this did not help (same error?). > > Under Fedora 21 Haskell Platform contains ghc-7.6.3, so this is > the boot compiler. The ghc binary for centos65 installs provided I define > /usr/lib64/libgmp.so.3 as a symbolic link to /usr/lib64/libgmp.so.10, > a risky move. > > Any tips would be much appreciated. > > Thanks, > Dominick > > > ===--- building phase 0 > make -r --no-print-directory -f ghc.mk phase=0 phase_0_builds > make[1]: Nothing to be done for 'phase_0_builds'. > ===--- building phase 1 > make -r --no-print-directory -f ghc.mk phase=1 phase_1_builds > "inplace/bin/ghc-cabal" check libraries/containers > 'ghc-options: -O2' is rarely needed. Check that it is giving a real > benefit and not just imposing longer compile times on your users. 
> "inplace/bin/ghc-cabal" configure libraries/containers dist-install "" > --with-ghc="/home/dsamperi/install/git/ghc/inplace/bin/ghc-stage1" > --with-ghc-pkg="/home/dsamperi/install/git/ghc/inplace/bin/ghc-pkg" > --disable-library-for-ghci --enable-library-vanilla > --disable-library-profiling --enable-shared > --configure-option=CFLAGS=" -fno-stack-protector " > --configure-option=LDFLAGS=" " --configure-option=CPPFLAGS=" " > --gcc-options=" -fno-stack-protector " --with-gcc="/usr/bin/gcc" > --with-ld="/usr/bin/ld" --configure-option=--with-cc="/usr/bin/gcc" > --with-ar="/usr/bin/ar" --with-ranlib="/usr/bin/ranlib" > --with-alex="/home/dsamperi/.cabal/bin/alex" > --with-happy="/home/dsamperi/.cabal/bin/happy" > Configuring containers-0.5.5.1... > ghc-cabal: At least the following dependencies are missing: > deepseq >=1.2 && <1.4 > libraries/containers/ghc.mk:4: recipe for target > 'libraries/containers/dist-install/package-data.mk' failed > make[1]: *** [libraries/containers/dist-install/package-data.mk] Error 1 > Makefile:71: recipe for target 'all' failed > make: *** [all] Error 2 From ezyang at mit.edu Sun Jan 11 06:44:19 2015 From: ezyang at mit.edu (Edward Z. Yang) Date: Sat, 10 Jan 2015 22:44:19 -0800 Subject: Build failure under Fedora 21 In-Reply-To: References: Message-ID: <1420958479-sup-5449@sabre> What you cabal installed should be irrelevant, since containers is never built by the boot compiler. In your full log, is 'deepseq' registered before the build process attempts to register 'containers'? Is there a deepseq file in inplace/lib/package.conf.d? Edward Excerpts from Dominick Samperi's message of 2015-01-10 22:07:18 -0800: > Hello, > > I'm trying to build ghc-7.8 branch under Fedora 21 and I get the > failure diagnostics appended below. The problem seems to be that the > build process cannot satisfy deepseq >=1.2 && <1.4, but I explicitly > installed deepseq-1.3.0.2, and this did not help (same error?). > > Under Fedora 21 Haskell Platform contains ghc-7.6.3, so this is > the boot compiler. The ghc binary for centos65 installs provided I define > /usr/lib64/libgmp.so.3 as a symbolic link to /usr/lib64/libgmp.so.10, > a risky move. > > Any tips would be much appreciated. > > Thanks, > Dominick > > > ===--- building phase 0 > make -r --no-print-directory -f ghc.mk phase=0 phase_0_builds > make[1]: Nothing to be done for 'phase_0_builds'. > ===--- building phase 1 > make -r --no-print-directory -f ghc.mk phase=1 phase_1_builds > "inplace/bin/ghc-cabal" check libraries/containers > 'ghc-options: -O2' is rarely needed. Check that it is giving a real > benefit and not just imposing longer compile times on your users. > "inplace/bin/ghc-cabal" configure libraries/containers dist-install "" > --with-ghc="/home/dsamperi/install/git/ghc/inplace/bin/ghc-stage1" > --with-ghc-pkg="/home/dsamperi/install/git/ghc/inplace/bin/ghc-pkg" > --disable-library-for-ghci --enable-library-vanilla > --disable-library-profiling --enable-shared > --configure-option=CFLAGS=" -fno-stack-protector " > --configure-option=LDFLAGS=" " --configure-option=CPPFLAGS=" " > --gcc-options=" -fno-stack-protector " --with-gcc="/usr/bin/gcc" > --with-ld="/usr/bin/ld" --configure-option=--with-cc="/usr/bin/gcc" > --with-ar="/usr/bin/ar" --with-ranlib="/usr/bin/ranlib" > --with-alex="/home/dsamperi/.cabal/bin/alex" > --with-happy="/home/dsamperi/.cabal/bin/happy" > Configuring containers-0.5.5.1... 
> ghc-cabal: At least the following dependencies are missing: > deepseq >=1.2 && <1.4 > libraries/containers/ghc.mk:4: recipe for target > 'libraries/containers/dist-install/package-data.mk' failed > make[1]: *** [libraries/containers/dist-install/package-data.mk] Error 1 > Makefile:71: recipe for target 'all' failed > make: *** [all] Error 2 From roma at ro-che.info Sun Jan 11 10:04:25 2015 From: roma at ro-che.info (Roman Cheplyaka) Date: Sun, 11 Jan 2015 12:04:25 +0200 Subject: Build failure under Fedora 21 In-Reply-To: References: Message-ID: <54B24AA9.3060503@ro-che.info> Do you know you can simply install the ghc binary distribution without having to compile anything? https://www.haskell.org/ghc/download_ghc_7_8_4 On 11/01/15 08:07, Dominick Samperi wrote: > Hello, > > I'm trying to build ghc-7.8 branch under Fedora 21 and I get the > failure diagnostics appended below. The problem seems to be that the > build process cannot satisfy deepseq >=1.2 && <1.4, but I explicitly > installed deepseq-1.3.0.2, and this did not help (same error?). > > Under Fedora 21 Haskell Platform contains ghc-7.6.3, so this is > the boot compiler. The ghc binary for centos65 installs provided I define > /usr/lib64/libgmp.so.3 as a symbolic link to /usr/lib64/libgmp.so.10, > a risky move. > > Any tips would be much appreciated. > > Thanks, > Dominick > > > ===--- building phase 0 > make -r --no-print-directory -f ghc.mk phase=0 phase_0_builds > make[1]: Nothing to be done for 'phase_0_builds'. > ===--- building phase 1 > make -r --no-print-directory -f ghc.mk phase=1 phase_1_builds > "inplace/bin/ghc-cabal" check libraries/containers > 'ghc-options: -O2' is rarely needed. Check that it is giving a real > benefit and not just imposing longer compile times on your users. > "inplace/bin/ghc-cabal" configure libraries/containers dist-install "" > --with-ghc="/home/dsamperi/install/git/ghc/inplace/bin/ghc-stage1" > --with-ghc-pkg="/home/dsamperi/install/git/ghc/inplace/bin/ghc-pkg" > --disable-library-for-ghci --enable-library-vanilla > --disable-library-profiling --enable-shared > --configure-option=CFLAGS=" -fno-stack-protector " > --configure-option=LDFLAGS=" " --configure-option=CPPFLAGS=" " > --gcc-options=" -fno-stack-protector " --with-gcc="/usr/bin/gcc" > --with-ld="/usr/bin/ld" --configure-option=--with-cc="/usr/bin/gcc" > --with-ar="/usr/bin/ar" --with-ranlib="/usr/bin/ranlib" > --with-alex="/home/dsamperi/.cabal/bin/alex" > --with-happy="/home/dsamperi/.cabal/bin/happy" > Configuring containers-0.5.5.1... > ghc-cabal: At least the following dependencies are missing: > deepseq >=1.2 && <1.4 > libraries/containers/ghc.mk:4: recipe for target > 'libraries/containers/dist-install/package-data.mk' failed > make[1]: *** [libraries/containers/dist-install/package-data.mk] Error 1 > Makefile:71: recipe for target 'all' failed > make: *** [all] Error 2 > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > . > From roma at ro-che.info Sun Jan 11 14:25:52 2015 From: roma at ro-che.info (Roman Cheplyaka) Date: Sun, 11 Jan 2015 16:25:52 +0200 Subject: seq#: do we actually need it as a primitive? 
In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF5629C301@DB3PRD3001MB020.064d.mgd.msft.net> References: <1420704021-sup-5888@sabre> <54AE8933.7030806@ro-che.info> <618BE556AADD624C9C918AA5D5911BEF5629C301@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <54B287F0.7090906@ro-che.info> I made an attempt at a better documentation for evaluate. See here: https://phabricator.haskell.org/D615 From ky3 at atamo.com Sun Jan 11 17:07:02 2015 From: ky3 at atamo.com (Kim-Ee Yeoh) Date: Mon, 12 Jan 2015 00:07:02 +0700 Subject: seq#: do we actually need it as a primitive? In-Reply-To: <54B287F0.7090906@ro-che.info> References: <1420704021-sup-5888@sabre> <54AE8933.7030806@ro-che.info> <618BE556AADD624C9C918AA5D5911BEF5629C301@DB3PRD3001MB020.064d.mgd.msft.net> <54B287F0.7090906@ro-che.info> Message-ID: On Sun, Jan 11, 2015 at 9:25 PM, Roman Cheplyaka wrote: > I made an attempt at a better documentation for evaluate. > See here: https://phabricator.haskell.org/D615 > Wunderbar. I especially liked the prescription at the end on when to use evaluate and when to prefer (return $!). -- Kim-Ee -------------- next part -------------- An HTML attachment was scrubbed... URL: From johan.tibell at gmail.com Sun Jan 11 17:28:14 2015 From: johan.tibell at gmail.com (Johan Tibell) Date: Sun, 11 Jan 2015 18:28:14 +0100 Subject: Clarification of HsBang and isBanged In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF5629B640@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF5629B640@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: Those comments and the renaming really help. Here are a couple of more questions I got after exploring some more: DsMeta.repBangTy look wrong to me: repBangTy :: LBangType Name -> DsM (Core (TH.StrictTypeQ)) repBangTy ty= do MkC s <- rep2 str [] MkC t <- repLTy ty' rep2 strictTypeName [s, t] where (str, ty') = case ty of L _ (HsBangTy (HsSrcBang (Just True) True) ty) -> (unpackedName, ty) L _ (HsBangTy (HsSrcBang _ True) ty) -> (isStrictName, ty) _ -> (notStrictName, ty) Shouldn't the second case look at whether -funbox-strict-fields or -funbox-small-strict-fields is set and use unpackedName instead of isStrictName if so? What is repBangTy for? A related question, in MkId.dataConArgRep we have: dataConArgRep _ _ arg_ty HsStrict = strict_but_not_unpacked arg_ty Here we're not looking at -funbox-strict-fields and -funbox-small-strict-fields. Is it the case that we only need to look at these flags in the case of HsSrcBang, because HsStrict can only be generated by us (and we presumably looked at the flags when we converted a HsSrcBang to a HsStrict)? On Thu, Jan 8, 2015 at 4:09 PM, Simon Peyton Jones wrote: > I?m glad you are getting back to strictness. > > > > Good questions. > > > > I?ve pushed (or will as soon as I have validated) a patch that adds type > synonyms, updates comments (some of which were indeed misleading), and > changes a few names for clarity and consistency. I hope that answers all > your questions. > > > > Except these: > > > > ? Why is there a coercion in `HsUnpack` but not in `HsUserBang > (Just True) True`? Because the former is implementation generated but the > latter is source code specified. > > ? Why isn't this information split over two data types. Because > there?s a bit of overlap. 
See comments with HsSrcBang > > > > Simon > > > > *From:* Johan Tibell [mailto:johan.tibell at gmail.com] > *Sent:* 08 January 2015 07:36 > *To:* ghc-devs at haskell.org > *Cc:* Simon Peyton Jones > *Subject:* Clarification of HsBang and isBanged > > > > HsBang is defined as: > > -- HsBang describes what the *programmer* wrote > > -- This info is retained in the DataCon.dcStrictMarks field > > data HsBang > > = HsUserBang -- The user's source-code request > > (Maybe Bool) -- Just True {-# UNPACK #-} > > -- Just False {-# NOUNPACK #-} > > -- Nothing no pragma > > Bool -- True <=> '!' specified > > > > | HsNoBang -- Lazy field > > -- HsUserBang Nothing False means the same > as HsNoBang > > > > | HsUnpack -- Definite commitment: this field is strict > and unboxed > > (Maybe Coercion) -- co :: arg-ty ~ product-ty > > > > | HsStrict -- Definite commitment: this field is strict > but not unboxed > > > This data type is a bit unclear to me: > > * What are the reasons for the following constructor overlaps? > * `HsNoBang` and `HsUserBang Nothing False` > * `HsStrict` and `HsUserBang Nothing True` > * `HsUnpack mb_co` and `HsUserBang (Just True) True` > > > * Why is there a coercion in `HsUnpack` but not in `HsUserBang (Just True) > True`? > > > > * Is there a difference in what the user wrote in the case of HsUserBang > and HsNoBang/HsUnpack/HsStrict e.g are the latter three generated by the > compiler as opposed to being written by the user (the function > documentation notwithstanding)? > > A very related function is isBanged: > > isBanged :: HsBang -> Bool > > isBanged HsNoBang = False > > isBanged (HsUserBang Nothing bang) = bang > > isBanged _ = True > > > > What's the meaning of this function? Is it intended to communicate what > the user wrote or whether result of what the user wrote results in a strict > function? > > > Context: I'm adding a new StrictData language pragma [1] that makes fields > strict by default and a '~' annotation of fields to reverse the default > behavior. My intention is to change HsBang like so: > > - Bool -- True <=> '!' specified > + (Maybe Bool) -- True <=> '!' specified, False <=> '~' > + -- specified, Nothing <=> unspecified > > 1. https://ghc.haskell.org/trac/ghc/wiki/StrictPragma > > > -- Johan > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johan.tibell at gmail.com Sun Jan 11 19:11:57 2015 From: johan.tibell at gmail.com (Johan Tibell) Date: Sun, 11 Jan 2015 20:11:57 +0100 Subject: Clarification of HsBang and isBanged In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF5629B640@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: Yet another one. TcSplice.reifyStrict doesn't take the unboxing flags into account either. Should it? reifyStrict :: DataCon.HsSrcBang -> TH.Strict reifyStrict HsNoBang = TH.NotStrict reifyStrict (HsSrcBang _ False) = TH.NotStrict reifyStrict (HsSrcBang (Just True) True) = TH.Unpacked reifyStrict (HsSrcBang _ True) = TH.IsStrict reifyStrict HsStrict = TH.IsStrict reifyStrict (HsUnpack {}) = TH.Unpacked Should reifyStrict (HsSrcBang _ True) = TH.IsStrict be TH.Unpacked if we have -funbox-strict-fields? On Sun, Jan 11, 2015 at 6:28 PM, Johan Tibell wrote: > Those comments and the renaming really help. 
Here are a couple of more > questions I got after exploring some more: > > DsMeta.repBangTy look wrong to me: > > repBangTy :: LBangType Name -> DsM (Core (TH.StrictTypeQ)) > repBangTy ty= do > MkC s <- rep2 str [] > MkC t <- repLTy ty' > rep2 strictTypeName [s, t] > where > (str, ty') = case ty of > L _ (HsBangTy (HsSrcBang (Just True) True) ty) -> > (unpackedName, ty) > L _ (HsBangTy (HsSrcBang _ True) ty) -> > (isStrictName, ty) > _ -> > (notStrictName, ty) > > Shouldn't the second case look at whether -funbox-strict-fields or > -funbox-small-strict-fields is set and use unpackedName instead of > isStrictName if so? What is repBangTy for? > > A related question, in MkId.dataConArgRep we have: > > dataConArgRep _ _ arg_ty HsStrict > = strict_but_not_unpacked arg_ty > > Here we're not looking at -funbox-strict-fields > and -funbox-small-strict-fields. Is it the case that we only need to look > at these flags in the case of HsSrcBang, because HsStrict can only be > generated by us (and we presumably looked at the flags when we converted a > HsSrcBang to a HsStrict)? > > On Thu, Jan 8, 2015 at 4:09 PM, Simon Peyton Jones > wrote: > >> I?m glad you are getting back to strictness. >> >> >> >> Good questions. >> >> >> >> I?ve pushed (or will as soon as I have validated) a patch that adds type >> synonyms, updates comments (some of which were indeed misleading), and >> changes a few names for clarity and consistency. I hope that answers all >> your questions. >> >> >> >> Except these: >> >> >> >> ? Why is there a coercion in `HsUnpack` but not in `HsUserBang >> (Just True) True`? Because the former is implementation generated but the >> latter is source code specified. >> >> ? Why isn't this information split over two data types. Because >> there?s a bit of overlap. See comments with HsSrcBang >> >> >> >> Simon >> >> >> >> *From:* Johan Tibell [mailto:johan.tibell at gmail.com] >> *Sent:* 08 January 2015 07:36 >> *To:* ghc-devs at haskell.org >> *Cc:* Simon Peyton Jones >> *Subject:* Clarification of HsBang and isBanged >> >> >> >> HsBang is defined as: >> >> -- HsBang describes what the *programmer* wrote >> >> -- This info is retained in the DataCon.dcStrictMarks field >> >> data HsBang >> >> = HsUserBang -- The user's source-code request >> >> (Maybe Bool) -- Just True {-# UNPACK #-} >> >> -- Just False {-# NOUNPACK #-} >> >> -- Nothing no pragma >> >> Bool -- True <=> '!' specified >> >> >> >> | HsNoBang -- Lazy field >> >> -- HsUserBang Nothing False means the same >> as HsNoBang >> >> >> >> | HsUnpack -- Definite commitment: this field is >> strict and unboxed >> >> (Maybe Coercion) -- co :: arg-ty ~ product-ty >> >> >> >> | HsStrict -- Definite commitment: this field is >> strict but not unboxed >> >> >> This data type is a bit unclear to me: >> >> * What are the reasons for the following constructor overlaps? >> * `HsNoBang` and `HsUserBang Nothing False` >> * `HsStrict` and `HsUserBang Nothing True` >> * `HsUnpack mb_co` and `HsUserBang (Just True) True` >> >> >> * Why is there a coercion in `HsUnpack` but not in `HsUserBang (Just >> True) True`? >> >> >> >> * Is there a difference in what the user wrote in the case of HsUserBang >> and HsNoBang/HsUnpack/HsStrict e.g are the latter three generated by the >> compiler as opposed to being written by the user (the function >> documentation notwithstanding)? 
>> >> A very related function is isBanged: >> >> isBanged :: HsBang -> Bool >> >> isBanged HsNoBang = False >> >> isBanged (HsUserBang Nothing bang) = bang >> >> isBanged _ = True >> >> >> >> What's the meaning of this function? Is it intended to communicate what >> the user wrote or whether result of what the user wrote results in a strict >> function? >> >> >> Context: I'm adding a new StrictData language pragma [1] that makes >> fields strict by default and a '~' annotation of fields to reverse the >> default behavior. My intention is to change HsBang like so: >> >> - Bool -- True <=> '!' specified >> + (Maybe Bool) -- True <=> '!' specified, False <=> '~' >> + -- specified, Nothing <=> unspecified >> >> 1. https://ghc.haskell.org/trac/ghc/wiki/StrictPragma >> >> >> -- Johan >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johan.tibell at gmail.com Sun Jan 11 22:27:19 2015 From: johan.tibell at gmail.com (Johan Tibell) Date: Sun, 11 Jan 2015 23:27:19 +0100 Subject: Clarification of HsBang and isBanged In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF5629B640@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: Yet more questions. I think I'm on the wrong track. I was trying to change MkId.dataConArgRep in order to make user-defined fields get the right strictness. However, some debug tracing suggests that this function isn't used (or isn't only used) to compute the strictness and "unpackedness" of a data constructor defined in the module being compiled, but also for modules being imported. Is that correct? The code (including tests) is here: https://github.com/ghc/ghc/compare/601e345e5df6%5E...1cee34c71e80 The parser changes I'm making seem to not be quite right. I've changed the strict_mark parser in Parser.y to read: strict_mark :: { Located ([AddAnn],HsBang) } : '!' { sL1 $1 ([], HsSrcBang Nothing (Just True)) } | '~' { sL1 $1 ([], HsSrcBang Nothing (Just False)) } | '{-# UNPACK' '#-}' { sLL $1 $> ([mo $1,mc $2], HsSrcBang (Just True) Nothing) } | '{-# NOUNPACK' '#-}' { sLL $1 $> ([mo $1,mc $2], HsSrcBang (Just False) Nothing) } | '{-# UNPACK' '#-}' '!' { sLL $1 $> ([mo $1,mc $2], HsSrcBang (Just True) (Just True)) } | '{-# NOUNPACK' '#-}' '!' { sLL $1 $> ([mo $1,mc $2], HsSrcBang (Just False) (Just True)) } | '{-# UNPACK' '#-}' '~' { sLL $1 $> ([mo $1,mc $2], HsSrcBang (Just True) (Just False)) } | '{-# NOUNPACK' '#-}' '~' { sLL $1 $> ([mo $1,mc $2], HsSrcBang (Just False) (Just False)) } -- Although UNPACK with no '!' and UNPACK with '~' are illegal, we get a -- better error message if we parse them here but parsing this data type data Lazy a = L ~a gives this error DsStrictData.hs:14:1: parse error (possibly incorrect indentation or mismatched brackets) -- Johan On Sun, Jan 11, 2015 at 8:11 PM, Johan Tibell wrote: > Yet another one. TcSplice.reifyStrict doesn't take the unboxing flags into > account either. Should it? > > reifyStrict :: DataCon.HsSrcBang -> TH.Strict > reifyStrict HsNoBang = TH.NotStrict > reifyStrict (HsSrcBang _ False) = TH.NotStrict > reifyStrict (HsSrcBang (Just True) True) = TH.Unpacked > reifyStrict (HsSrcBang _ True) = TH.IsStrict > reifyStrict HsStrict = TH.IsStrict > reifyStrict (HsUnpack {}) = TH.Unpacked > > Should > > reifyStrict (HsSrcBang _ True) = TH.IsStrict > > be TH.Unpacked if we have -funbox-strict-fields? > > On Sun, Jan 11, 2015 at 6:28 PM, Johan Tibell > wrote: > >> Those comments and the renaming really help. 
Here are a couple of more >> questions I got after exploring some more: >> >> DsMeta.repBangTy look wrong to me: >> >> repBangTy :: LBangType Name -> DsM (Core (TH.StrictTypeQ)) >> repBangTy ty= do >> MkC s <- rep2 str [] >> MkC t <- repLTy ty' >> rep2 strictTypeName [s, t] >> where >> (str, ty') = case ty of >> L _ (HsBangTy (HsSrcBang (Just True) True) ty) -> >> (unpackedName, ty) >> L _ (HsBangTy (HsSrcBang _ True) ty) -> >> (isStrictName, ty) >> _ -> >> (notStrictName, ty) >> >> Shouldn't the second case look at whether -funbox-strict-fields or >> -funbox-small-strict-fields is set and use unpackedName instead of >> isStrictName if so? What is repBangTy for? >> >> A related question, in MkId.dataConArgRep we have: >> >> dataConArgRep _ _ arg_ty HsStrict >> = strict_but_not_unpacked arg_ty >> >> Here we're not looking at -funbox-strict-fields >> and -funbox-small-strict-fields. Is it the case that we only need to look >> at these flags in the case of HsSrcBang, because HsStrict can only be >> generated by us (and we presumably looked at the flags when we converted a >> HsSrcBang to a HsStrict)? >> >> On Thu, Jan 8, 2015 at 4:09 PM, Simon Peyton Jones > > wrote: >> >>> I?m glad you are getting back to strictness. >>> >>> >>> >>> Good questions. >>> >>> >>> >>> I?ve pushed (or will as soon as I have validated) a patch that adds type >>> synonyms, updates comments (some of which were indeed misleading), and >>> changes a few names for clarity and consistency. I hope that answers all >>> your questions. >>> >>> >>> >>> Except these: >>> >>> >>> >>> ? Why is there a coercion in `HsUnpack` but not in `HsUserBang >>> (Just True) True`? Because the former is implementation generated but the >>> latter is source code specified. >>> >>> ? Why isn't this information split over two data types. >>> Because there?s a bit of overlap. See comments with HsSrcBang >>> >>> >>> >>> Simon >>> >>> >>> >>> *From:* Johan Tibell [mailto:johan.tibell at gmail.com] >>> *Sent:* 08 January 2015 07:36 >>> *To:* ghc-devs at haskell.org >>> *Cc:* Simon Peyton Jones >>> *Subject:* Clarification of HsBang and isBanged >>> >>> >>> >>> HsBang is defined as: >>> >>> -- HsBang describes what the *programmer* wrote >>> >>> -- This info is retained in the DataCon.dcStrictMarks field >>> >>> data HsBang >>> >>> = HsUserBang -- The user's source-code request >>> >>> (Maybe Bool) -- Just True {-# UNPACK #-} >>> >>> -- Just False {-# NOUNPACK #-} >>> >>> -- Nothing no pragma >>> >>> Bool -- True <=> '!' specified >>> >>> >>> >>> | HsNoBang -- Lazy field >>> >>> -- HsUserBang Nothing False means the same >>> as HsNoBang >>> >>> >>> >>> | HsUnpack -- Definite commitment: this field is >>> strict and unboxed >>> >>> (Maybe Coercion) -- co :: arg-ty ~ product-ty >>> >>> >>> >>> | HsStrict -- Definite commitment: this field is >>> strict but not unboxed >>> >>> >>> This data type is a bit unclear to me: >>> >>> * What are the reasons for the following constructor overlaps? >>> * `HsNoBang` and `HsUserBang Nothing False` >>> * `HsStrict` and `HsUserBang Nothing True` >>> * `HsUnpack mb_co` and `HsUserBang (Just True) True` >>> >>> >>> * Why is there a coercion in `HsUnpack` but not in `HsUserBang (Just >>> True) True`? >>> >>> >>> >>> * Is there a difference in what the user wrote in the case of HsUserBang >>> and HsNoBang/HsUnpack/HsStrict e.g are the latter three generated by the >>> compiler as opposed to being written by the user (the function >>> documentation notwithstanding)? 
>>> >>> A very related function is isBanged: >>> >>> isBanged :: HsBang -> Bool >>> >>> isBanged HsNoBang = False >>> >>> isBanged (HsUserBang Nothing bang) = bang >>> >>> isBanged _ = True >>> >>> >>> >>> What's the meaning of this function? Is it intended to communicate what >>> the user wrote or whether result of what the user wrote results in a strict >>> function? >>> >>> >>> Context: I'm adding a new StrictData language pragma [1] that makes >>> fields strict by default and a '~' annotation of fields to reverse the >>> default behavior. My intention is to change HsBang like so: >>> >>> - Bool -- True <=> '!' specified >>> + (Maybe Bool) -- True <=> '!' specified, False <=> '~' >>> + -- specified, Nothing <=> unspecified >>> >>> 1. https://ghc.haskell.org/trac/ghc/wiki/StrictPragma >>> >>> >>> -- Johan >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Sun Jan 11 22:56:03 2015 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Sun, 11 Jan 2015 22:56:03 +0000 Subject: Clarification of HsBang and isBanged In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF5629B640@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <618BE556AADD624C9C918AA5D5911BEF5629E575@DB3PRD3001MB020.064d.mgd.msft.net> OK, so one thing I failed to explain in the comment is this: for imported DataCons, the dcSrcBangs field is precisely the [HsImplBang] decisions computed when compiling the defining module. So if the defining module was compiled with ?O ?funbox-strict-fields, GHC will make one set of choices. Those [HsImplBang] choices are recorded in the IfaceBangs in the interface file. Those IfaceBangs in turn get put back into the dcSrcBangs field of the DataCon constructed by TcIface when reading the interface file. So when the dcSrcBangs are in fact HsImpBangs, they should be followed slavishly, because the decisions have already been taken. Even if the ?O or ?funbox-strict-fields flags differ in the importing module from the defining module. I?ve rewritten the comment below. Maybe the field should not be called dcSrcBangs but dcOrigBangs? Simon {- Note [Bangs on data constructor arguments] ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Consider data T = MkT !Int {-# UNPACK #-} !Int Bool When compiling the module, GHC will decide how to represent MkT, depending on the optimisation level, and settings of flags like -funbox-small-strict-fields. Terminology: * HsSrcBang: What the user wrote Constructors: HsNoBang, HsUserBang * HsImplBang: What GHC decided Constructors: HsNoBang, HsStrict, HsUnpack * If T was defined in this module, MkT's dcSrcBangs field records the [HsSrcBang] of what the user wrote; in the example [ HsSrcBang Nothing True , HsSrcBang (Just True) True , HsNoBang] * However, if T was defined in an imported module, MkT's dcSrcBangs field gives the [HsImplBang] recording the decisions of the defining module. The importing module must follow those decisions, regardless of the flag settings in the importing module. * The dcr_bangs field of the dcRep field records the [HsImplBang] If T was defined in this module, Without -O the dcr_bangs might be [HsStrict, HsStrict, HsNoBang] With -O it might be [HsStrict, HsUnpack, HsNoBang] With -funbox-small-strict-fields it might be [HsUnpack, HsUnpack, HsNoBang] From: Johan Tibell [mailto:johan.tibell at gmail.com] Sent: 11 January 2015 17:28 To: Simon Peyton Jones Cc: ghc-devs at haskell.org Subject: Re: Clarification of HsBang and isBanged Those comments and the renaming really help. 
Here are a couple of more questions I got after exploring some more: DsMeta.repBangTy look wrong to me: repBangTy :: LBangType Name -> DsM (Core (TH.StrictTypeQ)) repBangTy ty= do MkC s <- rep2 str [] MkC t <- repLTy ty' rep2 strictTypeName [s, t] where (str, ty') = case ty of L _ (HsBangTy (HsSrcBang (Just True) True) ty) -> (unpackedName, ty) L _ (HsBangTy (HsSrcBang _ True) ty) -> (isStrictName, ty) _ -> (notStrictName, ty) Shouldn't the second case look at whether -funbox-strict-fields or -funbox-small-strict-fields is set and use unpackedName instead of isStrictName if so? What is repBangTy for? A related question, in MkId.dataConArgRep we have: dataConArgRep _ _ arg_ty HsStrict = strict_but_not_unpacked arg_ty Here we're not looking at -funbox-strict-fields and -funbox-small-strict-fields. Is it the case that we only need to look at these flags in the case of HsSrcBang, because HsStrict can only be generated by us (and we presumably looked at the flags when we converted a HsSrcBang to a HsStrict)? On Thu, Jan 8, 2015 at 4:09 PM, Simon Peyton Jones > wrote: I?m glad you are getting back to strictness. Good questions. I?ve pushed (or will as soon as I have validated) a patch that adds type synonyms, updates comments (some of which were indeed misleading), and changes a few names for clarity and consistency. I hope that answers all your questions. Except these: ? Why is there a coercion in `HsUnpack` but not in `HsUserBang (Just True) True`? Because the former is implementation generated but the latter is source code specified. ? Why isn't this information split over two data types. Because there?s a bit of overlap. See comments with HsSrcBang Simon From: Johan Tibell [mailto:johan.tibell at gmail.com] Sent: 08 January 2015 07:36 To: ghc-devs at haskell.org Cc: Simon Peyton Jones Subject: Clarification of HsBang and isBanged HsBang is defined as: -- HsBang describes what the *programmer* wrote -- This info is retained in the DataCon.dcStrictMarks field data HsBang = HsUserBang -- The user's source-code request (Maybe Bool) -- Just True {-# UNPACK #-} -- Just False {-# NOUNPACK #-} -- Nothing no pragma Bool -- True <=> '!' specified | HsNoBang -- Lazy field -- HsUserBang Nothing False means the same as HsNoBang | HsUnpack -- Definite commitment: this field is strict and unboxed (Maybe Coercion) -- co :: arg-ty ~ product-ty | HsStrict -- Definite commitment: this field is strict but not unboxed This data type is a bit unclear to me: * What are the reasons for the following constructor overlaps? * `HsNoBang` and `HsUserBang Nothing False` * `HsStrict` and `HsUserBang Nothing True` * `HsUnpack mb_co` and `HsUserBang (Just True) True` * Why is there a coercion in `HsUnpack` but not in `HsUserBang (Just True) True`? * Is there a difference in what the user wrote in the case of HsUserBang and HsNoBang/HsUnpack/HsStrict e.g are the latter three generated by the compiler as opposed to being written by the user (the function documentation notwithstanding)? A very related function is isBanged: isBanged :: HsBang -> Bool isBanged HsNoBang = False isBanged (HsUserBang Nothing bang) = bang isBanged _ = True What's the meaning of this function? Is it intended to communicate what the user wrote or whether result of what the user wrote results in a strict function? Context: I'm adding a new StrictData language pragma [1] that makes fields strict by default and a '~' annotation of fields to reverse the default behavior. My intention is to change HsBang like so: - Bool -- True <=> '!' 
specified + (Maybe Bool) -- True <=> '!' specified, False <=> '~' + -- specified, Nothing <=> unspecified 1. https://ghc.haskell.org/trac/ghc/wiki/StrictPragma -- Johan -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Sun Jan 11 22:56:56 2015 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Sun, 11 Jan 2015 22:56:56 +0000 Subject: Clarification of HsBang and isBanged In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF5629B640@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <618BE556AADD624C9C918AA5D5911BEF5629E58F@DB3PRD3001MB020.064d.mgd.msft.net> Correct. And when dealing with an imported DataCon, it must slavishly follow the decisions taken in the defining module S From: Johan Tibell [mailto:johan.tibell at gmail.com] Sent: 11 January 2015 22:27 To: Simon Peyton Jones Cc: ghc-devs at haskell.org Subject: Re: Clarification of HsBang and isBanged Yet more questions. I think I'm on the wrong track. I was trying to change MkId.dataConArgRep in order to make user-defined fields get the right strictness. However, some debug tracing suggests that this function isn't used (or isn't only used) to compute the strictness and "unpackedness" of a data constructor defined in the module being compiled, but also for modules being imported. Is that correct? The code (including tests) is here: https://github.com/ghc/ghc/compare/601e345e5df6%5E...1cee34c71e80 The parser changes I'm making seem to not be quite right. I've changed the strict_mark parser in Parser.y to read: strict_mark :: { Located ([AddAnn],HsBang) } : '!' { sL1 $1 ([], HsSrcBang Nothing (Just True)) } | '~' { sL1 $1 ([], HsSrcBang Nothing (Just False)) } | '{-# UNPACK' '#-}' { sLL $1 $> ([mo $1,mc $2], HsSrcBang (Just True) Nothing) } | '{-# NOUNPACK' '#-}' { sLL $1 $> ([mo $1,mc $2], HsSrcBang (Just False) Nothing) } | '{-# UNPACK' '#-}' '!' { sLL $1 $> ([mo $1,mc $2], HsSrcBang (Just True) (Just True)) } | '{-# NOUNPACK' '#-}' '!' { sLL $1 $> ([mo $1,mc $2], HsSrcBang (Just False) (Just True)) } | '{-# UNPACK' '#-}' '~' { sLL $1 $> ([mo $1,mc $2], HsSrcBang (Just True) (Just False)) } | '{-# NOUNPACK' '#-}' '~' { sLL $1 $> ([mo $1,mc $2], HsSrcBang (Just False) (Just False)) } -- Although UNPACK with no '!' and UNPACK with '~' are illegal, we get a -- better error message if we parse them here but parsing this data type data Lazy a = L ~a gives this error DsStrictData.hs:14:1: parse error (possibly incorrect indentation or mismatched brackets) -- Johan On Sun, Jan 11, 2015 at 8:11 PM, Johan Tibell > wrote: Yet another one. TcSplice.reifyStrict doesn't take the unboxing flags into account either. Should it? reifyStrict :: DataCon.HsSrcBang -> TH.Strict reifyStrict HsNoBang = TH.NotStrict reifyStrict (HsSrcBang _ False) = TH.NotStrict reifyStrict (HsSrcBang (Just True) True) = TH.Unpacked reifyStrict (HsSrcBang _ True) = TH.IsStrict reifyStrict HsStrict = TH.IsStrict reifyStrict (HsUnpack {}) = TH.Unpacked Should reifyStrict (HsSrcBang _ True) = TH.IsStrict be TH.Unpacked if we have -funbox-strict-fields? On Sun, Jan 11, 2015 at 6:28 PM, Johan Tibell > wrote: Those comments and the renaming really help. 
Here are a couple of more questions I got after exploring some more: DsMeta.repBangTy look wrong to me: repBangTy :: LBangType Name -> DsM (Core (TH.StrictTypeQ)) repBangTy ty= do MkC s <- rep2 str [] MkC t <- repLTy ty' rep2 strictTypeName [s, t] where (str, ty') = case ty of L _ (HsBangTy (HsSrcBang (Just True) True) ty) -> (unpackedName, ty) L _ (HsBangTy (HsSrcBang _ True) ty) -> (isStrictName, ty) _ -> (notStrictName, ty) Shouldn't the second case look at whether -funbox-strict-fields or -funbox-small-strict-fields is set and use unpackedName instead of isStrictName if so? What is repBangTy for? A related question, in MkId.dataConArgRep we have: dataConArgRep _ _ arg_ty HsStrict = strict_but_not_unpacked arg_ty Here we're not looking at -funbox-strict-fields and -funbox-small-strict-fields. Is it the case that we only need to look at these flags in the case of HsSrcBang, because HsStrict can only be generated by us (and we presumably looked at the flags when we converted a HsSrcBang to a HsStrict)? On Thu, Jan 8, 2015 at 4:09 PM, Simon Peyton Jones > wrote: I?m glad you are getting back to strictness. Good questions. I?ve pushed (or will as soon as I have validated) a patch that adds type synonyms, updates comments (some of which were indeed misleading), and changes a few names for clarity and consistency. I hope that answers all your questions. Except these: ? Why is there a coercion in `HsUnpack` but not in `HsUserBang (Just True) True`? Because the former is implementation generated but the latter is source code specified. ? Why isn't this information split over two data types. Because there?s a bit of overlap. See comments with HsSrcBang Simon From: Johan Tibell [mailto:johan.tibell at gmail.com] Sent: 08 January 2015 07:36 To: ghc-devs at haskell.org Cc: Simon Peyton Jones Subject: Clarification of HsBang and isBanged HsBang is defined as: -- HsBang describes what the *programmer* wrote -- This info is retained in the DataCon.dcStrictMarks field data HsBang = HsUserBang -- The user's source-code request (Maybe Bool) -- Just True {-# UNPACK #-} -- Just False {-# NOUNPACK #-} -- Nothing no pragma Bool -- True <=> '!' specified | HsNoBang -- Lazy field -- HsUserBang Nothing False means the same as HsNoBang | HsUnpack -- Definite commitment: this field is strict and unboxed (Maybe Coercion) -- co :: arg-ty ~ product-ty | HsStrict -- Definite commitment: this field is strict but not unboxed This data type is a bit unclear to me: * What are the reasons for the following constructor overlaps? * `HsNoBang` and `HsUserBang Nothing False` * `HsStrict` and `HsUserBang Nothing True` * `HsUnpack mb_co` and `HsUserBang (Just True) True` * Why is there a coercion in `HsUnpack` but not in `HsUserBang (Just True) True`? * Is there a difference in what the user wrote in the case of HsUserBang and HsNoBang/HsUnpack/HsStrict e.g are the latter three generated by the compiler as opposed to being written by the user (the function documentation notwithstanding)? A very related function is isBanged: isBanged :: HsBang -> Bool isBanged HsNoBang = False isBanged (HsUserBang Nothing bang) = bang isBanged _ = True What's the meaning of this function? Is it intended to communicate what the user wrote or whether result of what the user wrote results in a strict function? Context: I'm adding a new StrictData language pragma [1] that makes fields strict by default and a '~' annotation of fields to reverse the default behavior. My intention is to change HsBang like so: - Bool -- True <=> '!' 
specified + (Maybe Bool) -- True <=> '!' specified, False <=> '~' + -- specified, Nothing <=> unspecified 1. https://ghc.haskell.org/trac/ghc/wiki/StrictPragma -- Johan -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Sun Jan 11 22:58:17 2015 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Sun, 11 Jan 2015 22:58:17 +0000 Subject: Clarification of HsBang and isBanged In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF5629B640@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <618BE556AADD624C9C918AA5D5911BEF5629E5A2@DB3PRD3001MB020.064d.mgd.msft.net> No it shouldn?t. This is TH so we are trying to reify source code. If we have what the user wrote (a HsSrcBang) we just follow it. If we don?t (i.e. HsStrict/HsUnpack) then we just have to do the best we can Simon From: Johan Tibell [mailto:johan.tibell at gmail.com] Sent: 11 January 2015 19:12 To: Simon Peyton Jones Cc: ghc-devs at haskell.org Subject: Re: Clarification of HsBang and isBanged Yet another one. TcSplice.reifyStrict doesn't take the unboxing flags into account either. Should it? reifyStrict :: DataCon.HsSrcBang -> TH.Strict reifyStrict HsNoBang = TH.NotStrict reifyStrict (HsSrcBang _ False) = TH.NotStrict reifyStrict (HsSrcBang (Just True) True) = TH.Unpacked reifyStrict (HsSrcBang _ True) = TH.IsStrict reifyStrict HsStrict = TH.IsStrict reifyStrict (HsUnpack {}) = TH.Unpacked Should reifyStrict (HsSrcBang _ True) = TH.IsStrict be TH.Unpacked if we have -funbox-strict-fields? On Sun, Jan 11, 2015 at 6:28 PM, Johan Tibell > wrote: Those comments and the renaming really help. Here are a couple of more questions I got after exploring some more: DsMeta.repBangTy look wrong to me: repBangTy :: LBangType Name -> DsM (Core (TH.StrictTypeQ)) repBangTy ty= do MkC s <- rep2 str [] MkC t <- repLTy ty' rep2 strictTypeName [s, t] where (str, ty') = case ty of L _ (HsBangTy (HsSrcBang (Just True) True) ty) -> (unpackedName, ty) L _ (HsBangTy (HsSrcBang _ True) ty) -> (isStrictName, ty) _ -> (notStrictName, ty) Shouldn't the second case look at whether -funbox-strict-fields or -funbox-small-strict-fields is set and use unpackedName instead of isStrictName if so? What is repBangTy for? A related question, in MkId.dataConArgRep we have: dataConArgRep _ _ arg_ty HsStrict = strict_but_not_unpacked arg_ty Here we're not looking at -funbox-strict-fields and -funbox-small-strict-fields. Is it the case that we only need to look at these flags in the case of HsSrcBang, because HsStrict can only be generated by us (and we presumably looked at the flags when we converted a HsSrcBang to a HsStrict)? On Thu, Jan 8, 2015 at 4:09 PM, Simon Peyton Jones > wrote: I?m glad you are getting back to strictness. Good questions. I?ve pushed (or will as soon as I have validated) a patch that adds type synonyms, updates comments (some of which were indeed misleading), and changes a few names for clarity and consistency. I hope that answers all your questions. Except these: ? Why is there a coercion in `HsUnpack` but not in `HsUserBang (Just True) True`? Because the former is implementation generated but the latter is source code specified. ? Why isn't this information split over two data types. Because there?s a bit of overlap. 
See comments with HsSrcBang Simon From: Johan Tibell [mailto:johan.tibell at gmail.com] Sent: 08 January 2015 07:36 To: ghc-devs at haskell.org Cc: Simon Peyton Jones Subject: Clarification of HsBang and isBanged HsBang is defined as: -- HsBang describes what the *programmer* wrote -- This info is retained in the DataCon.dcStrictMarks field data HsBang = HsUserBang -- The user's source-code request (Maybe Bool) -- Just True {-# UNPACK #-} -- Just False {-# NOUNPACK #-} -- Nothing no pragma Bool -- True <=> '!' specified | HsNoBang -- Lazy field -- HsUserBang Nothing False means the same as HsNoBang | HsUnpack -- Definite commitment: this field is strict and unboxed (Maybe Coercion) -- co :: arg-ty ~ product-ty | HsStrict -- Definite commitment: this field is strict but not unboxed This data type is a bit unclear to me: * What are the reasons for the following constructor overlaps? * `HsNoBang` and `HsUserBang Nothing False` * `HsStrict` and `HsUserBang Nothing True` * `HsUnpack mb_co` and `HsUserBang (Just True) True` * Why is there a coercion in `HsUnpack` but not in `HsUserBang (Just True) True`? * Is there a difference in what the user wrote in the case of HsUserBang and HsNoBang/HsUnpack/HsStrict e.g are the latter three generated by the compiler as opposed to being written by the user (the function documentation notwithstanding)? A very related function is isBanged: isBanged :: HsBang -> Bool isBanged HsNoBang = False isBanged (HsUserBang Nothing bang) = bang isBanged _ = True What's the meaning of this function? Is it intended to communicate what the user wrote or whether result of what the user wrote results in a strict function? Context: I'm adding a new StrictData language pragma [1] that makes fields strict by default and a '~' annotation of fields to reverse the default behavior. My intention is to change HsBang like so: - Bool -- True <=> '!' specified + (Maybe Bool) -- True <=> '!' specified, False <=> '~' + -- specified, Nothing <=> unspecified 1. https://ghc.haskell.org/trac/ghc/wiki/StrictPragma -- Johan -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Sun Jan 11 23:00:24 2015 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Sun, 11 Jan 2015 23:00:24 +0000 Subject: Clarification of HsBang and isBanged In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF5629B640@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <618BE556AADD624C9C918AA5D5911BEF5629E5B5@DB3PRD3001MB020.064d.mgd.msft.net> Shouldn't the second case look at whether -funbox-strict-fields or -funbox-small-strict-fields is set and use unpackedName instead of isStrictName if so? What is repBangTy for? No, we are generating code that, when run, will generate the TH data structure for a data type declaration. That is, source code. That source code might get compiled with ?O or ?funbox-strict-fields or whatever... but that comes later. At this point we are just generating source code. Simon From: Johan Tibell [mailto:johan.tibell at gmail.com] Sent: 11 January 2015 17:28 To: Simon Peyton Jones Cc: ghc-devs at haskell.org Subject: Re: Clarification of HsBang and isBanged Those comments and the renaming really help. 
Here are a couple of more questions I got after exploring some more: DsMeta.repBangTy look wrong to me: repBangTy :: LBangType Name -> DsM (Core (TH.StrictTypeQ)) repBangTy ty= do MkC s <- rep2 str [] MkC t <- repLTy ty' rep2 strictTypeName [s, t] where (str, ty') = case ty of L _ (HsBangTy (HsSrcBang (Just True) True) ty) -> (unpackedName, ty) L _ (HsBangTy (HsSrcBang _ True) ty) -> (isStrictName, ty) _ -> (notStrictName, ty) Shouldn't the second case look at whether -funbox-strict-fields or -funbox-small-strict-fields is set and use unpackedName instead of isStrictName if so? What is repBangTy for? A related question, in MkId.dataConArgRep we have: dataConArgRep _ _ arg_ty HsStrict = strict_but_not_unpacked arg_ty Here we're not looking at -funbox-strict-fields and -funbox-small-strict-fields. Is it the case that we only need to look at these flags in the case of HsSrcBang, because HsStrict can only be generated by us (and we presumably looked at the flags when we converted a HsSrcBang to a HsStrict)? On Thu, Jan 8, 2015 at 4:09 PM, Simon Peyton Jones > wrote: I?m glad you are getting back to strictness. Good questions. I?ve pushed (or will as soon as I have validated) a patch that adds type synonyms, updates comments (some of which were indeed misleading), and changes a few names for clarity and consistency. I hope that answers all your questions. Except these: ? Why is there a coercion in `HsUnpack` but not in `HsUserBang (Just True) True`? Because the former is implementation generated but the latter is source code specified. ? Why isn't this information split over two data types. Because there?s a bit of overlap. See comments with HsSrcBang Simon From: Johan Tibell [mailto:johan.tibell at gmail.com] Sent: 08 January 2015 07:36 To: ghc-devs at haskell.org Cc: Simon Peyton Jones Subject: Clarification of HsBang and isBanged HsBang is defined as: -- HsBang describes what the *programmer* wrote -- This info is retained in the DataCon.dcStrictMarks field data HsBang = HsUserBang -- The user's source-code request (Maybe Bool) -- Just True {-# UNPACK #-} -- Just False {-# NOUNPACK #-} -- Nothing no pragma Bool -- True <=> '!' specified | HsNoBang -- Lazy field -- HsUserBang Nothing False means the same as HsNoBang | HsUnpack -- Definite commitment: this field is strict and unboxed (Maybe Coercion) -- co :: arg-ty ~ product-ty | HsStrict -- Definite commitment: this field is strict but not unboxed This data type is a bit unclear to me: * What are the reasons for the following constructor overlaps? * `HsNoBang` and `HsUserBang Nothing False` * `HsStrict` and `HsUserBang Nothing True` * `HsUnpack mb_co` and `HsUserBang (Just True) True` * Why is there a coercion in `HsUnpack` but not in `HsUserBang (Just True) True`? * Is there a difference in what the user wrote in the case of HsUserBang and HsNoBang/HsUnpack/HsStrict e.g are the latter three generated by the compiler as opposed to being written by the user (the function documentation notwithstanding)? A very related function is isBanged: isBanged :: HsBang -> Bool isBanged HsNoBang = False isBanged (HsUserBang Nothing bang) = bang isBanged _ = True What's the meaning of this function? Is it intended to communicate what the user wrote or whether result of what the user wrote results in a strict function? Context: I'm adding a new StrictData language pragma [1] that makes fields strict by default and a '~' annotation of fields to reverse the default behavior. My intention is to change HsBang like so: - Bool -- True <=> '!' 
specified + (Maybe Bool) -- True <=> '!' specified, False <=> '~' + -- specified, Nothing <=> unspecified 1. https://ghc.haskell.org/trac/ghc/wiki/StrictPragma -- Johan -------------- next part -------------- An HTML attachment was scrubbed... URL: From djsamperi at gmail.com Mon Jan 12 04:00:09 2015 From: djsamperi at gmail.com (Dominick Samperi) Date: Sun, 11 Jan 2015 23:00:09 -0500 Subject: Build failure under Fedora 21 In-Reply-To: <1420958479-sup-5449@sabre> References: <1420958479-sup-5449@sabre> Message-ID: Thank you Edward. I managed to build from HEAD under Fedora 21 (after 'cabal install containers'). The errors I reported may be related to the way I tried to clone a particular branch: git clone -b ghc-7.8 git://github.com/ghc/ghc.git It is not clear how to checkout/build/package a particular version of ghc for Fedora 21. Dominick On Sun, Jan 11, 2015 at 1:44 AM, Edward Z. Yang wrote: > What you cabal installed should be irrelevant, since containers is never > built by the boot compiler. In your full log, is 'deepseq' registered > before the build process attempts to register 'containers'? Is > there a deepseq file in inplace/lib/package.conf.d? > > Edward > > Excerpts from Dominick Samperi's message of 2015-01-10 22:07:18 -0800: >> Hello, >> >> I'm trying to build ghc-7.8 branch under Fedora 21 and I get the >> failure diagnostics appended below. The problem seems to be that the >> build process cannot satisfy deepseq >=1.2 && <1.4, but I explicitly >> installed deepseq-1.3.0.2, and this did not help (same error?). >> >> Under Fedora 21 Haskell Platform contains ghc-7.6.3, so this is >> the boot compiler. The ghc binary for centos65 installs provided I define >> /usr/lib64/libgmp.so.3 as a symbolic link to /usr/lib64/libgmp.so.10, >> a risky move. >> >> Any tips would be much appreciated. >> >> Thanks, >> Dominick >> >> >> ===--- building phase 0 >> make -r --no-print-directory -f ghc.mk phase=0 phase_0_builds >> make[1]: Nothing to be done for 'phase_0_builds'. >> ===--- building phase 1 >> make -r --no-print-directory -f ghc.mk phase=1 phase_1_builds >> "inplace/bin/ghc-cabal" check libraries/containers >> 'ghc-options: -O2' is rarely needed. Check that it is giving a real >> benefit and not just imposing longer compile times on your users. >> "inplace/bin/ghc-cabal" configure libraries/containers dist-install "" >> --with-ghc="/home/dsamperi/install/git/ghc/inplace/bin/ghc-stage1" >> --with-ghc-pkg="/home/dsamperi/install/git/ghc/inplace/bin/ghc-pkg" >> --disable-library-for-ghci --enable-library-vanilla >> --disable-library-profiling --enable-shared >> --configure-option=CFLAGS=" -fno-stack-protector " >> --configure-option=LDFLAGS=" " --configure-option=CPPFLAGS=" " >> --gcc-options=" -fno-stack-protector " --with-gcc="/usr/bin/gcc" >> --with-ld="/usr/bin/ld" --configure-option=--with-cc="/usr/bin/gcc" >> --with-ar="/usr/bin/ar" --with-ranlib="/usr/bin/ranlib" >> --with-alex="/home/dsamperi/.cabal/bin/alex" >> --with-happy="/home/dsamperi/.cabal/bin/happy" >> Configuring containers-0.5.5.1... 
>> ghc-cabal: At least the following dependencies are missing: >> deepseq >=1.2 && <1.4 >> libraries/containers/ghc.mk:4: recipe for target >> 'libraries/containers/dist-install/package-data.mk' failed >> make[1]: *** [libraries/containers/dist-install/package-data.mk] Error 1 >> Makefile:71: recipe for target 'all' failed >> make: *** [all] Error 2 From roma at ro-che.info Mon Jan 12 08:34:10 2015 From: roma at ro-che.info (Roman Cheplyaka) Date: Mon, 12 Jan 2015 10:34:10 +0200 Subject: Build failure under Fedora 21 In-Reply-To: References: <54B24AA9.3060503@ro-che.info> Message-ID: <54B38702.9030505@ro-che.info> On 12/01/15 04:00, Dominick Samperi wrote: > Hi Roman. > > As I said in my comments I tried the binary distribution provided for > CentOS65 Linux but ran into problems. I also tried the distribution provided > for Debian Linux; this installed, but there were problems. > > Fedora is the most "cutting edge" distro, and there is no ghc binary > provided for it. I was able to build from source under Fedora 20 about a year > ago. % lsb_release -d Description: Fedora release 21 (Twenty One) % ghc --version The Glorious Glasgow Haskell Compilation System, version 7.8.3 There's no reason the binary ghc release shouldn't work for you (specifically, the deb7 one). What problems did you run into? (There's also no reason you shouldn't be able to build from source; but if all you want is just a working ghc installation, installing the bindist is much easier and faster.) Roman From djsamperi at gmail.com Mon Jan 12 18:35:53 2015 From: djsamperi at gmail.com (Dominick Samperi) Date: Mon, 12 Jan 2015 13:35:53 -0500 Subject: Build failure under Fedora 21 In-Reply-To: <54B38702.9030505@ro-che.info> References: <54B24AA9.3060503@ro-che.info> <54B38702.9030505@ro-che.info> Message-ID: Hi Roman, Thank you for suggesting that I take another look at compiling from source under Fedora 21. This is indeed quite straightforward (configure --prefix=..., make, make install). The reason it failed earlier is that I had the latest build (from HEAD) in my path: ghc=7.11.20150111. Bootstrapping using this version of GHC is not supported. I should have paid closer attention to the diagnostics! After installing from the supplied binaries (for CentOS or Debian) I ran into problems installing pandoc (cabal install pandoc). There was an unresolved reference to libHSprimitive-0.5.4.0.so, I think. Cheers, Dominick On Mon, Jan 12, 2015 at 3:34 AM, Roman Cheplyaka wrote: > On 12/01/15 04:00, Dominick Samperi wrote: >> Hi Roman. >> >> As I said in my comments I tried the binary distribution provided for >> CentOS65 Linux but ran into problems. I also tried the distribution provided >> for Debian Linux; this installed, but there were problems. >> >> Fedora is the most "cutting edge" distro, and there is no ghc binary >> provided for it. I was able to build from source under Fedora 20 about a year >> ago. > > % lsb_release -d > Description: Fedora release 21 (Twenty One) > > % ghc --version > The Glorious Glasgow Haskell Compilation System, version 7.8.3 > > There's no reason the binary ghc release shouldn't work for you > (specifically, the deb7 one). What problems did you run into? > > (There's also no reason you shouldn't be able to build from source; but > if all you want is just a working ghc installation, installing the > bindist is much easier and faster.) 
> > Roman From juhpetersen at gmail.com Tue Jan 13 01:16:33 2015 From: juhpetersen at gmail.com (Jens Petersen) Date: Tue, 13 Jan 2015 10:16:33 +0900 Subject: ANNOUNCE: GHC version 7.8.4 In-Reply-To: References: Message-ID: On 23 December 2014 at 22:12, Austin Seipp wrote: > The GHC Team is pleased to announce a new patchlevel release of GHC, 7.8.4. > Thanks, I created a Fedora Copr repo with 7.8.4: http://copr.fedoraproject.org/coprs/petersen/ghc-7.8.4/ it has builds for current Fedora releases and EPEL 7. Cheers, Jens -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail at joachim-breitner.de Tue Jan 13 13:59:38 2015 From: mail at joachim-breitner.de (Joachim Breitner) Date: Tue, 13 Jan 2015 14:59:38 +0100 Subject: New performance dashboard front-end In-Reply-To: <1420586432.17696.17.camel@joachim-breitner.de> References: <1420586432.17696.17.camel@joachim-breitner.de> Message-ID: <1421157578.3150.9.camel@joachim-breitner.de> Hi, a small update on this: This will eventually be available at http://perf.ghc.haskell.org/ (or so it was promised to me :-). Until then, if you have your own projects where you want to use the tool (which is supposed to be somewhat generic), you can find instructions and code at: https://github.com/nomeata/gipeda I?ll add the relevant bit to our wiki once its live. Greetings, Joachim -- Joachim ?nomeata? Breitner mail at joachim-breitner.de ? http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From magesh85 at gmail.com Tue Jan 13 14:49:31 2015 From: magesh85 at gmail.com (magesh b) Date: Tue, 13 Jan 2015 20:19:31 +0530 Subject: Difference in Partial TypeFamily application between 7.8.3 & 7.8.4 Message-ID: Hi, {-# LANGUAGE TypeFamilies, ConstraintKinds #-} import GHC.Exts type family TyFun a data DictC (c :: * -> Constraint) data DictTF (tf :: * -> *) type Test1 = DictTF TyFun -- Fails here in 7.8.4 type Test2 = DictC Show When I compile the above code, I'm getting the following error in 7.8.4 and the same code works in 7.8.3. Test.hs:11:1: Type synonym ?TyFun? should have 1 argument, but has been given none In the type declaration for ?Test1? Is this a bug or a expected behavior? For reference, I could find two fixes to type family related bug in this release. https://ghc.haskell.org/trac/ghc/ticket/9433 https://ghc.haskell.org/trac/ghc/ticket/9316 Thanks & Regards, Magesh B -------------- next part -------------- An HTML attachment was scrubbed... URL: From eir at cis.upenn.edu Tue Jan 13 15:02:48 2015 From: eir at cis.upenn.edu (Richard Eisenberg) Date: Tue, 13 Jan 2015 10:02:48 -0500 Subject: Difference in Partial TypeFamily application between 7.8.3 & 7.8.4 In-Reply-To: References: Message-ID: Hello Magesh, The bug is in 7.8.3, which should never have allowed your `Test1`. Haskell type families may not be partially applied -- the type system and type inference just don't know how to handle such things. In 7.8.3, the check was accidentally turned off, as discussed in #9433, as you found. 7.8.3 allows you to do a few limited things with partially applied families, but you'll get very strange errors if you continue down that road, as GHC quickly gets horribly confused. I'm afraid you'll have to find a different way to express what you want. 
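One different way that often works is to defunctionalise: rather than storing the type family itself (which can never appear unapplied), store an ordinary, uninhabited label type and interpret labels with a single, always-saturated family. A rough, monomorphic sketch of the idea follows; the names FLabel and Apply, and the body given for F, are made up purely for illustration and are not anything GHC or your code provides:

{-# LANGUAGE TypeFamilies, EmptyDataDecls #-}
module Defun where

type family F a                  -- the family we would like to pass around
type instance F a = [a]          -- arbitrary definition, just for the sketch

data FLabel                      -- an ordinary, empty type standing for F

-- one saturated "interpreter" family, indexed by labels
type family Apply lbl a
type instance Apply FLabel a = F a

-- a label is a perfectly ordinary type, so it can be stored unapplied
data DictTF lbl
type Test1 = DictTF FLabel       -- accepted, where the bare family was not

Code that later wants to use the stored family then goes through Apply instead of applying it directly.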
Richard On Jan 13, 2015, at 9:49 AM, magesh b wrote: > Hi, > > {-# LANGUAGE TypeFamilies, ConstraintKinds #-} > > import GHC.Exts > > type family TyFun a > > data DictC (c :: * -> Constraint) > > data DictTF (tf :: * -> *) > > type Test1 = DictTF TyFun -- Fails here in 7.8.4 > > type Test2 = DictC Show > > When I compile the above code, I'm getting the following error in 7.8.4 and the same code works in 7.8.3. > > Test.hs:11:1: > Type synonym ?TyFun? should have 1 argument, but has been given none > In the type declaration for ?Test1? > > Is this a bug or a expected behavior? > For reference, I could find two fixes to type family related bug in this release. > https://ghc.haskell.org/trac/ghc/ticket/9433 > https://ghc.haskell.org/trac/ghc/ticket/9316 > > > Thanks & Regards, > Magesh B > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From magesh85 at gmail.com Tue Jan 13 16:24:13 2015 From: magesh85 at gmail.com (magesh b) Date: Tue, 13 Jan 2015 21:54:13 +0530 Subject: Difference in Partial TypeFamily application between 7.8.3 & 7.8.4 In-Reply-To: References: Message-ID: Thanks Richard. Basically I was storing the type family and its arguments in the data type, so that I can transform its arguments before it gets applied to stored typefamily (via. another closed type function). Is this even possible to do by any chance in 7.8.4 & later? On Tue, Jan 13, 2015 at 8:32 PM, Richard Eisenberg wrote: > Hello Magesh, > > The bug is in 7.8.3, which should never have allowed your `Test1`. Haskell > type families may not be partially applied -- the type system and type > inference just don't know how to handle such things. In 7.8.3, the check > was accidentally turned off, as discussed in #9433, as you found. 7.8.3 > allows you to do a few limited things with partially applied families, but > you'll get very strange errors if you continue down that road, as GHC > quickly gets horribly confused. > > I'm afraid you'll have to find a different way to express what you want. > > Richard > > On Jan 13, 2015, at 9:49 AM, magesh b wrote: > > Hi, > > {-# LANGUAGE TypeFamilies, ConstraintKinds #-} > > import GHC.Exts > > type family TyFun a > > data DictC (c :: * -> Constraint) > > data DictTF (tf :: * -> *) > > type Test1 = DictTF TyFun -- Fails here in 7.8.4 > > type Test2 = DictC Show > > When I compile the above code, I'm getting the following error in 7.8.4 > and the same code works in 7.8.3. > > Test.hs:11:1: > Type synonym ?TyFun? should have 1 argument, but has been given none > In the type declaration for ?Test1? > > Is this a bug or a expected behavior? > For reference, I could find two fixes to type family related bug in this > release. > https://ghc.haskell.org/trac/ghc/ticket/9433 > https://ghc.haskell.org/trac/ghc/ticket/9316 > > > Thanks & Regards, > Magesh B > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From eir at cis.upenn.edu Tue Jan 13 17:16:30 2015 From: eir at cis.upenn.edu (Richard Eisenberg) Date: Tue, 13 Jan 2015 12:16:30 -0500 Subject: Difference in Partial TypeFamily application between 7.8.3 & 7.8.4 In-Reply-To: References: Message-ID: <91E22C7F-E639-4958-8D6D-5607C50AE9F3@cis.upenn.edu> It sounds like you're doing something quite like the defunctionalization trick I've used for partially applied type families: https://typesandkinds.wordpress.com/2013/04/01/defunctionalization-for-the-win/ That blog post eventually became an implementation and paper (co-authored with Jan Stolarek). The implementation is in the `singletons` package; the paper is http://www.cis.upenn.edu/~eir/papers/2014/promotion/promotion.pdf I have some ideas (written toward the end of that paper) about how to get this into GHC proper, but it's years away, so don't hold your breath! Richard On Jan 13, 2015, at 11:24 AM, magesh b wrote: > Thanks Richard. Basically I was storing the type family and its arguments in the data type, so that I can transform its arguments before it gets applied to stored typefamily (via. another closed type function). Is this even possible to do by any chance in 7.8.4 & later? > > On Tue, Jan 13, 2015 at 8:32 PM, Richard Eisenberg wrote: > Hello Magesh, > > The bug is in 7.8.3, which should never have allowed your `Test1`. Haskell type families may not be partially applied -- the type system and type inference just don't know how to handle such things. In 7.8.3, the check was accidentally turned off, as discussed in #9433, as you found. 7.8.3 allows you to do a few limited things with partially applied families, but you'll get very strange errors if you continue down that road, as GHC quickly gets horribly confused. > > I'm afraid you'll have to find a different way to express what you want. > > Richard > > On Jan 13, 2015, at 9:49 AM, magesh b wrote: > >> Hi, >> >> {-# LANGUAGE TypeFamilies, ConstraintKinds #-} >> >> import GHC.Exts >> >> type family TyFun a >> >> data DictC (c :: * -> Constraint) >> >> data DictTF (tf :: * -> *) >> >> type Test1 = DictTF TyFun -- Fails here in 7.8.4 >> >> type Test2 = DictC Show >> >> When I compile the above code, I'm getting the following error in 7.8.4 and the same code works in 7.8.3. >> >> Test.hs:11:1: >> Type synonym ?TyFun? should have 1 argument, but has been given none >> In the type declaration for ?Test1? >> >> Is this a bug or a expected behavior? >> For reference, I could find two fixes to type family related bug in this release. >> https://ghc.haskell.org/trac/ghc/ticket/9433 >> https://ghc.haskell.org/trac/ghc/ticket/9316 >> >> >> Thanks & Regards, >> Magesh B >> >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alan.zimm at gmail.com Tue Jan 13 21:49:25 2015 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Tue, 13 Jan 2015 23:49:25 +0200 Subject: Ord instance for Data.Data.Constr Message-ID: Is there a reason there is no Ord instance for Data.Data.Constr? I want to use it as part of a key in a Map, and can't auto derive the instance as all the constructors are not in scope. Alan -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From alan.zimm at gmail.com Tue Jan 13 21:54:57 2015 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Tue, 13 Jan 2015 23:54:57 +0200 Subject: Ord instance for Data.Data.Constr In-Reply-To: References: Message-ID: Never mind, I can derive one for ConstrRep Alan On Tue, Jan 13, 2015 at 11:49 PM, Alan & Kim Zimmerman wrote: > Is there a reason there is no Ord instance for Data.Data.Constr? > > I want to use it as part of a key in a Map, and can't auto derive the > instance as all the constructors are not in scope. > > Alan > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eir at cis.upenn.edu Tue Jan 13 22:48:02 2015 From: eir at cis.upenn.edu (Richard Eisenberg) Date: Tue, 13 Jan 2015 17:48:02 -0500 Subject: vectorisation code? Message-ID: <0CC751A6-A9A9-47AA-961E-A997F39844A2@cis.upenn.edu> Hi devs, There's a sizable number of modules in the `vectorise` subdirectory of GHC. I'm sure these do all sorts of wonderful things. But what, exactly? And, does anyone make use of these wonderful things? A quick poking through the code shows a tiny link between the vectorise code and the rest of GHC -- the function `vectorise` exported from the module `Vectorise`, which is named in exactly one place from SimplCore. From what I can tell, the function will be called only when `-fvectorise` is specified, and then it seems to interact with a {-# VECTORISE #-} pragma. However, `{-# VECTORISE #-}` doesn't appear in the manual at all, and `-fvectorise` is given only a cursory explanation. It seems these work with DPH... which has been disabled, no? Searching online finds several hits, but nothing more recent than 2012. I hope this question doesn't offend -- it seems that vectorisation probably has amazing performance gains. Yet, the feature also seems unloved. In the meantime, compiling (and recompiling, and recompiling...) the modules takes time, as does going through them to propagate changes from elsewhere. If this feature is truly orphaned, unloved, and unused at the moment, is it reasonable to consider putting it on furlough? Thanks, Richard From jan.stolarek at p.lodz.pl Wed Jan 14 07:10:32 2015 From: jan.stolarek at p.lodz.pl (Jan Stolarek) Date: Wed, 14 Jan 2015 08:10:32 +0100 Subject: vectorisation code? In-Reply-To: <0CC751A6-A9A9-47AA-961E-A997F39844A2@cis.upenn.edu> References: <0CC751A6-A9A9-47AA-961E-A997F39844A2@cis.upenn.edu> Message-ID: <201501140810.32764.jan.stolarek@p.lodz.pl> I share Richard's opinion. Janek Dnia wtorek, 13 stycznia 2015, Richard Eisenberg napisa?: > Hi devs, > > There's a sizable number of modules in the `vectorise` subdirectory of GHC. > I'm sure these do all sorts of wonderful things. But what, exactly? And, > does anyone make use of these wonderful things? > > A quick poking through the code shows a tiny link between the vectorise > code and the rest of GHC -- the function `vectorise` exported from the > module `Vectorise`, which is named in exactly one place from SimplCore. > From what I can tell, the function will be called only when `-fvectorise` > is specified, and then it seems to interact with a {-# VECTORISE #-} > pragma. However, `{-# VECTORISE #-}` doesn't appear in the manual at all, > and `-fvectorise` is given only a cursory explanation. It seems these work > with DPH... which has been disabled, no? Searching online finds several > hits, but nothing more recent than 2012. > > I hope this question doesn't offend -- it seems that vectorisation probably > has amazing performance gains. 
Yet, the feature also seems unloved. In the > meantime, compiling (and recompiling, and recompiling...) the modules takes > time, as does going through them to propagate changes from elsewhere. If > this feature is truly orphaned, unloved, and unused at the moment, is it > reasonable to consider putting it on furlough? > > Thanks, > Richard > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs From simonpj at microsoft.com Wed Jan 14 17:40:43 2015 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 14 Jan 2015 17:40:43 +0000 Subject: [Diffusion] [Build Failed] rGHCe4cb8370eb91: Tiny refactoring (shorter, simpler code) In-Reply-To: <20150114173701.22421.82198@phabricator.haskell.org> References: <20150114173701.22421.82198@phabricator.haskell.org> Message-ID: <618BE556AADD624C9C918AA5D5911BEF562A57E7@DB3PRD3001MB020.064d.mgd.msft.net> This failure message leads me to https://phabricator.haskell.org/harbormaster/build/2964/ which says that ghcirun003 is failing. But how can I see the failure log???? (It doesn't fail for me.) Simon | -----Original Message----- | From: noreply at phabricator.haskell.org | [mailto:noreply at phabricator.haskell.org] | Sent: 14 January 2015 17:37 | To: Simon Peyton Jones | Subject: [Diffusion] [Build Failed] rGHCe4cb8370eb91: Tiny refactoring | (shorter, simpler code) | | Harbormaster failed to build B2940: rGHCe4cb8370eb91: Tiny refactoring | (shorter, simpler code)! | | USERS | simonpj (Author) | GHC - Type checker/inferencer (Auditor) | | COMMIT | https://phabricator.haskell.org/rGHCe4cb8370eb91 | | EMAIL PREFERENCES | https://phabricator.haskell.org/settings/panel/emailpreferences/ | | To: simonpj, GHC - Type checker/inferencer From alan.zimm at gmail.com Wed Jan 14 21:03:21 2015 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Wed, 14 Jan 2015 23:03:21 +0200 Subject: Generic instances for GHC AST Message-ID: At the moment every part of the GHC AST derives instances of Data and Typeable. There are no instances of Generic. If I try to standalone derive these, the derivation eventually fails for deriving instance Generic (Name) because the constructors are not all in scope. So, does it make sense in GHC to at least derive Generic for the items that are opaque, and at most to do so for the whole AST. I know there were some concerns earlier about too many instances being derived, and its impact on compilation time and memory, so the minimal version may be best. This will allow the new generation libraries built around Generics to perform on GHC data structures too. Alan -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Wed Jan 14 23:16:07 2015 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 14 Jan 2015 23:16:07 +0000 Subject: Request for assistance from Haskell-oriented startup: GHCi performance In-Reply-To: References: Message-ID: <618BE556AADD624C9C918AA5D5911BEF562A5D89@DB3PRD3001MB020.064d.mgd.msft.net> Konrad That does sound frustrating. I think your first port of call should be Manuel Chakravarty, the author of accelerate. The example you give in your stackoverflow post can only be some weird systems thing. After all, you are executing precisely the same code (namely compiled Accelerate code); it?s just that in one case it?s dynamically linked and excecuted from GHCi and in the other it?s linked and executed by the shell. I have no clue what could cause that. 
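One way to get a bit more data is to time the same computation at two problem sizes, once from GHCi and once from the compiled binary: a fixed start-up or linking cost stays roughly constant, while a genuine slowdown scales with the input. A rough harness along these lines (using the time package; 'work' is only a stand-in for the real Accelerate computation, not code from your project):

import Control.Exception (evaluate)
import Data.Time.Clock (diffUTCTime, getCurrentTime)

-- placeholder for the computation under test
work :: Int -> Int
work n = sum [1 .. n]

timeIt :: String -> Int -> IO ()
timeIt label n = do
  t0 <- getCurrentTime
  _  <- evaluate (work n)
  t1 <- getCurrentTime
  putStrLn (label ++ ": " ++ show (diffUTCTime t1 t0))

main :: IO ()
main = do
  timeIt "size n  " 10000000
  timeIt "size 10n" 100000000

If the absolute gap between GHCi and the binary stays the same as the size grows, it is a start-up cost; if the ratio stays the same, something is slower per element.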
I wonder if you are using a GPU and whether that might somehow behave differently. Could it be the difference between static linking and dynamic linking (which could plausibly account for some startup delay)? Is it a fixed overhead (eg takes 100ms extra) or does it run a factor of two slower (increase the size of your test case to see)? I?d be happy to have a Skype call with you, but I am rather unlikely to know anything helpful because it doesn?t sound like a core Haskell issue at all. You are executing the very same machine instructions! The overheads of the GHC API to compile and run the expression ?main? are pretty small. I?m copying ghc-devs in case anyone else has any ideas. Simon From: Konrad G?dek [mailto:kgadek at gmail.com] Sent: 14 January 2015 13:59 To: Simon Peyton Jones Cc: Piotr M?odawski; kgadek at flowbox.io Subject: Request for assistance from Haskell-oriented startup: GHCi performance Dear Mr Jones, My name is Konrad G?dek and I'm one of the programmers at Flowbox ( http://flowbox.io ), a startup that is to bring a fresh view on image composition in movie industry. We proudly use Haskell in nearly all of our development. I believe you may remember our CEO, Wojciech Dani?o, from discussions like in this thread: https://phabricator.haskell.org/D69 . What can be interesting for you is that to achieve our goals as a company, we started developing a new programming language - Luna. Long story short, we believe that Luna could be as beneficial for the Haskell community as Elixir is for Erlang. However, we found some major performance problems with the code that are as critical for us as they are cryptic. We have found difficulties in pinpointing the actual issue, not to mention solving it. We're getting a bit desperate about that, nobody so far has been able to help us, and so we would like to ask you for help. We would be really really grateful if you could take a look, maybe your fresh ideas could shed some light on the issue. Details are attached below. Is there any chance we could arrange eg. a Skype call so we could further discuss the matter? Thank you in advance! Background Currently Luna is trans-compiled to Haskell and then compiled to bytecode by GHC. Furthermore, we use ghci to evaluate expressions (the flow graph) interactively. We use accelerate library to perform high-performance computations with the help of graphic cards. The problem Executing some of the functions from libraries compiled with -O2 (especially from accelerate) is much slower than calling it from compiled executable (see http://stackoverflow.com/questions/27541609/difference-in-performance-of-compiled-accelerate-code-ran-from-ghci-and-shell and https://github.com/AccelerateHS/accelerate/issues/227). Maybe there is some other way to interactively evaluate Haskell code, which is more lightweight/more customizable ie. would not require all ghc-api features which are probably slowing down the whole process? Is it possible to just use ghc linker and make function calls simpler and more time efficient? Details We feed ghci with statements (using ghc-api) and declarations (using runStmt and runDecls). We can also change imports and language extensions before each call. The overall process is as follows: * on init: ? * set ghcpath to one with our custom installation of ghc with preinstalled graphic libraries * set imports to our libraries * enable/disable appropriate language extensions * for each run: ? 
* generate haskell code (including datatype declarations, using lenses and TemplateHaskell) and load it to ghci using runDecls * for each expression: o * run statements that use freshly generated code * bind (lazy) results to variables * evaluate values from bound variables, and get it from GhcMonad to runtime of our interpreter (see http://hackage.haskell.org/package/hint?0.4.2.1/docs/Language-Haskell-Interpreter.html#v:interpret) This behaviour was observed when using GHC 7.8.3 (with D69 patch) on Fedora 20 (x86-64), Intel(R) Core(TM) i5-4570 CPU @ 3.20GHz Tried so far 1. Specializing nearly everything in accelerate library, specializing calls to accelearate methods (no speedup). 2. Load precompiled, optimised code to ghci (no speedup). 3. Truth to be told, we have no idea what to try next. -- Konrad G?dek typechecker team-leader in Flowbox -------------- next part -------------- An HTML attachment was scrubbed... URL: From juhpetersen at gmail.com Thu Jan 15 09:55:36 2015 From: juhpetersen at gmail.com (Jens Petersen) Date: Thu, 15 Jan 2015 18:55:36 +0900 Subject: Build failure under Fedora 21 In-Reply-To: References: <54B24AA9.3060503@ro-che.info> <54B38702.9030505@ro-che.info> Message-ID: I also have ghc-7.8.4 builds available in https://copr.fedoraproject.org/coprs/petersen/ghc-7.8.4/ if that helps, though they replace the Fedora ghc packages. Jens -------------- next part -------------- An HTML attachment was scrubbed... URL: From jan.stolarek at p.lodz.pl Thu Jan 15 13:16:12 2015 From: jan.stolarek at p.lodz.pl (Jan Stolarek) Date: Thu, 15 Jan 2015 14:16:12 +0100 Subject: API Annotations question Message-ID: <201501151416.12754.jan.stolarek@p.lodz.pl> Alan, devs, since API annotations were added some data types in the source code have extra comments related to these annotations, eg. -- | A type or class declaration. data TyClDecl name = -- | @type/data family T :: *->*@ -- -- - 'ApiAnnotation.AnnKeywordId' : 'ApiAnnotation.AnnType', -- 'ApiAnnotation.AnnData', -- 'ApiAnnotation.AnnFamily','ApiAnnotation.AnnWhere', -- 'ApiAnnotation.AnnOpen','ApiAnnotation.AnnDcolon', -- 'ApiAnnotation.AnnClose' FamDecl { tcdFam :: FamilyDecl name } I'm totally clueless about what these comments mean. Where can I find an explanation? Couldn't find anything on the wiki. Janek From alan.zimm at gmail.com Thu Jan 15 13:33:48 2015 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Thu, 15 Jan 2015 15:33:48 +0200 Subject: API Annotations question In-Reply-To: <201501151416.12754.jan.stolarek@p.lodz.pl> References: <201501151416.12754.jan.stolarek@p.lodz.pl> Message-ID: This has come up in the review of D538 too. Once processed by haddock the online documentation links through to the hsSyn/ApiAnnotation module where more detail is given. I am not sure what the best way is to add an internal source code link to it too without disturbing the haddock. I am open for suggestions. Alan On Thu, Jan 15, 2015 at 3:16 PM, Jan Stolarek wrote: > Alan, devs, > > since API annotations were added some data types in the source code have > extra comments related to > these annotations, eg. > > -- | A type or class declaration. 
> data TyClDecl name > = -- | @type/data family T :: *->*@ > -- > -- - 'ApiAnnotation.AnnKeywordId' : 'ApiAnnotation.AnnType', > -- 'ApiAnnotation.AnnData', > -- 'ApiAnnotation.AnnFamily','ApiAnnotation.AnnWhere', > -- 'ApiAnnotation.AnnOpen','ApiAnnotation.AnnDcolon', > -- 'ApiAnnotation.AnnClose' > FamDecl { tcdFam :: FamilyDecl name } > > I'm totally clueless about what these comments mean. Where can I find an > explanation? Couldn't > find anything on the wiki. > > Janek > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jan.stolarek at p.lodz.pl Thu Jan 15 13:48:43 2015 From: jan.stolarek at p.lodz.pl (Jan Stolarek) Date: Thu, 15 Jan 2015 14:48:43 +0100 Subject: API Annotations question In-Reply-To: References: <201501151416.12754.jan.stolarek@p.lodz.pl> Message-ID: <201501151448.43960.jan.stolarek@p.lodz.pl> > I am not sure what the best way is to add an internal source code link to > it too without disturbing the haddock. I am open for suggestions. For me as a developer the most important thing is when and how do I need to add/modify these comments when working with the source code? First place where I started looking was the wiki but surprisingly I couldn't find a page about API annotations. Does such a page exist? The best thing IMO would be to add for each data type reference to a Note that explains how to work with these annotations. But I'm not sure if that's feasible. How many data types have these annotations? Janek From alan.zimm at gmail.com Thu Jan 15 13:53:37 2015 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Thu, 15 Jan 2015 15:53:37 +0200 Subject: API Annotations question In-Reply-To: <201501151448.43960.jan.stolarek@p.lodz.pl> References: <201501151416.12754.jan.stolarek@p.lodz.pl> <201501151448.43960.jan.stolarek@p.lodz.pl> Message-ID: All of the hsSyn ones. and the Wiki page is https://ghc.haskell.org/trac/ghc/wiki/GhcAstAnnotations They need to be updated whenever the Parser.y changes in a way that one of the keywords that is not otherwise captured in the AST is introduced, or moved. Alan On Thu, Jan 15, 2015 at 3:48 PM, Jan Stolarek wrote: > > I am not sure what the best way is to add an internal source code link to > > it too without disturbing the haddock. I am open for suggestions. > For me as a developer the most important thing is when and how do I need > to add/modify these > comments when working with the source code? First place where I started > looking was the wiki but > surprisingly I couldn't find a page about API annotations. Does such a > page exist? > The best thing IMO would be to add for each data type reference to a Note > that explains how to > work with these annotations. But I'm not sure if that's feasible. How many > data types have these > annotations? > > Janek > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jan.stolarek at p.lodz.pl Thu Jan 15 14:33:54 2015 From: jan.stolarek at p.lodz.pl (Jan Stolarek) Date: Thu, 15 Jan 2015 15:33:54 +0100 Subject: API Annotations question In-Reply-To: References: <201501151416.12754.jan.stolarek@p.lodz.pl> <201501151448.43960.jan.stolarek@p.lodz.pl> Message-ID: <201501151533.55002.jan.stolarek@p.lodz.pl> > and the Wiki page is https://ghc.haskell.org/trac/ghc/wiki/GhcAstAnnotations I read this page and I don't feel it answers my questions as it mostly seems to talk about the design. As a side note I don't understand the discussion of the design either. Eg. 
the wiki page lists four helper functions in the parser - gj, gl, aa, ams - but does not show how to use them and omits several other functions from the parser. > They need to be updated whenever the Parser.y changes in a way that one of > the keywords that is not otherwise captured in the AST is introduced, or > moved. 1. How should I update the comment? For example I see that the list of annotations in the comment is preceeded with a dash. Is that mandatory? Why some annotations are separated by a colon and some by a comma: -- - 'ApiAnnotation.AnnKeywordId' : 'ApiAnnotation.AnnType', -- 'ApiAnnotation.AnnData', 2. How can I verify that my comments is correct? How do errors manifest? Is there a source code note that answers these questions? Are there any examples showing what to do when? All in all I feel totally lost with annotations. When changing the parser I'm mostly doing trial and error because I still don't know how to use all annotation functions in the parser. I'm struggling to understand this but haddock comments in the parser don't suffice for me. Janek > > Alan > > On Thu, Jan 15, 2015 at 3:48 PM, Jan Stolarek > > wrote: > > > I am not sure what the best way is to add an internal source code link > > > to it too without disturbing the haddock. I am open for suggestions. > > > > For me as a developer the most important thing is when and how do I need > > to add/modify these > > comments when working with the source code? First place where I started > > looking was the wiki but > > surprisingly I couldn't find a page about API annotations. Does such a > > page exist? > > The best thing IMO would be to add for each data type reference to a Note > > that explains how to > > work with these annotations. But I'm not sure if that's feasible. How > > many data types have these > > annotations? > > > > Janek From ben at smart-cactus.org Thu Jan 15 23:31:53 2015 From: ben at smart-cactus.org (Ben Gamari) Date: Thu, 15 Jan 2015 18:31:53 -0500 Subject: Floating through small case analyses Message-ID: <87sifbvcwm.fsf@gmail.com> Hello Simon, Today I've been working on trying to whip bytestring's Builder into shape [1,2]. The last remaining issue is a performance problem that appears to be due to over-zealous floating of some small literals in every GHC version I've tested (7.6, 7.8, and 7.10). The test case can be found here [3] and is characterized by repeated monoidal appends (reproduced roughly here in half-Core/half-Haskell), loop s n | s `seq` n `seq` False = undefined loop _ 0 = mempty loop s n = singleton (s + (fromInteger @Word8 (__integer 0))) <> singleton (s + (fromInteger @Word8 (__integer 1))) <> singleton (s + (fromInteger @Word8 (__integer 2))) <> singleton (s + (fromInteger @Word8 (__integer 3))) <> ... singleton (s + (fromInteger @Word8 (__integer 15))) <> loop (s+16) (s-16) The initial float-out stage floats all of the (fromInteger ...) expressions out to the top level, resulting in definitions of the form, lvl_s1b3j :: Integer lvl_s1b3j = __integer 0 lvl_s1b3k :: Word8 lvl_s1b3k = $fBitsWord8_$cfromInteger lvl_s1b3j Simplifier phase 2 then does some inlining and in so doing introduces a boolean case analysis. Both alternatives of the case require the now-top-level literals. Unfortunately, the float-in pass is quite strict about when it will float expressions inside case analyses. Specifically, it requires the float to be both small [4] (which I believe these are) and used in more than one but not all alternatives [5]. 
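To make the shapes concrete, here is a schematic source-level picture of the choice the pass is making (an illustration only, not anything taken from the FloatIn code):

-- binding left outside the case (what we end up with here):
before :: Bool -> Int
before b =
  let x = small 42 in
  case b of
    False -> x + 1
    True  -> x * 2

-- binding duplicated into the alternatives (what we would like when
-- the right-hand side is cheap enough to duplicate):
after :: Bool -> Int
after b =
  case b of
    False -> let x = small 42 in x + 1
    True  -> let x = small 42 in x * 2

small :: Int -> Int
small = (+ 7)     -- stands for the small near-literal expressions above

Because x is mentioned in every alternative, the current test refuses to push the binding in at all.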
Given that there are only two alternatives here this seems a bit too restrictive. The end result is that we end up producing a bunch of bindings outside of a boolean case analysis, case narrow8Word# sc_s1c39 of a_s1c3g { __DEFAULT -> case plusWord# sc_s1c39 (__word 1) of sat_s1c3i { __DEFAULT -> case narrow8Word# sat_s1c3i of a1_s1c3h { __DEFAULT -> case plusWord# sc_s1c39 (__word 2) of sat_s1c3k { __DEFAULT -> ... case plusWord# sc_s1c39 (__word 16) of sat_s1c3M { __DEFAULT -> case narrow8Word# sat_s1c3M of a16_s1c3L { __DEFAULT -> case tagToEnum# @ Bool sat_s1c3P of _ { False -> {-# write each of the values above to memory #-} True -> {-# write each of the values above to memory #-} } This ends up producing extremely poor assembly, with the compiler first computing all 16 store addresses, placing them on the stack, and then performing all 16 stores. I'm a bit unsure of what to blame this on. Perhaps it would make sense to always float small dupable expressions in to sufficiently small case expressions (say, three or fewer alternatives)? Perhaps these near-literals should never have been floated out at all? Perhaps there another option I've not thought of? Thoughts? Cheers, - Ben [1]: https://github.com/kolmodin/binary/pull/65 [2]: https://github.com/haskell/bytestring/pull/40 [3]: https://gist.github.com/bgamari/22d127779a48113c1153#file-test-hs [4]: https://github.com/ghc/ghc/blob/master/compiler/coreSyn/CoreUtils.hs#L710 [5]: https://github.com/ghc/ghc/blob/master/compiler/simplCore/FloatIn.hs#L527 -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From chak at cse.unsw.edu.au Fri Jan 16 02:58:28 2015 From: chak at cse.unsw.edu.au (Manuel M T Chakravarty) Date: Fri, 16 Jan 2015 13:58:28 +1100 Subject: vectorisation code? In-Reply-To: <0CC751A6-A9A9-47AA-961E-A997F39844A2@cis.upenn.edu> References: <0CC751A6-A9A9-47AA-961E-A997F39844A2@cis.upenn.edu> Message-ID: [Sorry, sent from the wrong account at first.] We currently don?t have the resources to work on DPH. I would obviously prefer to leave the code in, in the hope that we will be able to return to it. Manuel > Richard Eisenberg : > > Hi devs, > > There's a sizable number of modules in the `vectorise` subdirectory of GHC. I'm sure these do all sorts of wonderful things. But what, exactly? And, does anyone make use of these wonderful things? > > A quick poking through the code shows a tiny link between the vectorise code and the rest of GHC -- the function `vectorise` exported from the module `Vectorise`, which is named in exactly one place from SimplCore. From what I can tell, the function will be called only when `-fvectorise` is specified, and then it seems to interact with a {-# VECTORISE #-} pragma. However, `{-# VECTORISE #-}` doesn't appear in the manual at all, and `-fvectorise` is given only a cursory explanation. It seems these work with DPH... which has been disabled, no? Searching online finds several hits, but nothing more recent than 2012. > > I hope this question doesn't offend -- it seems that vectorisation probably has amazing performance gains. Yet, the feature also seems unloved. In the meantime, compiling (and recompiling, and recompiling...) the modules takes time, as does going through them to propagate changes from elsewhere. If this feature is truly orphaned, unloved, and unused at the moment, is it reasonable to consider putting it on furlough? 
> > Thanks, > Richard > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs From jan.stolarek at p.lodz.pl Fri Jan 16 10:19:07 2015 From: jan.stolarek at p.lodz.pl (Jan Stolarek) Date: Fri, 16 Jan 2015 11:19:07 +0100 Subject: American vs. British English Message-ID: <201501161119.07912.jan.stolarek@p.lodz.pl> I just realized GHC has data types named FamFlavor and FamFlavour. That said, is there a policy that says which English should be used in the source code? Janek From simonpj at microsoft.com Fri Jan 16 10:26:16 2015 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 16 Jan 2015 10:26:16 +0000 Subject: American vs. British English In-Reply-To: <201501161119.07912.jan.stolarek@p.lodz.pl> References: <201501161119.07912.jan.stolarek@p.lodz.pl> Message-ID: <618BE556AADD624C9C918AA5D5911BEF562A78FF@DB3PRD3001MB020.064d.mgd.msft.net> We don't have a solid policy. Personally I prefer English, but then I would. Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Jan | Stolarek | Sent: 16 January 2015 10:19 | To: ghc-devs at haskell.org | Subject: American vs. British English | | I just realized GHC has data types named FamFlavor and FamFlavour. | That said, is there a policy that says which English should be used in | the source code? | | Janek | | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | http://www.haskell.org/mailman/listinfo/ghc-devs From sophie at traumapony.org Fri Jan 16 10:27:28 2015 From: sophie at traumapony.org (Sophie Taylor) Date: Fri, 16 Jan 2015 20:27:28 +1000 Subject: American vs. British English In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF562A78FF@DB3PRD3001MB020.064d.mgd.msft.net> References: <201501161119.07912.jan.stolarek@p.lodz.pl> <618BE556AADD624C9C918AA5D5911BEF562A78FF@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: If one has to choose an English, then it should be English English. On 16 January 2015 at 20:26, Simon Peyton Jones wrote: > We don't have a solid policy. Personally I prefer English, but then I > would. > > Simon > > | -----Original Message----- > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Jan > | Stolarek > | Sent: 16 January 2015 10:19 > | To: ghc-devs at haskell.org > | Subject: American vs. British English > | > | I just realized GHC has data types named FamFlavor and FamFlavour. > | That said, is there a policy that says which English should be used in > | the source code? > | > | Janek > | > | _______________________________________________ > | ghc-devs mailing list > | ghc-devs at haskell.org > | http://www.haskell.org/mailman/listinfo/ghc-devs > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tuncer.ayaz at gmail.com Fri Jan 16 11:03:09 2015 From: tuncer.ayaz at gmail.com (Tuncer Ayaz) Date: Fri, 16 Jan 2015 12:03:09 +0100 Subject: vectorisation code? In-Reply-To: References: <0CC751A6-A9A9-47AA-961E-A997F39844A2@cis.upenn.edu> Message-ID: On Fri, Jan 16, 2015 at 3:58 AM, Manuel M T Chakravarty wrote: > [Sorry, sent from the wrong account at first.] > > We currently don't have the resources to work on DPH. I would > obviously prefer to leave the code in, in the hope that we will be > able to return to it. 
What's the plan for DPH and 7.10? Is it bitrotting or abandoned, and does this mean there weren't enough users of it to notice and help maintain it? From simonpj at microsoft.com Fri Jan 16 11:12:26 2015 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 16 Jan 2015 11:12:26 +0000 Subject: vectorisation code? In-Reply-To: References: <0CC751A6-A9A9-47AA-961E-A997F39844A2@cis.upenn.edu> Message-ID: <618BE556AADD624C9C918AA5D5911BEF562A7B4E@DB3PRD3001MB020.064d.mgd.msft.net> | What's the plan for DPH and 7.10? Is it bitrotting or abandoned, and | does this mean there weren't enough users of it to notice and help | maintain it? For 7.10, DPH is definitely not supported, I'm afraid. For a longer term vision I defer to Manuel! Simon From juhpetersen at gmail.com Fri Jan 16 13:19:30 2015 From: juhpetersen at gmail.com (Jens Petersen) Date: Fri, 16 Jan 2015 22:19:30 +0900 Subject: ANNOUNCE: GHC 7.10.1 Release Candidate 1 In-Reply-To: References: Message-ID: On 23 December 2014 at 23:36, Austin Seipp wrote: > We are pleased to announce the first release candidate for GHC 7.10.1: > Thanks! Maybe this is already fixed in git, but it seems to me that RC1 is not able to build itself? ghc-cabal: '/usr/bin/ghc-pkg' exited withghc-cabal: '/usr/bin/ghc-pkg' exited with an error: ghc-pkg: ghc no longer supports single-file style package databases (/builddir/build/BUILD/ghc-7.10.0.20141222/libraries/bootstrapping.conf) use 'ghc-pkg init' to create the database with the correct format. an error: ghc-pkg: ghc no longer supportutils/hsc2hs/ghc.mk:15: recipe for target 'utils/hsc2hs/dist/package-data.mk' failed make[1]: *** [utils/hsc2hs/dist/package-data.mk] Error 1 make[1]: *** Waiting for unfinished jobs.... s single-file style package databases (/builddir/build/BUILD/ghc-7.10.0.20141222/libraries/bootstrapping.conf) use 'ghc-pkg init' to create the database with the correct format. libraries/binary/ghc.mk:3: recipe for target 'libraries/binary/dist-boot/package-data.mk' failed make[1]: *** [libraries/binary/dist-boot/package-data.mk] Error 1 Makefile:71: recipe for target 'all' failed make: *** [all] Error 2 (See current https://copr-be.cloud.fedoraproject.org/results/petersen/ghc-7.10/fedora-rawhide-x86_64/ghc-7.10.0.20141222-0.2.fc21/ for the full buildlog.) Is that a known issue? Jens -------------- next part -------------- An HTML attachment was scrubbed... URL: From jan.stolarek at p.lodz.pl Fri Jan 16 13:28:58 2015 From: jan.stolarek at p.lodz.pl (Jan Stolarek) Date: Fri, 16 Jan 2015 14:28:58 +0100 Subject: vectorisation code? In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF562A7B4E@DB3PRD3001MB020.064d.mgd.msft.net> References: <0CC751A6-A9A9-47AA-961E-A997F39844A2@cis.upenn.edu> <618BE556AADD624C9C918AA5D5911BEF562A7B4E@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <201501161428.58536.jan.stolarek@p.lodz.pl> Out of curiosity I removed vectorisation code and did a devel2 build. Build time on my laptop went down from 25 minutes to 24 minutes - a modest 4% improvement. Of course there is more to be gained by avoiding recompilations later during development. > I would obviously prefer to leave the code in, in the hope that we will be able to return to it. Is this just hope or are there any actual plans to attempt to return to DPH? (I assume this has to do with funding.) Janek Dnia pi?tek, 16 stycznia 2015, Simon Peyton Jones napisa?: > | What's the plan for DPH and 7.10? Is it bitrotting or abandoned, and > | does this mean there weren't enough users of it to notice and help > | maintain it? 
> > For 7.10, DPH is definitely not supported, I'm afraid. > > For a longer term vision I defer to Manuel! > > Simon > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs From austin at well-typed.com Fri Jan 16 15:46:44 2015 From: austin at well-typed.com (Austin Seipp) Date: Fri, 16 Jan 2015 09:46:44 -0600 Subject: ANNOUNCE: GHC 7.10.1 Release Candidate 1 In-Reply-To: References: Message-ID: Hi Jens, This was a result of https://ghc.haskell.org/trac/ghc/ticket/9652, which Edward fixed and I'll be merging into the 7.10 branch for RC2. On Fri, Jan 16, 2015 at 7:19 AM, Jens Petersen wrote: > On 23 December 2014 at 23:36, Austin Seipp wrote: >> >> We are pleased to announce the first release candidate for GHC 7.10.1: > > > Thanks! > > Maybe this is already fixed in git, but it seems to me that RC1 is not able > to build itself? > > ghc-cabal: '/usr/bin/ghc-pkg' exited withghc-cabal: '/usr/bin/ghc-pkg' > exited with an error: > ghc-pkg: ghc no longer supports single-file style package databases > (/builddir/build/BUILD/ghc-7.10.0.20141222/libraries/bootstrapping.conf) use > 'ghc-pkg init' to create the database with the correct format. > an error: > ghc-pkg: ghc no longer supportutils/hsc2hs/ghc.mk:15: recipe for target > 'utils/hsc2hs/dist/package-data.mk' failed > make[1]: *** [utils/hsc2hs/dist/package-data.mk] Error 1 > make[1]: *** Waiting for unfinished jobs.... > s single-file style package databases > (/builddir/build/BUILD/ghc-7.10.0.20141222/libraries/bootstrapping.conf) use > 'ghc-pkg init' to create the database with the correct format. > libraries/binary/ghc.mk:3: recipe for target > 'libraries/binary/dist-boot/package-data.mk' failed > make[1]: *** [libraries/binary/dist-boot/package-data.mk] Error 1 > Makefile:71: recipe for target 'all' failed > make: *** [all] Error 2 > > (See current > https://copr-be.cloud.fedoraproject.org/results/petersen/ghc-7.10/fedora-rawhide-x86_64/ghc-7.10.0.20141222-0.2.fc21/ > for the full buildlog.) > > Is that a known issue? > > Jens -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From eir at cis.upenn.edu Sat Jan 17 04:23:40 2015 From: eir at cis.upenn.edu (Richard Eisenberg) Date: Fri, 16 Jan 2015 21:23:40 -0700 Subject: vectorisation code? In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF562A7B4E@DB3PRD3001MB020.064d.mgd.msft.net> References: <0CC751A6-A9A9-47AA-961E-A997F39844A2@cis.upenn.edu> <618BE556AADD624C9C918AA5D5911BEF562A7B4E@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: On Jan 16, 2015, at 4:12 AM, Simon Peyton Jones wrote: > For 7.10, DPH is definitely not supported, I'm afraid. Does this mean that the vectorisation code is also defunct? As in, is there a way to usefully access the feature without DPH? Richard From djsamperi at gmail.com Sat Jan 17 08:22:05 2015 From: djsamperi at gmail.com (Dominick Samperi) Date: Sat, 17 Jan 2015 03:22:05 -0500 Subject: Build failure under Fedora 21 In-Reply-To: References: <54B24AA9.3060503@ro-che.info> <54B38702.9030505@ro-che.info> Message-ID: It turns out that the undefined reference to libHSprimitive-0.5.4.0.so when installing pandoc is not related to the use of CentOS or Debian binaries. I get the same undefined reference when I try to use ghc-7.8.4 compiled from source under Fedora 21. 
Here is the output of 'locate libHSprimitive': /home/dsamperi/.cabal/lib/primitive-0.5.4.0/ghc-7.8.4/libHSprimitive-0.5.4.0.a /usr/lib64/ghc-7.6.3/primitive-0.5.0.1/libHSprimitive-0.5.0.1-ghc7.6.3.so /usr/lib64/ghc-7.6.3/primitive-0.5.0.1/libHSprimitive-0.5.0.1.a /usr/lib64/ghc-7.6.3/primitive-0.5.0.1/libHSprimitive-0.5.0.1_p.a So there is a .a library, but no .so (shared) lib in the build from source. Can someone explain how to get the build process to create all necessary shared libs? Thanks, Dominick On Thu, Jan 15, 2015 at 4:55 AM, Jens Petersen wrote: > I also have ghc-7.8.4 builds available in > > https://copr.fedoraproject.org/coprs/petersen/ghc-7.8.4/ > > if that helps, though they replace the Fedora ghc packages. > > Jens From hvriedel at gmail.com Sat Jan 17 08:36:38 2015 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Sat, 17 Jan 2015 09:36:38 +0100 Subject: Build failure under Fedora 21 In-Reply-To: (Dominick Samperi's message of "Sat, 17 Jan 2015 03:22:05 -0500") References: <54B24AA9.3060503@ro-che.info> <54B38702.9030505@ro-che.info> Message-ID: <87a91hajmx.fsf@gmail.com> On 2015-01-17 at 09:22:05 +0100, Dominick Samperi wrote: > It turns out that the undefined reference to libHSprimitive-0.5.4.0.so > when installing pandoc is not related to the use of CentOS or Debian > binaries. I get the same undefined reference when I try to use > ghc-7.8.4 compiled from source under Fedora 21. Here is the output of > 'locate libHSprimitive': > > /home/dsamperi/.cabal/lib/primitive-0.5.4.0/ghc-7.8.4/libHSprimitive-0.5.4.0.a > /usr/lib64/ghc-7.6.3/primitive-0.5.0.1/libHSprimitive-0.5.0.1-ghc7.6.3.so > /usr/lib64/ghc-7.6.3/primitive-0.5.0.1/libHSprimitive-0.5.0.1.a > /usr/lib64/ghc-7.6.3/primitive-0.5.0.1/libHSprimitive-0.5.0.1_p.a > > > So there is a .a library, but no .so (shared) lib in the build from > source. Can someone explain how to get the build process to create all > necessary shared libs? How did you compile GHC? Iirc `primitive` isn't supposed to be built/installed/used unless you enable DPH (otherwise, it would lead to a similiar issue like https://ghc.haskell.org/trac/ghc/ticket/8919) Cheers, hvr From alan.zimm at gmail.com Sat Jan 17 09:49:01 2015 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Sat, 17 Jan 2015 11:49:01 +0200 Subject: Generic instances for GHC AST In-Reply-To: References: Message-ID: FYI I have created ghc-generic-instances on hackage as a cache of the derived instances. Name cannot have a Generic instance due to the unboxed type(s) in it. Alan On Wed, Jan 14, 2015 at 11:03 PM, Alan & Kim Zimmerman wrote: > At the moment every part of the GHC AST derives instances of Data and > Typeable. > > There are no instances of Generic. > > If I try to standalone derive these, the derivation eventually fails for > > deriving instance Generic (Name) > > because the constructors are not all in scope. > > So, does it make sense in GHC to at least derive Generic for the items > that are opaque, and at most to do so for the whole AST. > > I know there were some concerns earlier about too many instances being > derived, and its impact on compilation time and memory, so the minimal > version may be best. > > This will allow the new generation libraries built around Generics to > perform on GHC data structures too. > > Alan > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From peter.trommler at ohm-hochschule.de Sat Jan 17 12:15:33 2015 From: peter.trommler at ohm-hochschule.de (Peter Trommler) Date: Sat, 17 Jan 2015 13:15:33 +0100 Subject: Build failure under Fedora 21 References: <54B24AA9.3060503@ro-che.info> <54B38702.9030505@ro-che.info> Message-ID: Dominick Samperi wrote: > It turns out that the undefined reference to libHSprimitive-0.5.4.0.so > when installing pandoc is not related to the use of CentOS or Debian > binaries. I get the same undefined reference when I try to use > ghc-7.8.4 compiled from source under Fedora 21. Here is the output of > 'locate libHSprimitive': > > /home/dsamperi/.cabal/lib/primitive-0.5.4.0/ghc-7.8.4/libHSprimitive-0.5.4.0.a > /usr/lib64/ghc-7.6.3/primitive-0.5.0.1/libHSprimitive-0.5.0.1-ghc7.6.3.so > /usr/lib64/ghc-7.6.3/primitive-0.5.0.1/libHSprimitive-0.5.0.1.a > /usr/lib64/ghc-7.6.3/primitive-0.5.0.1/libHSprimitive-0.5.0.1_p.a > > > So there is a .a library, but no .so (shared) lib in the build from > source. Can someone explain how to get the build process to create all > necessary shared libs? Try adding `--enable-shared` to your cabal command to build the shared object libraries. You might have to tell cabal to reinstall your existing primitive-0.5.4.0 or blow away your ~/.cabal and ~/.ghc directories. HTH Peter From george.colpitts at gmail.com Sat Jan 17 12:36:00 2015 From: george.colpitts at gmail.com (George Colpitts) Date: Sat, 17 Jan 2015 08:36:00 -0400 Subject: ANNOUNCE: GHC 7.10.1 Release Candidate 1 - questions on Mac OS platform Message-ID: - Has anybody successfully used llvm on the Mac with 7.10.1 RC1? My problem is described below. - Which is the recommended gcc to use when building source? - GNU gcc 4.9.2 - Apple LLVM version 6.0 (clang-600.0.56) (based on LLVM 3.5svn) - When using ghci with 7.10.1 RC1 I get the following errors intermittently. Is anybody else seeing these? - Too late for parseStaticFlags: call it before runGhc or runGhcT *** Exception: ExitFailure 1 - ld: library not found for -l:ghc31505_10.dylib collect2: error: ld returned 1 exit status phase `Linker' failed (exitcode = 1) ?Thanks? On Fri, Jan 2, 2015 at 9:12 AM, George Colpitts wrote: > Only problem remaining is compiling with -fllvm and running resulting > executable > > ?. > ?..? > > > > - llvm , compiling with llvm (3.4.2) gives the following warnings: > - $ ghc -fllvm cubeFast.hs > [1 of 1] Compiling Main ( cubeFast.hs, cubeFast.o ) > clang: warning: argument unused during compilation: > '-fno-stack-protector' > clang: warning: argument unused during compilation: '-D > TABLES_NEXT_TO_CODE' > clang: warning: argument unused during compilation: '-I .' > clang: warning: argument unused during compilation: '-fno-common' > clang: warning: argument unused during compilation: '-U __PIC__' > clang: warning: argument unused during compilation: '-D __PIC__' > Linking cubeFast ... > - running the resulting executable crashes (compiling without > -fllvm gives no warnings and executable works properly) > - cat bigCube.txt | ./cubeFast > /dev/null > Segmentation fault: 11 > - Exception Type: EXC_BAD_ACCESS (SIGSEGV) > Exception Codes: KERN_INVALID_ADDRESS at 0xfffffffd5bfd8460 > > >> - ?... >> >> ?Configuration details: >> >> >> - Mac OS 10.10.1 (Yosemite) >> - uname -a >> Darwin iMac27-5.local 14.0.0 Darwin Kernel Version 14.0.0: Fri Sep 19 >> 00:26:44 PDT 2014; root:xnu-2782.1.97~2/RELEASE_X86_64 x86_64 >> - llvm info: >> - opt --version >> LLVM (http://llvm.org/): >> LLVM version 3.4.2 >> Optimized build with assertions. 
>> Built Oct 31 2014 (23:14:30). >> Default target: x86_64-apple-darwin14.0.0 >> Host CPU: corei7 >> - gcc --version >> gcc (Homebrew gcc 4.9.1) 4.9.1 >> Copyright (C) 2014 Free Software Foundation, Inc. >> This is free software; see the source for copying conditions. There >> is NO >> warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR >> PURPOSE. >> - ? /usr/bin/ghc --info >> [("Project name","The Glorious Glasgow Haskell Compilation System") >> ,("GCC extra via C opts"," -fwrapv") >> ,("C compiler command","/usr/bin/gcc") >> ,("C compiler flags"," -m64 -fno-stack-protector") >> ,("C compiler link flags"," -m64") >> ,("Haskell CPP command","/usr/bin/gcc") >> ,("Haskell CPP flags","-E -undef -traditional -Wno-invalid-pp-token >> -Wno-unicode -Wno-trigraphs") >> ,("ld command","/usr/bin/ld") >> ,("ld flags"," -arch x86_64") >> ,("ld supports compact unwind","YES") >> ,("ld supports build-id","NO") >> ,("ld supports filelist","YES") >> ,("ld is GNU ld","NO") >> ,("ar command","/usr/bin/ar") >> ,("ar flags","clqs") >> ,("ar supports at file","NO") >> ,("touch command","touch") >> ,("dllwrap command","/bin/false") >> ,("windres command","/bin/false") >> ,("libtool command","libtool") >> ,("perl command","/usr/bin/perl") >> ,("target os","OSDarwin") >> ,("target arch","ArchX86_64") >> ,("target word size","8") >> ,("target has GNU nonexec stack","False") >> ,("target has .ident directive","True") >> ,("target has subsections via symbols","True") >> ,("Unregisterised","NO") >> ,("LLVM llc command","llc") >> ,("LLVM opt command","opt") >> ,("Project version","7.8.3") >> ,("Booter version","7.6.3") >> ,("Stage","2") >> ,("Build platform","x86_64-apple-darwin") >> ,("Host platform","x86_64-apple-darwin") >> ,("Target platform","x86_64-apple-darwin") >> ,("Have interpreter","YES") >> ,("Object splitting supported","YES") >> ,("Have native code generator","YES") >> ,("Support SMP","YES") >> ,("Tables next to code","YES") >> ,("RTS ways","l debug thr thr_debug thr_l thr_p dyn debug_dyn >> thr_dyn thr_debug_dyn l_dyn thr_l_dyn") >> ,("Support dynamic-too","YES") >> ,("Support parallel --make","YES") >> ,("Dynamic by default","NO") >> ,("GHC Dynamic","YES") >> ,("Leading underscore","YES") >> ,("Debug on","False") >> >> ,("LibDir","/Library/Frameworks/GHC.framework/Versions/7.8.3-x86_64/usr/lib/ghc-7.8.3") >> ,("Global Package >> DB","/Library/Frameworks/GHC.framework/Versions/7.8.3-x86_64/usr/lib/ghc-7.8.3/package.conf.d") >> ] >> - Not sure I found the correct instructions for building from >> source, I used the following: >> - >> >> $ autoreconf >> $ ./configure >> $ make >> $ make install >> >> >> >> >> On Tue, Dec 23, 2014 at 10:36 AM, Austin Seipp >> wrote: >> >>> We are pleased to announce the first release candidate for GHC 7.10.1: >>> >>> https://downloads.haskell.org/~ghc/7.10.1-rc1/ >>> >>> This includes the source tarball and bindists for 64bit/32bit Linux >>> and Windows. Binary builds for other platforms will be available >>> shortly. (CentOS 6.5 binaries are not available at this time like they >>> were for 7.8.x). These binaries and tarballs have an accompanying >>> SHA256SUMS file signed by my GPG key id (0x3B58D86F). >>> >>> We plan to make the 7.10.1 release sometime in February of 2015. We >>> expect another RC to occur during January of 2015. >>> >>> Please test as much as possible; bugs are much cheaper if we find them >>> before the release! 
>>> >>> -- >>> Regards, >>> >>> Austin Seipp, Haskell Consultant >>> Well-Typed LLP, http://www.well-typed.com/ >>> _______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://www.haskell.org/mailman/listinfo/ghc-devs >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From juhpetersen at gmail.com Sat Jan 17 13:00:50 2015 From: juhpetersen at gmail.com (Jens Petersen) Date: Sat, 17 Jan 2015 22:00:50 +0900 Subject: ANNOUNCE: GHC 7.10.1 Release Candidate 1 In-Reply-To: References: Message-ID: On 17 January 2015 at 00:46, Austin Seipp wrote: > This was a result of https://ghc.haskell.org/trac/ghc/ticket/9652, > which Edward fixed and I'll be merging into the 7.10 branch for RC2. Thanks Austin, looks like it is already merged in current ghc-7.10 branch: I managed to get 7.10.0.20150116 to build itself. :-) https://copr.fedoraproject.org/coprs/petersen/ghc-7.10/build/67977/ > > it seems to me that RC1 is not able to build itself? > > https://copr-be.cloud.fedoraproject.org/results/petersen/ghc-7.10/fedora-rawhide-x86_64/ghc-7.10.0.20141222-0.2.fc21/ From slyich at gmail.com Sat Jan 17 18:42:27 2015 From: slyich at gmail.com (Sergei Trofimovich) Date: Sat, 17 Jan 2015 18:42:27 +0000 Subject: cminusminus.org does not have a link to the spec In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF2221CE98@DB3PRD3001MB020.064d.mgd.msft.net> References: <20140914191604.199a3f50@sf> <618BE556AADD624C9C918AA5D5911BEF222159D1@DB3PRD3001MB020.064d.mgd.msft.net> <20140916210318.5d7b5fff@sf> <618BE556AADD624C9C918AA5D5911BEF2221CE98@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <20150117184227.35a6c9a9@sf> On Tue, 16 Sep 2014 20:23:10 +0000 Simon Peyton Jones wrote: > Thanks. This is beyond my competence, and I'm totally submerged anyway. I suggest you make a Trac ticket about it anyway. Simon Marlow will probably have an opinion. Today I've found an excuse to actually implement it :) https://phabricator.haskell.org/D622 Reused 'CLOSURE' token and added import CLOSURE id; to existing import id; > | -----Original Message----- > | From: Sergei Trofimovich [mailto:slyich at gmail.com] > | Sent: 16 September 2014 19:03 > | To: Simon Peyton Jones > | Cc: Norman Ramsey; ghc-devs; Simon Marlow > | Subject: Re: cminusminus.org does not have a link to the spec > | > | On Mon, 15 Sep 2014 12:05:27 +0000 > | Simon Peyton Jones wrote: > | > | My planned change is for GHC's .cmm files syntax/codegen. > | The idea came out after having stumbled upon a rare ia64 > | bug in GHC's C codegen: > | > | > | http://git.haskell.org/ghc.git/commitdiff/e18525fae273f4c1ad8d6cbe1dea4fc > | 074cac721 > | > | The fundamental bug here is the following: > | Suppose we have two bits of rts: one .c file and one .cmm file > | > | // rts.c defines and exports a function and a variable > | void some_rts_fun (void); > | int some_rts_var; > | > | // rts.cmm uses rts.c's function and variable > | import some_rts_fun; /* this forces C codegen to emit function-like > | 'StgFunPtr some_rts_fun ();' > | prototype, it's fine */ > | > | import some_rts_var; /* also forces C codegen to emit function-like > | 'StgFunPtr some_rts_var ();' > | prototype, it's broken */ > | // ... 
> | W whatever = &some_rts_var; /* will pick address not to a real > | variable, but to a > | so called > | function stub, a separate structure > | pointing to real > | 'some_rts_var' */ > | > | I plan to tweak syntax to teach Cmm to distinct between > | imported C global variables/constants, imported Cmm info > | tables(closures), maybe other cases. > | > | I thought of adding haskell-like syntax for imports: > | foreign ccall import some_rts_fun; > | foreign cdata import some_rts_var; > | > | or maybe > | import some_rts_fun; > | import "&some_rts_fun" as some_rts_fun; > | > | This sort of bugs can be easily spotted by whole-program C compiler. > | gcc can do it with -flto option. I basically added to the mk/build.mk: > | SRC_CC_OPTS += -flto > | SRC_LD_OPTS += -flto -fuse-linker-plugin > | SRC_HC_OPTS += -optc-flto > | SRC_HC_OPTS += -optl-flto -optl-fuse-linker-plugin > | and started with './configure --enable-unregisterised' > | > | It immediately shown some of current offenders: > | error: variable 'ghczmprim_GHCziTypes_False_closure' redeclared as > | function > | error: variable 'ghczmprim_GHCziTypes_True_closure' redeclared as > | function > | > | I hope this fuzzy explanation makes some sense. > | > | Thanks! > | > | > Sergei > | > > | > C-- was originally envisaged as a target language for a variety of > | compilers. But in fact LLVM, which was developed at a similar time, > | "won" that race and has built a far larger ecosystem. That's fine with > | us -- it's great how successful LLVM has been -- but it means that C-- is > | now used essentially only in GHC. > | > > | > I'm not sure where the original C-- documents now are; Norman can you > | say? (I do know that the cminusminus.org has lapsed.) > | > > | > The GHC variant of C-- is defined mainly by the Cmm data type in GHC's > | source code. It does have a concrete syntax, because some bits of GHC's > | runtime system are written in Cmm. But I fear that this concrete language > | is not well documented. (Simon Marlow may know more here.) > | > > | > Because GHC's Cmm is part of GHC, we are free to change it. Would you > | like to say more about the change you want to make, and why you want to > | make it? Is this relating directly to GHC or to some other project? > | > > | > Simon > | > > | > > | > | -----Original Message----- > | > | From: Sergei Trofimovich [mailto:slyich at gmail.com] > | > | Sent: 14 September 2014 17:16 > | > | To: Simon Peyton Jones > | > | Subject: cminusminus.org does not have a link to the spec > | > | > | > | Hello Simon! > | > | > | > | I had a plan to tweak a bit "import" statement > | > | syntax of Cmm in GHC. > | > | > | > | Namely, to distinct between > | > | import some_c_function; > | > | import some_c_global_variable; > | > | > | > | To try it I first attempted to find latest c-- spec > | > | (to find some design sketches if available) at > | > | > | > | http://www.cminusminus.org/c-downloads/ > | > | > | > | But seems the links (and images?) have gone away > | > | as well as rsync server described at: > | > | > | > | http://www.cminusminus.org/the-c-rsync-server/ > | > | > | > | Maybe you could forward it to site admins so they would > | > | tweak links or point me to working copy. > | > | > | > | Apologies for bothering you on such minor > | > | > | > | Thank you! -- Sergei -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 181 bytes Desc: not available URL: From lampih at gmail.com Sat Jan 17 23:20:30 2015 From: lampih at gmail.com (=?UTF-8?Q?Lu=C3=ADs_Gabriel?=) Date: Sat, 17 Jan 2015 23:20:30 +0000 Subject: Playing with the profiler Message-ID: Hi there, I'm doing some experiments with the GHC time profiler and I need to add a new field to the Cost Centre structures. I managed to add the field in the *CCS.h* header as well as in *codeGen/StgCmmProf.hs* but for some reason the program is crashing during garbage collection. As I have no experience with the GHC internals, I'm having trouble to find the problem. It would be very nice if someone could give me some clue to find this bug. The patch on GHC as well as the test sample and stack traces can be found here: https://gist.github.com/luisgabriel/39d51cf4d661c7e62e22 Thanks, Lu?s Gabriel -------------- next part -------------- An HTML attachment was scrubbed... URL: From djsamperi at gmail.com Sun Jan 18 05:35:20 2015 From: djsamperi at gmail.com (Dominick Samperi) Date: Sun, 18 Jan 2015 00:35:20 -0500 Subject: Build failure under Fedora 21 In-Reply-To: <87a91hajmx.fsf@gmail.com> References: <54B24AA9.3060503@ro-che.info> <54B38702.9030505@ro-che.info> <87a91hajmx.fsf@gmail.com> Message-ID: Hello hvr, I compiled from the source for ghc-7.8.4 by first creating mk/build.mk from the supplied template build.mk.sample with BuildFlavour = quick, then the usual: ./configure --prefix=/bin/ghc-7.8.4; make install... Perhaps there is a different build configuration where all necessary shared libs are created, as in the build for ghc-7.6.3? On Sat, Jan 17, 2015 at 3:36 AM, Herbert Valerio Riedel wrote: > On 2015-01-17 at 09:22:05 +0100, Dominick Samperi wrote: >> It turns out that the undefined reference to libHSprimitive-0.5.4.0.so >> when installing pandoc is not related to the use of CentOS or Debian >> binaries. I get the same undefined reference when I try to use >> ghc-7.8.4 compiled from source under Fedora 21. Here is the output of >> 'locate libHSprimitive': >> >> /home/dsamperi/.cabal/lib/primitive-0.5.4.0/ghc-7.8.4/libHSprimitive-0.5.4.0.a >> /usr/lib64/ghc-7.6.3/primitive-0.5.0.1/libHSprimitive-0.5.0.1-ghc7.6.3.so >> /usr/lib64/ghc-7.6.3/primitive-0.5.0.1/libHSprimitive-0.5.0.1.a >> /usr/lib64/ghc-7.6.3/primitive-0.5.0.1/libHSprimitive-0.5.0.1_p.a >> >> >> So there is a .a library, but no .so (shared) lib in the build from >> source. Can someone explain how to get the build process to create all >> necessary shared libs? > > How did you compile GHC? Iirc `primitive` isn't supposed to be > built/installed/used unless you enable DPH > > (otherwise, it would lead to a similiar issue like > https://ghc.haskell.org/trac/ghc/ticket/8919) > > Cheers, > hvr From slyich at gmail.com Sun Jan 18 12:35:31 2015 From: slyich at gmail.com (Sergei Trofimovich) Date: Sun, 18 Jan 2015 12:35:31 +0000 Subject: Playing with the profiler In-Reply-To: References: Message-ID: <20150118123531.64bf52bc@sf> On Sat, 17 Jan 2015 23:20:30 +0000 Lu?s Gabriel wrote: > Hi there, > > I'm doing some experiments with the GHC time profiler and I need to add a > new field to the Cost Centre structures. I managed to add the field in the > *CCS.h* header as well as in *codeGen/StgCmmProf.hs* but for some reason > the program is crashing during garbage collection. > > As I have no experience with the GHC internals, I'm having trouble to find > the problem. It would be very nice if someone could give me some clue to > find this bug. 
> > The patch on GHC as well as the test sample and stack traces can be found > here: https://gist.github.com/luisgabriel/39d51cf4d661c7e62e22 I tried your patch as-is on current ghc-HEAD/amd64 and it works fine. (might easily be another problem) What I am suspicious about is you are using '-prof -debug' and plain 'ghc'. Could it be that you didn't add GhcRTSWays += debug_p in your build.mk after a patch was tweaked last time and some old runtime against new ghc was used? I usually use inplace/bin/ghc-stage2 right after compilation without installation. -- Sergei -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 181 bytes Desc: not available URL: From karel.gardas at centrum.cz Sun Jan 18 14:42:05 2015 From: karel.gardas at centrum.cz (Karel Gardas) Date: Sun, 18 Jan 2015 15:42:05 +0100 Subject: integer-gmp2 issues on Solaris/SPARC In-Reply-To: <87iognrm4b.fsf@gmail.com> References: <20141022105441.GA14512@machine> <87a94n808i.fsf@gmail.com> <1414533995-sup-9843@sabre> <1420357849-sup-8899@sabre> <87iognrm4b.fsf@gmail.com> Message-ID: <54BBC63D.4060607@centrum.cz> Hello Herbert, I'm sorry to bother you, but recent GHC HEAD does have issue on Solaris/SPARC platform which shows as undefined symbols during the linkage of stage2 binaries. For example ghc-stage2 link step fails with: Undefined first referenced symbol in file __gmpn_andn_n /home/karel/src/ghc-sparc-reg_ncg-head-2015-01-17/libraries/integer-gmp2/dist-install/build/libHSinteg_21cuTlnn00eFNd4GMrxOMi.a(Type.o) __gmpn_and_n /home/karel/src/ghc-sparc-reg_ncg-head-2015-01-17/libraries/integer-gmp2/dist-install/build/libHSinteg_21cuTlnn00eFNd4GMrxOMi.a(Type.o) __gmpn_ior_n /home/karel/src/ghc-sparc-reg_ncg-head-2015-01-17/libraries/integer-gmp2/dist-install/build/libHSinteg_21cuTlnn00eFNd4GMrxOMi.a(Type.o) __gmpn_xor_n /home/karel/src/ghc-sparc-reg_ncg-head-2015-01-17/libraries/integer-gmp2/dist-install/build/libHSinteg_21cuTlnn00eFNd4GMrxOMi.a(Type.o) ld: fatal: symbol referencing errors. No output written to ghc/stage2/build/tmp/ghc-stage2 All binaries fail with the same set of unresolved symbols. I can tell you that I don't see this issue on Solaris/i386 nor on Solaris/amd64 builds as you can verify here: http://haskell.inf.elte.hu/builders/ I'm talking here about exact Solaris 11.1 on SPARC and Solaris 11.1 on AMD64 box. Both Solarises provide the same version of libgmp: $ uname -p sparc $ ls -la /usr/lib/libgmp.so* lrwxrwxrwx 1 root root 15 Feb 20 1999 /usr/lib/libgmp.so -> libgmp.so.3.5.2 lrwxrwxrwx 1 root root 15 Feb 20 1999 /usr/lib/libgmp.so.3 -> libgmp.so.3.5.2 -r-xr-xr-x 1 root bin 1093328 Sep 19 2012 /usr/lib/libgmp.so.3.5.2 $ $ uname -p i386 $ ls -la /usr/lib/libgmp.so* lrwxrwxrwx 1 root root 15 Oct 18 2012 /usr/lib/libgmp.so -> libgmp.so.3.5.2 lrwxrwxrwx 1 root root 15 Oct 18 2012 /usr/lib/libgmp.so.3 -> libgmp.so.3.5.2 -r-xr-xr-x 1 root bin 878276 Feb 5 2014 /usr/lib/libgmp.so.3.5.2 $ And yet on i386/amd64 the symbol (one from the failing set as an example) __gmpn_andn_n is defined: $ nm /usr/lib/libgmp.so|grep __gmpn_andn_n [86] | 375728| 101|FUNC |GLOB |0 |14 |__gmpn_andn_n but on SPARC it's not: $ nm /usr/lib/libgmp.so|grep __gmpn_andn_n $ Do you have any magical knob which I can switch on to work around this issue by not needing those four symbols above? Thanks! 
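For readers without the GMP manual at hand: the four missing entry points (__gmpn_and_n, __gmpn_andn_n, __gmpn_ior_n, __gmpn_xor_n) are plain limb-wise bit operations; mpn_andn_n, for instance, computes s1 AND (NOT s2) over n limbs. Purely as an illustration of what the primitive does, and with no connection to the real integer-gmp2 binding or to any fallback implementation, a stand-alone Haskell model is:

    import Data.Bits (complement, (.&.))
    import Data.Word (Word64)

    -- Limb-wise model of GMP's mpn_andn_n:
    --   result !! i == (s1 !! i) .&. complement (s2 !! i)
    -- A limb is modelled here as a Word64; real GMP limbs are typically the native word.
    andnN :: [Word64] -> [Word64] -> [Word64]
    andnN = zipWith (\a b -> a .&. complement b)

    main :: IO ()
    main = print (andnN [0xFF, 0x0F] [0x0F, 0x03])   -- prints [240,12]

So the question in this thread is only where an optimised, linkable implementation of these operations comes from, not what they compute.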
Karel From hvriedel at gmail.com Sun Jan 18 15:05:43 2015 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Sun, 18 Jan 2015 16:05:43 +0100 Subject: integer-gmp2 issues on Solaris/SPARC In-Reply-To: <54BBC63D.4060607@centrum.cz> (Karel Gardas's message of "Sun, 18 Jan 2015 15:42:05 +0100") References: <20141022105441.GA14512@machine> <87a94n808i.fsf@gmail.com> <1414533995-sup-9843@sabre> <1420357849-sup-8899@sabre> <87iognrm4b.fsf@gmail.com> <54BBC63D.4060607@centrum.cz> Message-ID: <87y4p06se0.fsf@gmail.com> On 2015-01-18 at 15:42:05 +0100, Karel Gardas wrote: > Hello Herbert, > > I'm sorry to bother you, but recent GHC HEAD does have issue on > Solaris/SPARC platform which shows as undefined symbols during the > linkage of stage2 binaries. For example ghc-stage2 link step fails > with: Btw, what GMP version is that exactly? "GMP 3.5.2" doesn't seem to be an official GMP release? [...] > All binaries fail with the same set of unresolved symbols. I can tell > you that I don't see this issue on Solaris/i386 nor on Solaris/amd64 > builds as you can verify here: http://haskell.inf.elte.hu/builders/ > > I'm talking here about exact Solaris 11.1 on SPARC and Solaris 11.1 on > AMD64 box. Both Solarises provide the same version of libgmp: [...] > And yet on i386/amd64 the symbol (one from the failing set as an > example) __gmpn_andn_n is defined: > > $ nm /usr/lib/libgmp.so|grep __gmpn_andn_n > [86] | 375728| 101|FUNC |GLOB |0 |14 |__gmpn_andn_n > > but on SPARC it's not: > > $ nm /usr/lib/libgmp.so|grep __gmpn_andn_n > $ > > > Do you have any magical knob which I can switch on to work around this > issue by not needing those four symbols above? ...does the header differ? can you create a simple C program that calls the mpn_andn operation and compare how linkage differs? Cheers, hvr From george.colpitts at gmail.com Sun Jan 18 20:04:06 2015 From: george.colpitts at gmail.com (George Colpitts) Date: Sun, 18 Jan 2015 16:04:06 -0400 Subject: ANNOUNCE: GHC 7.10.1 Release Candidate 1 - questions on Mac OS platform In-Reply-To: References: Message-ID: Eric, thanks for the quick response, that is good news that you are not seeing the problems I see. However after I rebuilt with Apple gcc and still see the errors when calling main from ghci. One is an open ticket, https://ghc.haskell.org/trac/ghc/ticket/9277# which was reported in versions before 7.10.1 so I updated it with the details of my experience. I am running Yosemite 10.10.1. What version are you running? I did notice some difference between our two systems. I have /usr/bin/gcc --version Configured with: *--prefix=/Applications/Xcode.app/Contents/Developer/usr* --with-gxx-include-dir=/usr/include/c++/4.2.1 Apple LLVM version 6.0 (clang-600.0.56) (based on LLVM 3.5svn) Target: x86_64-apple-darwin14.0.0 Thread model: posix while you have *--prefix=/Library/Developer/CommandLineTools/usr* When I try bash-3.2$ xcode-select --install xcode-select: error: command line tools are already installed, use "Software Update" to install updates it doesn't change anything, I still have the above prefix. Also, you are right, my last email had info for the wrong ghc, when I look at the right one and yours I see the following difference. 
You have ,("LibDir","/Library/Frameworks/GHC.framework/Versions/7.10.0-rc1-x86_64/usr/lib/ghc-7.10.0.20141222") ,("Global Package DB","/Library/Frameworks/GHC.framework/Versions/7.10.0-rc1-x86_64/usr/lib/ghc-7.10.0.20141222/package.conf.d") I have ,("LibDir","/usr/local/lib/ghc-7.10.0.20141222") ,("Global Package DB","/usr/local/lib/ghc-7.10.0.20141222/package.conf.d") I am doing a build by doing the following: $ autoreconf $ ./configure $ make $ make install ?The correct output from ghc --info for me is: ghc --info [("Project name","The Glorious Glasgow Haskell Compilation System") ,("GCC extra via C opts"," -fwrapv") ,("C compiler command","/usr/bin/gcc") ,("C compiler flags"," -m64 -fno-stack-protector") ,("C compiler link flags"," -m64") ,("Haskell CPP command","/usr/bin/gcc") ,("Haskell CPP flags","-E -undef -traditional -Wno-invalid-pp-token -Wno-unicode -Wno-trigraphs ") ,("ld command","/usr/bin/ld") ,("ld flags"," -arch x86_64") ,("ld supports compact unwind","YES") ,("ld supports build-id","NO") ,("ld supports filelist","YES") ,("ld is GNU ld","NO") ,("ar command","/usr/bin/ar") ,("ar flags","clqs") ,("ar supports at file","NO") ,("touch command","touch") ,("dllwrap command","/bin/false") ,("windres command","/bin/false") ,("libtool command","libtool") ,("perl command","/usr/bin/perl") ,("target os","OSDarwin") ,("target arch","ArchX86_64") ,("target word size","8") ,("target has GNU nonexec stack","False") ,("target has .ident directive","True") ,("target has subsections via symbols","True") ,("Unregisterised","NO") ,("LLVM llc command","/usr/local/bin/llc") ,("LLVM opt command","/usr/local/bin/opt") ,("Project version","7.10.0.20141222") ,("Project Git commit id","a8c556dfca3eca5277615cc2bf9d6c8f1f143c9a") ,("Booter version","7.8.3") ,("Stage","2") ,("Build platform","x86_64-apple-darwin") ,("Host platform","x86_64-apple-darwin") ,("Target platform","x86_64-apple-darwin") ,("Have interpreter","YES") ,("Object splitting supported","YES") ,("Have native code generator","YES") ,("Support SMP","YES") ,("Tables next to code","YES") ,("RTS ways","l debug thr thr_debug thr_l thr_p dyn debug_dyn thr_dyn thr_debug_dyn l_dyn thr_l_dyn") ,("Support dynamic-too","YES") ,("Support parallel --make","YES") ,("Support reexported-modules","YES") ,("Support thinning and renaming package flags","YES") ,("Uses package keys","YES") ,("Dynamic by default","NO") ,("GHC Dynamic","YES") ,("Leading underscore","YES") ,("Debug on","False") ,("LibDir","/usr/local/lib/ghc-7.10.0.20141222") ,("Global Package DB","/usr/local/lib/ghc-7.10.0.20141222/package.conf.d") ]? On Sat, Jan 17, 2015 at 2:38 PM, Erik Hesselink wrote: > Hi George, > > I've not tried compiling via llvm, but I have a working Mac GHC 7.10 > build that I've been using, and haven't seen any of the other issues > you mentioned. I'm not sure what's recommended but I believe I'm using > clang (see output below). If you want to try my build, you can > download it at [1]. BTW, the info you posted seems to be from 7.8, not > 7.10. 
> > Regards, > > Erik > > [1] > https://docs.google.com/a/silk.co/uc?id=0B5E6EvOcuE0nNFR4WUVNZzRtbGs&export=download > > $ gcc --version > Configured with: --prefix=/Library/Developer/CommandLineTools/usr > --with-gxx-include-dir=/usr/include/c++/4.2.1 > Apple LLVM version 6.0 (clang-600.0.56) (based on LLVM 3.5svn) > Target: x86_64-apple-darwin13.4.0 > Thread model: posix > > $ ghc --info > [("Project name","The Glorious Glasgow Haskell Compilation System") > ,("GCC extra via C opts"," -fwrapv") > ,("C compiler command","/usr/bin/gcc") > ,("C compiler flags"," -m64 -fno-stack-protector") > ,("C compiler link flags"," -m64") > ,("Haskell CPP command","/usr/bin/gcc") > ,("Haskell CPP flags","-E -undef -traditional -Wno-invalid-pp-token > -Wno-unicode -Wno-trigraphs ") > ,("ld command","/usr/bin/ld") > ,("ld flags"," -arch x86_64") > ,("ld supports compact unwind","YES") > ,("ld supports build-id","NO") > ,("ld supports filelist","YES") > ,("ld is GNU ld","NO") > ,("ar command","/usr/bin/ar") > ,("ar flags","clqs") > ,("ar supports at file","NO") > ,("touch command","touch") > ,("dllwrap command","/bin/false") > ,("windres command","/bin/false") > ,("libtool command","libtool") > ,("perl command","/usr/bin/perl") > ,("target os","OSDarwin") > ,("target arch","ArchX86_64") > ,("target word size","8") > ,("target has GNU nonexec stack","False") > ,("target has .ident directive","True") > ,("target has subsections via symbols","True") > ,("Unregisterised","NO") > ,("LLVM llc command","llc") > ,("LLVM opt command","opt") > ,("Project version","7.10.0.20141222") > ,("Project Git commit id","a8c556dfca3eca5277615cc2bf9d6c8f1f143c9a") > ,("Booter version","7.8.3") > ,("Stage","2") > ,("Build platform","x86_64-apple-darwin") > ,("Host platform","x86_64-apple-darwin") > ,("Target platform","x86_64-apple-darwin") > ,("Have interpreter","YES") > ,("Object splitting supported","NO") > ,("Have native code generator","YES") > ,("Support SMP","YES") > ,("Tables next to code","YES") > ,("RTS ways","l debug thr thr_debug thr_l thr_p dyn debug_dyn thr_dyn > thr_debug_dyn l_dyn thr_l_dyn") > ,("Support dynamic-too","YES") > ,("Support parallel --make","YES") > ,("Support reexported-modules","YES") > ,("Support thinning and renaming package flags","YES") > ,("Uses package keys","YES") > ,("Dynamic by default","NO") > ,("GHC Dynamic","YES") > ,("Leading underscore","YES") > ,("Debug on","False") > > ,("LibDir","/Library/Frameworks/GHC.framework/Versions/7.10.0-rc1-x86_64/usr/lib/ghc-7.10.0.20141222") > ,("Global Package > > DB","/Library/Frameworks/GHC.framework/Versions/7.10.0-rc1-x86_64/usr/lib/ghc-7.10.0.20141222/package.conf.d") > ] > > On Sat, Jan 17, 2015 at 1:36 PM, George Colpitts > wrote: > > Has anybody successfully used llvm on the Mac with 7.10.1 RC1? My > problem is > > described below. > > Which is the recommended gcc to use when building source? > > > > GNU gcc 4.9.2 > > Apple LLVM version 6.0 (clang-600.0.56) (based on LLVM 3.5svn) > > > > When using ghci with 7.10.1 RC1 I get the following errors > intermittently. > > Is anybody else seeing these? 
> > > > Too late for parseStaticFlags: call it before runGhc or runGhcT > > *** Exception: ExitFailure 1 > > ld: library not found for -l:ghc31505_10.dylib > > collect2: error: ld returned 1 exit status > > phase `Linker' failed (exitcode = 1) > > > > Thanks > > > > On Fri, Jan 2, 2015 at 9:12 AM, George Colpitts < > george.colpitts at gmail.com> > > wrote: > >> > >> Only problem remaining is compiling with -fllvm and running resulting > >> executable > >> > >> . > >> .. > >> > >> > >> llvm , compiling with llvm (3.4.2) gives the following warnings: > >> > >> $ ghc -fllvm cubeFast.hs > >> [1 of 1] Compiling Main ( cubeFast.hs, cubeFast.o ) > >> clang: warning: argument unused during compilation: > '-fno-stack-protector' > >> clang: warning: argument unused during compilation: '-D > >> TABLES_NEXT_TO_CODE' > >> clang: warning: argument unused during compilation: '-I .' > >> clang: warning: argument unused during compilation: '-fno-common' > >> clang: warning: argument unused during compilation: '-U __PIC__' > >> clang: warning: argument unused during compilation: '-D __PIC__' > >> Linking cubeFast ... > >> running the resulting executable crashes (compiling without -fllvm gives > >> no warnings and executable works properly) > >> cat bigCube.txt | ./cubeFast > /dev/null > >> Segmentation fault: 11 > >> Exception Type: EXC_BAD_ACCESS (SIGSEGV) > >> Exception Codes: KERN_INVALID_ADDRESS at 0xfffffffd5bfd8460 > >>> > >>> ... > >>> > >>> Configuration details: > >>> > >>> Mac OS 10.10.1 (Yosemite) > >>> uname -a > >>> Darwin iMac27-5.local 14.0.0 Darwin Kernel Version 14.0.0: Fri Sep 19 > >>> 00:26:44 PDT 2014; root:xnu-2782.1.97~2/RELEASE_X86_64 x86_64 > >>> llvm info: > >>> opt --version > >>> LLVM (http://llvm.org/): > >>> LLVM version 3.4.2 > >>> Optimized build with assertions. > >>> Built Oct 31 2014 (23:14:30). > >>> Default target: x86_64-apple-darwin14.0.0 > >>> Host CPU: corei7 > >>> gcc --version > >>> gcc (Homebrew gcc 4.9.1) 4.9.1 > >>> Copyright (C) 2014 Free Software Foundation, Inc. > >>> This is free software; see the source for copying conditions. There is > >>> NO > >>> warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR > >>> PURPOSE. 
> >>> /usr/bin/ghc --info > >>> [("Project name","The Glorious Glasgow Haskell Compilation System") > >>> ,("GCC extra via C opts"," -fwrapv") > >>> ,("C compiler command","/usr/bin/gcc") > >>> ,("C compiler flags"," -m64 -fno-stack-protector") > >>> ,("C compiler link flags"," -m64") > >>> ,("Haskell CPP command","/usr/bin/gcc") > >>> ,("Haskell CPP flags","-E -undef -traditional -Wno-invalid-pp-token > >>> -Wno-unicode -Wno-trigraphs") > >>> ,("ld command","/usr/bin/ld") > >>> ,("ld flags"," -arch x86_64") > >>> ,("ld supports compact unwind","YES") > >>> ,("ld supports build-id","NO") > >>> ,("ld supports filelist","YES") > >>> ,("ld is GNU ld","NO") > >>> ,("ar command","/usr/bin/ar") > >>> ,("ar flags","clqs") > >>> ,("ar supports at file","NO") > >>> ,("touch command","touch") > >>> ,("dllwrap command","/bin/false") > >>> ,("windres command","/bin/false") > >>> ,("libtool command","libtool") > >>> ,("perl command","/usr/bin/perl") > >>> ,("target os","OSDarwin") > >>> ,("target arch","ArchX86_64") > >>> ,("target word size","8") > >>> ,("target has GNU nonexec stack","False") > >>> ,("target has .ident directive","True") > >>> ,("target has subsections via symbols","True") > >>> ,("Unregisterised","NO") > >>> ,("LLVM llc command","llc") > >>> ,("LLVM opt command","opt") > >>> ,("Project version","7.8.3") > >>> ,("Booter version","7.6.3") > >>> ,("Stage","2") > >>> ,("Build platform","x86_64-apple-darwin") > >>> ,("Host platform","x86_64-apple-darwin") > >>> ,("Target platform","x86_64-apple-darwin") > >>> ,("Have interpreter","YES") > >>> ,("Object splitting supported","YES") > >>> ,("Have native code generator","YES") > >>> ,("Support SMP","YES") > >>> ,("Tables next to code","YES") > >>> ,("RTS ways","l debug thr thr_debug thr_l thr_p dyn debug_dyn thr_dyn > >>> thr_debug_dyn l_dyn thr_l_dyn") > >>> ,("Support dynamic-too","YES") > >>> ,("Support parallel --make","YES") > >>> ,("Dynamic by default","NO") > >>> ,("GHC Dynamic","YES") > >>> ,("Leading underscore","YES") > >>> ,("Debug on","False") > >>> > >>> > ,("LibDir","/Library/Frameworks/GHC.framework/Versions/7.8.3-x86_64/usr/lib/ghc-7.8.3") > >>> ,("Global Package > >>> > DB","/Library/Frameworks/GHC.framework/Versions/7.8.3-x86_64/usr/lib/ghc-7.8.3/package.conf.d") > >>> ] > >>> Not sure I found the correct instructions for building from source, I > >>> used the following: > >>> > >>> $ autoreconf > >>> $ ./configure > >>> $ make > >>> $ make install > >>> > >>> > >>> > >>> On Tue, Dec 23, 2014 at 10:36 AM, Austin Seipp > >>> wrote: > >>>> > >>>> We are pleased to announce the first release candidate for GHC 7.10.1: > >>>> > >>>> https://downloads.haskell.org/~ghc/7.10.1-rc1/ > >>>> > >>>> This includes the source tarball and bindists for 64bit/32bit Linux > >>>> and Windows. Binary builds for other platforms will be available > >>>> shortly. (CentOS 6.5 binaries are not available at this time like they > >>>> were for 7.8.x). These binaries and tarballs have an accompanying > >>>> SHA256SUMS file signed by my GPG key id (0x3B58D86F). > >>>> > >>>> We plan to make the 7.10.1 release sometime in February of 2015. We > >>>> expect another RC to occur during January of 2015. > >>>> > >>>> Please test as much as possible; bugs are much cheaper if we find them > >>>> before the release! 
> >>>> > >>>> -- > >>>> Regards, > >>>> > >>>> Austin Seipp, Haskell Consultant > >>>> Well-Typed LLP, http://www.well-typed.com/ > >>>> _______________________________________________ > >>>> ghc-devs mailing list > >>>> ghc-devs at haskell.org > >>>> http://www.haskell.org/mailman/listinfo/ghc-devs > >>> > >>> > >> > > > > > > _______________________________________________ > > Glasgow-haskell-users mailing list > > Glasgow-haskell-users at haskell.org > > http://www.haskell.org/mailman/listinfo/glasgow-haskell-users > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From karel.gardas at centrum.cz Mon Jan 19 09:25:59 2015 From: karel.gardas at centrum.cz (Karel Gardas) Date: Mon, 19 Jan 2015 10:25:59 +0100 Subject: integer-gmp2 issues on Solaris/SPARC In-Reply-To: <87y4p06se0.fsf@gmail.com> References: <20141022105441.GA14512@machine> <87a94n808i.fsf@gmail.com> <1414533995-sup-9843@sabre> <1420357849-sup-8899@sabre> <87iognrm4b.fsf@gmail.com> <54BBC63D.4060607@centrum.cz> <87y4p06se0.fsf@gmail.com> Message-ID: <54BCCDA7.3040009@centrum.cz> On 01/18/15 04:05 PM, Herbert Valerio Riedel wrote: > On 2015-01-18 at 15:42:05 +0100, Karel Gardas wrote: >> Hello Herbert, >> >> I'm sorry to bother you, but recent GHC HEAD does have issue on >> Solaris/SPARC platform which shows as undefined symbols during the >> linkage of stage2 binaries. For example ghc-stage2 link step fails >> with: > > Btw, what GMP version is that exactly? "GMP 3.5.2" doesn't seem to be an > official GMP release? This is version 4.3.2 in both cases. > ...does the header differ? Unfortunately not. The only difference is in CFLAGS: $ gdiff -u /usr/include/gmp/gmp.h /tmp/gmp.h --- /usr/include/gmp/gmp.h 2014-02-05 14:40:13.405522327 +0100 +++ /tmp/gmp.h 2015-01-19 08:35:38.146637514 +0100 @@ -2231,7 +2231,7 @@ /* Define CC and CFLAGS which were used to build this version of GMP */ #define __GMP_CC "/ws/on11update-tools/SUNWspro/sunstudio12.1/bin/cc" -#define __GMP_CFLAGS "-m64 -xO4 -xchip=generic -Ui386 -U__i386 -xregs=no%frameptr -mt -features=extinl,extensions -xustr=ascii_utf16_ushort -xcsi -xthreadvar=%all -D_STDC_99 -xc99=all -D_XOPEN_SOURCE=600 -D__EXTENSIONS__=1 -D_XPG6 -KPIC -DPIC" +#define __GMP_CFLAGS "-m64 -xO4 -xtarget=ultra2 -xarch=sparcvis -xchip=ultra2 -Qoption cg -xregs=no%appl -W2,-xwrap_int -xmemalign=16s -mt -features=extinl,extensions -xustr=ascii_utf16_ushort -xcsi -xthreadvar=%all -D_STDC_99 -xc99=all -D_XOPEN_SOURCE=600 -D__EXTENSIONS__=1 -D_XPG6 -KPIC -DPIC" /* Major version number is the value of __GNU_MP__ too, above and in mp.h. */ #define __GNU_MP_VERSION 4 Let me also add that the gmp.h file does not define mpn_andn_n symbol at all neither it declare __gmpn_andn_n function! Since both i386 and sparc gmp.h are the same this applies to both. > can you create a simple C program > that calls the mpn_andn operation and compare how linkage differs? No, I'm not able to use "mpn_andn" nor "mpn_andn_n". What I'm able to use is "__gmpn_andn_n". On i386 this pass with implicitly declared symbol warning: gmp_test.c: In function ?main?: gmp_test.c:10:5: warning: implicit declaration of function ?__gmpn_andn_n? on sparc fails on linkage: $ gcc -Wall gmp_test.c -lgmp gmp_test.c: In function ?main?: gmp_test.c:10:5: warning: implicit declaration of function ?__gmpn_andn_n? Undefined first referenced symbol in file __gmpn_andn_n /var/tmp//ccSHaGtf.o ld: fatal: symbol referencing errors. 
No output written to a.out collect2: ld returned 1 exit status My testing program is: $ cat gmp_test.c #include #include int main() { __gmpn_andn_n((mp_limb_t*)NULL, (const mp_limb_t*)NULL, (const mp_limb_t*)NULL, (mp_size_t)1); return 0; } The big issue here is that i386/solaris gmp library so file provides this __gmpn_andn_n symbol but have not declared it in gmp.h at all in a form of mpn_andn_n define. So basically your: -- void mpn_andn_n (mp_limb_t *rp, const mp_limb_t *s1p, const mp_limb_t *s2p, -- mp_size_t n) foreign import ccall unsafe "gmp.h __gmpn_andn_n" c_mpn_andn_n :: MutableByteArray# s -> ByteArray# -> ByteArray# -> GmpSize# -> IO () works on i386, but not on sparc. Is it possible for you to test for those mpn_ symbols in integrer-gmp2 configure and if they are presented then you can use your __gmpn_andn_n foreigner call? Thanks! Karel From hvriedel at gmail.com Mon Jan 19 09:54:14 2015 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Mon, 19 Jan 2015 10:54:14 +0100 Subject: integer-gmp2 issues on Solaris/SPARC In-Reply-To: <54BCCDA7.3040009@centrum.cz> (Karel Gardas's message of "Mon, 19 Jan 2015 10:25:59 +0100") References: <20141022105441.GA14512@machine> <87a94n808i.fsf@gmail.com> <1414533995-sup-9843@sabre> <1420357849-sup-8899@sabre> <87iognrm4b.fsf@gmail.com> <54BBC63D.4060607@centrum.cz> <87y4p06se0.fsf@gmail.com> <54BCCDA7.3040009@centrum.cz> Message-ID: <87lhkzgkop.fsf@gmail.com> On 2015-01-19 at 10:25:59 +0100, Karel Gardas wrote: [...] > /* Major version number is the value of __GNU_MP__ too, above and in > mp.h. */ > #define __GNU_MP_VERSION 4 > > > Let me also add that the gmp.h file does not define mpn_andn_n symbol > at all neither it declare __gmpn_andn_n function! Since both i386 and > sparc gmp.h are the same this applies to both. Oh, I just checked the documentation, and `mpn_andn_n()` is only mentioned in the GMP 5.0.x docs https://gmplib.org/manual-5.0.4/Low_002dlevel-Functions.html#Low_002dlevel-Functions but not in https://gmplib.org/manual-4.3.2/Low_002dlevel-Functions.html#Low_002dlevel-Functions [...] > The big issue here is that i386/solaris gmp library so file provides > this __gmpn_andn_n symbol but have not declared it in gmp.h at all in > a form of mpn_andn_n define. So basically your: > -- void mpn_andn_n (mp_limb_t *rp, const mp_limb_t *s1p, const > mp_limb_t *s2p, > -- mp_size_t n) > foreign import ccall unsafe "gmp.h __gmpn_andn_n" > c_mpn_andn_n :: MutableByteArray# s -> ByteArray# -> ByteArray# -> > GmpSize# > -> IO () > > > works on i386, but not on sparc. > > Is it possible for you to test for those mpn_ symbols in integrer-gmp2 > configure and if they are presented then you can use your > __gmpn_andn_n foreigner call? I'm actually rather considering not using those at all when GMP version is 4.* as they're not part of the official API of GMP 4.x Btw, how long do we need to keep supporting GMP 4.x (as it lacks a few other features)? GMP 5.0.0 has been released over 5 years ago... 
:-/ Cheers, hvr From karel.gardas at centrum.cz Mon Jan 19 10:55:28 2015 From: karel.gardas at centrum.cz (Karel Gardas) Date: Mon, 19 Jan 2015 11:55:28 +0100 Subject: integer-gmp2 issues on Solaris/SPARC In-Reply-To: <87lhkzgkop.fsf@gmail.com> References: <20141022105441.GA14512@machine> <87a94n808i.fsf@gmail.com> <1414533995-sup-9843@sabre> <1420357849-sup-8899@sabre> <87iognrm4b.fsf@gmail.com> <54BBC63D.4060607@centrum.cz> <87y4p06se0.fsf@gmail.com> <54BCCDA7.3040009@centrum.cz> <87lhkzgkop.fsf@gmail.com> Message-ID: <54BCE2A0.7070506@centrum.cz> On 01/19/15 10:54 AM, Herbert Valerio Riedel wrote: >> Is it possible for you to test for those mpn_ symbols in integrer-gmp2 >> configure and if they are presented then you can use your >> __gmpn_andn_n foreigner call? > > I'm actually rather considering not using those at all when GMP version > is 4.* as they're not part of the official API of GMP 4.x > > Btw, how long do we need to keep supporting GMP 4.x (as it lacks a few > other features)? > > GMP 5.0.0 has been released over 5 years ago... :-/ It depends on support policy. Speaking about RHEL 6.x and Solaris 11.x, both are supported till 2020 respectively till 2022. If however we support only newest version line of the OS, then RHEL 7.x is already in public (supporting gmp 5.x) but Solaris 12 not. Solaris 12 will probably be released sometimes in 2016 (following Oracle's SPARC/Solaris roadmap). My hope is, this will include gmp 5.x too. Are you ok waiting another year or two for transition? Thanks! Karel From peter.trommler at ohm-hochschule.de Mon Jan 19 11:04:12 2015 From: peter.trommler at ohm-hochschule.de (Peter Trommler) Date: Mon, 19 Jan 2015 12:04:12 +0100 Subject: ANNOUNCE: GHC 7.10.1 Release Candidate 1 - questions on Mac OS platform References: Message-ID: George Colpitts wrote: [...] > - When using ghci with 7.10.1 RC1 I get the following errors > intermittently. Is anybody else seeing these? [...] > - ld: library not found for -l:ghc31505_10.dylib > collect2: error: ld returned 1 exit status > phase `Linker' failed (exitcode = 1) This is ticket #9875 (https://ghc.haskell.org/trac/ghc/ticket/9875) and it is fixed in HEAD and has been merged into the 7.10 branch. The fix will be in ghc 7.10.1 RC2. Peter From george.colpitts at gmail.com Mon Jan 19 12:35:43 2015 From: george.colpitts at gmail.com (George Colpitts) Date: Mon, 19 Jan 2015 08:35:43 -0400 Subject: ANNOUNCE: GHC 7.10.1 Release Candidate 1 - questions on Mac OS platform In-Reply-To: References: Message-ID: Thanks Peter On Mon, Jan 19, 2015 at 7:04 AM, Peter Trommler < peter.trommler at ohm-hochschule.de> wrote: > George Colpitts wrote: > > [...] > > - When using ghci with 7.10.1 RC1 I get the following errors > > intermittently. Is anybody else seeing these? > [...] > > - ld: library not found for -l:ghc31505_10.dylib > > collect2: error: ld returned 1 exit status > > phase `Linker' failed (exitcode = 1) > This is ticket #9875 (https://ghc.haskell.org/trac/ghc/ticket/9875) > and it is fixed in HEAD and has been merged into the 7.10 branch. The > fix will be in ghc 7.10.1 RC2. > > Peter > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From austin at well-typed.com Mon Jan 19 13:53:08 2015 From: austin at well-typed.com (Austin Seipp) Date: Mon, 19 Jan 2015 07:53:08 -0600 Subject: RC2 Cutoff at the end of this week Message-ID: Hi *, I just wanted to let everyone know we plan on releasing RC2 at the end of this week, on Friday. We're hoping this will be the final RC before we do the actual release, so if you want to get your fixes in for users to test - now is the time! Specifically, here are a few things: - Edward, can you please merge D614 and D603? I think we should pull these into the 7.10 branch. - Erik, can you follow up on D599? It would be nice to have this in early. - Alan, I'll get D538 & D620 into 7.10 today, it just requires a more delicate merge I think (there are probably some parent commits to pick up). - Peter W, I'll get to #9961 later today, but if you don't mind, please post future things to Phabricator - it makes it much easier for a proper review (although your fix looks good still!) - Peter T, D622 looks OK to go into 7.10 and HEAD to me, so I can merge it. I'll follow up with a curated list of Trac tickets for the final release (in another email). Let me know if you have questions. Thanks. -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From alan.zimm at gmail.com Mon Jan 19 13:55:36 2015 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Mon, 19 Jan 2015 15:55:36 +0200 Subject: RC2 Cutoff at the end of this week In-Reply-To: References: Message-ID: Thanks Alan On Mon, Jan 19, 2015 at 3:53 PM, Austin Seipp wrote: > Hi *, > > I just wanted to let everyone know we plan on releasing RC2 at the end > of this week, on Friday. We're hoping this will be the final RC before > we do the actual release, so if you want to get your fixes in for > users to test - now is the time! > > Specifically, here are a few things: > > - Edward, can you please merge D614 and D603? I think we should pull > these into the 7.10 branch. > > - Erik, can you follow up on D599? It would be nice to have this in > early. > > - Alan, I'll get D538 & D620 into 7.10 today, it just requires a > more delicate merge I think (there are probably some parent commits to > pick up). > > - Peter W, I'll get to #9961 later today, but if you don't mind, > please post future things to Phabricator - it makes it much easier for > a proper review (although your fix looks good still!) > > - Peter T, D622 looks OK to go into 7.10 and HEAD to me, so I can merge > it. > > I'll follow up with a curated list of Trac tickets for the final > release (in another email). > > Let me know if you have questions. > > Thanks. > > -- > Regards, > > Austin Seipp, Haskell Consultant > Well-Typed LLP, http://www.well-typed.com/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Mon Jan 19 16:21:12 2015 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 19 Jan 2015 16:21:12 +0000 Subject: vectorisation code? In-Reply-To: <2AEE1E4B-1990-4884-8E48-FA4BC75EC231@cse.unsw.edu.au> References: <0CC751A6-A9A9-47AA-961E-A997F39844A2@cis.upenn.edu> <618BE556AADD624C9C918AA5D5911BEF562A788B@DB3PRD3001MB020.064d.mgd.msft.net> <2AEE1E4B-1990-4884-8E48-FA4BC75EC231@cse.unsw.edu.au> Message-ID: <618BE556AADD624C9C918AA5D5911BEF562AAB21@DB3PRD3001MB020.064d.mgd.msft.net> Austin, (or anyone else) Manuel says: | > Would it be ok if we left it in the repo, but CPP'd it out so that | we | > didn't compile everything? 
(The DPH library is in the same state at | > the moment.) | > | > It might suffer bit-rot, but it?d still be there for resurrection. | | Sure, that?s ok. Could you action this? Just avoid compiling anything in 'vectorise/', using (I suppose) cpp to create a stub where necessary. Leave enough comments to explain! Simon | | I hope everything is fine in Cambridge! | Manuel | | > | -----Original Message----- | > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of | > | Manuel M T Chakravarty | > | Sent: 16 January 2015 02:58 | > | To: Richard Eisenberg | > | Cc: ghc-devs at haskell.org Devs | > | Subject: Re: vectorisation code? | > | | > | [Sorry, sent from the wrong account at first.] | > | | > | We currently don?t have the resources to work on DPH. I would | > | obviously prefer to leave the code in, in the hope that we will be | > | able to return to it. | > | | > | Manuel | > | | > | > Richard Eisenberg : | > | > | > | > Hi devs, | > | > | > | > There's a sizable number of modules in the `vectorise` | > | subdirectory of GHC. I'm sure these do all sorts of wonderful | > | things. But what, exactly? And, does anyone make use of these | wonderful things? | > | > | > | > A quick poking through the code shows a tiny link between the | > | vectorise code and the rest of GHC -- the function `vectorise` | > | exported from the module `Vectorise`, which is named in exactly | one | > | place from SimplCore. From what I can tell, the function will be | > | called only when `-fvectorise` is specified, and then it seems to | > | interact with a {-# VECTORISE #-} pragma. However, `{-# VECTORISE | > | #-}` doesn't appear in the manual at all, and `-fvectorise` is | > | given only a cursory explanation. It seems these work with DPH... | > | which has been disabled, no? Searching online finds several hits, | > | but nothing more recent than 2012. | > | > | > | > I hope this question doesn't offend -- it seems that | > | vectorisation probably has amazing performance gains. Yet, the | > | feature also seems unloved. In the meantime, compiling (and | > | recompiling, and | > | recompiling...) the modules takes time, as does going through | them | > | to propagate changes from elsewhere. If this feature is truly | > | orphaned, unloved, and unused at the moment, is it reasonable to | > | consider putting it on furlough? | > | > | > | > Thanks, | > | > Richard | > | > _______________________________________________ | > | > ghc-devs mailing list | > | > ghc-devs at haskell.org | > | > http://www.haskell.org/mailman/listinfo/ghc-devs | > | | > | _______________________________________________ | > | ghc-devs mailing list | > | ghc-devs at haskell.org | > | http://www.haskell.org/mailman/listinfo/ghc-devs From marlowsd at gmail.com Mon Jan 19 17:42:10 2015 From: marlowsd at gmail.com (Simon Marlow) Date: Mon, 19 Jan 2015 17:42:10 +0000 Subject: Out of memory mystery In-Reply-To: References: Message-ID: <54BD41F2.5020009@gmail.com> There are a couple of reasons this could happen: - The -h flag causes the GC to run more often, which will reclaim memory more promptly - -h causes more old-gen GCs to happen, which can avoid some cases where generational GC has promoted something that hangs on to a lot of stuff causing the heap to grow. Cheers, Simon On 06/12/2014 23:58, Lennart Augustsson wrote: > I'm running the 32-bit Windows version of ghc-7.8.3. 
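To make the suggestion above ("cpp to create a stub where necessary") concrete, a stubbed-out entry point might look roughly like the sketch below. This is not actual GHC source: it only assumes that the vectorise entry point SimplCore calls has the usual ModGuts -> CoreM ModGuts core-to-core pass type, and that some build-time CPP flag would select between the real vectoriser and this stub.

    -- Sketch of a stubbed Vectorise module (illustration only).
    module Vectorise ( vectorise ) where

    import CoreMonad ( CoreM )
    import HscTypes  ( ModGuts )

    -- With vectorisation compiled out, the pass is simply the identity,
    -- so a -fvectorise pipeline stage becomes a no-op.
    vectorise :: ModGuts -> CoreM ModGuts
    vectorise = return

Whether the stub lives behind CPP in the existing module or in a separate file is a packaging detail; the point is just that SimplCore keeps something well-typed to call.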
> > Here are two runs: > > $ RunMu +RTS -A64M -h -Sstat.log -i1 -RTS -c Strat.App.Abacus.Main > Compiling afresh Strat.App.Abacus.Main > Compiled afresh Strat.App.Abacus.Main, 1302.84s > > $ RunMu +RTS -A64M -Sstat.log -i1 -RTS -c Strat.App.Abacus.Main > Compiling afresh Strat.App.Abacus.Main > RunMu.exe: out of memory > > The binary is compiled without profiling, but in the first run I'm using > the -h flag to get the rudimentary heap profile. And with -h it works, > but without the flag it runs out of memory. > > Any bright ideas from the RTS experts on why this could happen? > > -- Lennart > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > From austin at well-typed.com Mon Jan 19 18:28:11 2015 From: austin at well-typed.com (Austin Seipp) Date: Mon, 19 Jan 2015 12:28:11 -0600 Subject: HEADS UP: Enabling --slow validate on Phabricator Message-ID: Hi *, This is just a warning that soon, I'd like to enable --slow ./validate on Phabricator. What does this mean? It means that builds will take longer, but GHC will be tested much more thoroughly with each commit and with each patch that's submitted. Unfortunately, the bad news is that we've been pretty sloppy about making sure the --slow configuration always works... last I tried it (admittedly a little while ago) there were quite a few extra failures. However, this is just a pre-emptive warning that things may take longer, and a few things may break more soon. Particularly, there's now the chance Harbormaster will see failures that you did not see before! (for example, if you changed the code generator and broke the profiling build). But that's probably a good thing, since you can defer to checking slow tests afterwords. -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From george.colpitts at gmail.com Mon Jan 19 19:09:37 2015 From: george.colpitts at gmail.com (George Colpitts) Date: Mon, 19 Jan 2015 15:09:37 -0400 Subject: ANNOUNCE: GHC 7.10.1 Release Candidate 1 - questions on Mac OS platform In-Reply-To: References: Message-ID: Thanks Peter, I think this problem is unique to 10.10.1 on MacOS. In any case, due to it 7.10.1 RC1 is relatively useless to me. Is there a script to uninstall? There is a make install so I was hoping make uninstall would do the right thing but it doesn't seem to. I think I figured out how to delete it but it would be nice if our standard process provided a script to do so. Looking forward to 7.10.1 RC2. Thanks for everybody's help Best George On Mon, Jan 19, 2015 at 7:04 AM, Peter Trommler < peter.trommler at ohm-hochschule.de> wrote: > George Colpitts wrote: > > [...] > > - When using ghci with 7.10.1 RC1 I get the following errors > > intermittently. Is anybody else seeing these? > [...] > > - ld: library not found for -l:ghc31505_10.dylib > > collect2: error: ld returned 1 exit status > > phase `Linker' failed (exitcode = 1) > This is ticket #9875 (https://ghc.haskell.org/trac/ghc/ticket/9875) > and it is fixed in HEAD and has been merged into the 7.10 branch. The > fix will be in ghc 7.10.1 RC2. > > Peter > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ezyang at mit.edu Mon Jan 19 19:20:40 2015 From: ezyang at mit.edu (Edward Z. 
Yang) Date: Mon, 19 Jan 2015 11:20:40 -0800 Subject: RC2 Cutoff at the end of this week In-Reply-To: References: Message-ID: <1421695223-sup-254@sabre> Excerpts from Austin Seipp's message of 2015-01-19 05:53:08 -0800: > - Edward, can you please merge D614 and D603? I think we should pull > these into the 7.10 branch. OK, these two are done. Edward From austin at well-typed.com Mon Jan 19 21:37:56 2015 From: austin at well-typed.com (Austin Seipp) Date: Mon, 19 Jan 2015 15:37:56 -0600 Subject: GHC Weekly News - 2015/01/19 Message-ID: Hi *, It's time for some more GHC news! The GHC 7.10 release is closing in, which has been the primary place we're focusing our attention. In particular, we're hoping RC2 will be Real Soon Now. Some notes from the past GHC HQ meetings this week: - GHC 7.10 is still rolling along smoothly, and it's expected that RC2 will be cut this Friday, January 23rd. Austin sent out an email about this to `ghc-devs`, so we can hopefully get all the necessary fixes in. - Our status page for GHC 7.10 lists all the current bullet points and tickets we hope to address: https://ghc.haskell.org/trac/ghc/wiki/Status/GHC-7.10.1 - Currently, GHC HQ isn't planning on focusing many cycles on any GHC 7.10 tickets that aren't '''highest priority'''. We're otherwise going to fix things as we see fit, at our leisure - but a highest priority bug is a showstopper for us. This means if you have something you consider a showstopper for the next release, you should bump the priority on the ticket and yell at us! - We otherwise think everything looks pretty smooth for 7.10.1 RC2 - our libraries are updated, and most of the currently queued patches (with a few minor exceptions) are done and merged. Some notes from the mailing list include: - Austin announced the GHC 7.10.1 RC2 cutoff, which will be on '''Friday the 23rd'''. https://www.haskell.org/pipermail/ghc-devs/2015-January/008026.html - Austin has alerted everyone that soon, Phabricator will run all builds with `./validate --slow`, which will increase the time taken for most builds, but will catch a wider array of bugs in commits and submitted patches - there are many cases the default `./validate` script still doesn't catch. https://www.haskell.org/pipermail/ghc-devs/2015-January/008030.html - Johan Tibell asked about some clarifications for the `HsBang` datatype inside GHC. In response, Simon came back with some clarifications, comments, and refactorings, which greatly helped Johan. ttps://www.haskell.org/pipermail/ghc-devs/2015-January/007905.html - Jens Petersen announced a Fedora Copr repo for GHC 7.8.4: https://www.haskell.org/pipermail/ghc-devs/2015-January/007978.html - Richard Eisenberg had a question about the vectoriser: can we disable it? DPH seems to have stagnated a bit recently, bringing into question the necessity of keeping it on. There hasn't been anything done yet, but it looks like the build will get lighter, with a few more modules soon: https://www.haskell.org/pipermail/ghc-devs/2015-January/007986.html - Ben Gamari has an interesting email about trying to optimize `bytestring`, but he hit a snag with small literals being floated out causing very poor assembly results. Hopefully Simon (or anyone!) can follow up soon with some help: https://www.haskell.org/pipermail/ghc-devs/2015-January/007997.html - Konrad G?dek asks: why does it seem the GHC API is slower at calling native code than a compiled executable is? Konrad asks as this issue of performance is particularly important for their work. 
https://www.haskell.org/pipermail/ghc-devs/2015-January/007990.html - Jan Stolarek has a simple question: what English spelling do we aim for in GHC? It seems that while GHC supports an assortment of British and American english syntactic literals (e.g. `SPECIALIZE` and `SPECIALISE`), the compiler sports an assortment of British/American identifiers on its own! https://www.haskell.org/pipermail/ghc-devs/2015-January/007999.html - Luis Gabriel has a question about modifying the compiler's profiling output, particularly adding a new CCS (Cost Centre Structure) field. He's hit a bug it seems, and is looking for help with his patch. https://www.haskell.org/pipermail/ghc-devs/2015-January/008015.html Closed tickets the past few weeks include: #9966, #9904, #9969, #9972, #9934, #9967, #9875, #9900, #9973, #9890, #5821, #9984, #9997, #9998, #9971, #10000, #10002, #9243, #9889, #9384, #8624, #9922, #9878, #9999, #9957, #7298, and #9836. -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From simonpj at microsoft.com Mon Jan 19 22:42:56 2015 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 19 Jan 2015 22:42:56 +0000 Subject: cminusminus.org does not have a link to the spec In-Reply-To: <20150117184227.35a6c9a9@sf> References: <20140914191604.199a3f50@sf> <618BE556AADD624C9C918AA5D5911BEF222159D1@DB3PRD3001MB020.064d.mgd.msft.net> <20140916210318.5d7b5fff@sf> <618BE556AADD624C9C918AA5D5911BEF2221CE98@DB3PRD3001MB020.064d.mgd.msft.net> <20150117184227.35a6c9a9@sf> Message-ID: <618BE556AADD624C9C918AA5D5911BEF562AB1A2@DB3PRD3001MB020.064d.mgd.msft.net> Simon M: this is more your bailiwick than mine. Sergei: It's always a good idea to create a Trac ticket to accompany a Phab patch, because we have better milestone/priority support for Trac tickets. Don't forget to explain the motivation and rationale, giving examples. If it all gets too voluminous, start a Trac wiki page. Apols if you have done so already; I'm offline. Simon | -----Original Message----- | From: Sergei Trofimovich [mailto:slyich at gmail.com] | Sent: 17 January 2015 18:42 | To: Simon Peyton Jones | Cc: Norman Ramsey; ghc-devs; Simon Marlow | Subject: Re: cminusminus.org does not have a link to the spec | | On Tue, 16 Sep 2014 20:23:10 +0000 | Simon Peyton Jones wrote: | | > Thanks. This is beyond my competence, and I'm totally submerged | anyway. I suggest you make a Trac ticket about it anyway. Simon Marlow | will probably have an opinion. | | Today I've found an excuse to actually implement it :) | https://phabricator.haskell.org/D622 | | Reused 'CLOSURE' token and added | import CLOSURE id; | to existing | import id; | | > | -----Original Message----- | > | From: Sergei Trofimovich [mailto:slyich at gmail.com] | > | Sent: 16 September 2014 19:03 | > | To: Simon Peyton Jones | > | Cc: Norman Ramsey; ghc-devs; Simon Marlow | > | Subject: Re: cminusminus.org does not have a link to the spec | > | | > | On Mon, 15 Sep 2014 12:05:27 +0000 | > | Simon Peyton Jones wrote: | > | | > | My planned change is for GHC's .cmm files syntax/codegen. 
| > | The idea came out after having stumbled upon a rare ia64 | > | bug in GHC's C codegen: | > | | > | | > | | http://git.haskell.org/ghc.git/commitdiff/e18525fae273f4c1ad8d6cbe1dea4fc | > | 074cac721 | > | | > | The fundamental bug here is the following: | > | Suppose we have two bits of rts: one .c file and one .cmm file | > | | > | // rts.c defines and exports a function and a variable | > | void some_rts_fun (void); | > | int some_rts_var; | > | | > | // rts.cmm uses rts.c's function and variable | > | import some_rts_fun; /* this forces C codegen to emit function- | like | > | 'StgFunPtr some_rts_fun | ();' | > | prototype, it's fine */ | > | | > | import some_rts_var; /* also forces C codegen to emit function- | like | > | 'StgFunPtr some_rts_var | ();' | > | prototype, it's broken */ | > | // ... | > | W whatever = &some_rts_var; /* will pick address not to a real | > | variable, but to a | > | so called | > | function stub, a separate structure | > | pointing to | real | > | 'some_rts_var' */ | > | | > | I plan to tweak syntax to teach Cmm to distinct between | > | imported C global variables/constants, imported Cmm info | > | tables(closures), maybe other cases. | > | | > | I thought of adding haskell-like syntax for imports: | > | foreign ccall import some_rts_fun; | > | foreign cdata import some_rts_var; | > | | > | or maybe | > | import some_rts_fun; | > | import "&some_rts_fun" as some_rts_fun; | > | | > | This sort of bugs can be easily spotted by whole-program C compiler. | > | gcc can do it with -flto option. I basically added to the | mk/build.mk: | > | SRC_CC_OPTS += -flto | > | SRC_LD_OPTS += -flto -fuse-linker-plugin | > | SRC_HC_OPTS += -optc-flto | > | SRC_HC_OPTS += -optl-flto -optl-fuse-linker-plugin | > | and started with './configure --enable-unregisterised' | > | | > | It immediately shown some of current offenders: | > | error: variable 'ghczmprim_GHCziTypes_False_closure' redeclared | as | > | function | > | error: variable 'ghczmprim_GHCziTypes_True_closure' redeclared | as | > | function | > | | > | I hope this fuzzy explanation makes some sense. | > | | > | Thanks! | > | | > | > Sergei | > | > | > | > C-- was originally envisaged as a target language for a variety of | > | compilers. But in fact LLVM, which was developed at a similar time, | > | "won" that race and has built a far larger ecosystem. That's fine | with | > | us -- it's great how successful LLVM has been -- but it means that C- | - is | > | now used essentially only in GHC. | > | > | > | > I'm not sure where the original C-- documents now are; Norman can | you | > | say? (I do know that the cminusminus.org has lapsed.) | > | > | > | > The GHC variant of C-- is defined mainly by the Cmm data type in | GHC's | > | source code. It does have a concrete syntax, because some bits of | GHC's | > | runtime system are written in Cmm. But I fear that this concrete | language | > | is not well documented. (Simon Marlow may know more here.) | > | > | > | > Because GHC's Cmm is part of GHC, we are free to change it. Would | you | > | like to say more about the change you want to make, and why you want | to | > | make it? Is this relating directly to GHC or to some other project? | > | > | > | > Simon | > | > | > | > | > | > | -----Original Message----- | > | > | From: Sergei Trofimovich [mailto:slyich at gmail.com] | > | > | Sent: 14 September 2014 17:16 | > | > | To: Simon Peyton Jones | > | > | Subject: cminusminus.org does not have a link to the spec | > | > | | > | > | Hello Simon! 
| > | > | | > | > | I had a plan to tweak a bit "import" statement | > | > | syntax of Cmm in GHC. | > | > | | > | > | Namely, to distinct between | > | > | import some_c_function; | > | > | import some_c_global_variable; | > | > | | > | > | To try it I first attempted to find latest c-- spec | > | > | (to find some design sketches if available) at | > | > | | > | > | http://www.cminusminus.org/c-downloads/ | > | > | | > | > | But seems the links (and images?) have gone away | > | > | as well as rsync server described at: | > | > | | > | > | http://www.cminusminus.org/the-c-rsync-server/ | > | > | | > | > | Maybe you could forward it to site admins so they would | > | > | tweak links or point me to working copy. | > | > | | > | > | Apologies for bothering you on such minor | > | > | | > | > | Thank you! | | -- | | Sergei From eir at cis.upenn.edu Tue Jan 20 00:12:28 2015 From: eir at cis.upenn.edu (Richard Eisenberg) Date: Mon, 19 Jan 2015 17:12:28 -0700 Subject: vectorisation code? In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF562AAB21@DB3PRD3001MB020.064d.mgd.msft.net> References: <0CC751A6-A9A9-47AA-961E-A997F39844A2@cis.upenn.edu> <618BE556AADD624C9C918AA5D5911BEF562A788B@DB3PRD3001MB020.064d.mgd.msft.net> <2AEE1E4B-1990-4884-8E48-FA4BC75EC231@cse.unsw.edu.au> <618BE556AADD624C9C918AA5D5911BEF562AAB21@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: With all due respect to Manuel's request, could I opt for a different resolution? I frequently (several times during most minutes of GHC programming) grep the GHC source code for this or that. If the vectorisation code is CPP'd away but still present in the compiler/ directory, these greps will find hits in the code. Furthermore, without the specific knowledge that there is a `#if 0` at the top of the file, the code will look quite active. Of course, I could modify my grep macro to skip the vectorise directory, but the next dev down the road might not know to do this. Here's an alternate suggestion: in SimplCore, keep the call to vectorise around, but commented out (not just with CPP, for better syntax highlighting). Include a Note explaining what `vectorise` does and why it's not there at the moment. However, move the actual vectorisation code somewhere else in the repo, outside of the source directories (`utils`? a new `attic` directory?). Manuel, is this acceptable to you? Other devs, thoughts? Perhaps we should also make a Trac ticket asking for some love to be given to this feature. Thanks, Richard On Jan 19, 2015, at 9:21 AM, Simon Peyton Jones wrote: > Austin, (or anyone else) > > Manuel says: > > | > Would it be ok if we left it in the repo, but CPP'd it out so that > | we > | > didn't compile everything? (The DPH library is in the same state at > | > the moment.) > | > > | > It might suffer bit-rot, but it?d still be there for resurrection. > | > | Sure, that?s ok. > > Could you action this? Just avoid compiling anything in 'vectorise/', using (I suppose) cpp to create a stub where necessary. > > Leave enough comments to explain! > > Simon > > | > | I hope everything is fine in Cambridge! > | Manuel > | > | > | -----Original Message----- > | > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of > | > | Manuel M T Chakravarty > | > | Sent: 16 January 2015 02:58 > | > | To: Richard Eisenberg > | > | Cc: ghc-devs at haskell.org Devs > | > | Subject: Re: vectorisation code? > | > | > | > | [Sorry, sent from the wrong account at first.] > | > | > | > | We currently don?t have the resources to work on DPH. 
I would > | > | obviously prefer to leave the code in, in the hope that we will be > | > | able to return to it. > | > | > | > | Manuel > | > | > | > | > Richard Eisenberg : > | > | > > | > | > Hi devs, > | > | > > | > | > There's a sizable number of modules in the `vectorise` > | > | subdirectory of GHC. I'm sure these do all sorts of wonderful > | > | things. But what, exactly? And, does anyone make use of these > | wonderful things? > | > | > > | > | > A quick poking through the code shows a tiny link between the > | > | vectorise code and the rest of GHC -- the function `vectorise` > | > | exported from the module `Vectorise`, which is named in exactly > | one > | > | place from SimplCore. From what I can tell, the function will be > | > | called only when `-fvectorise` is specified, and then it seems to > | > | interact with a {-# VECTORISE #-} pragma. However, `{-# VECTORISE > | > | #-}` doesn't appear in the manual at all, and `-fvectorise` is > | > | given only a cursory explanation. It seems these work with DPH... > | > | which has been disabled, no? Searching online finds several hits, > | > | but nothing more recent than 2012. > | > | > > | > | > I hope this question doesn't offend -- it seems that > | > | vectorisation probably has amazing performance gains. Yet, the > | > | feature also seems unloved. In the meantime, compiling (and > | > | recompiling, and > | > | recompiling...) the modules takes time, as does going through > | them > | > | to propagate changes from elsewhere. If this feature is truly > | > | orphaned, unloved, and unused at the moment, is it reasonable to > | > | consider putting it on furlough? > | > | > > | > | > Thanks, > | > | > Richard > | > | > _______________________________________________ > | > | > ghc-devs mailing list > | > | > ghc-devs at haskell.org > | > | > http://www.haskell.org/mailman/listinfo/ghc-devs > | > | > | > | _______________________________________________ > | > | ghc-devs mailing list > | > | ghc-devs at haskell.org > | > | http://www.haskell.org/mailman/listinfo/ghc-devs > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > From howard_b_golden at yahoo.com Tue Jan 20 00:15:06 2015 From: howard_b_golden at yahoo.com (Howard B. Golden) Date: Tue, 20 Jan 2015 00:15:06 +0000 (UTC) Subject: cminusminus.org does not have a link to the spec In-Reply-To: <20150117184227.35a6c9a9@sf> References: <20150117184227.35a6c9a9@sf> Message-ID: <995695563.2131388.1421712906440.JavaMail.yahoo@jws100122.mail.ne1.yahoo.com> Hi Sergei, See http://www.cs.tufts.edu/~nr/c--/extern/man2.ps Google is your friend! Howard From david.feuer at gmail.com Tue Jan 20 00:29:17 2015 From: david.feuer at gmail.com (David Feuer) Date: Mon, 19 Jan 2015 19:29:17 -0500 Subject: vectorisation code? In-Reply-To: References: Message-ID: Richard Eisenberg wrote: > Here's an alternate suggestion: in SimplCore, keep the call to vectorise around, but commented out (not just with CPP, for better syntax highlighting). Include a Note explaining what `vectorise` does and why it's not there at the moment. However, move the actual vectorisation code somewhere else in the repo, outside of the source directories (`utils`? a new `attic` directory?). I don't know too much about git, but I would think we'd want to remove it from master and add a commit putting it back in to a dph branch. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From chak at cse.unsw.edu.au Tue Jan 20 02:50:58 2015 From: chak at cse.unsw.edu.au (Manuel M T Chakravarty) Date: Tue, 20 Jan 2015 13:50:58 +1100 Subject: vectorisation code? In-Reply-To: References: <0CC751A6-A9A9-47AA-961E-A997F39844A2@cis.upenn.edu> <618BE556AADD624C9C918AA5D5911BEF562A788B@DB3PRD3001MB020.064d.mgd.msft.net> <2AEE1E4B-1990-4884-8E48-FA4BC75EC231@cse.unsw.edu.au> <618BE556AADD624C9C918AA5D5911BEF562AAB21@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <30A42B9C-7A26-4A0F-872A-92D11E7ACA31@cse.unsw.edu.au> Given the vectorisation code is in its own subdirectory already, it?s quite easy to spot in a grep, I would say. Manuel > Richard Eisenberg : > > With all due respect to Manuel's request, could I opt for a different resolution? I frequently (several times during most minutes of GHC programming) grep the GHC source code for this or that. If the vectorisation code is CPP'd away but still present in the compiler/ directory, these greps will find hits in the code. Furthermore, without the specific knowledge that there is a `#if 0` at the top of the file, the code will look quite active. Of course, I could modify my grep macro to skip the vectorise directory, but the next dev down the road might not know to do this. > > Here's an alternate suggestion: in SimplCore, keep the call to vectorise around, but commented out (not just with CPP, for better syntax highlighting). Include a Note explaining what `vectorise` does and why it's not there at the moment. However, move the actual vectorisation code somewhere else in the repo, outside of the source directories (`utils`? a new `attic` directory?). > > Manuel, is this acceptable to you? Other devs, thoughts? Perhaps we should also make a Trac ticket asking for some love to be given to this feature. > > Thanks, > Richard > > On Jan 19, 2015, at 9:21 AM, Simon Peyton Jones wrote: > >> Austin, (or anyone else) >> >> Manuel says: >> >> | > Would it be ok if we left it in the repo, but CPP'd it out so that >> | we >> | > didn't compile everything? (The DPH library is in the same state at >> | > the moment.) >> | > >> | > It might suffer bit-rot, but it?d still be there for resurrection. >> | >> | Sure, that?s ok. >> >> Could you action this? Just avoid compiling anything in 'vectorise/', using (I suppose) cpp to create a stub where necessary. >> >> Leave enough comments to explain! >> >> Simon >> >> | >> | I hope everything is fine in Cambridge! >> | Manuel >> | >> | > | -----Original Message----- >> | > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of >> | > | Manuel M T Chakravarty >> | > | Sent: 16 January 2015 02:58 >> | > | To: Richard Eisenberg >> | > | Cc: ghc-devs at haskell.org Devs >> | > | Subject: Re: vectorisation code? >> | > | >> | > | [Sorry, sent from the wrong account at first.] >> | > | >> | > | We currently don?t have the resources to work on DPH. I would >> | > | obviously prefer to leave the code in, in the hope that we will be >> | > | able to return to it. >> | > | >> | > | Manuel >> | > | >> | > | > Richard Eisenberg : >> | > | > >> | > | > Hi devs, >> | > | > >> | > | > There's a sizable number of modules in the `vectorise` >> | > | subdirectory of GHC. I'm sure these do all sorts of wonderful >> | > | things. But what, exactly? And, does anyone make use of these >> | wonderful things? 
>> | > | > >> | > | > A quick poking through the code shows a tiny link between the >> | > | vectorise code and the rest of GHC -- the function `vectorise` >> | > | exported from the module `Vectorise`, which is named in exactly >> | one >> | > | place from SimplCore. From what I can tell, the function will be >> | > | called only when `-fvectorise` is specified, and then it seems to >> | > | interact with a {-# VECTORISE #-} pragma. However, `{-# VECTORISE >> | > | #-}` doesn't appear in the manual at all, and `-fvectorise` is >> | > | given only a cursory explanation. It seems these work with DPH... >> | > | which has been disabled, no? Searching online finds several hits, >> | > | but nothing more recent than 2012. >> | > | > >> | > | > I hope this question doesn't offend -- it seems that >> | > | vectorisation probably has amazing performance gains. Yet, the >> | > | feature also seems unloved. In the meantime, compiling (and >> | > | recompiling, and >> | > | recompiling...) the modules takes time, as does going through >> | them >> | > | to propagate changes from elsewhere. If this feature is truly >> | > | orphaned, unloved, and unused at the moment, is it reasonable to >> | > | consider putting it on furlough? >> | > | > >> | > | > Thanks, >> | > | > Richard >> | > | > _______________________________________________ >> | > | > ghc-devs mailing list >> | > | > ghc-devs at haskell.org >> | > | > http://www.haskell.org/mailman/listinfo/ghc-devs >> | > | >> | > | _______________________________________________ >> | > | ghc-devs mailing list >> | > | ghc-devs at haskell.org >> | > | http://www.haskell.org/mailman/listinfo/ghc-devs >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs From carter.schonwald at gmail.com Tue Jan 20 03:47:20 2015 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Mon, 19 Jan 2015 22:47:20 -0500 Subject: vectorisation code? In-Reply-To: References: <0CC751A6-A9A9-47AA-961E-A997F39844A2@cis.upenn.edu> <618BE556AADD624C9C918AA5D5911BEF562A788B@DB3PRD3001MB020.064d.mgd.msft.net> <2AEE1E4B-1990-4884-8E48-FA4BC75EC231@cse.unsw.edu.au> <618BE556AADD624C9C918AA5D5911BEF562AAB21@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: relatedly: wont the source be preserved in the git history if we remove it? the CPP etc solution is no simpler than just keeping the code cached in the git history right? Or will having it in the repo, but CPP'd/commented out somehow preserve some invariant that cant be maintained by resuscitating it later from the git history? the mater branch doesn't allow rebasing or force pushes AFAIK anyways, so the history truly is immutable, right? tl;dr our git repo is immutable, if we just deleted the dir, we still have it in the git history right? Esp if its not being maintained / type checked either way? On Mon, Jan 19, 2015 at 7:12 PM, Richard Eisenberg wrote: > With all due respect to Manuel's request, could I opt for a different > resolution? I frequently (several times during most minutes of GHC > programming) grep the GHC source code for this or that. If the > vectorisation code is CPP'd away but still present in the compiler/ > directory, these greps will find hits in the code. 
Furthermore, without the > specific knowledge that there is a `#if 0` at the top of the file, the code > will look quite active. Of course, I could modify my grep macro to skip the > vectorise directory, but the next dev down the road might not know to do > this. > > Here's an alternate suggestion: in SimplCore, keep the call to vectorise > around, but commented out (not just with CPP, for better syntax > highlighting). Include a Note explaining what `vectorise` does and why it's > not there at the moment. However, move the actual vectorisation code > somewhere else in the repo, outside of the source directories (`utils`? a > new `attic` directory?). > > Manuel, is this acceptable to you? Other devs, thoughts? Perhaps we should > also make a Trac ticket asking for some love to be given to this feature. > > Thanks, > Richard > > On Jan 19, 2015, at 9:21 AM, Simon Peyton Jones > wrote: > > > Austin, (or anyone else) > > > > Manuel says: > > > > | > Would it be ok if we left it in the repo, but CPP'd it out so that > > | we > > | > didn't compile everything? (The DPH library is in the same state at > > | > the moment.) > > | > > > | > It might suffer bit-rot, but it?d still be there for resurrection. > > | > > | Sure, that?s ok. > > > > Could you action this? Just avoid compiling anything in 'vectorise/', > using (I suppose) cpp to create a stub where necessary. > > > > Leave enough comments to explain! > > > > Simon > > > > | > > | I hope everything is fine in Cambridge! > > | Manuel > > | > > | > | -----Original Message----- > > | > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf > Of > > | > | Manuel M T Chakravarty > > | > | Sent: 16 January 2015 02:58 > > | > | To: Richard Eisenberg > > | > | Cc: ghc-devs at haskell.org Devs > > | > | Subject: Re: vectorisation code? > > | > | > > | > | [Sorry, sent from the wrong account at first.] > > | > | > > | > | We currently don?t have the resources to work on DPH. I would > > | > | obviously prefer to leave the code in, in the hope that we will be > > | > | able to return to it. > > | > | > > | > | Manuel > > | > | > > | > | > Richard Eisenberg : > > | > | > > > | > | > Hi devs, > > | > | > > > | > | > There's a sizable number of modules in the `vectorise` > > | > | subdirectory of GHC. I'm sure these do all sorts of wonderful > > | > | things. But what, exactly? And, does anyone make use of these > > | wonderful things? > > | > | > > > | > | > A quick poking through the code shows a tiny link between the > > | > | vectorise code and the rest of GHC -- the function `vectorise` > > | > | exported from the module `Vectorise`, which is named in exactly > > | one > > | > | place from SimplCore. From what I can tell, the function will be > > | > | called only when `-fvectorise` is specified, and then it seems to > > | > | interact with a {-# VECTORISE #-} pragma. However, `{-# VECTORISE > > | > | #-}` doesn't appear in the manual at all, and `-fvectorise` is > > | > | given only a cursory explanation. It seems these work with DPH... > > | > | which has been disabled, no? Searching online finds several hits, > > | > | but nothing more recent than 2012. > > | > | > > > | > | > I hope this question doesn't offend -- it seems that > > | > | vectorisation probably has amazing performance gains. Yet, the > > | > | feature also seems unloved. In the meantime, compiling (and > > | > | recompiling, and > > | > | recompiling...) the modules takes time, as does going through > > | them > > | > | to propagate changes from elsewhere. 
If this feature is truly > > | > | orphaned, unloved, and unused at the moment, is it reasonable to > > | > | consider putting it on furlough? > > | > | > > > | > | > Thanks, > > | > | > Richard > > | > | > _______________________________________________ > > | > | > ghc-devs mailing list > > | > | > ghc-devs at haskell.org > > | > | > http://www.haskell.org/mailman/listinfo/ghc-devs > > | > | > > | > | _______________________________________________ > > | > | ghc-devs mailing list > > | > | ghc-devs at haskell.org > > | > | http://www.haskell.org/mailman/listinfo/ghc-devs > > > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://www.haskell.org/mailman/listinfo/ghc-devs > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From allbery.b at gmail.com Tue Jan 20 04:35:15 2015 From: allbery.b at gmail.com (Brandon Allbery) Date: Mon, 19 Jan 2015 23:35:15 -0500 Subject: vectorisation code? In-Reply-To: References: <0CC751A6-A9A9-47AA-961E-A997F39844A2@cis.upenn.edu> <618BE556AADD624C9C918AA5D5911BEF562A788B@DB3PRD3001MB020.064d.mgd.msft.net> <2AEE1E4B-1990-4884-8E48-FA4BC75EC231@cse.unsw.edu.au> <618BE556AADD624C9C918AA5D5911BEF562AAB21@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: On Mon, Jan 19, 2015 at 10:47 PM, Carter Schonwald < carter.schonwald at gmail.com> wrote: > relatedly: wont the source be preserved in the git history if we remove > it? the CPP etc solution is > Indeed; most of the projects I'm involved with have a specific policy to *not* keep commented-out or otherwise disabled features in the active tree, because they can be pulled back later from history or branches as appropriate. You might want to either save it to a new branch or set a tag on HEAD before removing it, so you can more easily find it later. You've got a revision control system; let *it* do the work of keeping track of stuff like this. -- brandon s allbery kf8nh sine nomine associates allbery.b at gmail.com ballbery at sinenomine.net unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From jan.stolarek at p.lodz.pl Tue Jan 20 08:37:25 2015 From: jan.stolarek at p.lodz.pl (Jan Stolarek) Date: Tue, 20 Jan 2015 09:37:25 +0100 Subject: vectorisation code? In-Reply-To: References: <0CC751A6-A9A9-47AA-961E-A997F39844A2@cis.upenn.edu> <618BE556AADD624C9C918AA5D5911BEF562AAB21@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <201501200937.25031.jan.stolarek@p.lodz.pl> > Here's an alternate suggestion: in SimplCore, keep the call to vectorise > around, but commented out Yuck. Carter and Brandon are right here - we have git, let it do the job. I propose that we remove vectorization code, create a Trac ticket about vectorization & DPH needing love and record the commit hash in the ticket so that we can revert it easily in the future. Janek From hvriedel at gmail.com Tue Jan 20 08:47:13 2015 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Tue, 20 Jan 2015 09:47:13 +0100 Subject: vectorisation code? 
In-Reply-To: <201501200937.25031.jan.stolarek@p.lodz.pl> (Jan Stolarek's message of "Tue, 20 Jan 2015 09:37:25 +0100") References: <0CC751A6-A9A9-47AA-961E-A997F39844A2@cis.upenn.edu> <618BE556AADD624C9C918AA5D5911BEF562AAB21@DB3PRD3001MB020.064d.mgd.msft.net> <201501200937.25031.jan.stolarek@p.lodz.pl> Message-ID: <8761c1ltym.fsf@gmail.com> On 2015-01-20 at 09:37:25 +0100, Jan Stolarek wrote: >> Here's an alternate suggestion: in SimplCore, keep the call to vectorise >> around, but commented out > Yuck. Carter and Brandon are right here - we have git, let it do the > job. I propose that we remove vectorization code, create a Trac ticket > about vectorization & DPH needing love and record the commit hash in > the ticket so that we can revert it easily in the future. I'm also against commenting out dead code in the presence of a VCS. Btw, here's two links discussing the issues related to commenting out if anyone's interested in knowing more: - http://programmers.stackexchange.com/questions/190096/can-commented-out-code-be-valuable-documentation - http://programmers.stackexchange.com/questions/45378/is-commented-out-code-really-always-bad Cheers, hvr From dev at rodlogic.net Tue Jan 20 10:39:31 2015 From: dev at rodlogic.net (RodLogic) Date: Tue, 20 Jan 2015 08:39:31 -0200 Subject: vectorisation code? In-Reply-To: <8761c1ltym.fsf@gmail.com> References: <0CC751A6-A9A9-47AA-961E-A997F39844A2@cis.upenn.edu> <618BE556AADD624C9C918AA5D5911BEF562AAB21@DB3PRD3001MB020.064d.mgd.msft.net> <201501200937.25031.jan.stolarek@p.lodz.pl> <8761c1ltym.fsf@gmail.com> Message-ID: (disclaimer: I know nothing about the vectorization code) Now, is the vectorization code really dead code or it is code that needs love to come back to life? By removing it from the code base, you are probably sealing it's fate as dead code as we are limiting new or existing contributors to act on it (even if it's a commit hash away). If it is code that needs love to come back to life, grep noise or conditional compilation is a small price to pay here, imho. As a compromise, is it possible to move vectorization code into it's own submodule in git or is it too intertwined with core GHC? So that it can be worked on independent of GHC? On Tue, Jan 20, 2015 at 6:47 AM, Herbert Valerio Riedel wrote: > On 2015-01-20 at 09:37:25 +0100, Jan Stolarek wrote: > >> Here's an alternate suggestion: in SimplCore, keep the call to vectorise > >> around, but commented out > > > Yuck. Carter and Brandon are right here - we have git, let it do the > > job. I propose that we remove vectorization code, create a Trac ticket > > about vectorization & DPH needing love and record the commit hash in > > the ticket so that we can revert it easily in the future. > > I'm also against commenting out dead code in the presence of a VCS. > > Btw, here's two links discussing the issues related to commenting out if > anyone's interested in knowing more: > > - > http://programmers.stackexchange.com/questions/190096/can-commented-out-code-be-valuable-documentation > > - > http://programmers.stackexchange.com/questions/45378/is-commented-out-code-really-always-bad > > > Cheers, > hvr > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lampih at gmail.com Tue Jan 20 12:54:15 2015 From: lampih at gmail.com (=?UTF-8?Q?Lu=C3=ADs_Gabriel?=) Date: Tue, 20 Jan 2015 12:54:15 +0000 Subject: Playing with the profiler References: <20150118123531.64bf52bc@sf> Message-ID: I just did a clean build on top of the current master (I was building on top of ghc-7.8) and it worked! Thanks! -- Lu?s Gabriel On Sun Jan 18 2015 at 9:35:35 AM Sergei Trofimovich wrote: > On Sat, 17 Jan 2015 23:20:30 +0000 > Lu?s Gabriel wrote: > > > Hi there, > > > > I'm doing some experiments with the GHC time profiler and I need to add a > > new field to the Cost Centre structures. I managed to add the field in > the > > *CCS.h* header as well as in *codeGen/StgCmmProf.hs* but for some reason > > the program is crashing during garbage collection. > > > > As I have no experience with the GHC internals, I'm having trouble to > find > > the problem. It would be very nice if someone could give me some clue to > > find this bug. > > > > The patch on GHC as well as the test sample and stack traces can be found > > here: https://gist.github.com/luisgabriel/39d51cf4d661c7e62e22 > > I tried your patch as-is on current ghc-HEAD/amd64 and it works fine. > (might easily be another problem) > > What I am suspicious about is you are using > '-prof -debug' and plain 'ghc'. > > Could it be that you didn't add > GhcRTSWays += debug_p > in your build.mk after a patch was tweaked last time > and some old runtime against new ghc was used? > > I usually use inplace/bin/ghc-stage2 right > after compilation without installation. > > -- > > Sergei > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ggreif at gmail.com Tue Jan 20 13:45:19 2015 From: ggreif at gmail.com (Gabor Greif) Date: Tue, 20 Jan 2015 14:45:19 +0100 Subject: [commit: ghc] master: comments only (9894f6a) In-Reply-To: <20150120123701.A53F93A300@ghc.haskell.org> References: <20150120123701.A53F93A300@ghc.haskell.org> Message-ID: Hi Simon, JFTR, you seem to be after the trailing zeros in the code you commented. If the bitmap is *really* that sparse then it might be profitable to rewrite it in terms of __builtin_ctz (when present). Some CPUs even have instructions for this. http://hardwarebug.org/2010/01/14/beware-the-builtins/ Possibly one could even switch to checking *leading* zeros by reformulating the algorithm and eliminate a few more instructions. http://www.hackersdelight.org/ might be another source for inspiration. Cheers, Gabor On 1/20/15, git at git.haskell.org wrote: > Repository : ssh://git at git.haskell.org/ghc > > On branch : master > Link : > http://ghc.haskell.org/trac/ghc/changeset/9894f6a5b4883ea87fd5f280a2eb4a8abfbd2a6b/ghc > >>--------------------------------------------------------------- > > commit 9894f6a5b4883ea87fd5f280a2eb4a8abfbd2a6b > Author: Simon Marlow > Date: Wed Jan 14 08:45:07 2015 +0000 > > comments only > > >>--------------------------------------------------------------- > > 9894f6a5b4883ea87fd5f280a2eb4a8abfbd2a6b > rts/sm/Scav.c | 2 ++ > 1 file changed, 2 insertions(+) > > diff --git a/rts/sm/Scav.c b/rts/sm/Scav.c > index 2ecb23b..781840c 100644 > --- a/rts/sm/Scav.c > +++ b/rts/sm/Scav.c > @@ -285,6 +285,8 @@ scavenge_large_srt_bitmap( StgLargeSRT *large_srt ) > > for (i = 0; i < size / BITS_IN(W_); i++) { > bitmap = large_srt->l.bitmap[i]; > + // skip zero words: bitmaps can be very sparse, and this helps > + // performance a lot in some cases. 
> if (bitmap != 0) { > for (j = 0; j < BITS_IN(W_); j++) { > if ((bitmap & 1) != 0) { > > _______________________________________________ > ghc-commits mailing list > ghc-commits at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-commits > From marlowsd at gmail.com Tue Jan 20 14:10:14 2015 From: marlowsd at gmail.com (Simon Marlow) Date: Tue, 20 Jan 2015 14:10:14 +0000 Subject: [commit: ghc] master: comments only (9894f6a) In-Reply-To: References: <20150120123701.A53F93A300@ghc.haskell.org> Message-ID: <54BE61C6.3090802@gmail.com> No, it checks for and skips over complete zero words, not just trailing zeroes. The bitmaps are really that sparse sometimes. Like hundreds of zero words with a few non-zero bits at either end. The change I made improved perf enough that this isn't a blocking issue for me any more, and I suspect that any further optimisation won't result in significant wins. The real problem is that the code generator generates huge SRT bitmaps, and I bet there are bigger wins to be had there. Cheers, Simon On 20/01/2015 13:45, Gabor Greif wrote: > Hi Simon, > > JFTR, you seem to be after the trailing zeros in the code you commented. > > If the bitmap is *really* that sparse then it might be profitable to > rewrite it in terms of > __builtin_ctz (when present). Some CPUs even have instructions for this. > > http://hardwarebug.org/2010/01/14/beware-the-builtins/ > > Possibly one could even switch to checking *leading* zeros by > reformulating the algorithm and eliminate a few more instructions. > > http://www.hackersdelight.org/ might be another source for inspiration. > > Cheers, > > Gabor > > On 1/20/15, git at git.haskell.org wrote: >> Repository : ssh://git at git.haskell.org/ghc >> >> On branch : master >> Link : >> http://ghc.haskell.org/trac/ghc/changeset/9894f6a5b4883ea87fd5f280a2eb4a8abfbd2a6b/ghc >> >>> --------------------------------------------------------------- >> >> commit 9894f6a5b4883ea87fd5f280a2eb4a8abfbd2a6b >> Author: Simon Marlow >> Date: Wed Jan 14 08:45:07 2015 +0000 >> >> comments only >> >> >>> --------------------------------------------------------------- >> >> 9894f6a5b4883ea87fd5f280a2eb4a8abfbd2a6b >> rts/sm/Scav.c | 2 ++ >> 1 file changed, 2 insertions(+) >> >> diff --git a/rts/sm/Scav.c b/rts/sm/Scav.c >> index 2ecb23b..781840c 100644 >> --- a/rts/sm/Scav.c >> +++ b/rts/sm/Scav.c >> @@ -285,6 +285,8 @@ scavenge_large_srt_bitmap( StgLargeSRT *large_srt ) >> >> for (i = 0; i < size / BITS_IN(W_); i++) { >> bitmap = large_srt->l.bitmap[i]; >> + // skip zero words: bitmaps can be very sparse, and this helps >> + // performance a lot in some cases. 
>> if (bitmap != 0) { >> for (j = 0; j < BITS_IN(W_); j++) { >> if ((bitmap & 1) != 0) { >> >> _______________________________________________ >> ghc-commits mailing list >> ghc-commits at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-commits >> From hvriedel at gmail.com Tue Jan 20 14:28:03 2015 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Tue, 20 Jan 2015 15:28:03 +0100 Subject: [commit: ghc] master: comments only (9894f6a) In-Reply-To: (Gabor Greif's message of "Tue, 20 Jan 2015 14:45:19 +0100") References: <20150120123701.A53F93A300@ghc.haskell.org> Message-ID: <87k30hh6h8.fsf@gmail.com> Hello Gabor, Fyi, Alex (cc'ed) already spent some brain-cycles on that, but it's not clear yet if it's worth optimising: https://gist.github.com/axman6/46edae58cc4e8242bdac Cheers, hvr On 2015-01-20 at 14:45:19 +0100, Gabor Greif wrote: > Hi Simon, > > JFTR, you seem to be after the trailing zeros in the code you commented. > > If the bitmap is *really* that sparse then it might be profitable to > rewrite it in terms of > __builtin_ctz (when present). > Some CPUs even have instructions for this. > > http://hardwarebug.org/2010/01/14/beware-the-builtins/ > > Possibly one could even switch to checking *leading* zeros by > reformulating the algorithm and eliminate a few more instructions. > > http://www.hackersdelight.org/ might be another source for inspiration. > > Cheers, > > Gabor > > On 1/20/15, git at git.haskell.org wrote: >> Repository : ssh://git at git.haskell.org/ghc >> >> On branch : master >> Link : >> http://ghc.haskell.org/trac/ghc/changeset/9894f6a5b4883ea87fd5f280a2eb4a8abfbd2a6b/ghc >> >>>--------------------------------------------------------------- >> >> commit 9894f6a5b4883ea87fd5f280a2eb4a8abfbd2a6b >> Author: Simon Marlow >> Date: Wed Jan 14 08:45:07 2015 +0000 >> >> comments only >> >> >>>--------------------------------------------------------------- >> >> 9894f6a5b4883ea87fd5f280a2eb4a8abfbd2a6b >> rts/sm/Scav.c | 2 ++ >> 1 file changed, 2 insertions(+) >> >> diff --git a/rts/sm/Scav.c b/rts/sm/Scav.c >> index 2ecb23b..781840c 100644 >> --- a/rts/sm/Scav.c >> +++ b/rts/sm/Scav.c >> @@ -285,6 +285,8 @@ scavenge_large_srt_bitmap( StgLargeSRT *large_srt ) >> >> for (i = 0; i < size / BITS_IN(W_); i++) { >> bitmap = large_srt->l.bitmap[i]; >> + // skip zero words: bitmaps can be very sparse, and this helps >> + // performance a lot in some cases. >> if (bitmap != 0) { >> for (j = 0; j < BITS_IN(W_); j++) { >> if ((bitmap & 1) != 0) { >> >> _______________________________________________ >> ghc-commits mailing list >> ghc-commits at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-commits >> > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs -- "Elegance is not optional" -- Richard O'Keefe From ggreif at gmail.com Tue Jan 20 14:46:13 2015 From: ggreif at gmail.com (Gabor Greif) Date: Tue, 20 Jan 2015 15:46:13 +0100 Subject: [commit: ghc] master: comments only (9894f6a) In-Reply-To: <54BE61C6.3090802@gmail.com> References: <20150120123701.A53F93A300@ghc.haskell.org> <54BE61C6.3090802@gmail.com> Message-ID: On 1/20/15, Simon Marlow wrote: > No, it checks for and skips over complete zero words, not just trailing > zeroes. Sure, I mean the for-loops after your if-condition, in lines 291 and 304 https://git.haskell.org/ghc.git/blob/9894f6a5b4883ea87fd5f280a2eb4a8abfbd2a6b:/rts/sm/Scav.c#l291 Cheers, Gabor > > The bitmaps are really that sparse sometimes. 
Like hundreds of zero > words with a few non-zero bits at either end. The change I made > improved perf enough that this isn't a blocking issue for me any more, > and I suspect that any further optimisation won't result in significant > wins. The real problem is that the code generator generates huge SRT > bitmaps, and I bet there are bigger wins to be had there. > > Cheers, > Simon > > On 20/01/2015 13:45, Gabor Greif wrote: >> Hi Simon, >> >> JFTR, you seem to be after the trailing zeros in the code you commented. >> >> If the bitmap is *really* that sparse then it might be profitable to >> rewrite it in terms of >> __builtin_ctz (when present). Some CPUs even have instructions for this. >> >> http://hardwarebug.org/2010/01/14/beware-the-builtins/ >> >> Possibly one could even switch to checking *leading* zeros by >> reformulating the algorithm and eliminate a few more instructions. >> >> http://www.hackersdelight.org/ might be another source for inspiration. >> >> Cheers, >> >> Gabor >> >> On 1/20/15, git at git.haskell.org wrote: >>> Repository : ssh://git at git.haskell.org/ghc >>> >>> On branch : master >>> Link : >>> http://ghc.haskell.org/trac/ghc/changeset/9894f6a5b4883ea87fd5f280a2eb4a8abfbd2a6b/ghc >>> >>>> --------------------------------------------------------------- >>> >>> commit 9894f6a5b4883ea87fd5f280a2eb4a8abfbd2a6b >>> Author: Simon Marlow >>> Date: Wed Jan 14 08:45:07 2015 +0000 >>> >>> comments only >>> >>> >>>> --------------------------------------------------------------- >>> >>> 9894f6a5b4883ea87fd5f280a2eb4a8abfbd2a6b >>> rts/sm/Scav.c | 2 ++ >>> 1 file changed, 2 insertions(+) >>> >>> diff --git a/rts/sm/Scav.c b/rts/sm/Scav.c >>> index 2ecb23b..781840c 100644 >>> --- a/rts/sm/Scav.c >>> +++ b/rts/sm/Scav.c >>> @@ -285,6 +285,8 @@ scavenge_large_srt_bitmap( StgLargeSRT *large_srt ) >>> >>> for (i = 0; i < size / BITS_IN(W_); i++) { >>> bitmap = large_srt->l.bitmap[i]; >>> + // skip zero words: bitmaps can be very sparse, and this helps >>> + // performance a lot in some cases. >>> if (bitmap != 0) { >>> for (j = 0; j < BITS_IN(W_); j++) { >>> if ((bitmap & 1) != 0) { >>> >>> _______________________________________________ >>> ghc-commits mailing list >>> ghc-commits at haskell.org >>> http://www.haskell.org/mailman/listinfo/ghc-commits >>> > From marlowsd at gmail.com Tue Jan 20 21:44:37 2015 From: marlowsd at gmail.com (Simon Marlow) Date: Tue, 20 Jan 2015 21:44:37 +0000 Subject: GHC support for the new "record" package Message-ID: <54BECC45.6010906@gmail.com> For those who haven't seen this, Nikita Volkov proposed a new approach to anonymous records, which can be found in the "record" package on Hackage: http://hackage.haskell.org/package/record It had a *lot* of attention on Reddit: http://nikita-volkov.github.io/record/ Now, the solution is very nice and lightweight, but because it is implemented outside GHC it relies on quasi-quotation (amazing that it can be done at all!). It has some limitations because it needs to parse Haskell syntax, and Haskell is big. So we could make this a lot smoother, both for the implementation and the user, by directly supporting anonymous record syntax in GHC. Obviously we'd have to move the library code into base too. This message is by way of kicking off the discussion, since nobody else seems to have done so yet. Can we agree that this is the right thing and should be directly supported by GHC? At this point we'd be aiming for 7.12. Who is interested in working on this? Nikita? 
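(For readers who have not yet looked at the package, here is a minimal sketch of the style of code it enables. The exact quasi-quoter and accessor names used below (r, lens, view, and the Record / Record.Lens module names) are assumptions based on the package's announcement page rather than a checked copy of its API, so treat this purely as an illustration.)

{-# LANGUAGE QuasiQuotes #-}

module PersonExample where

import Record          -- assumed to export the 'r' and 'lens' quasi-quoters
import Record.Lens     -- assumed to export 'view'

-- An anonymous record type: no data declaration, and field names cannot
-- clash, because no selector functions are generated at all.
type Person = [r| {name :: String, age :: Int} |]

alice :: Person
alice = [r| {name = "Alice", age = 30} |]

-- Field access goes through a lens produced by a quasi-quoter, not through
-- a generated accessor function.
aliceAge :: Int
aliceAge = view [lens|age|] alice

Everything inside the brackets has to be parsed by the library itself, which is exactly the limitation the rest of this thread is about removing.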
There are various design decisions to think about. For example, when the quasi-quote brackets are removed, the syntax will conflict with the existing record syntax. The syntax ends up being similar to Simon's 2003 proposal http://research.microsoft.com/en-us/um/people/simonpj/Haskell/records.html (there are major differences though, notably the use of lenses for selection and update). I created a template wiki page: https://ghc.haskell.org/trac/ghc/wiki/Records/Volkov Cheers, Simon From austin at well-typed.com Tue Jan 20 21:45:31 2015 From: austin at well-typed.com (Austin Seipp) Date: Tue, 20 Jan 2015 15:45:31 -0600 Subject: Request for assistance from Haskell-oriented startup: GHCi performance In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF562A5D89@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF562A5D89@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: Hi Konrad, I was spending a little bit of time examining this just this morning, and during my investigation, I followed your example from Stack Overflow, but I find myself needing a little guidance. One question I have about your example: Are your snippets so small that they are prohibitively impacted by dynamic linking? In short, I was testing your little example here. I don't have an NVidia card, but I used your example from StackOverflow and slightly modified it: --------------------------------------------- {-# LANGUAGE OverloadedStrings #-} module Main where import Data.Array.Accelerate as A --import Data.Array.Accelerate.CUDA as C import Data.Array.Accelerate.Interpreter as C import Data.Time.Clock (diffUTCTime, getCurrentTime) main :: IO () main = do start <- getCurrentTime print $ C.run $ A.maximum $ A.map (+1) $ A.use (fromList (Z:.1000000) [1..1000000] :: Vector Double) end <- getCurrentTime print $ diffUTCTime end start --------------------------------------------- OK, so now compile it (GHC 7.8.4; note I use -dynamic for consistency): $ ghc -O2 -fforce-recomp -threaded Test1.hs -dynamic [1 of 1] Compiling Main ( Test1.hs, Test1.o ) Linking Test1 ... $ ./Test1 Array (Z) [100001.0] 0.391819s OK, .39 seconds. Now try it interpreted: $ ghci Test1 GHCi, version 7.8.4: http://www.haskell.org/ghc/ :? for help Loading package ghc-prim ... linking ... done. Loading package integer-gmp ... linking ... done. Loading package base ... linking ... done. Ok, modules loaded: Main. Prelude Main> :main ... linking messages ... Array (Z) [100001.0] 0.462821s OK, .46seconds, 30% slower. But now run `:main` again without terminating GHCi: Prelude Main> :main Array (Z) [100001.0] 0.000471s It got much faster! This is probably because GHC (when optimizing) lifted out the constant expression from your accelerate program (the `C.run ...` part) into a CAF, which was already evaluated once; so subsequent evaluations are much simpler (you can fix this by modifying the example to take a command line argument representing the `Exp Double` to use `A.map` over, to make sure GHC can't float it out). So this compelled me to run with -v3, when loading GHCi (I didn't suspect the CAF behavior at first). And you get a very informative message when you run `:main` - $ ghci Test1 GHCi, version 7.8.4: http://www.haskell.org/ghc/ :? for help Loading package ghc-prim ... linking ... done. Loading package integer-gmp ... linking ... done. Loading package base ... linking ... done. Ok, modules loaded: Main. 
Prelude Main> :set -v3 wired-in package ghc-prim mapped to ghc-prim-0.3.1.0-a24f9c14c632d75b683d0f93283aea37 wired-in package integer-gmp mapped to integer-gmp-0.5.1.0-26579559b3647acf4f01d5edd9491a46 wired-in package base mapped to base-4.7.0.2-bfd89587617e381ae01b8dd7b6c7f1c1 wired-in package rts mapped to builtin_rts wired-in package template-haskell mapped to template-haskell-2.9.0.0-6d27c2b362b15abb1822f2f34b9ae7f9 wired-in package dph-seq not found. wired-in package dph-par not found. Prelude Main> :main *** Parser: *** Desugar: *** Simplify: *** CorePrep: *** ByteCodeGen: Loading package pretty-1.1.1.1 ... linking ... done. Loading package array-0.5.0.0 ... linking ... done. Loading package deepseq-1.3.0.2 ... linking ... done. Loading package old-locale-1.0.0.6 ... linking ... done. Loading package time-1.4.2 ... linking ... done. Loading package containers-0.5.5.1 ... linking ... done. Loading package bytestring-0.10.4.0 ... linking ... done. Loading package transformers-0.3.0.0 ... linking ... done. Loading package mtl-2.1.3.1 ... linking ... done. Loading package template-haskell ... linking ... done. Loading package fclabels-2.0.2.2 ... linking ... done. Loading package text-1.2.0.4 ... linking ... done. Loading package hashable-1.2.3.1 ... linking ... done. Loading package primitive-0.5.4.0 ... linking ... done. Loading package vector-0.10.12.2 ... linking ... done. Loading package hashtables-1.2.0.2 ... linking ... done. Loading package unordered-containers-0.2.5.1 ... linking ... done. Loading package accelerate-0.15.0.0 ... linking ... done. *** Linker: /usr/bin/gcc -fno-stack-protector -DTABLES_NEXT_TO_CODE -o /tmp/ghc16243_0/ghc16243_2.so Test1.o -shared -Wl,-Bsymbolic -Wl,-h,ghc16243_2.so -L/home/a/ghc-7.8.4/lib/ghc-7.8.4/base-4.7.0.2 -Wl,-rpath -Wl,/home/a/ghc-7.8.4/lib/ghc-7.8.4/base-4.7.0.2 -L/home/a/ghc-7.8.4/lib/ghc-7.8.4/integer-gmp-0.5.1.0 -Wl,-rpath -Wl,/home/a/ghc-7.8.4/lib/ghc-7.8.4/integer-gmp-0.5.1.0 -L/home/a/ghc-7.8.4/lib/ghc-7.8.4/ghc-prim-0.3.1.0 -Wl,-rpath -Wl,/home/a/ghc-7.8.4/lib/ghc-7.8.4/ghc-prim-0.3.1.0 -L/home/a/ghc-7.8.4/lib/ghc-7.8.4/rts-1.0 -Wl,-rpath -Wl,/home/a/ghc-7.8.4/lib/ghc-7.8.4/rts-1.0 -lHSbase-4.7.0.2-ghc7.8.4 -lHSinteger-gmp-0.5.1.0-ghc7.8.4 -lHSghc-prim-0.3.1.0-ghc7.8.4 -lgmp '-Wl,--hash-size=31' -Wl,--reduce-memory-overheads Array (Z) [100001.0] 0.410921s So with GHCi, we have to dynamically invoke GCC to create a shared object file which GHC then loads (as it contains the expression we entered to evaluate). OK, so how *much* overhead is this? Well, it takes long enough for me to see a pause, and we don't need to recompile this object file in the background multiple times, only once. So we can modify the test: --------------------------------------------- {-# LANGUAGE OverloadedStrings #-} module Main where import System.Environment (getArgs) import Data.Array.Accelerate as A --import Data.Array.Accelerate.CUDA as C import Data.Array.Accelerate.Interpreter as C import Data.Time.Clock (diffUTCTime, getCurrentTime) main :: IO () main = do n <- (constant . read . head) `fmap` getArgs :: IO (Exp Double) start <- getCurrentTime print $ C.run $ A.maximum $ A.map (+n) $ A.use (fromList (Z:.100000) [1..100000] :: Vector Double) end <- getCurrentTime print $ diffUTCTime end start --------------------------------------------- Compile, run, interpret twice: $ ghc -O2 -fforce-recomp -threaded Test2.hs -dynamic [1 of 1] Compiling Main ( Test2.hs, Test2.o ) Linking Test2 ... 
$ ./Test2 10 Array (Z) [100010.0] 0.29926s $ ghci Test2 GHCi, version 7.8.4: http://www.haskell.org/ghc/ :? for help Loading package ghc-prim ... linking ... done. Loading package integer-gmp ... linking ... done. Loading package base ... linking ... done. Ok, modules loaded: Main. Prelude Main> :main 10 ... Array (Z) [100010.0] 0.37202s Prelude Main> :main 10 Array (Z) [100010.0] 0.299433s Which is about 100ms slower or somewhere thereabouts, within the threshold of human perception I'd guess. However, this benchmark is very unscientific (`diffUTCTime` isn't a very reliable metric to be quite honest), and the Core is really hard to read due to it being so verbose from Accelerate (although you can pretty easily see the CAF-ified `Array Double` at the top-level wrapped in an `unsafeDupablePerformIO`). So what this is telling me is that I think I need more information I'm afraid. There are some key points to enumerate: - I'm not sure if the example program is actually very representative of the slowdowns you're seeing; what we'd really need to see is what output your compiler produces and GHC then consumes through the API. - Also, GHCi is dynamically linked, but GHC generally builds static programs; I'm not sure if you've built your loader (the program that uses the GHC API) statically or dynamically. I'm not sure what the speed difference would be here between dynamically linked GHC loader and a statically linked one, but it's not like ours is free. It may be faster though (but less correct). - NB: I'd guess this probably hurts a lot of you interactively run lots of fresh Stmts, since each of these may be generating dynamic files in the background (if you're linking dynamically). - It's clear the overhead of dynamic loading is nowhere near free; even with a statically linked GHC RTS, I don't think it's free either. To answer your original question, there's not really a smaller API to handle a lot of this I'm afraid; and the current one has really only been 'designed' for GHCi's use - things like fast, long-term interactive loading of foreign code for dynamic applications have not been the priority; a normally-short-lived REPL for users has (for example, we couldn't even *unload* object files until very recently). - It's also not entirely clear if this slowdown is always constant or it grows with the input. For the interpreter, it didn't seem to, but like I said above, I think the original example was slightly busted anyway. Can you provide any more information about your build, configuration, or perhaps a boiled down test case? (Even machine-generated test cases are fine, if they work for you!) Maybe I can get further with an Amazon GPU instance and a little more information. Let me know if you need any help getting information from GHC itself. On Wed, Jan 14, 2015 at 5:16 PM, Simon Peyton Jones wrote: > Konrad > > > That does sound frustrating. > > > > I think your first port of call should be Manuel Chakravarty, the author of > accelerate. The example you give in your stackoverflow post can only be > some weird systems thing. After all, you are executing precisely the same > code (namely compiled Accelerate code); it?s just that in one case it?s > dynamically linked and excecuted from GHCi and in the other it?s linked and > executed by the shell. I have no clue what could cause that. I wonder if > you are using a GPU and whether that might somehow behave differently. 
> Could it be the difference between static linking and dynamic linking (which > could plausibly account for some startup delay)? Is it a fixed overhead (eg > takes 100ms extra) or does it run a factor of two slower (increase the size > of your test case to see)? > > > > I?d be happy to have a Skype call with you, but I am rather unlikely to know > anything helpful because it doesn?t sound like a core Haskell issue at all. > You are executing the very same machine instructions! > > > > The overheads of the GHC API to compile and run the expression ?main? are > pretty small. > > > > I?m copying ghc-devs in case anyone else has any ideas. > > > > Simon > > > > > > > > From: Konrad G?dek [mailto:kgadek at gmail.com] > Sent: 14 January 2015 13:59 > To: Simon Peyton Jones > Cc: Piotr M?odawski; kgadek at flowbox.io > Subject: Request for assistance from Haskell-oriented startup: GHCi > performance > > > > Dear Mr Jones, > > > > My name is Konrad G?dek and I'm one of the programmers at Flowbox ( > http://flowbox.io ), a startup that is to bring a fresh view on image > composition in movie industry. We proudly use Haskell in nearly all of our > development. I believe you may remember our CEO, Wojciech Dani?o, from > discussions like in this thread: https://phabricator.haskell.org/D69 . > > What can be interesting for you is that to achieve our goals as a company, > we started developing a new programming language - Luna. Long story short, > we believe that Luna could be as beneficial for the Haskell community as > Elixir is for Erlang. > > However, we found some major performance problems with the code that are as > critical for us as they are cryptic. We have found difficulties in > pinpointing the actual issue, not to mention solving it. We're getting a bit > desperate about that, nobody so far has been able to help us, and so we > would like to ask you for help. We would be really really grateful if you > could take a look, maybe your fresh ideas could shed some light on the > issue. Details are attached below. > > Is there any chance we could arrange eg. a Skype call so we could further > discuss the matter? > > > > Thank you in advance! > > > > Background > > Currently Luna is trans-compiled to Haskell and then compiled to bytecode by > GHC. Furthermore, we use ghci to evaluate expressions (the flow graph) > interactively. We use accelerate library to perform high-performance > computations with the help of graphic cards. > > The problem > > Executing some of the functions from libraries compiled with -O2 (especially > from accelerate) is much slower than calling it from compiled executable > (see > http://stackoverflow.com/questions/27541609/difference-in-performance-of-compiled-accelerate-code-ran-from-ghci-and-shell > and https://github.com/AccelerateHS/accelerate/issues/227). > > Maybe there is some other way to interactively evaluate Haskell code, which > is more lightweight/more customizable ie. would not require all ghc-api > features which are probably slowing down the whole process? Is it possible > to just use ghc linker and make function calls simpler and more time > efficient? > > > > Details > > We feed ghci with statements (using ghc-api) and declarations (using runStmt > and runDecls). We can also change imports and language extensions before > each call. The overall process is as follows: > > on init: > > ? 
> > set ghcpath to one with our custom installation of ghc with preinstalled > graphic libraries > set imports to our libraries > enable/disable appropriate language extensions > > for each run: > > ? > > generate haskell code (including datatype declarations, using lenses and > TemplateHaskell) and load it to ghci using runDecls > for each expression: > > o > > run statements that use freshly generated code > bind (lazy) results to variables > evaluate values from bound variables, and get it from GhcMonad to runtime of > our interpreter (see > http://hackage.haskell.org/package/hint?0.4.2.1/docs/Language-Haskell-Interpreter.html#v:interpret) > > This behaviour was observed when using GHC 7.8.3 (with D69 patch) on Fedora > 20 (x86-64), Intel(R) Core(TM) i5-4570 CPU @ 3.20GHz > > Tried so far > > Specializing nearly everything in accelerate library, specializing calls to > accelearate methods (no speedup). > Load precompiled, optimised code to ghci (no speedup). > Truth to be told, we have no idea what to try next. > > > > > > > > -- > > Konrad G?dek > > typechecker team-leader in Flowbox > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From roma at ro-che.info Tue Jan 20 23:07:10 2015 From: roma at ro-che.info (Roman Cheplyaka) Date: Wed, 21 Jan 2015 01:07:10 +0200 Subject: GHC support for the new "record" package In-Reply-To: <54BECC45.6010906@gmail.com> References: <54BECC45.6010906@gmail.com> Message-ID: <54BEDF9E.1090105@ro-che.info> How would that be different from the ORF? The library as it stands is worse than the ORF: translation into (essentially) tuples hurts error messages; no good story for sum types; no way to make fields strict/unpacked etc. Hopefully, if this is to become a ghc extension, these problems will be addressed; but then I don't see much difference with the ORF (and thus it wouldn't be any easier to implement). On 20/01/15 23:44, Simon Marlow wrote: > For those who haven't seen this, Nikita Volkov proposed a new approach > to anonymous records, which can be found in the "record" package on > Hackage: http://hackage.haskell.org/package/record > > It had a *lot* of attention on Reddit: > http://nikita-volkov.github.io/record/ > > Now, the solution is very nice and lightweight, but because it is > implemented outside GHC it relies on quasi-quotation (amazing that it > can be done at all!). It has some limitations because it needs to parse > Haskell syntax, and Haskell is big. So we could make this a lot > smoother, both for the implementation and the user, by directly > supporting anonymous record syntax in GHC. Obviously we'd have to move > the library code into base too. > > This message is by way of kicking off the discussion, since nobody else > seems to have done so yet. Can we agree that this is the right thing > and should be directly supported by GHC? At this point we'd be aiming > for 7.12. > > Who is interested in working on this? Nikita? > > There are various design decisions to think about. For example, when > the quasi-quote brackets are removed, the syntax will conflict with the > existing record syntax. The syntax ends up being similar to Simon's > 2003 proposal > http://research.microsoft.com/en-us/um/people/simonpj/Haskell/records.html > (there are major differences though, notably the use of lenses for > selection and update). 
> > I created a template wiki page: > https://ghc.haskell.org/trac/ghc/wiki/Records/Volkov > > Cheers, > Simon > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > From ekmett at gmail.com Tue Jan 20 23:07:51 2015 From: ekmett at gmail.com (Edward Kmett) Date: Tue, 20 Jan 2015 18:07:51 -0500 Subject: GHC support for the new "record" package In-Reply-To: <54BECC45.6010906@gmail.com> References: <54BECC45.6010906@gmail.com> Message-ID: I'm generally positive on the goal of figuring out better record support in GHC. That said, it isn't clear that Nikita's work here directly gives rise to how the syntax of such a thing would work in GHC proper. Simon's original proposal overloaded (.) in yet more ways that collide with the uses in lens and really drastically contribute to confusion in the language we have. This is why over the summer of 2013 Adam Gundry's proposal evolved away from that design. Nikita on the other hand gets away with using foo.bar syntax in a more lens-like fashion precisely because he has a quasi-quoter isolating it from the rest of the language. If you strip away that layer, it isn't clear what syntactic mechanism can be used to convey the distinction that isn't taken or just as obtrusive as the quasi-quoter. But, "it isn't clear" is just code for "hey this makes me nervous", so let's spitball a couple ideas: Nikita's proposal has two things that need addressing: 1.) The syntax for record types themselves 2.) The syntax for how to get a lens for a field Re #1 The main term and type level bits of syntax that could be coopted that aren't already in use are @ and (~ at the term level) and things like banana brackets (| ... |), while that already has some other, unrelated, connotations for folks, something related like {| ... |}. We use such bananas for our row types in Ermine to good effect. The latter {| ... |} might serve as a solid syntax suggestion for the anonymous row type syntax. Re #2 That leaves the means for how to talk about a lens for a given field open. Under Adam's proposal that had evolved into making a really complicated instance that we could extract a lens from. This had the benefit over the current state of the `record` package that we could support full type changing lenses. Losing type-changing assignment would be a big step back from the previous proposal / the current state of development for folks who just use makeClassy or custom lens production rules with lens to get something similar, though. But the thing we never found was a nice short syntax for talking about the lens you get from a given field (or possibly chain of fields); Gundry's solution was 90% library and almost no syntax. On the other hand Adam was shackled by having to let the accessor be used as a normal function as well as a lens. Nikita's records don't have that problem. Having no syntax at all for extracting the lens from a field accessor, but rather to having it just be the lens, could directly address that concern. This raises some questions about scope, where do these names live? What happens when you have a module A that defines a record with a field, and a module B that does the same for a different record, and a module C that imports both, but, really, we had those before with Adam's proposal, so there is nothing new there. 
And for what it is worth, I've seen users in the wild using makeLenses on records with several hundred fields (!!), so we'd need to figure out something that doesn't cap a record at 24 fields. This feedback came in because we made the lenses lazier and it caused some folks a great deal of pain in terms of time spent in code gen! It is a long trek from "this is plausible" to "hey, let's bet the future of records and bunch of syntax in the language on this". It would also necessarily entail moving a middling-sized chunk of lens into base so that you can actually do something with these accessors. I've been trying to draw lines around a "lens-core" for multiple years now. Everyone has a different belief of what it should be, and trust me, I've heard, and had to shoot down basically all of the pitches. We're going to be stuck with the warts of whatever solution we build for a very long time. So with those caveats in mind, I'd encourage us to take our time rather than rush into trying to get this 7.12 ready. Personally I would be happy if by the time we ship 7.12 we had a plan for how we could proceed, that we could then judge on its merits. -Edward On Tue, Jan 20, 2015 at 4:44 PM, Simon Marlow wrote: > For those who haven't seen this, Nikita Volkov proposed a new approach to > anonymous records, which can be found in the "record" package on Hackage: > http://hackage.haskell.org/package/record > > It had a *lot* of attention on Reddit: http://nikita-volkov.github. > io/record/ > > Now, the solution is very nice and lightweight, but because it is > implemented outside GHC it relies on quasi-quotation (amazing that it can > be done at all!). It has some limitations because it needs to parse > Haskell syntax, and Haskell is big. So we could make this a lot smoother, > both for the implementation and the user, by directly supporting anonymous > record syntax in GHC. Obviously we'd have to move the library code into > base too. > > This message is by way of kicking off the discussion, since nobody else > seems to have done so yet. Can we agree that this is the right thing and > should be directly supported by GHC? At this point we'd be aiming for 7.12. > > Who is interested in working on this? Nikita? > > There are various design decisions to think about. For example, when the > quasi-quote brackets are removed, the syntax will conflict with the > existing record syntax. The syntax ends up being similar to Simon's 2003 > proposal http://research.microsoft.com/en-us/um/people/simonpj/ > Haskell/records.html (there are major differences though, notably the use > of lenses for selection and update). > > I created a template wiki page: > https://ghc.haskell.org/trac/ghc/wiki/Records/Volkov > > Cheers, > Simon > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Wed Jan 21 03:31:56 2015 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Tue, 20 Jan 2015 22:31:56 -0500 Subject: vectorisation code? In-Reply-To: References: <0CC751A6-A9A9-47AA-961E-A997F39844A2@cis.upenn.edu> <618BE556AADD624C9C918AA5D5911BEF562AAB21@DB3PRD3001MB020.064d.mgd.msft.net> <201501200937.25031.jan.stolarek@p.lodz.pl> <8761c1ltym.fsf@gmail.com> Message-ID: moving it to its own submodule is just a complicated version of cutting a branch that has the code Right before deleting it from master. 
afaik, the amount of love needed is roughly "one or more full time grad students really owning it", though i could be wrong. On Tue, Jan 20, 2015 at 5:39 AM, RodLogic wrote: > (disclaimer: I know nothing about the vectorization code) > > Now, is the vectorization code really dead code or it is code that needs > love to come back to life? By removing it from the code base, you are > probably sealing it's fate as dead code as we are limiting new or existing > contributors to act on it (even if it's a commit hash away). If it is code > that needs love to come back to life, grep noise or conditional compilation > is a small price to pay here, imho. > > As a compromise, is it possible to move vectorization code into it's own > submodule in git or is it too intertwined with core GHC? So that it can be > worked on independent of GHC? > > > On Tue, Jan 20, 2015 at 6:47 AM, Herbert Valerio Riedel < > hvriedel at gmail.com> wrote: > >> On 2015-01-20 at 09:37:25 +0100, Jan Stolarek wrote: >> >> Here's an alternate suggestion: in SimplCore, keep the call to >> vectorise >> >> around, but commented out >> >> > Yuck. Carter and Brandon are right here - we have git, let it do the >> > job. I propose that we remove vectorization code, create a Trac ticket >> > about vectorization & DPH needing love and record the commit hash in >> > the ticket so that we can revert it easily in the future. >> >> I'm also against commenting out dead code in the presence of a VCS. >> >> Btw, here's two links discussing the issues related to commenting out if >> anyone's interested in knowing more: >> >> - >> http://programmers.stackexchange.com/questions/190096/can-commented-out-code-be-valuable-documentation >> >> - >> http://programmers.stackexchange.com/questions/45378/is-commented-out-code-really-always-bad >> >> >> Cheers, >> hvr >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marlowsd at gmail.com Wed Jan 21 09:36:19 2015 From: marlowsd at gmail.com (Simon Marlow) Date: Wed, 21 Jan 2015 09:36:19 +0000 Subject: GHC support for the new "record" package In-Reply-To: References: <54BECC45.6010906@gmail.com> Message-ID: <54BF7313.6030006@gmail.com> On 20/01/2015 23:07, Edward Kmett wrote: > It is a long trek from "this is plausible" to "hey, let's bet the > future of records and bunch of syntax in the language on this". Absolutely. On the other hand, this is the first proposal I've seen that really hits (for me) a point in the design space that has an acceptable power to weight ratio. Yes there are some corners cut, and it remains to be seen whether, after we've decided which corners we want to uncut, the design retains the same P2W ratio. A couple of answers to specific points: > Re #1 > > The main term and type level bits of syntax that could be coopted > that aren't already in use are @ and (~ at the term level) and things > like banana brackets (| ... |), while that already has some other, > unrelated, connotations for folks, something related like {| ... |}. > We use such bananas for our row types in Ermine to good effect. > > The latter {| ... |} might serve as a solid syntax suggestion for the > anonymous row type syntax. Why not just use { ... } ? 
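For reference, a sketch of how today's syntax already claims plain braces (the anonymous-record literal in the final comment is hypothetical):

  -- record update binds tighter than function application, so the last
  -- definition parses as  f (r { a = 2 })  and  g (R 5) == 2
  data R = R { a :: Int }

  f :: R -> Int
  f = a

  g :: R -> Int
  g r = f r { a = 2 }

  -- an anonymous-record literal written with the same braces, say  f { a = 2 },
  -- would have to be disambiguated from this existing update syntax; a distinct
  -- bracket such as {| ... |} sidesteps the clash.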
> Re #2 > > That leaves the means for how to talk about a lens for a given field > open. Under Adam's proposal that had evolved into making a really > complicated instance that we could extract a lens from. This had the > benefit over the current state of the `record` package that we could > support full type changing lenses. Losing type-changing assignment > would be a big step back from the previous proposal / the current > state of development for folks who just use makeClassy or custom lens > production rules with lens to get something similar, though. > > But the thing we never found was a nice short syntax for talking > about the lens you get from a given field (or possibly chain of > fields); Gundry's solution was 90% library and almost no syntax. On > the other hand Adam was shackled by having to let the accessor be > used as a normal function as well as a lens. Nikita's records don't > have that problem. > > Having no syntax at all for extracting the lens from a field > accessor, but rather to having it just be the lens, could directly > address that concern. This raises some questions about scope, where > do these names live? What happens when you have a module A that > defines a record with a field, and a module B that does the same for > a different record, and a module C that imports both, but, really, we > had those before with Adam's proposal, so there is nothing new > there. Right. So either (a) A field name is a bare identifier that is bound to the lens, or (b) There is special syntax for the lens of a field name If (a) there needs to be a declaration of the name in order that we can talk about scoping. That makes (b) a lot more attractive; and if you really find the syntax awkward then you can always bind a local variable to the lens, or export the names from your library. > And for what it is worth, I've seen users in the wild using > makeLenses on records with several hundred fields (!!), so we'd need > to figure out something that doesn't cap a record at 24 fields. This > feedback came in because we made the lenses lazier and it caused some > folks a great deal of pain in terms of time spent in code gen! Sure. We deal with large tuples by nesting, and I imagine something similar could be done for records (I haven't worked through the details though). Cheers, Simmon > It would also necessarily entail moving a middling-sized chunk of > lens into base so that you can actually do something with these > accessors. I've been trying to draw lines around a "lens-core" for > multiple years now. Everyone has a different belief of what it should > be, and trust me, I've heard, and had to shoot down basically all of > the pitches. > > We're going to be stuck with the warts of whatever solution we build > for a very long time. > > So with those caveats in mind, I'd encourage us to take our time > rather than rush into trying to get this 7.12 ready. > > Personally I would be happy if by the time we ship 7.12 we had a plan > for how we could proceed, that we could then judge on its merits. 
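As a small illustration of the nesting idea with ordinary records (type and field names made up): a wide record is split into groups, so no single record needs hundreds of fields.

  data Address  = Address  { street :: String, city :: String, country :: String }
  data Contact  = Contact  { email :: String, phone :: String }
  data Customer = Customer { customerId :: Int, address :: Address, contact :: Contact }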
> > -Edward > > > On Tue, Jan 20, 2015 at 4:44 PM, Simon Marlow > wrote: > > For those who haven't seen this, Nikita Volkov proposed a new > approach to anonymous records, which can be found in the "record" > package on Hackage: http://hackage.haskell.org/__package/record > > > It had a *lot* of attention on Reddit: > http://nikita-volkov.github.__io/record/ > > > Now, the solution is very nice and lightweight, but because it is > implemented outside GHC it relies on quasi-quotation (amazing that it > can be done at all!). It has some limitations because it needs to > parse Haskell syntax, and Haskell is big. So we could make this a > lot smoother, both for the implementation and the user, by directly > supporting anonymous record syntax in GHC. Obviously we'd have to > move the library code into base too. > > This message is by way of kicking off the discussion, since nobody > else seems to have done so yet. Can we agree that this is the right > thing and should be directly supported by GHC? At this point we'd be > aiming for 7.12. > > Who is interested in working on this? Nikita? > > There are various design decisions to think about. For example, when > the quasi-quote brackets are removed, the syntax will conflict with > the existing record syntax. The syntax ends up being similar to > Simon's 2003 proposal > http://research.microsoft.com/__en-us/um/people/simonpj/__Haskell/records.html > > > (there are major differences though, notably the use of lenses for > selection and update). > > I created a template wiki page: > https://ghc.haskell.org/trac/__ghc/wiki/Records/Volkov > > > Cheers, Simon _________________________________________________ > ghc-devs mailing list ghc-devs at haskell.org > > http://www.haskell.org/__mailman/listinfo/ghc-devs > > > From adam at well-typed.com Wed Jan 21 10:05:59 2015 From: adam at well-typed.com (Adam Gundry) Date: Wed, 21 Jan 2015 10:05:59 +0000 Subject: GHC support for the new "record" package In-Reply-To: <54BECC45.6010906@gmail.com> References: <54BECC45.6010906@gmail.com> Message-ID: <54BF7A07.50502@well-typed.com> As someone with quite a lot of skin in this game, I thought it might be useful to give my perspective on how this relates to ORF. Apologies that this drags on a bit... Both approaches use essentially the same mechanism for resolving overloaded field names (typeclasses indexed by type-level strings, called Has/Upd or FieldOwner). ORF allows fields to be both selectors and various types of lenses, whereas the record library always makes them van Laarhoven lenses, but this isn't really a fundamental difference. The crucial difference is that ORF adds no new syntax, and solves Has/Upd constraints for existing datatypes, whereas the record library adds a new syntax for anonymous records and their fields that is completely separate from existing datatypes, and solves FieldOwner constraints only for these anonymous records (well, their desugaring). On the one hand, anonymous records are a very desirable feature, and in some ways making them separate is a nice simplification. However, they are not as expressive as the existing Haskell record datatypes (sums, strict/unpacked fields, higher-rank fields), and having two records mechanisms is a little unsatisfying. Do we really want to distinguish data Foo = MkFoo { bar :: Int, baz :: Bool } data Foo = MkFoo {| bar :: Int, baz :: Bool |} (where the first is the traditional approach, and the second is a single-argument constructor taking an anonymous record in Edward's proposed syntax)? 
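A minimal sketch of the shared mechanism described above, with invented class and field names rather than the actual ORF or record-library definitions: classes indexed by a type-level string naming the field.

  {-# LANGUAGE DataKinds, KindSignatures, MultiParamTypeClasses,
               FunctionalDependencies, FlexibleInstances #-}
  import GHC.TypeLits (Symbol)
  import Data.Proxy (Proxy (..))

  -- "r has a field called name, of type a"
  class Has (name :: Symbol) r a | name r -> a where
    get :: Proxy name -> r -> a

  -- type-preserving update, kept separate as in the Has/Upd split
  class Has name r a => Upd (name :: Symbol) r a | name r -> a where
    upd :: Proxy name -> a -> r -> r

  -- instances the compiler (or Template Haskell) would generate per field
  data Person = Person { _name :: String, _age :: Int }

  instance Has "age" Person Int where get _ = _age
  instance Upd "age" Person Int where upd _ x p = p { _age = x }

  -- e.g.  get (Proxy :: Proxy "age") (Person "Ada" 36)  ==  36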
It might be nice to have a syntactic distinction between record fields and normal functions (the [l|...|] in the record library), because it makes name resolution much simpler. ORF was going down the route of adding no syntax, so name resolution becomes more complex, but we could revisit that decision and perhaps make ORF simpler. But I don't know what the syntax should be. I would note that if we go ahead with ORF, the record library could potentially take advantage of it (working with ORF's Has/Upd classes instead of defining its own FieldOwner class). Then we could subsequently add anonymous records to GHC if there is enough interest and implementation effort. However, I don't want to limit the discussion: if there's consensus that ORF is not the right approach, then I'm happy to let it go the way of all the earth. ;-) (Regarding the status of ORF, Simon PJ and I had a useful meeting last week where we identified a plan for getting it back on track, including some key simplifications to the sticking points in the implementation. So there might be some hope for getting it in after all.) Adam On 20/01/15 21:44, Simon Marlow wrote: > For those who haven't seen this, Nikita Volkov proposed a new approach > to anonymous records, which can be found in the "record" package on > Hackage: http://hackage.haskell.org/package/record > > It had a *lot* of attention on Reddit: > http://nikita-volkov.github.io/record/ > > Now, the solution is very nice and lightweight, but because it is > implemented outside GHC it relies on quasi-quotation (amazing that it > can be done at all!). It has some limitations because it needs to parse > Haskell syntax, and Haskell is big. So we could make this a lot > smoother, both for the implementation and the user, by directly > supporting anonymous record syntax in GHC. Obviously we'd have to move > the library code into base too. > > This message is by way of kicking off the discussion, since nobody else > seems to have done so yet. Can we agree that this is the right thing > and should be directly supported by GHC? At this point we'd be aiming > for 7.12. > > Who is interested in working on this? Nikita? > > There are various design decisions to think about. For example, when > the quasi-quote brackets are removed, the syntax will conflict with the > existing record syntax. The syntax ends up being similar to Simon's > 2003 proposal > http://research.microsoft.com/en-us/um/people/simonpj/Haskell/records.html > (there are major differences though, notably the use of lenses for > selection and update). > > I created a template wiki page: > https://ghc.haskell.org/trac/ghc/wiki/Records/Volkov > > Cheers, > Simon -- Adam Gundry, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From ekmett at gmail.com Wed Jan 21 12:56:47 2015 From: ekmett at gmail.com (Edward Kmett) Date: Wed, 21 Jan 2015 07:56:47 -0500 Subject: GHC support for the new "record" package In-Reply-To: <54BF7313.6030006@gmail.com> References: <54BECC45.6010906@gmail.com> <54BF7313.6030006@gmail.com> Message-ID: On Wed, Jan 21, 2015 at 4:36 AM, Simon Marlow wrote: > On 20/01/2015 23:07, Edward Kmett wrote: > > It is a long trek from "this is plausible" to "hey, let's bet the >> future of records and bunch of syntax in the language on this". >> > > Absolutely. On the other hand, this is the first proposal I've seen > that really hits (for me) a point in the design space that has an > acceptable power to weight ratio. 
Yes there are some corners cut, and > it remains to be seen whether, after we've decided which corners we want > to uncut, the design retains the same P2W ratio. > > A couple of answers to specific points: > > Re #1 >> >> The main term and type level bits of syntax that could be coopted >> that aren't already in use are @ and (~ at the term level) and things >> like banana brackets (| ... |), while that already has some other, >> unrelated, connotations for folks, something related like {| ... |}. >> We use such bananas for our row types in Ermine to good effect. >> >> The latter {| ... |} might serve as a solid syntax suggestion for the >> anonymous row type syntax. >> > > Why not just use { ... } ? Mostly because it would conflict with the existing record syntax when used as a member of a data type. Using { ... } would break all existing code, while {| ... |} could peacefully co-exist. data Foo = Foo { bar :: Bar } vs. data Foo = Foo {| bar :: Bar |} You could, I suppose manually distinguish them using ()'s data Foo = Foo ({bar :: Bar }) might be something folks could grow to accept. Another reason that comes to mind is that it causes a further divergence between the way terms and types behave/look, complicated stuff like Richard Eisenberg's work on giving us something closer to real dependent types. Re #2 >> >> That leaves the means for how to talk about a lens for a given field >> open. Under Adam's proposal that had evolved into making a really >> complicated instance that we could extract a lens from. This had the >> benefit over the current state of the `record` package that we could >> support full type changing lenses. Losing type-changing assignment >> would be a big step back from the previous proposal / the current >> state of development for folks who just use makeClassy or custom lens >> production rules with lens to get something similar, though. >> >> But the thing we never found was a nice short syntax for talking >> about the lens you get from a given field (or possibly chain of >> fields); Gundry's solution was 90% library and almost no syntax. On >> the other hand Adam was shackled by having to let the accessor be >> used as a normal function as well as a lens. Nikita's records don't >> have that problem. >> >> Having no syntax at all for extracting the lens from a field >> accessor, but rather to having it just be the lens, could directly >> address that concern. This raises some questions about scope, where >> do these names live? What happens when you have a module A that >> defines a record with a field, and a module B that does the same for >> a different record, and a module C that imports both, but, really, we >> had those before with Adam's proposal, so there is nothing new >> there. >> > > Right. So either > (a) A field name is a bare identifier that is bound to the lens, or > (b) There is special syntax for the lens of a field name > > If (a) there needs to be a declaration of the name in order that we can > talk about scoping. That makes (b) a lot more attractive; and if you > really find the syntax awkward then you can always bind a local variable > to the lens, or export the names from your library. Alternately (c) we could play games with ensuring the "name" is shared despite coming from different fields. As a half-baked idea, if we pretended all field accessors were names from some magic internal GHC.Record.Fields module, so that using data Foo = Foo {| bar :: Bar, baz :: Baz |} would add an `import GHC.Record.Fields (bar, baz)` to the module. 
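A rough sketch of what such a shared, importable field name could look like; the module name, class, and lens type are all invented for illustration and are not an actual GHC design.

  {-# LANGUAGE DataKinds, KindSignatures, MultiParamTypeClasses,
               FunctionalDependencies, RankNTypes #-}
  module Sketch.Record.Fields (bar, baz) where   -- stand-in for the magic module

  import GHC.TypeLits (Symbol)
  import Data.Proxy (Proxy (..))

  type Lens' s a = forall f. Functor f => (a -> f a) -> s -> f s

  -- one class, indexed by the field's name as a Symbol
  class Field (name :: Symbol) r a | name r -> a where
    fieldLens :: Proxy name -> Lens' r a

  -- one binding per field name, shared by every record that has such a field
  bar :: Field "bar" r a => Lens' r a
  bar = fieldLens (Proxy :: Proxy "bar")

  baz :: Field "baz" r a => Lens' r a
  baz = fieldLens (Proxy :: Proxy "baz")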
These would all expand to the same Symbol-based representation, behind the scenes, so that if two record types were used that used the same names, they'd just work together, with no scoping issues. This has the benefit that users could write such import statements by hand to use fields themselves, no sigils get used up, and the resulting code is the cleanest it can be. -Edward -------------- next part -------------- An HTML attachment was scrubbed... URL: From ekmett at gmail.com Wed Jan 21 13:11:01 2015 From: ekmett at gmail.com (Edward Kmett) Date: Wed, 21 Jan 2015 08:11:01 -0500 Subject: GHC support for the new "record" package In-Reply-To: <54BF7A07.50502@well-typed.com> References: <54BECC45.6010906@gmail.com> <54BF7A07.50502@well-typed.com> Message-ID: Personally, I think the two proposals, ORF and Nikita's record approach address largely differing needs. The ORF proposal has the benefit that it doesn't require GHC itself to know anything about lenses in order to work and is mostly compatible with the existing field accessor combinators. Nikita's proposal on the other hand builds a form of Trex-like records where it has its own little universe to play in, and doesn't have to contort itself to make the field accessors backwards compatible. As its own little world, the fact that the ORF can't deal with certain types of fields just becomes a limitation on this little universe, and all existing code would continue to work. I, too, have a lot of skin in the game with the existing ORF proposal, but ultimately we're going to be stuck with whatever solution we build for a long time, and it is, we both have to confess, admittedly quite complicated, so it seems exploring the consequences of a related design which has different constraints on its design does little harm. I'm mostly paying the work the courtesy it deserves by considering to its logical conclusion what such a design would look like fleshed out in a way that maximized how nice the result could be to use. I'm curious, as mostly a thought experiment, how nice a design we could get in the end under these slightly different assumptions. If, in the end, having an anonymous record syntax that is distinct from the existing one is too ugly, it is okay for us to recoil from it and go back to committing to the existing proposal, but I for one would prefer to " steelman " Nikita's trick first. Thus far, all of this is but words in a handful of emails. I happen to think the existing ORF implementation is about as good as we can get operating under the assumptions it does. That said, operating under different assumptions may get us a nicer user experience. I'm not sure, though, hence the thought experiment. -Edward On Wed, Jan 21, 2015 at 5:05 AM, Adam Gundry wrote: > As someone with quite a lot of skin in this game, I thought it might be > useful to give my perspective on how this relates to ORF. Apologies that > this drags on a bit... > > Both approaches use essentially the same mechanism for resolving > overloaded field names (typeclasses indexed by type-level strings, > called Has/Upd or FieldOwner). ORF allows fields to be both selectors > and various types of lenses, whereas the record library always makes > them van Laarhoven lenses, but this isn't really a fundamental difference. 
> > The crucial difference is that ORF adds no new syntax, and solves > Has/Upd constraints for existing datatypes, whereas the record library > adds a new syntax for anonymous records and their fields that is > completely separate from existing datatypes, and solves FieldOwner > constraints only for these anonymous records (well, their desugaring). > > On the one hand, anonymous records are a very desirable feature, and in > some ways making them separate is a nice simplification. However, they > are not as expressive as the existing Haskell record datatypes (sums, > strict/unpacked fields, higher-rank fields), and having two records > mechanisms is a little unsatisfying. Do we really want to distinguish > > data Foo = MkFoo { bar :: Int, baz :: Bool } > data Foo = MkFoo {| bar :: Int, baz :: Bool |} > > (where the first is the traditional approach, and the second is a > single-argument constructor taking an anonymous record in Edward's > proposed syntax)? > > It might be nice to have a syntactic distinction between record fields > and normal functions (the [l|...|] in the record library), because it > makes name resolution much simpler. ORF was going down the route of > adding no syntax, so name resolution becomes more complex, but we could > revisit that decision and perhaps make ORF simpler. But I don't know > what the syntax should be. > > I would note that if we go ahead with ORF, the record library could > potentially take advantage of it (working with ORF's Has/Upd classes > instead of defining its own FieldOwner class). Then we could > subsequently add anonymous records to GHC if there is enough interest > and implementation effort. However, I don't want to limit the > discussion: if there's consensus that ORF is not the right approach, > then I'm happy to let it go the way of all the earth. ;-) > > (Regarding the status of ORF, Simon PJ and I had a useful meeting last > week where we identified a plan for getting it back on track, including > some key simplifications to the sticking points in the implementation. > So there might be some hope for getting it in after all.) > > Adam > > > On 20/01/15 21:44, Simon Marlow wrote: > > For those who haven't seen this, Nikita Volkov proposed a new approach > > to anonymous records, which can be found in the "record" package on > > Hackage: http://hackage.haskell.org/package/record > > > > It had a *lot* of attention on Reddit: > > http://nikita-volkov.github.io/record/ > > > > Now, the solution is very nice and lightweight, but because it is > > implemented outside GHC it relies on quasi-quotation (amazing that it > > can be done at all!). It has some limitations because it needs to parse > > Haskell syntax, and Haskell is big. So we could make this a lot > > smoother, both for the implementation and the user, by directly > > supporting anonymous record syntax in GHC. Obviously we'd have to move > > the library code into base too. > > > > This message is by way of kicking off the discussion, since nobody else > > seems to have done so yet. Can we agree that this is the right thing > > and should be directly supported by GHC? At this point we'd be aiming > > for 7.12. > > > > Who is interested in working on this? Nikita? > > > > There are various design decisions to think about. For example, when > > the quasi-quote brackets are removed, the syntax will conflict with the > > existing record syntax. 
The syntax ends up being similar to Simon's > > 2003 proposal > > > http://research.microsoft.com/en-us/um/people/simonpj/Haskell/records.html > > (there are major differences though, notably the use of lenses for > > selection and update). > > > > I created a template wiki page: > > https://ghc.haskell.org/trac/ghc/wiki/Records/Volkov > > > > Cheers, > > Simon > > > -- > Adam Gundry, Haskell Consultant > Well-Typed LLP, http://www.well-typed.com/ > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexander.vershilov at gmail.com Wed Jan 21 15:51:55 2015 From: alexander.vershilov at gmail.com (Alexander V Vershilov) Date: Wed, 21 Jan 2015 19:51:55 +0400 Subject: Proposal for removing transformers dependency Message-ID: Hello. I'm coming with a proposal for removing transformers dependency from ghc library. The reason for this proposal that it's not possible to build consistent environment where a modern libraries (that depend on a newer transformers or mtl-2.2) and libraries that use ghc API are used together. And often people are tend to use version that is bundled with ghc, even if newer are available. As transformers usage are quite limited in ghc, and it's really relevant in ghc-bin, it's possible to duplicate the code, and provide required fixes in ghc-bin.cabal. As a result ghc uses it's own MonadIO, MonadControl and Strict State, and ghc-bin.cabal (ghc/*) uses ones from transformers. I have prepared a proof of concept [1], however it doesn't look very clean and it's quite possible that will require some generalization, for example introduction of the ghc-transformers-instances package that will have all required instances. Should I continue doing this? Are there any things to consider and fix? [1] https://github.com/qnikst/ghc/compare/wip/remove-tf -- Alexander -------------- next part -------------- An HTML attachment was scrubbed... URL: From johan.tibell at gmail.com Wed Jan 21 16:01:31 2015 From: johan.tibell at gmail.com (Johan Tibell) Date: Wed, 21 Jan 2015 17:01:31 +0100 Subject: GHC support for the new "record" package In-Reply-To: References: <54BECC45.6010906@gmail.com> <54BF7A07.50502@well-typed.com> Message-ID: My thoughts mostly mirror those of Adam and Edward. 1) I want something that is backwards compatible. 2) Anonymous records are nice to have, but I don't want to have all records be anonymous (and have to jump through newtype hoops to get back non-anonymous records.) 3) I don't think it's a good idea to have lots of functions be polymorphic in the record types of their arguments. If that falls out for free (like it does both in ORF and Nikita's proposals) that's nice, but I think anonymous records should be used sparsely. To me, anonymous records look a lot like Go's interfaces, which structural typing I don't think is a great idea. Go's interfaces give the appearance of giving you more polymorphic functions (i.e. functions with arguments of type { f :: T, ... }), but you have to express the required laws on these record fields purely in terms of comments. With type class-based polymorphism you're somewhat more specific and deliberate when you state what kind of values your functions are polymorphic over. 
You don't just say "this value must support a function f :: T" but instead "this value must support a function f :: T, where the behavior of f is specified by the type class it's defined in". I also have extensive experience of duck typing from Python and there I think duck typing has not played out well (somewhat collaborate by the fact that Python is adding base classes so it's possible to talk about the laws I mentioned above.) 4) There are issues with strictness and unpacking. 5) I don't know if I want to commit the *language* to a particular lens type. On Wed, Jan 21, 2015 at 2:11 PM, Edward Kmett wrote: > Personally, I think the two proposals, ORF and Nikita's record approach > address largely differing needs. > > The ORF proposal has the benefit that it doesn't require GHC itself to > know anything about lenses in order to work and is mostly compatible with > the existing field accessor combinators. > > Nikita's proposal on the other hand builds a form of Trex-like records > where it has its own little universe to play in, and doesn't have to > contort itself to make the field accessors backwards compatible. As its own > little world, the fact that the ORF can't deal with certain types of fields > just becomes a limitation on this little universe, and all existing code > would continue to work. > > I, too, have a lot of skin in the game with the existing ORF proposal, but > ultimately we're going to be stuck with whatever solution we build for a > long time, and it is, we both have to confess, admittedly quite > complicated, so it seems exploring the consequences of a related design > which has different constraints on its design does little harm. > > I'm mostly paying the work the courtesy it deserves by considering to its > logical conclusion what such a design would look like fleshed out in a way > that maximized how nice the result could be to use. I'm curious, as mostly > a thought experiment, how nice a design we could get in the end under these > slightly different assumptions. > > If, in the end, having an anonymous record syntax that is distinct from > the existing one is too ugly, it is okay for us to recoil from it and go > back to committing to the existing proposal, but I for one would prefer to " > steelman " > Nikita's trick first. > > Thus far, all of this is but words in a handful of emails. I happen to > think the existing ORF implementation is about as good as we can get > operating under the assumptions it does. That said, operating under > different assumptions may get us a nicer user experience. I'm not sure, > though, hence the thought experiment. > > -Edward > > On Wed, Jan 21, 2015 at 5:05 AM, Adam Gundry wrote: > >> As someone with quite a lot of skin in this game, I thought it might be >> useful to give my perspective on how this relates to ORF. Apologies that >> this drags on a bit... >> >> Both approaches use essentially the same mechanism for resolving >> overloaded field names (typeclasses indexed by type-level strings, >> called Has/Upd or FieldOwner). ORF allows fields to be both selectors >> and various types of lenses, whereas the record library always makes >> them van Laarhoven lenses, but this isn't really a fundamental difference. 
>> >> The crucial difference is that ORF adds no new syntax, and solves >> Has/Upd constraints for existing datatypes, whereas the record library >> adds a new syntax for anonymous records and their fields that is >> completely separate from existing datatypes, and solves FieldOwner >> constraints only for these anonymous records (well, their desugaring). >> >> On the one hand, anonymous records are a very desirable feature, and in >> some ways making them separate is a nice simplification. However, they >> are not as expressive as the existing Haskell record datatypes (sums, >> strict/unpacked fields, higher-rank fields), and having two records >> mechanisms is a little unsatisfying. Do we really want to distinguish >> >> data Foo = MkFoo { bar :: Int, baz :: Bool } >> data Foo = MkFoo {| bar :: Int, baz :: Bool |} >> >> (where the first is the traditional approach, and the second is a >> single-argument constructor taking an anonymous record in Edward's >> proposed syntax)? >> >> It might be nice to have a syntactic distinction between record fields >> and normal functions (the [l|...|] in the record library), because it >> makes name resolution much simpler. ORF was going down the route of >> adding no syntax, so name resolution becomes more complex, but we could >> revisit that decision and perhaps make ORF simpler. But I don't know >> what the syntax should be. >> >> I would note that if we go ahead with ORF, the record library could >> potentially take advantage of it (working with ORF's Has/Upd classes >> instead of defining its own FieldOwner class). Then we could >> subsequently add anonymous records to GHC if there is enough interest >> and implementation effort. However, I don't want to limit the >> discussion: if there's consensus that ORF is not the right approach, >> then I'm happy to let it go the way of all the earth. ;-) >> >> (Regarding the status of ORF, Simon PJ and I had a useful meeting last >> week where we identified a plan for getting it back on track, including >> some key simplifications to the sticking points in the implementation. >> So there might be some hope for getting it in after all.) >> >> Adam >> >> >> On 20/01/15 21:44, Simon Marlow wrote: >> > For those who haven't seen this, Nikita Volkov proposed a new approach >> > to anonymous records, which can be found in the "record" package on >> > Hackage: http://hackage.haskell.org/package/record >> > >> > It had a *lot* of attention on Reddit: >> > http://nikita-volkov.github.io/record/ >> > >> > Now, the solution is very nice and lightweight, but because it is >> > implemented outside GHC it relies on quasi-quotation (amazing that it >> > can be done at all!). It has some limitations because it needs to parse >> > Haskell syntax, and Haskell is big. So we could make this a lot >> > smoother, both for the implementation and the user, by directly >> > supporting anonymous record syntax in GHC. Obviously we'd have to move >> > the library code into base too. >> > >> > This message is by way of kicking off the discussion, since nobody else >> > seems to have done so yet. Can we agree that this is the right thing >> > and should be directly supported by GHC? At this point we'd be aiming >> > for 7.12. >> > >> > Who is interested in working on this? Nikita? >> > >> > There are various design decisions to think about. For example, when >> > the quasi-quote brackets are removed, the syntax will conflict with the >> > existing record syntax. 
The syntax ends up being similar to Simon's >> > 2003 proposal >> > >> http://research.microsoft.com/en-us/um/people/simonpj/Haskell/records.html >> > (there are major differences though, notably the use of lenses for >> > selection and update). >> > >> > I created a template wiki page: >> > https://ghc.haskell.org/trac/ghc/wiki/Records/Volkov >> > >> > Cheers, >> > Simon >> >> >> -- >> Adam Gundry, Haskell Consultant >> Well-Typed LLP, http://www.well-typed.com/ >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hvriedel at gmail.com Wed Jan 21 16:14:53 2015 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Wed, 21 Jan 2015 17:14:53 +0100 Subject: Proposal for removing transformers dependency In-Reply-To: (Alexander V. Vershilov's message of "Wed, 21 Jan 2015 19:51:55 +0400") References: Message-ID: <87iog0un42.fsf@gmail.com> On 2015-01-21 at 16:51:55 +0100, Alexander V Vershilov wrote: [...] > As transformers usage are quite limited in ghc, and it's really relevant > in ghc-bin, it's possible to duplicate the code, and provide required > fixes in ghc-bin.cabal. As a result ghc uses it's own MonadIO, MonadControl > and Strict State, and ghc-bin.cabal (ghc/*) uses ones from > transformers. ...but doesn't this effectively mean that the instances provided by the `ghc` package are of little use or rather can't be easily combined with code that builds on top of the original `transformers` classes everyone else uses? Cheers, hvr From alexander.vershilov at gmail.com Wed Jan 21 16:19:42 2015 From: alexander.vershilov at gmail.com (Alexander V Vershilov) Date: Wed, 21 Jan 2015 20:19:42 +0400 Subject: Proposal for removing transformers dependency In-Reply-To: <87iog0un42.fsf@gmail.com> References: <87iog0un42.fsf@gmail.com> Message-ID: I thought about providing package ghc-transformers-instances, that will provide instances for transformers's type classes for user. So ghc-tf-instances will depend on current ghc, and current transformers that could be provided by user environment. On Jan 21, 2015 7:15 PM, "Herbert Valerio Riedel" wrote: > On 2015-01-21 at 16:51:55 +0100, Alexander V Vershilov wrote: > > [...] > > > As transformers usage are quite limited in ghc, and it's really relevant > > in ghc-bin, it's possible to duplicate the code, and provide required > > fixes in ghc-bin.cabal. As a result ghc uses it's own MonadIO, > MonadControl > > and Strict State, and ghc-bin.cabal (ghc/*) uses ones from > > transformers. > > ...but doesn't this effectively mean that the instances provided by the > `ghc` package are of little use or rather can't be easily combined with > code that builds on top of the original `transformers` classes everyone > else uses? > > Cheers, > hvr > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From marlowsd at gmail.com Wed Jan 21 16:48:48 2015 From: marlowsd at gmail.com (Simon Marlow) Date: Wed, 21 Jan 2015 16:48:48 +0000 Subject: GHC support for the new "record" package In-Reply-To: References: <54BECC45.6010906@gmail.com> <54BF7A07.50502@well-typed.com> Message-ID: <54BFD870.7070602@gmail.com> On 21/01/2015 16:01, Johan Tibell wrote: > My thoughts mostly mirror those of Adam and Edward. > > 1) I want something that is backwards compatible. Backwards compatible in what sense? Extension flags provide backwards compatibility, because you just don't turn on the extension until you want to use it. That's how all the other extensions work; most of them change syntax in some way or other that breaks existing code. > 2) Anonymous records are nice to have, but I don't want to have all > records be anonymous (and have to jump through newtype hoops to get back > non-anonymous records.) So right now you have to say data T = R { a :: Int } and with anonymous records you could say data T = R {| a :: Int |} (or something similar). That doesn't seem like jumping through hoops, it's exactly the same amount of syntax. If you're worried about the extra layer of boxing (quite reasonable) then either (a) use a newtype, if possible, or (b) we could consider automatic UNPACKing of records used in a constructor argument. > 3) I don't think it's a good idea to have lots of functions be > polymorphic in the record types of their arguments. If that falls out > for free (like it does both in ORF and Nikita's proposals) that's nice, > but I think anonymous records should be used sparsely. There are stylistic issues with the use of anonymous records, I agree. But I don't consider anonymous records to be the main feature here, it's just a nice way to factor the design. [..] > 4) There are issues with strictness and unpacking. Yes - probably the major drawbacks, along with the lack of type-changing updates. > 5) I don't know if I want to commit the *language* to a particular lens > type. Fair point. Cheers, Simon > On Wed, Jan 21, 2015 at 2:11 PM, Edward Kmett > wrote: > > Personally, I think the two proposals, ORF and Nikita's record > approach address largely differing needs. > > The ORF proposal has the benefit that it doesn't require GHC itself > to know anything about lenses in order to work and is mostly > compatible with the existing field accessor combinators. > > Nikita's proposal on the other hand builds a form of Trex-like > records where it has its own little universe to play in, and doesn't > have to contort itself to make the field accessors backwards > compatible. As its own little world, the fact that the ORF can't > deal with certain types of fields just becomes a limitation on this > little universe, and all existing code would continue to work. > > I, too, have a lot of skin in the game with the existing ORF > proposal, but ultimately we're going to be stuck with whatever > solution we build for a long time, and it is, we both have to > confess, admittedly quite complicated, so it seems exploring the > consequences of a related design which has different constraints on > its design does little harm. > > I'm mostly paying the work the courtesy it deserves by considering > to its logical conclusion what such a design would look like fleshed > out in a way that maximized how nice the result could be to use. I'm > curious, as mostly a thought experiment, how nice a design we could > get in the end under these slightly different assumptions. 
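For reference on option (b) above: GHC can already unpack strict fields of single-constructor types, so the suggestion is to extend that treatment to record-valued constructor arguments (the {| |} form in the comment is the hypothetical syntax under discussion).

  -- today: strict fields of single-constructor types can be unpacked in place
  data Point = Point { px :: {-# UNPACK #-} !Int, py :: {-# UNPACK #-} !Int }

  -- the idea: a declaration such as  data T = R {| a :: Int |}  could receive
  -- the same treatment automatically, so wrapping an anonymous record in a
  -- constructor (or a newtype) need not cost an extra box.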
> > If, in the end, having an anonymous record syntax that is distinct > from the existing one is too ugly, it is okay for us to recoil from > it and go back to committing to the existing proposal, but I for one > would prefer to "steelman > " > Nikita's trick first. > > Thus far, all of this is but words in a handful of emails. I happen > to think the existing ORF implementation is about as good as we can > get operating under the assumptions it does. That said, operating > under different assumptions may get us a nicer user experience. I'm > not sure, though, hence the thought experiment. > > -Edward > > On Wed, Jan 21, 2015 at 5:05 AM, Adam Gundry > wrote: > > As someone with quite a lot of skin in this game, I thought it > might be > useful to give my perspective on how this relates to ORF. > Apologies that > this drags on a bit... > > Both approaches use essentially the same mechanism for resolving > overloaded field names (typeclasses indexed by type-level strings, > called Has/Upd or FieldOwner). ORF allows fields to be both > selectors > and various types of lenses, whereas the record library always makes > them van Laarhoven lenses, but this isn't really a fundamental > difference. > > The crucial difference is that ORF adds no new syntax, and solves > Has/Upd constraints for existing datatypes, whereas the record > library > adds a new syntax for anonymous records and their fields that is > completely separate from existing datatypes, and solves FieldOwner > constraints only for these anonymous records (well, their > desugaring). > > On the one hand, anonymous records are a very desirable feature, > and in > some ways making them separate is a nice simplification. > However, they > are not as expressive as the existing Haskell record datatypes > (sums, > strict/unpacked fields, higher-rank fields), and having two records > mechanisms is a little unsatisfying. Do we really want to > distinguish > > data Foo = MkFoo { bar :: Int, baz :: Bool } > data Foo = MkFoo {| bar :: Int, baz :: Bool |} > > (where the first is the traditional approach, and the second is a > single-argument constructor taking an anonymous record in Edward's > proposed syntax)? > > It might be nice to have a syntactic distinction between record > fields > and normal functions (the [l|...|] in the record library), > because it > makes name resolution much simpler. ORF was going down the route of > adding no syntax, so name resolution becomes more complex, but > we could > revisit that decision and perhaps make ORF simpler. But I don't know > what the syntax should be. > > I would note that if we go ahead with ORF, the record library could > potentially take advantage of it (working with ORF's Has/Upd classes > instead of defining its own FieldOwner class). Then we could > subsequently add anonymous records to GHC if there is enough > interest > and implementation effort. However, I don't want to limit the > discussion: if there's consensus that ORF is not the right approach, > then I'm happy to let it go the way of all the earth. ;-) > > (Regarding the status of ORF, Simon PJ and I had a useful > meeting last > week where we identified a plan for getting it back on track, > including > some key simplifications to the sticking points in the > implementation. > So there might be some hope for getting it in after all.) 
> > Adam > > > On 20/01/15 21:44, Simon Marlow wrote: > > For those who haven't seen this, Nikita Volkov proposed a new > approach > > to anonymous records, which can be found in the "record" > package on > > Hackage: http://hackage.haskell.org/package/record > > > > It had a *lot* of attention on Reddit: > > http://nikita-volkov.github.io/record/ > > > > Now, the solution is very nice and lightweight, but because it is > > implemented outside GHC it relies on quasi-quotation (amazing > that it > > can be done at all!). It has some limitations because it > needs to parse > > Haskell syntax, and Haskell is big. So we could make this a lot > > smoother, both for the implementation and the user, by directly > > supporting anonymous record syntax in GHC. Obviously we'd > have to move > > the library code into base too. > > > > This message is by way of kicking off the discussion, since > nobody else > > seems to have done so yet. Can we agree that this is the > right thing > > and should be directly supported by GHC? At this point we'd > be aiming > > for 7.12. > > > > Who is interested in working on this? Nikita? > > > > There are various design decisions to think about. For > example, when > > the quasi-quote brackets are removed, the syntax will > conflict with the > > existing record syntax. The syntax ends up being similar to > Simon's > > 2003 proposal > > > http://research.microsoft.com/en-us/um/people/simonpj/Haskell/records.html > > (there are major differences though, notably the use of > lenses for > > selection and update). > > > > I created a template wiki page: > > https://ghc.haskell.org/trac/ghc/wiki/Records/Volkov > > > > Cheers, > > Simon > > > -- > Adam Gundry, Haskell Consultant > Well-Typed LLP, http://www.well-typed.com/ > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > From mail at joachim-breitner.de Wed Jan 21 16:52:50 2015 From: mail at joachim-breitner.de (Joachim Breitner) Date: Wed, 21 Jan 2015 17:52:50 +0100 Subject: Proposal for removing transformers dependency In-Reply-To: References: Message-ID: <1421859170.30140.7.camel@joachim-breitner.de> Hi, Am Mittwoch, den 21.01.2015, 19:51 +0400 schrieb Alexander V Vershilov: > Should I continue doing this? with my Debian packaging maintainer hat on: Yes, please do! Greetings, Joachim -- Joachim ?nomeata? Breitner mail at joachim-breitner.de ? http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From michael at snoyman.com Wed Jan 21 17:01:16 2015 From: michael at snoyman.com (Michael Snoyman) Date: Wed, 21 Jan 2015 17:01:16 +0000 Subject: Proposal for removing transformers dependency References: <1421859170.30140.7.camel@joachim-breitner.de> Message-ID: Huge +1 from a Stackage standpoint. The transformers dependency (and other such dependencies) causes me a huge amount of pain. On Wed Jan 21 2015 at 8:52:57 AM Joachim Breitner wrote: > Hi, > > Am Mittwoch, den 21.01.2015, 19:51 +0400 schrieb Alexander V Vershilov: > > Should I continue doing this? 
> > with my Debian packaging maintainer hat on: Yes, please do! > > Greetings, > Joachim > -- > Joachim ?nomeata? Breitner > mail at joachim-breitner.de ? http://www.joachim-breitner.de/ > Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F > Debian Developer: nomeata at debian.org > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spam at scientician.net Wed Jan 21 17:23:50 2015 From: spam at scientician.net (Bardur Arantsson) Date: Wed, 21 Jan 2015 18:23:50 +0100 Subject: GHC support for the new "record" package In-Reply-To: References: <54BECC45.6010906@gmail.com> <54BF7A07.50502@well-typed.com> Message-ID: On 2015-01-21 17:01, Johan Tibell wrote: > My thoughts mostly mirror those of Adam and Edward. [--snip--] > > 3) I don't think it's a good idea to have lots of functions be polymorphic > in the record types of their arguments. If that falls out for free (like it > does both in ORF and Nikita's proposals) that's nice, but I think anonymous > records should be used sparsely. > > To me, anonymous records look a lot like Go's interfaces, which structural > typing I don't think is a great idea. Go's interfaces give the appearance > of giving you more polymorphic functions (i.e. functions with arguments of > type { f :: T, ... }), but you have to express the required laws on these > record fields purely in terms of comments. With type class-based > polymorphism you're somewhat more specific and deliberate when you state > what kind of values your functions are polymorphic over. You don't just say > "this value must support a function f :: T" but instead "this value must > support a function f :: T, where the behavior of f is specified by the type > class it's defined in". I also have extensive experience of duck typing > from Python and there I think duck typing has not played out well (somewhat > collaborate by the fact that Python is adding base classes so it's possible > to talk about the laws I mentioned above.) I don't think anyone's saying that type classes are going anywhere...?!? As a counterpoint to duck-typing-in-Python, IME *statically* checked duck typing works just fine. It's been ages since I programmed in O'Caml, but I cannot recall a single instance where a problem was caused by accidentally passing the incorrect wrong duck-ish parameter. (Other people's experience may differ, of course.) Do you have concrete experience with Go? I'd of course be skeptical of taking any lessons from Go in this regard due to the pervasiveness of the "empty interface" idiom as a replacement for parametric polymorphism -- there are usually lots of things that can match the empty interface -- but it'd be interesting to hear, nonetheless :). Regards, From dan.doel at gmail.com Wed Jan 21 17:36:05 2015 From: dan.doel at gmail.com (Dan Doel) Date: Wed, 21 Jan 2015 12:36:05 -0500 Subject: GHC support for the new "record" package In-Reply-To: <54BFD870.7070602@gmail.com> References: <54BECC45.6010906@gmail.com> <54BF7A07.50502@well-typed.com> <54BFD870.7070602@gmail.com> Message-ID: On Wed, Jan 21, 2015 at 11:48 AM, Simon Marlow wrote: > 2) Anonymous records are nice to have, but I don't want to have all >> records be anonymous (and have to jump through newtype hoops to get back >> non-anonymous records.) 
>> > > So right now you have to say > > data T = R { a :: Int } > > and with anonymous records you could say > > data T = R {| a :: Int |} > > (or something similar). That doesn't seem like jumping through hoops, > it's exactly the same amount of syntax. If you're worried about the extra > layer of boxing (quite reasonable) then either (a) use a newtype, if > possible, or (b) we could consider automatic UNPACKing of records used in a > constructor argument. > ?In the first of these declarations, we automatically have: a :: T -> Int Do we get that automatically in the second? Or do we have to write out all the field accessors, etc. we want to derive on T (and, can we do that)? I assume things like that are the concern for "non-anonymous records." -- Dan? -------------- next part -------------- An HTML attachment was scrubbed... URL: From adam at well-typed.com Wed Jan 21 18:06:17 2015 From: adam at well-typed.com (Adam Gundry) Date: Wed, 21 Jan 2015 18:06:17 +0000 Subject: GHC support for the new "record" package In-Reply-To: <54BFD870.7070602@gmail.com> References: <54BECC45.6010906@gmail.com> <54BF7A07.50502@well-typed.com> <54BFD870.7070602@gmail.com> Message-ID: <54BFEA99.9090001@well-typed.com> [I should say, in case it wasn't clear from my previous email, that I'm impressed by Nikita's work, excited to see this discussion revived, and very keen to find the best solution, whether that builds on ORF or not. Anyway, back down the rabbit hole...] On 21/01/15 16:48, Simon Marlow wrote: > On 21/01/2015 16:01, Johan Tibell wrote: >> My thoughts mostly mirror those of Adam and Edward. >> >> 1) I want something that is backwards compatible. > > Backwards compatible in what sense? Extension flags provide backwards > compatibility, because you just don't turn on the extension until you > want to use it. That's how all the other extensions work; most of them > change syntax in some way or other that breaks existing code. Well, it's nice if turning on an extension flag doesn't break existing code (as far as possible, and stolen keywords etc. excepted). In ORF-as-is, this is mostly true, except for corner cases involving higher-rank fields or very polymorphic code. I think it's something to aim for in any records design, anonymous or not. >> 4) There are issues with strictness and unpacking. > > Yes - probably the major drawbacks, along with the lack of type-changing > updates. Is there any reason why Nikita's proposal couldn't be extended to support type-changing update, just like ORF? Though the cases in which it can and cannot apply are inevitably subtle. Also, I'd add fields with higher-rank types to the list of missing features. From a user's perspective, it might seem a bit odd if data T = MkT { foo :: forall a . a } was fine but data T = MkT {| foo :: forall a . a |} would not be a valid declaration. (Of course, ORF can't overload "foo" either, and maybe this is an inevitable wart.) >> 5) I don't know if I want to commit the *language* to a particular lens >> type. Agreed, but I don't think this need be an issue for either proposal. We know from ORF that we can make fields sufficiently polymorphic to be treated as selector functions or arbitrary types of lenses (though treating them as van Laarhoven lenses requires either some clever typeclass trickery in the base library, or using a combinator to make a field into a lens at use sites). 
Adam -- Adam Gundry, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From ekmett at gmail.com Wed Jan 21 18:14:25 2015 From: ekmett at gmail.com (Edward Kmett) Date: Wed, 21 Jan 2015 13:14:25 -0500 Subject: GHC support for the new "record" package In-Reply-To: <54BFEA99.9090001@well-typed.com> References: <54BECC45.6010906@gmail.com> <54BF7A07.50502@well-typed.com> <54BFD870.7070602@gmail.com> <54BFEA99.9090001@well-typed.com> Message-ID: On Wed, Jan 21, 2015 at 1:06 PM, Adam Gundry wrote: > Also, I'd add fields with higher-rank types to the list of missing > features. From a user's perspective, it might seem a bit odd if > > data T = MkT { foo :: forall a . a } > > was fine but > > data T = MkT {| foo :: forall a . a |} > > would not be a valid declaration. (Of course, ORF can't overload "foo" > either, and maybe this is an inevitable wart.) I'm slowly coming around to thinking that this is inevitable without a bunch of changes in the way we work with classes. You otherwise need to allow impredicative types in some contexts, which raises all sorts of questions. In the latter case we can at least be clear about why it doesn't work in the error message, in the ORF case it has to just not generate a lens. =( > > >> 5) I don't know if I want to commit the *language* to a particular lens > >> type. > > Agreed, but I don't think this need be an issue for either proposal. We > know from ORF that we can make fields sufficiently polymorphic to be > treated as selector functions or arbitrary types of lenses (though > treating them as van Laarhoven lenses requires either some clever > typeclass trickery in the base library, or using a combinator to make a > field into a lens at use sites). Admittedly that has also been the source of 90% of the complexity of the ORF proposal. There we _had_ to do this in order to support the use as a regular function. There is a large design space here, and the main thing Nikita's proposal brings to the table is slightly different assumptions about what such records should mean. This _could_ let us shed some of the rougher edges of ORF, at the price of having to lock in a notion of lenses. I'm on the fence about whether it would be a good idea to burden Nikita's proposal in the same fashion, but I think it'd be wise to explore it in both fashions. My gut feeling though is that if we tie it up with the same restrictions as ORF you just mostly get a less useful ORF with anonymous record sugar thrown in. -Edward -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Wed Jan 21 21:11:23 2015 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 21 Jan 2015 21:11:23 +0000 Subject: vectorisation code? In-Reply-To: References: <0CC751A6-A9A9-47AA-961E-A997F39844A2@cis.upenn.edu> <618BE556AADD624C9C918AA5D5911BEF562AAB21@DB3PRD3001MB020.064d.mgd.msft.net> <201501200937.25031.jan.stolarek@p.lodz.pl> <8761c1ltym.fsf@gmail.com> Message-ID: <618BE556AADD624C9C918AA5D5911BEF562AE247@DB3PRD3001MB020.064d.mgd.msft.net> I?ve had a chat to Manuel. He is content for us to remove DPH code altogether (not just CPP/comment it out), provided we are careful to signpost what has gone and how to get it back. I am no Git expert, so can I leave it to you guys to work out what to do? The specification is: ? It should be clear how to revert the change; that is, to re-introduce the deleted code. I guess that might be ?git revert ? ? 
If someone trips over more DPH code later, and wants to remove that too, it should be clear how to add it to the list of things to be revertred. ? We should have a Trac ticket ?Resume work on DPH and vectorisation? or something like that, which summarises the reversion process. Just to be clear, this does not indicate any lack of interest in DPH on my part. (Quite the reverse.) It?s just that while no one is actually working on it, we should use our source code control system to move it out of the way, as others on this thread have persuasively argued. Manuel, yell if I got anything wrong. Thanks! Simon From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Carter Schonwald Sent: 21 January 2015 03:32 To: RodLogic Cc: Manuel M T Chakravarty; ghc-devs at haskell.org Subject: Re: vectorisation code? moving it to its own submodule is just a complicated version of cutting a branch that has the code Right before deleting it from master. afaik, the amount of love needed is roughly "one or more full time grad students really owning it", though i could be wrong. On Tue, Jan 20, 2015 at 5:39 AM, RodLogic > wrote: (disclaimer: I know nothing about the vectorization code) Now, is the vectorization code really dead code or it is code that needs love to come back to life? By removing it from the code base, you are probably sealing it's fate as dead code as we are limiting new or existing contributors to act on it (even if it's a commit hash away). If it is code that needs love to come back to life, grep noise or conditional compilation is a small price to pay here, imho. As a compromise, is it possible to move vectorization code into it's own submodule in git or is it too intertwined with core GHC? So that it can be worked on independent of GHC? On Tue, Jan 20, 2015 at 6:47 AM, Herbert Valerio Riedel > wrote: On 2015-01-20 at 09:37:25 +0100, Jan Stolarek wrote: >> Here's an alternate suggestion: in SimplCore, keep the call to vectorise >> around, but commented out > Yuck. Carter and Brandon are right here - we have git, let it do the > job. I propose that we remove vectorization code, create a Trac ticket > about vectorization & DPH needing love and record the commit hash in > the ticket so that we can revert it easily in the future. I'm also against commenting out dead code in the presence of a VCS. Btw, here's two links discussing the issues related to commenting out if anyone's interested in knowing more: - http://programmers.stackexchange.com/questions/190096/can-commented-out-code-be-valuable-documentation - http://programmers.stackexchange.com/questions/45378/is-commented-out-code-really-always-bad Cheers, hvr _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://www.haskell.org/mailman/listinfo/ghc-devs _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://www.haskell.org/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From adam at well-typed.com Wed Jan 21 21:34:05 2015 From: adam at well-typed.com (Adam Gundry) Date: Wed, 21 Jan 2015 21:34:05 +0000 Subject: GHC support for the new "record" package In-Reply-To: References: <54BECC45.6010906@gmail.com> <54BF7A07.50502@well-typed.com> <54BFD870.7070602@gmail.com> <54BFEA99.9090001@well-typed.com> Message-ID: <54C01B4D.6010606@well-typed.com> On 21/01/15 18:14, Edward Kmett wrote: > >> 5) I don't know if I want to commit the *language* to a particular lens > >> type. 
> > Agreed, but I don't think this need be an issue for either proposal. We > know from ORF that we can make fields sufficiently polymorphic to be > treated as selector functions or arbitrary types of lenses (though > treating them as van Laarhoven lenses requires either some clever > typeclass trickery in the base library, or using a combinator to make a > field into a lens at use sites). > > > Admittedly that has also been the source of 90% of the complexity of the > ORF proposal. There we _had_ to do this in order to support the use as a > regular function. I'm surprised and interested that you view this as a major source of complexity. From my point of view, I quite liked how the ability to overload fields as both selector functions and arbitrary lenses turned out. Compared to some of the hairy GHC internal details relating to name resolution, it feels really quite straightforward. Also, I've recently worked out how to simplify and generalise it somewhat (see [1] and [2] if you're curious). > There is a large design space here, and the main thing Nikita's proposal > brings to the table is slightly different assumptions about what such > records should mean. This _could_ let us shed some of the rougher edges > of ORF, at the price of having to lock in a notion of lenses. Yes. It's good to explore the options. For what it's worth, I'm sceptical about blessing a particular notion of lenses unless it's really necessary, but I'm prepared to be convinced otherwise. > I'm on the fence about whether it would be a good idea to burden > Nikita's proposal in the same fashion, but I think it'd be wise to > explore it in both fashions. My gut feeling though is that if we tie it > up with the same restrictions as ORF you just mostly get a less useful > ORF with anonymous record sugar thrown in. I think there's a sensible story to tell about an incremental plan that starts with something like ORF and ends up with something like Nikita's anonymous records. I'll try to tell this story when I can rub a few more braincells together... Adam [1] https://github.com/adamgundry/records-prototype/blob/master/NewPrototype.hs [2] https://github.com/adamgundry/records-prototype/blob/master/CoherentPrototype.hs -- Adam Gundry, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From marlowsd at gmail.com Wed Jan 21 21:51:24 2015 From: marlowsd at gmail.com (Simon Marlow) Date: Wed, 21 Jan 2015 21:51:24 +0000 Subject: GHC support for the new "record" package In-Reply-To: <54C01B4D.6010606@well-typed.com> References: <54BECC45.6010906@gmail.com> <54BF7A07.50502@well-typed.com> <54BFD870.7070602@gmail.com> <54BFEA99.9090001@well-typed.com> <54C01B4D.6010606@well-typed.com> Message-ID: <54C01F5C.8030609@gmail.com> Adam, do you have any measurements for how much code gets generated for a record declaration with ORF, compared with a record declaration in GHC right now? That's one thing that has been a nagging worry for me with ORF, but I just don't have any idea if it's a real problem or not. Under Nikita's proposal, zero code is generated for a record "declaration" (since there isn't one), and the lenses are tiny expressions too. There's some boilerplate in the library, but it's generated once and for all, and isn't that huge anyway. The lightweightness of it from a code-size standpoint is very attractive. Cheers, Simon On 21/01/15 21:34, Adam Gundry wrote: > On 21/01/15 18:14, Edward Kmett wrote: >> >> 5) I don't know if I want to commit the *language* to a particular lens >> >> type. 
>> >> Agreed, but I don't think this need be an issue for either proposal. We >> know from ORF that we can make fields sufficiently polymorphic to be >> treated as selector functions or arbitrary types of lenses (though >> treating them as van Laarhoven lenses requires either some clever >> typeclass trickery in the base library, or using a combinator to make a >> field into a lens at use sites). >> >> >> Admittedly that has also been the source of 90% of the complexity of the >> ORF proposal. There we _had_ to do this in order to support the use as a >> regular function. > > I'm surprised and interested that you view this as a major source of > complexity. From my point of view, I quite liked how the ability to > overload fields as both selector functions and arbitrary lenses turned > out. Compared to some of the hairy GHC internal details relating to name > resolution, it feels really quite straightforward. Also, I've recently > worked out how to simplify and generalise it somewhat (see [1] and [2] > if you're curious). > > >> There is a large design space here, and the main thing Nikita's proposal >> brings to the table is slightly different assumptions about what such >> records should mean. This _could_ let us shed some of the rougher edges >> of ORF, at the price of having to lock in a notion of lenses. > > Yes. It's good to explore the options. For what it's worth, I'm > sceptical about blessing a particular notion of lenses unless it's > really necessary, but I'm prepared to be convinced otherwise. > > >> I'm on the fence about whether it would be a good idea to burden >> Nikita's proposal in the same fashion, but I think it'd be wise to >> explore it in both fashions. My gut feeling though is that if we tie it >> up with the same restrictions as ORF you just mostly get a less useful >> ORF with anonymous record sugar thrown in. > > I think there's a sensible story to tell about an incremental plan that > starts with something like ORF and ends up with something like Nikita's > anonymous records. I'll try to tell this story when I can rub a few more > braincells together... > > Adam > > [1] > https://github.com/adamgundry/records-prototype/blob/master/NewPrototype.hs > [2] > https://github.com/adamgundry/records-prototype/blob/master/CoherentPrototype.hs > From mainland at apeiron.net Wed Jan 21 23:44:16 2015 From: mainland at apeiron.net (Geoffrey Mainland) Date: Wed, 21 Jan 2015 18:44:16 -0500 Subject: vectorisation code? In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF562AE247@DB3PRD3001MB020.064d.mgd.msft.net> References: <0CC751A6-A9A9-47AA-961E-A997F39844A2@cis.upenn.edu> <618BE556AADD624C9C918AA5D5911BEF562AAB21@DB3PRD3001MB020.064d.mgd.msft.net> <201501200937.25031.jan.stolarek@p.lodz.pl> <8761c1ltym.fsf@gmail.com> <618BE556AADD624C9C918AA5D5911BEF562AE247@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <54C039D0.4020206@apeiron.net> I'm sorry I'm a bit late to the game here, but there is also the option of reconnecting DPH to the build. When I patched DPH for the new version of the vector library, I did not perform this step---now I'm sorry I didn't. I am willing to get DPH in working order again---I believe the required work will be minimal. However, that only makes sense if we 1) re-enable DPH in the nightly builds (and also by default for validate?), and 2) folks will not object too strenuously to having DPH stick around. My fear is that without making it part of the nightly builds, accumulated bitrot will make it extremely difficult to ever re-integrate DPH. 
I would hate to see that happen. Geoff On 01/21/2015 04:11 PM, Simon Peyton Jones wrote: > > I?ve had a chat to Manuel. He is content for us to remove DPH code > altogether (not just CPP/comment it out), provided we are careful to > signpost what has gone and how to get it back. > > > > I am no Git expert, so can I leave it to you guys to work out what to > do? The specification is: > > ? It should be clear how to revert the change; that is, to > re-introduce the deleted code. I guess that might be ?git revert > ? > > ? If someone trips over more DPH code later, and wants to > remove that too, it should be clear how to add it to the list of > things to be revertred. > > ? We should have a Trac ticket ?Resume work on DPH and > vectorisation? or something like that, which summarises the reversion > process. > > > > Just to be clear, this does not indicate any lack of interest in DPH > on my part. (Quite the reverse.) It?s just that while no one is > actually working on it, we should use our source code control system > to move it out of the way, as others on this thread have persuasively > argued. > > > > Manuel, yell if I got anything wrong. > > > > Thanks! > > > > Simon > > > > > > > > > > > > *From:*ghc-devs [mailto:ghc-devs-bounces at haskell.org] *On Behalf Of > *Carter Schonwald > *Sent:* 21 January 2015 03:32 > *To:* RodLogic > *Cc:* Manuel M T Chakravarty; ghc-devs at haskell.org > *Subject:* Re: vectorisation code? > > > > moving it to its own submodule is just a complicated version of > cutting a branch that has the code Right before deleting it from master. > > afaik, the amount of love needed is roughly "one or more full time > grad students really owning it", though i could be wrong. > > > > > > On Tue, Jan 20, 2015 at 5:39 AM, RodLogic > wrote: > > (disclaimer: I know nothing about the vectorization code) > > > > Now, is the vectorization code really dead code or it is code that > needs love to come back to life? By removing it from the code > base, you are probably sealing it's fate as dead code as we are > limiting new or existing contributors to act on it (even if it's a > commit hash away). If it is code that needs love to come back to > life, grep noise or conditional compilation is a small price to > pay here, imho. > > > > As a compromise, is it possible to move vectorization code into > it's own submodule in git or is it too intertwined with core GHC? > So that it can be worked on independent of GHC? > > > > > > On Tue, Jan 20, 2015 at 6:47 AM, Herbert Valerio Riedel > > wrote: > > On 2015-01-20 at 09:37:25 +0100, Jan Stolarek wrote: > >> Here's an alternate suggestion: in SimplCore, keep the call > to vectorise > >> around, but commented out > > > Yuck. Carter and Brandon are right here - we have git, let > it do the > > job. I propose that we remove vectorization code, create a > Trac ticket > > about vectorization & DPH needing love and record the commit > hash in > > the ticket so that we can revert it easily in the future. > > I'm also against commenting out dead code in the presence of a > VCS. 
> > Btw, here's two links discussing the issues related to > commenting out if > anyone's interested in knowing more: > > - > http://programmers.stackexchange.com/questions/190096/can-commented-out-code-be-valuable-documentation > > - > http://programmers.stackexchange.com/questions/45378/is-commented-out-code-really-always-bad > > > Cheers, > hvr > > From kazu at iij.ad.jp Thu Jan 22 00:04:58 2015 From: kazu at iij.ad.jp (Kazu Yamamoto (=?iso-2022-jp?B?GyRCOzNLXE9CSScbKEI=?=)) Date: Thu, 22 Jan 2015 09:04:58 +0900 (JST) Subject: Proposal for removing transformers dependency In-Reply-To: References: Message-ID: <20150122.090458.492127693514961606.kazu@iij.ad.jp> Alexander, > I'm coming with a proposal for removing transformers dependency > from ghc library. Big +1 from me. doctest is suffering from dependency hell because ghc lib depends on transformers. --Kazu From kazu at iij.ad.jp Thu Jan 22 00:11:22 2015 From: kazu at iij.ad.jp (Kazu Yamamoto (=?iso-2022-jp?B?GyRCOzNLXE9CSScbKEI=?=)) Date: Thu, 22 Jan 2015 09:11:22 +0900 (JST) Subject: Proposal for removing transformers dependency In-Reply-To: References: Message-ID: <20150122.091122.794598617153532424.kazu@iij.ad.jp> Hi, I also hope that this is integrated into GHC 7.10. --Kazu > Hello. > > I'm coming with a proposal for removing transformers dependency > from ghc library. The reason for this proposal that it's not possible > to build consistent environment where a modern libraries (that depend > on a newer transformers or mtl-2.2) and libraries that use ghc API > are used together. And often people are tend to use version that is > bundled with ghc, even if newer are available. > > As transformers usage are quite limited in ghc, and it's really relevant > in ghc-bin, it's possible to duplicate the code, and provide required > fixes in ghc-bin.cabal. As a result ghc uses it's own MonadIO, MonadControl > and Strict State, and ghc-bin.cabal (ghc/*) uses ones from transformers. > > I have prepared a proof of concept [1], however it doesn't look very clean > and it's quite possible that will require some generalization, for example > introduction of the ghc-transformers-instances package that will have > all required instances. > > Should I continue doing this? Are there any things to consider and fix? > > [1] https://github.com/qnikst/ghc/compare/wip/remove-tf > -- > Alexander From chak at cse.unsw.edu.au Thu Jan 22 04:08:22 2015 From: chak at cse.unsw.edu.au (Manuel M T Chakravarty) Date: Thu, 22 Jan 2015 15:08:22 +1100 Subject: vectorisation code? In-Reply-To: <54C039D0.4020206@apeiron.net> References: <0CC751A6-A9A9-47AA-961E-A997F39844A2@cis.upenn.edu> <618BE556AADD624C9C918AA5D5911BEF562AAB21@DB3PRD3001MB020.064d.mgd.msft.net> <201501200937.25031.jan.stolarek@p.lodz.pl> <8761c1ltym.fsf@gmail.com> <618BE556AADD624C9C918AA5D5911BEF562AE247@DB3PRD3001MB020.064d.mgd.msft.net> <54C039D0.4020206@apeiron.net> Message-ID: <6AA9EDDD-2444-4ADE-8675-793745BBA374@cse.unsw.edu.au> Thanks for the offer, Geoff. Under these circumstances, I would also very much prefer for Geoff getting the code in order and leaving it in GHC. Manuel > Geoffrey Mainland : > > I'm sorry I'm a bit late to the game here, but there is also the option > of reconnecting DPH to the build. > > When I patched DPH for the new version of the vector library, I did not > perform this step---now I'm sorry I didn't. > > I am willing to get DPH in working order again---I believe the required > work will be minimal. 
However, that only makes sense if we 1) re-enable > DPH in the nightly builds (and also by default for validate?), and 2) > folks will not object too strenuously to having DPH stick around. > > My fear is that without making it part of the nightly builds, > accumulated bitrot will make it extremely difficult to ever re-integrate > DPH. I would hate to see that happen. > > Geoff > > On 01/21/2015 04:11 PM, Simon Peyton Jones wrote: >> >> I?ve had a chat to Manuel. He is content for us to remove DPH code >> altogether (not just CPP/comment it out), provided we are careful to >> signpost what has gone and how to get it back. >> >> >> >> I am no Git expert, so can I leave it to you guys to work out what to >> do? The specification is: >> >> ? It should be clear how to revert the change; that is, to >> re-introduce the deleted code. I guess that might be ?git revert >> ? >> >> ? If someone trips over more DPH code later, and wants to >> remove that too, it should be clear how to add it to the list of >> things to be revertred. >> >> ? We should have a Trac ticket ?Resume work on DPH and >> vectorisation? or something like that, which summarises the reversion >> process. >> >> >> >> Just to be clear, this does not indicate any lack of interest in DPH >> on my part. (Quite the reverse.) It?s just that while no one is >> actually working on it, we should use our source code control system >> to move it out of the way, as others on this thread have persuasively >> argued. >> >> >> >> Manuel, yell if I got anything wrong. >> >> >> >> Thanks! >> >> >> >> Simon >> >> >> >> >> >> >> >> >> >> >> >> *From:*ghc-devs [mailto:ghc-devs-bounces at haskell.org] *On Behalf Of >> *Carter Schonwald >> *Sent:* 21 January 2015 03:32 >> *To:* RodLogic >> *Cc:* Manuel M T Chakravarty; ghc-devs at haskell.org >> *Subject:* Re: vectorisation code? >> >> >> >> moving it to its own submodule is just a complicated version of >> cutting a branch that has the code Right before deleting it from master. >> >> afaik, the amount of love needed is roughly "one or more full time >> grad students really owning it", though i could be wrong. >> >> >> >> >> >> On Tue, Jan 20, 2015 at 5:39 AM, RodLogic > > wrote: >> >> (disclaimer: I know nothing about the vectorization code) >> >> >> >> Now, is the vectorization code really dead code or it is code that >> needs love to come back to life? By removing it from the code >> base, you are probably sealing it's fate as dead code as we are >> limiting new or existing contributors to act on it (even if it's a >> commit hash away). If it is code that needs love to come back to >> life, grep noise or conditional compilation is a small price to >> pay here, imho. >> >> >> >> As a compromise, is it possible to move vectorization code into >> it's own submodule in git or is it too intertwined with core GHC? >> So that it can be worked on independent of GHC? >> >> >> >> >> >> On Tue, Jan 20, 2015 at 6:47 AM, Herbert Valerio Riedel >> > wrote: >> >> On 2015-01-20 at 09:37:25 +0100, Jan Stolarek wrote: >>>> Here's an alternate suggestion: in SimplCore, keep the call >> to vectorise >>>> around, but commented out >> >>> Yuck. Carter and Brandon are right here - we have git, let >> it do the >>> job. I propose that we remove vectorization code, create a >> Trac ticket >>> about vectorization & DPH needing love and record the commit >> hash in >>> the ticket so that we can revert it easily in the future. >> >> I'm also against commenting out dead code in the presence of a >> VCS. 
>> >> Btw, here's two links discussing the issues related to >> commenting out if >> anyone's interested in knowing more: >> >> - >> http://programmers.stackexchange.com/questions/190096/can-commented-out-code-be-valuable-documentation >> >> - >> http://programmers.stackexchange.com/questions/45378/is-commented-out-code-really-always-bad >> >> >> Cheers, >> hvr >> >> > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs From ekmett at gmail.com Thu Jan 22 05:38:39 2015 From: ekmett at gmail.com (Edward Kmett) Date: Thu, 22 Jan 2015 00:38:39 -0500 Subject: GHC support for the new "record" package In-Reply-To: <54C01B4D.6010606@well-typed.com> References: <54BECC45.6010906@gmail.com> <54BF7A07.50502@well-typed.com> <54BFD870.7070602@gmail.com> <54BFEA99.9090001@well-typed.com> <54C01B4D.6010606@well-typed.com> Message-ID: On Wed, Jan 21, 2015 at 4:34 PM, Adam Gundry wrote: > I'm surprised and interested that you view this as a major source of > complexity. From my point of view, I quite liked how the ability to > overload fields as both selector functions and arbitrary lenses turned > out. Compared to some of the hairy GHC internal details relating to name > resolution, it feels really quite straightforward. Also, I've recently > worked out how to simplify and generalise it somewhat (see [1] and [2] > if you're curious). I'm actually reasonably happy with the design we wound up with, but the need to mangle all the uses of the accessor with a combinator to extract from the data type is a perpetual tax paid, that by giving in and picking a lens type and not having to _also_ provide a normal field accessor we could avoid. > There is a large design space here, and the main thing Nikita's proposal > > brings to the table is slightly different assumptions about what such > > records should mean. This _could_ let us shed some of the rougher edges > > of ORF, at the price of having to lock in a notion of lenses. > > Yes. It's good to explore the options. For what it's worth, I'm > sceptical about blessing a particular notion of lenses unless it's > really necessary, but I'm prepared to be convinced otherwise. For users this means the difference between set (foo.bar) 12 and set (le foo.le bar) 12 -- for some combinator that is hard to pick a name for that turns an accessor into a lens. It means they always have to be cognizant of that distinction. The inability to shed that tax in the other design is the major pain point it has always had for me. The user experience for it is / was going to be bad enough that I have remained concerned about how well the adoption for it would be compared to existing approaches, which have more set up but offer cleaner usage. > I'm on the fence about whether it would be a good idea to burden > > Nikita's proposal in the same fashion, but I think it'd be wise to > > explore it in both fashions. My gut feeling though is that if we tie it > > up with the same restrictions as ORF you just mostly get a less useful > > ORF with anonymous record sugar thrown in. > > I think there's a sensible story to tell about an incremental plan that > starts with something like ORF and ends up with something like Nikita's > anonymous records. I'll try to tell this story when I can rub a few more > braincells together... > I definitely think there is a coherent story there, but I'm not sure I see a way that such a story could end that avoids the concerns above. 
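To spell that concern out as a tiny sketch -- where 'le' is the hypothetical field-to-lens combinator, 'foo' and 'bar' are overloaded fields, and 'set' is the usual van Laarhoven setter:

import Data.Functor.Identity (Identity (..))

-- The standard van Laarhoven setter, for reference.
set :: ((a -> Identity a) -> s -> Identity s) -> a -> s -> s
set l x = runIdentity . l (const (Identity x))

-- If fields are bare selector functions (the ORF route), every update
-- site pays the conversion:
--
--   set (le foo . le bar) 12 r
--
-- If fields simply *are* lenses (the route under discussion), it is just:
--
--   set (foo . bar) 12 r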
I very much agree that it makes sense to keep both options on the table though so that we can work through the attendant issues and trade-offs. -Edward -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexander.vershilov at gmail.com Thu Jan 22 06:16:11 2015 From: alexander.vershilov at gmail.com (Alexander V Vershilov) Date: Thu, 22 Jan 2015 10:16:11 +0400 Subject: Proposal for removing transformers dependency In-Reply-To: <20150122.091122.794598617153532424.kazu@iij.ad.jp> References: <20150122.091122.794598617153532424.kazu@iij.ad.jp> Message-ID: Ok. In this case, I'm interested in main users of ghc library, as I have to make sure that they will not be broken by any changes that I'm introduce? I can take few from hackage but maybe there are ones that users care about. So I can validate my solution. -- Alexander On 22 January 2015 at 03:11, Kazu Yamamoto wrote: > Hi, > > I also hope that this is integrated into GHC 7.10. > > --Kazu > >> Hello. >> >> I'm coming with a proposal for removing transformers dependency >> from ghc library. The reason for this proposal that it's not possible >> to build consistent environment where a modern libraries (that depend >> on a newer transformers or mtl-2.2) and libraries that use ghc API >> are used together. And often people are tend to use version that is >> bundled with ghc, even if newer are available. >> >> As transformers usage are quite limited in ghc, and it's really relevant >> in ghc-bin, it's possible to duplicate the code, and provide required >> fixes in ghc-bin.cabal. As a result ghc uses it's own MonadIO, MonadControl >> and Strict State, and ghc-bin.cabal (ghc/*) uses ones from transformers. >> >> I have prepared a proof of concept [1], however it doesn't look very clean >> and it's quite possible that will require some generalization, for example >> introduction of the ghc-transformers-instances package that will have >> all required instances. >> >> Should I continue doing this? Are there any things to consider and fix? >> >> [1] https://github.com/qnikst/ghc/compare/wip/remove-tf >> -- >> Alexander > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs -- Alexander From ezyang at mit.edu Thu Jan 22 06:26:12 2015 From: ezyang at mit.edu (Edward Z. Yang) Date: Wed, 21 Jan 2015 22:26:12 -0800 Subject: Proposal for removing transformers dependency In-Reply-To: References: <20150122.091122.794598617153532424.kazu@iij.ad.jp> Message-ID: <1421907946-sup-5427@sabre> Don't worry, we've never given any promise about GHC API stability across major versions (or minor versions, for that matter.) Edward Excerpts from Alexander V Vershilov's message of 2015-01-21 22:16:11 -0800: > Ok. In this case, I'm interested in main users of ghc library, > as I have to make sure that they will not be broken by > any changes that I'm introduce? I can take few from hackage > but maybe there are ones that users care about. So I can > validate my solution. > > -- > Alexander > > On 22 January 2015 at 03:11, Kazu Yamamoto wrote: > > Hi, > > > > I also hope that this is integrated into GHC 7.10. > > > > --Kazu > > > >> Hello. > >> > >> I'm coming with a proposal for removing transformers dependency > >> from ghc library. 
The reason for this proposal that it's not possible > >> to build consistent environment where a modern libraries (that depend > >> on a newer transformers or mtl-2.2) and libraries that use ghc API > >> are used together. And often people are tend to use version that is > >> bundled with ghc, even if newer are available. > >> > >> As transformers usage are quite limited in ghc, and it's really relevant > >> in ghc-bin, it's possible to duplicate the code, and provide required > >> fixes in ghc-bin.cabal. As a result ghc uses it's own MonadIO, MonadControl > >> and Strict State, and ghc-bin.cabal (ghc/*) uses ones from transformers. > >> > >> I have prepared a proof of concept [1], however it doesn't look very clean > >> and it's quite possible that will require some generalization, for example > >> introduction of the ghc-transformers-instances package that will have > >> all required instances. > >> > >> Should I continue doing this? Are there any things to consider and fix? > >> > >> [1] https://github.com/qnikst/ghc/compare/wip/remove-tf > >> -- > >> Alexander > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://www.haskell.org/mailman/listinfo/ghc-devs > From hvriedel at gmail.com Thu Jan 22 07:37:25 2015 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Thu, 22 Jan 2015 08:37:25 +0100 Subject: Proposal for removing transformers dependency In-Reply-To: (Alexander V. Vershilov's message of "Wed, 21 Jan 2015 20:19:42 +0400") References: <87iog0un42.fsf@gmail.com> Message-ID: <877fwfqn9m.fsf@gmail.com> On 2015-01-21 at 17:19:42 +0100, Alexander V Vershilov wrote: > I thought about providing package ghc-transformers-instances, that will > provide instances for transformers's type classes for user. So > ghc-tf-instances will depend on current ghc, and current transformers that > could be provided by user environment. So the price is seems to be mostly to tolerate orphan instances (and potentially having to copy more code from transformers into ghc, if one wants to use more features in ghc's code). One would have to make sure to keep that package updated everytime a new transformers or ghc package version is released, as well as making sure to always test all supported combinations of ghc/transformers versions before making a new releases. One thing to keep in mind though is that then 'haskeline' (which is needed by GHCi) still remains a consumer of 'transformers', so we'd still have to bundle a 'transformers' package version with GHC even if `ghc` doesn't depend on it anymore. Somewhat related, the `ghc` -> `Cabal` dependency was broken up in GHC 7.10 but we'll still bundle `Cabal` with GHC 7.10. I'm not sure how much this helps Stackage which I assume would constraint transformers to a single version, and therefore force a reinstall of the haskeline version shipped with GHC with a different version of transformers. Fwiw, I welcome decoupling libraries from the GHC distribution as every exposed library adds to the synchronisation-with-upstream-overhead when preparing new GHC releases, in addition to adding implicit version constraints to the package database. With a GHC release-management hat on though: As for GHC 7.10.1, it's rather late... at the very least it needs to get into RC2 (whose cut off is tomorrow) for that to happen. 
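Going back to the orphan-instance point above: for concreteness, the bridge package might look roughly like the sketch below. The module name is invented, and the GHC-side imports (Ghc from the GHC module, liftIO from MonadUtils) are assumptions about what the ghc library would keep exporting after the change -- adjust to taste:

{-# OPTIONS_GHC -fno-warn-orphans #-}
-- Invented module/package name for the proposed bridge.
module GHC.Transformers.Instances () where

import qualified Control.Monad.IO.Class as T   -- the class from transformers
import GHC (Ghc)                               -- the GHC API session monad
import qualified MonadUtils as M               -- assumed home of GHC's own liftIO

-- One orphan instance per (transformers class, GHC type) pair lives here,
-- so the ghc library itself no longer needs to depend on transformers.
instance T.MonadIO Ghc where
  liftIO = M.liftIO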
Cheers, hvr From mail at joachim-breitner.de Thu Jan 22 08:34:13 2015 From: mail at joachim-breitner.de (Joachim Breitner) Date: Thu, 22 Jan 2015 09:34:13 +0100 Subject: Proposal for removing transformers dependency In-Reply-To: <877fwfqn9m.fsf@gmail.com> References: <87iog0un42.fsf@gmail.com> <877fwfqn9m.fsf@gmail.com> Message-ID: <1421915653.1910.2.camel@joachim-breitner.de> Hi, Am Donnerstag, den 22.01.2015, 08:37 +0100 schrieb Herbert Valerio Riedel: > One thing to keep in mind though is that then 'haskeline' (which is > needed by GHCi) still remains a consumer of 'transformers', so we'd > still have to bundle a 'transformers' package version with GHC even if > `ghc` doesn't depend on it anymore. Somewhat related, the `ghc` -> > `Cabal` dependency was broken up in GHC 7.10 but we'll still bundle > `Cabal` with GHC 7.10. although there has been talk about only shipping the .so file with a clash-free name or path... was this not done simply because noone bothered enough to actually do it, or was there a bigger problem? Greetings, Joachim -- Joachim ?nomeata? Breitner mail at joachim-breitner.de ? http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From alexander.vershilov at gmail.com Thu Jan 22 08:36:18 2015 From: alexander.vershilov at gmail.com (Alexander V Vershilov) Date: Thu, 22 Jan 2015 12:36:18 +0400 Subject: Proposal for removing transformers dependency In-Reply-To: <877fwfqn9m.fsf@gmail.com> References: <87iog0un42.fsf@gmail.com> <877fwfqn9m.fsf@gmail.com> Message-ID: On 22 January 2015 at 10:37, Herbert Valerio Riedel wrote: > On 2015-01-21 at 17:19:42 +0100, Alexander V Vershilov wrote: >> I thought about providing package ghc-transformers-instances, that will >> provide instances for transformers's type classes for user. So >> ghc-tf-instances will depend on current ghc, and current transformers that >> could be provided by user environment. > > So the price is seems to be mostly to tolerate orphan instances (and > potentially having to copy more code from transformers into ghc, if one > wants to use more features in ghc's code). Yes, if I didn't miss any other relevant issue. It's possible that we will need some coercion between Strict State that comes with GHC and one that is in transformers. > One would have to make sure to keep that package updated everytime a new > transformers or ghc package version is released, as well as making sure > to always test all supported combinations of ghc/transformers versions > before making a new releases. > > One thing to keep in mind though is that then 'haskeline' (which is > needed by GHCi) still remains a consumer of 'transformers', so we'd > still have to bundle a 'transformers' package version with GHC even if > `ghc` doesn't depend on it anymore. Somewhat related, the `ghc` -> > `Cabal` dependency was broken up in GHC 7.10 but we'll still bundle > `Cabal` with GHC 7.10. >From a distro-developer perspective (gentoo) the only relevant library is ghc library, and ghc-bin is an executable, so in the worst case user will end up with having 2 different transformers package. But there will be no transformers madness as no library pulls concrete version. 
> I'm not sure how much this helps Stackage which I assume would > constraint transformers to a single version, and therefore force a > reinstall of the haskeline version shipped with GHC with a different > version of transformers. I don't know about a Stackage but at least Gentoo allow to install additional version for packages that are shipped with ghc and ghc library do not depend on it, it also handles a case when versions are equal, and then installation is a noop. Thinking of haskeline user will have problems only in case if user's haskeline version is equal to shipped with GHC but depends on a different tf version. I think that in this case force installing will lead to some level of breakage, however all executables (ghc-bin) and new libraries will continue working. Workaround is to make a minor version bump in haskeline and this can be done purely on a distro-level. > Fwiw, I welcome decoupling libraries from the GHC distribution as every > exposed library adds to the synchronisation-with-upstream-overhead when > preparing new GHC releases, in addition to adding implicit version > constraints to the package database. > > With a GHC release-management hat on though: As for GHC 7.10.1, it's > rather late... at the very least it needs to get into RC2 (whose cut off > is tomorrow) for that to happen. Great. I'll try to cleanup code, and prepare ghc-tf-instances today, test of the packages that depend on the ghc library in order to understand if ghc-tf-instances solution works and then send diff to a Phab. Every tips, suggestions, or reminders of things that I should check will be gladly accepter. > Cheers, > hvr -- Alexander From jan.stolarek at p.lodz.pl Thu Jan 22 08:55:30 2015 From: jan.stolarek at p.lodz.pl (Jan Stolarek) Date: Thu, 22 Jan 2015 09:55:30 +0100 Subject: vectorisation code? In-Reply-To: <6AA9EDDD-2444-4ADE-8675-793745BBA374@cse.unsw.edu.au> References: <0CC751A6-A9A9-47AA-961E-A997F39844A2@cis.upenn.edu> <54C039D0.4020206@apeiron.net> <6AA9EDDD-2444-4ADE-8675-793745BBA374@cse.unsw.edu.au> Message-ID: <201501220955.31040.jan.stolarek@p.lodz.pl> Would it be possible to turn vectorisation into a compiler plugin? This would kill two birds with one stone: vectorisation would be removed from GHC sources and at the same time the code could be maintained by Geoffrey or anyone else who would want to take it up. I'm not sure what would happen with DPH in that scenario. Janek Dnia czwartek, 22 stycznia 2015, Manuel M T Chakravarty napisa?: > Thanks for the offer, Geoff. > > Under these circumstances, I would also very much prefer for Geoff getting > the code in order and leaving it in GHC. > > Manuel > > > Geoffrey Mainland : > > > > I'm sorry I'm a bit late to the game here, but there is also the option > > of reconnecting DPH to the build. > > > > When I patched DPH for the new version of the vector library, I did not > > perform this step---now I'm sorry I didn't. > > > > I am willing to get DPH in working order again---I believe the required > > work will be minimal. However, that only makes sense if we 1) re-enable > > DPH in the nightly builds (and also by default for validate?), and 2) > > folks will not object too strenuously to having DPH stick around. > > > > My fear is that without making it part of the nightly builds, > > accumulated bitrot will make it extremely difficult to ever re-integrate > > DPH. I would hate to see that happen. > > > > Geoff > > > > On 01/21/2015 04:11 PM, Simon Peyton Jones wrote: > >> I?ve had a chat to Manuel. 
He is content for us to remove DPH code > >> altogether (not just CPP/comment it out), provided we are careful to > >> signpost what has gone and how to get it back. > >> > >> > >> > >> I am no Git expert, so can I leave it to you guys to work out what to > >> do? The specification is: > >> > >> ? It should be clear how to revert the change; that is, to > >> re-introduce the deleted code. I guess that might be ?git revert > >> ? > >> > >> ? If someone trips over more DPH code later, and wants to > >> remove that too, it should be clear how to add it to the list of > >> things to be revertred. > >> > >> ? We should have a Trac ticket ?Resume work on DPH and > >> vectorisation? or something like that, which summarises the reversion > >> process. > >> > >> > >> > >> Just to be clear, this does not indicate any lack of interest in DPH > >> on my part. (Quite the reverse.) It?s just that while no one is > >> actually working on it, we should use our source code control system > >> to move it out of the way, as others on this thread have persuasively > >> argued. > >> > >> > >> > >> Manuel, yell if I got anything wrong. > >> > >> > >> > >> Thanks! > >> > >> > >> > >> Simon > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> *From:*ghc-devs [mailto:ghc-devs-bounces at haskell.org] *On Behalf Of > >> *Carter Schonwald > >> *Sent:* 21 January 2015 03:32 > >> *To:* RodLogic > >> *Cc:* Manuel M T Chakravarty; ghc-devs at haskell.org > >> *Subject:* Re: vectorisation code? > >> > >> > >> > >> moving it to its own submodule is just a complicated version of > >> cutting a branch that has the code Right before deleting it from master. > >> > >> afaik, the amount of love needed is roughly "one or more full time > >> grad students really owning it", though i could be wrong. > >> > >> > >> > >> > >> > >> On Tue, Jan 20, 2015 at 5:39 AM, RodLogic >> > wrote: > >> > >> (disclaimer: I know nothing about the vectorization code) > >> > >> > >> > >> Now, is the vectorization code really dead code or it is code that > >> needs love to come back to life? By removing it from the code > >> base, you are probably sealing it's fate as dead code as we are > >> limiting new or existing contributors to act on it (even if it's a > >> commit hash away). If it is code that needs love to come back to > >> life, grep noise or conditional compilation is a small price to > >> pay here, imho. > >> > >> > >> > >> As a compromise, is it possible to move vectorization code into > >> it's own submodule in git or is it too intertwined with core GHC? > >> So that it can be worked on independent of GHC? > >> > >> > >> > >> > >> > >> On Tue, Jan 20, 2015 at 6:47 AM, Herbert Valerio Riedel > >> > wrote: > >> > >> On 2015-01-20 at 09:37:25 +0100, Jan Stolarek wrote: > >>>> Here's an alternate suggestion: in SimplCore, keep the call > >> > >> to vectorise > >> > >>>> around, but commented out > >>> > >>> Yuck. Carter and Brandon are right here - we have git, let > >> > >> it do the > >> > >>> job. I propose that we remove vectorization code, create a > >> > >> Trac ticket > >> > >>> about vectorization & DPH needing love and record the commit > >> > >> hash in > >> > >>> the ticket so that we can revert it easily in the future. > >> > >> I'm also against commenting out dead code in the presence of a > >> VCS. 
> >> > >> Btw, here's two links discussing the issues related to > >> commenting out if > >> anyone's interested in knowing more: > >> > >> - > >> > >> http://programmers.stackexchange.com/questions/190096/can-commented-out- > >>code-be-valuable-documentation > >> > >> - > >> > >> http://programmers.stackexchange.com/questions/45378/is-commented-out-co > >>de-really-always-bad > >> > >> > >> Cheers, > >> hvr > > > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://www.haskell.org/mailman/listinfo/ghc-devs > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs From adam at well-typed.com Thu Jan 22 09:22:36 2015 From: adam at well-typed.com (Adam Gundry) Date: Thu, 22 Jan 2015 09:22:36 +0000 Subject: GHC support for the new "record" package In-Reply-To: <54C01F5C.8030609@gmail.com> References: <54BECC45.6010906@gmail.com> <54BF7A07.50502@well-typed.com> <54BFD870.7070602@gmail.com> <54BFEA99.9090001@well-typed.com> <54C01B4D.6010606@well-typed.com> <54C01F5C.8030609@gmail.com> Message-ID: <54C0C15C.9080908@well-typed.com> On 21/01/15 21:51, Simon Marlow wrote: > Adam, do you have any measurements for how much code gets generated for > a record declaration with ORF, compared with a record declaration in GHC > right now? That's one thing that has been a nagging worry for me with > ORF, but I just don't have any idea if it's a real problem or not. Yes, that was something that was a bit unsatisfying about the original implementation, though unfortunately I don't have hard numbers comparing the relative code sizes. But Simon PJ and I have realised that we can be much more efficient: the only things that need to be generated for record declarations are selector functions (as at present) and updater functions (one per field, per type). Everything else (typeclass dfuns, type family axioms) can be made up on-the-fly in the typechecker. So I don't think it will make a practical difference. > Under Nikita's proposal, zero code is generated for a record > "declaration" (since there isn't one), and the lenses are tiny > expressions too. There's some boilerplate in the library, but it's > generated once and for all, and isn't that huge anyway. The > lightweightness of it from a code-size standpoint is very attractive. Agreed. I'm coming to see how much of a virtue it is to be lightweight! I'm a bit worried, however, that if we want newtype T = MkT {| foo :: Int |} x = view [l|foo|] (MkT 42) -- whatever our syntax for [l|...|] is to be well-typed, we need an instance for FieldOwner "foo" Int T to be generated somewhere (perhaps via GND), so I suspect the code generation cost for non-anonymous overloaded records is about the same as with ORF. Nikita's proposal would let you choose whether to pay that cost at declaration time, which has advantages and disadvantages, of course. 
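For a sense of scale, the per-field code in question is roughly the following sketch. 'FieldOwner' here is only a stand-in with made-up methods (the real class in the record package may well differ); the point is just how much has to be produced per field and type, whether the compiler generates it or GND derives it:

{-# LANGUAGE DataKinds, KindSignatures, MultiParamTypeClasses,
             FunctionalDependencies, FlexibleInstances #-}

import Data.Proxy (Proxy (..))
import GHC.TypeLits (Symbol)

-- Stand-in field class, for illustration only.
class FieldOwner (name :: Symbol) a r | name r -> a where
  fieldGet :: Proxy name -> r -> a
  fieldSet :: Proxy name -> a -> r -> r

data T = MkT Int                -- think:  data T = MkT { foo :: Int }

-- Per field, per type: one selector, one updater, one instance.
foo :: T -> Int
foo (MkT n) = n

setFoo :: Int -> T -> T
setFoo n (MkT _) = MkT n

instance FieldOwner "foo" Int T where
  fieldGet _ = foo
  fieldSet _ = setFoo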
Adam -- Adam Gundry, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From adam at well-typed.com Thu Jan 22 09:31:03 2015 From: adam at well-typed.com (Adam Gundry) Date: Thu, 22 Jan 2015 09:31:03 +0000 Subject: GHC support for the new "record" package In-Reply-To: References: <54BECC45.6010906@gmail.com> <54BF7A07.50502@well-typed.com> <54BFD870.7070602@gmail.com> <54BFEA99.9090001@well-typed.com> <54C01B4D.6010606@well-typed.com> Message-ID: <54C0C357.4040209@well-typed.com> On 22/01/15 05:38, Edward Kmett wrote: > On Wed, Jan 21, 2015 at 4:34 PM, Adam Gundry wrote: > > I'm surprised and interested that you view this as a major source of > complexity. From my point of view, I quite liked how the ability to > overload fields as both selector functions and arbitrary lenses turned > out. Compared to some of the hairy GHC internal details relating to name > resolution, it feels really quite straightforward. Also, I've recently > worked out how to simplify and generalise it somewhat (see [1] and [2] > if you're curious). > > > I'm actually reasonably happy with the design we wound up with, but the > need to mangle all the uses of the accessor with a combinator to extract > from the data type is a perpetual tax paid, that by giving in and > picking a lens type and not having to _also_ provide a normal field > accessor we could avoid. That's a fair point, at least provided one is happy to work with the canonical lens type we choose, because all others will require a combinator. ;-) Actually, the simplifications I recently came up with could allow us to make uses of the field work as van Laarhoven lenses, other lenses *and* selector functions. In practice, however, I suspect this might lead to somewhat confusing error messages, so it might not be desirable. Adam > [1] https://github.com/adamgundry/records-prototype/blob/master/NewPrototype.hs > [2] https://github.com/adamgundry/records-prototype/blob/master/CoherentPrototype.hs -- Adam Gundry, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From alan.zimm at gmail.com Thu Jan 22 09:50:29 2015 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Thu, 22 Jan 2015 11:50:29 +0200 Subject: vectorisation code? In-Reply-To: <201501220955.31040.jan.stolarek@p.lodz.pl> References: <0CC751A6-A9A9-47AA-961E-A997F39844A2@cis.upenn.edu> <54C039D0.4020206@apeiron.net> <6AA9EDDD-2444-4ADE-8675-793745BBA374@cse.unsw.edu.au> <201501220955.31040.jan.stolarek@p.lodz.pl> Message-ID: I think there is significant infrastructure in the parser, not sure how that could be managed via a plugin. Alan On Thu, Jan 22, 2015 at 10:55 AM, Jan Stolarek wrote: > Would it be possible to turn vectorisation into a compiler plugin? This > would kill two birds with > one stone: vectorisation would be removed from GHC sources and at the same > time the code could be > maintained by Geoffrey or anyone else who would want to take it up. I'm > not sure what would > happen with DPH in that scenario. > > Janek > > Dnia czwartek, 22 stycznia 2015, Manuel M T Chakravarty napisa?: > > Thanks for the offer, Geoff. > > > > Under these circumstances, I would also very much prefer for Geoff > getting > > the code in order and leaving it in GHC. > > > > Manuel > > > > > Geoffrey Mainland : > > > > > > I'm sorry I'm a bit late to the game here, but there is also the option > > > of reconnecting DPH to the build. > > > > > > When I patched DPH for the new version of the vector library, I did not > > > perform this step---now I'm sorry I didn't. 
> > > > > > I am willing to get DPH in working order again---I believe the required > > > work will be minimal. However, that only makes sense if we 1) re-enable > > > DPH in the nightly builds (and also by default for validate?), and 2) > > > folks will not object too strenuously to having DPH stick around. > > > > > > My fear is that without making it part of the nightly builds, > > > accumulated bitrot will make it extremely difficult to ever > re-integrate > > > DPH. I would hate to see that happen. > > > > > > Geoff > > > > > > On 01/21/2015 04:11 PM, Simon Peyton Jones wrote: > > >> I?ve had a chat to Manuel. He is content for us to remove DPH code > > >> altogether (not just CPP/comment it out), provided we are careful to > > >> signpost what has gone and how to get it back. > > >> > > >> > > >> > > >> I am no Git expert, so can I leave it to you guys to work out what to > > >> do? The specification is: > > >> > > >> ? It should be clear how to revert the change; that is, to > > >> re-introduce the deleted code. I guess that might be ?git revert > > >> ? > > >> > > >> ? If someone trips over more DPH code later, and wants to > > >> remove that too, it should be clear how to add it to the list of > > >> things to be revertred. > > >> > > >> ? We should have a Trac ticket ?Resume work on DPH and > > >> vectorisation? or something like that, which summarises the reversion > > >> process. > > >> > > >> > > >> > > >> Just to be clear, this does not indicate any lack of interest in DPH > > >> on my part. (Quite the reverse.) It?s just that while no one is > > >> actually working on it, we should use our source code control system > > >> to move it out of the way, as others on this thread have persuasively > > >> argued. > > >> > > >> > > >> > > >> Manuel, yell if I got anything wrong. > > >> > > >> > > >> > > >> Thanks! > > >> > > >> > > >> > > >> Simon > > >> > > >> > > >> > > >> > > >> > > >> > > >> > > >> > > >> > > >> > > >> > > >> *From:*ghc-devs [mailto:ghc-devs-bounces at haskell.org] *On Behalf Of > > >> *Carter Schonwald > > >> *Sent:* 21 January 2015 03:32 > > >> *To:* RodLogic > > >> *Cc:* Manuel M T Chakravarty; ghc-devs at haskell.org > > >> *Subject:* Re: vectorisation code? > > >> > > >> > > >> > > >> moving it to its own submodule is just a complicated version of > > >> cutting a branch that has the code Right before deleting it from > master. > > >> > > >> afaik, the amount of love needed is roughly "one or more full time > > >> grad students really owning it", though i could be wrong. > > >> > > >> > > >> > > >> > > >> > > >> On Tue, Jan 20, 2015 at 5:39 AM, RodLogic > >> > wrote: > > >> > > >> (disclaimer: I know nothing about the vectorization code) > > >> > > >> > > >> > > >> Now, is the vectorization code really dead code or it is code that > > >> needs love to come back to life? By removing it from the code > > >> base, you are probably sealing it's fate as dead code as we are > > >> limiting new or existing contributors to act on it (even if it's a > > >> commit hash away). If it is code that needs love to come back to > > >> life, grep noise or conditional compilation is a small price to > > >> pay here, imho. > > >> > > >> > > >> > > >> As a compromise, is it possible to move vectorization code into > > >> it's own submodule in git or is it too intertwined with core GHC? > > >> So that it can be worked on independent of GHC? 
> > >> > > >> > > >> > > >> > > >> > > >> On Tue, Jan 20, 2015 at 6:47 AM, Herbert Valerio Riedel > > >> > wrote: > > >> > > >> On 2015-01-20 at 09:37:25 +0100, Jan Stolarek wrote: > > >>>> Here's an alternate suggestion: in SimplCore, keep the call > > >> > > >> to vectorise > > >> > > >>>> around, but commented out > > >>> > > >>> Yuck. Carter and Brandon are right here - we have git, let > > >> > > >> it do the > > >> > > >>> job. I propose that we remove vectorization code, create a > > >> > > >> Trac ticket > > >> > > >>> about vectorization & DPH needing love and record the commit > > >> > > >> hash in > > >> > > >>> the ticket so that we can revert it easily in the future. > > >> > > >> I'm also against commenting out dead code in the presence of a > > >> VCS. > > >> > > >> Btw, here's two links discussing the issues related to > > >> commenting out if > > >> anyone's interested in knowing more: > > >> > > >> - > > >> > > >> > http://programmers.stackexchange.com/questions/190096/can-commented-out- > > >>code-be-valuable-documentation > > >> > > >> - > > >> > > >> > http://programmers.stackexchange.com/questions/45378/is-commented-out-co > > >>de-really-always-bad > > >> > > >> > > >> Cheers, > > >> hvr > > > > > > _______________________________________________ > > > ghc-devs mailing list > > > ghc-devs at haskell.org > > > http://www.haskell.org/mailman/listinfo/ghc-devs > > > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://www.haskell.org/mailman/listinfo/ghc-devs > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Thu Jan 22 10:41:48 2015 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 22 Jan 2015 10:41:48 +0000 Subject: Proposal for removing transformers dependency In-Reply-To: <877fwfqn9m.fsf@gmail.com> References: <87iog0un42.fsf@gmail.com> <877fwfqn9m.fsf@gmail.com> Message-ID: <618BE556AADD624C9C918AA5D5911BEF562AEDE9@DB3PRD3001MB020.064d.mgd.msft.net> | One thing to keep in mind though is that then 'haskeline' (which is | needed by GHCi) still remains a consumer of 'transformers', so we'd | still have to bundle a 'transformers' package version with GHC even if | `ghc` doesn't depend on it anymore. Somewhat related, the `ghc` -> couldn't we invite Judah to remove the transformers dependency for haskeline? It all seems a bit late for 7.10. Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of | Herbert Valerio Riedel | Sent: 22 January 2015 07:37 | To: Alexander V Vershilov | Cc: ghc-devs at haskell.org | Subject: Re: Proposal for removing transformers dependency | | On 2015-01-21 at 17:19:42 +0100, Alexander V Vershilov wrote: | > I thought about providing package ghc-transformers-instances, that | > will provide instances for transformers's type classes for user. So | > ghc-tf-instances will depend on current ghc, and current | transformers | > that could be provided by user environment. | | So the price is seems to be mostly to tolerate orphan instances (and | potentially having to copy more code from transformers into ghc, if | one wants to use more features in ghc's code). 
| | One would have to make sure to keep that package updated everytime a | new transformers or ghc package version is released, as well as making | sure to always test all supported combinations of ghc/transformers | versions before making a new releases. | | One thing to keep in mind though is that then 'haskeline' (which is | needed by GHCi) still remains a consumer of 'transformers', so we'd | still have to bundle a 'transformers' package version with GHC even if | `ghc` doesn't depend on it anymore. Somewhat related, the `ghc` -> | `Cabal` dependency was broken up in GHC 7.10 but we'll still bundle | `Cabal` with GHC 7.10. | | I'm not sure how much this helps Stackage which I assume would | constraint transformers to a single version, and therefore force a | reinstall of the haskeline version shipped with GHC with a different | version of transformers. | | Fwiw, I welcome decoupling libraries from the GHC distribution as | every exposed library adds to the synchronisation-with-upstream- | overhead when preparing new GHC releases, in addition to adding | implicit version constraints to the package database. | | With a GHC release-management hat on though: As for GHC 7.10.1, it's | rather late... at the very least it needs to get into RC2 (whose cut | off is tomorrow) for that to happen. | | Cheers, | hvr | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | http://www.haskell.org/mailman/listinfo/ghc-devs From simonpj at microsoft.com Thu Jan 22 10:56:58 2015 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 22 Jan 2015 10:56:58 +0000 Subject: [commit: ghc] master: 32-bit performance wibbles (387f1d1) In-Reply-To: <20150122094124.AA5E43A300@ghc.haskell.org> References: <20150122094124.AA5E43A300@ghc.haskell.org> Message-ID: <618BE556AADD624C9C918AA5D5911BEF562AEED0@DB3PRD3001MB020.064d.mgd.msft.net> BOTHER! I have no idea how my commit, which was meant to change two all.T files, also ended up adding libraries/haskell98. I'll revert and re-commit. I also inadvertently did 'git pull' rather than 'git pull --rebase' so there's a merge node too. But I don't know how to undo that, and probably no harm done. 
Simon | -----Original Message----- | From: ghc-commits [mailto:ghc-commits-bounces at haskell.org] On Behalf | Of git at git.haskell.org | Sent: 22 January 2015 09:41 | To: ghc-commits at haskell.org | Subject: [commit: ghc] master: 32-bit performance wibbles (387f1d1) | | Repository : ssh://git at git.haskell.org/ghc | | On branch : master | Link : | http://ghc.haskell.org/trac/ghc/changeset/387f1d1ec334788c3e891e9304d4 | 27bc804998f4/ghc | | >--------------------------------------------------------------- | | commit 387f1d1ec334788c3e891e9304d427bc804998f4 | Author: Simon Peyton Jones | Date: Tue Jan 20 17:31:13 2015 +0000 | | 32-bit performance wibbles | | Less for GHC, more for Haddock | | | >--------------------------------------------------------------- | | 387f1d1ec334788c3e891e9304d427bc804998f4 | libraries/{integer-simple => haskell98}/.gitignore | 0 | libraries/haskell98/Array.hs | 15 + | libraries/haskell98/Bits.hs | 8 + | libraries/haskell98/CError.hs | 8 + | libraries/haskell98/CForeign.hs | 7 + | libraries/haskell98/CPUTime.hs | 9 + | libraries/haskell98/CString.hs | 7 + | libraries/haskell98/CTypes.hs | 7 + | libraries/haskell98/Char.hs | 18 + | libraries/haskell98/Complex.hs | 11 + | libraries/haskell98/Directory.hs | 46 +++ | libraries/haskell98/ForeignPtr.hs | 7 + | libraries/haskell98/IO.hs | 74 ++++ | libraries/haskell98/Int.hs | 7 + | libraries/haskell98/Ix.hs | 10 + | libraries/haskell98/LICENSE | 28 ++ | libraries/haskell98/List.hs | 34 ++ | libraries/haskell98/Locale.hs | 17 + | libraries/haskell98/MarshalAlloc.hs | 7 + | libraries/haskell98/MarshalArray.hs | 7 + | libraries/haskell98/MarshalError.hs | 22 ++ | libraries/haskell98/MarshalUtils.hs | 7 + | libraries/haskell98/Maybe.hs | 16 + | libraries/haskell98/Monad.hs | 19 + | libraries/haskell98/Numeric.hs | 48 +++ | libraries/haskell98/Prelude.hs | 196 ++++++++++ | libraries/haskell98/Ptr.hs | 7 + | libraries/haskell98/Random.hs | 407 | +++++++++++++++++++++ | libraries/haskell98/Ratio.hs | 10 + | libraries/{integer-gmp => haskell98}/Setup.hs | 0 | libraries/haskell98/StablePtr.hs | 7 + | libraries/haskell98/Storable.hs | 7 + | libraries/haskell98/System.hs | 15 + | libraries/haskell98/Time.hs | 22 ++ | libraries/haskell98/Word.hs | 7 + | libraries/haskell98/changelog.md | 15 + | libraries/haskell98/haskell98.cabal | 86 +++++ | libraries/haskell98/prologue.txt | 9 + | testsuite/tests/perf/compiler/all.T | 7 +- | testsuite/tests/perf/haddock/all.T | 13 +- | 40 files changed, 1240 insertions(+), 7 deletions(-) | | Diff suppressed because of size. To see it, use: | | git diff-tree --root --patch-with-stat --no-color --find-copies- | harder --ignore-space-at-eol --cc | 387f1d1ec334788c3e891e9304d427bc804998f4 | _______________________________________________ | ghc-commits mailing list | ghc-commits at haskell.org | http://www.haskell.org/mailman/listinfo/ghc-commits From johan.tibell at gmail.com Thu Jan 22 11:06:06 2015 From: johan.tibell at gmail.com (Johan Tibell) Date: Thu, 22 Jan 2015 12:06:06 +0100 Subject: [commit: ghc] master: 32-bit performance wibbles (387f1d1) In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF562AEED0@DB3PRD3001MB020.064d.mgd.msft.net> References: <20150122094124.AA5E43A300@ghc.haskell.org> <618BE556AADD624C9C918AA5D5911BEF562AEED0@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: Don't worry about the merge commit. On Jan 22, 2015 11:57 AM, "Simon Peyton Jones" wrote: > BOTHER! 
> > I have no idea how my commit, which was meant to change two all.T files, > also ended up adding libraries/haskell98. > > I'll revert and re-commit. > > I also inadvertently did 'git pull' rather than 'git pull --rebase' so > there's a merge node too. But I don't know how to undo that, and probably > no harm done. > > Simon > > | -----Original Message----- > | From: ghc-commits [mailto:ghc-commits-bounces at haskell.org] On Behalf > | Of git at git.haskell.org > | Sent: 22 January 2015 09:41 > | To: ghc-commits at haskell.org > | Subject: [commit: ghc] master: 32-bit performance wibbles (387f1d1) > | > | Repository : ssh://git at git.haskell.org/ghc > | > | On branch : master > | Link : > | http://ghc.haskell.org/trac/ghc/changeset/387f1d1ec334788c3e891e9304d4 > | 27bc804998f4/ghc > | > | >--------------------------------------------------------------- > | > | commit 387f1d1ec334788c3e891e9304d427bc804998f4 > | Author: Simon Peyton Jones > | Date: Tue Jan 20 17:31:13 2015 +0000 > | > | 32-bit performance wibbles > | > | Less for GHC, more for Haddock > | > | > | >--------------------------------------------------------------- > | > | 387f1d1ec334788c3e891e9304d427bc804998f4 > | libraries/{integer-simple => haskell98}/.gitignore | 0 > | libraries/haskell98/Array.hs | 15 + > | libraries/haskell98/Bits.hs | 8 + > | libraries/haskell98/CError.hs | 8 + > | libraries/haskell98/CForeign.hs | 7 + > | libraries/haskell98/CPUTime.hs | 9 + > | libraries/haskell98/CString.hs | 7 + > | libraries/haskell98/CTypes.hs | 7 + > | libraries/haskell98/Char.hs | 18 + > | libraries/haskell98/Complex.hs | 11 + > | libraries/haskell98/Directory.hs | 46 +++ > | libraries/haskell98/ForeignPtr.hs | 7 + > | libraries/haskell98/IO.hs | 74 ++++ > | libraries/haskell98/Int.hs | 7 + > | libraries/haskell98/Ix.hs | 10 + > | libraries/haskell98/LICENSE | 28 ++ > | libraries/haskell98/List.hs | 34 ++ > | libraries/haskell98/Locale.hs | 17 + > | libraries/haskell98/MarshalAlloc.hs | 7 + > | libraries/haskell98/MarshalArray.hs | 7 + > | libraries/haskell98/MarshalError.hs | 22 ++ > | libraries/haskell98/MarshalUtils.hs | 7 + > | libraries/haskell98/Maybe.hs | 16 + > | libraries/haskell98/Monad.hs | 19 + > | libraries/haskell98/Numeric.hs | 48 +++ > | libraries/haskell98/Prelude.hs | 196 ++++++++++ > | libraries/haskell98/Ptr.hs | 7 + > | libraries/haskell98/Random.hs | 407 > | +++++++++++++++++++++ > | libraries/haskell98/Ratio.hs | 10 + > | libraries/{integer-gmp => haskell98}/Setup.hs | 0 > | libraries/haskell98/StablePtr.hs | 7 + > | libraries/haskell98/Storable.hs | 7 + > | libraries/haskell98/System.hs | 15 + > | libraries/haskell98/Time.hs | 22 ++ > | libraries/haskell98/Word.hs | 7 + > | libraries/haskell98/changelog.md | 15 + > | libraries/haskell98/haskell98.cabal | 86 +++++ > | libraries/haskell98/prologue.txt | 9 + > | testsuite/tests/perf/compiler/all.T | 7 +- > | testsuite/tests/perf/haddock/all.T | 13 +- > | 40 files changed, 1240 insertions(+), 7 deletions(-) > | > | Diff suppressed because of size. 
To see it, use: > | > | git diff-tree --root --patch-with-stat --no-color --find-copies- > | harder --ignore-space-at-eol --cc > | 387f1d1ec334788c3e891e9304d427bc804998f4 > | _______________________________________________ > | ghc-commits mailing list > | ghc-commits at haskell.org > | http://www.haskell.org/mailman/listinfo/ghc-commits > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Thu Jan 22 11:36:08 2015 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 22 Jan 2015 11:36:08 +0000 Subject: vectorisation code? In-Reply-To: <6AA9EDDD-2444-4ADE-8675-793745BBA374@cse.unsw.edu.au> References: <0CC751A6-A9A9-47AA-961E-A997F39844A2@cis.upenn.edu> <618BE556AADD624C9C918AA5D5911BEF562AAB21@DB3PRD3001MB020.064d.mgd.msft.net> <201501200937.25031.jan.stolarek@p.lodz.pl> <8761c1ltym.fsf@gmail.com> <618BE556AADD624C9C918AA5D5911BEF562AE247@DB3PRD3001MB020.064d.mgd.msft.net> <54C039D0.4020206@apeiron.net> <6AA9EDDD-2444-4ADE-8675-793745BBA374@cse.unsw.edu.au> Message-ID: <618BE556AADD624C9C918AA5D5911BEF562AF251@DB3PRD3001MB020.064d.mgd.msft.net> The issue that Richard Eisenberg raised is not DPH doesn't compile (which Geoff might fix) but rather no one is working on DPH, but having it all in the tree imposes a small cost on a large number of people (build/validate cycle time, grep hits, etc) So re-adding the DPH library would worsen the perceived problem, rather than make it better. | > My fear is that without making it part of the nightly builds, | > accumulated bitrot will make it extremely difficult to ever | > re-integrate DPH. I would hate to see that happen. This was the reason we originally put the DPH libraries in the build. Before we did so, changes to GHC often broke DPH, sometimes for superficial reasons, but sometimes because the DPH libraries really push the envelope of what GHC can do. But when literally no one is working on DPH, it's harder to justify imposing (small) costs on everyone without giving immediate benefits to anyone. That's the point that is being made here. I don?t have strong feelings myself. Simon | -----Original Message----- | From: Manuel M T Chakravarty [mailto:chak at cse.unsw.edu.au] | Sent: 22 January 2015 04:08 | To: Mainland Geoffrey | Cc: Simon Peyton Jones; ghc-devs at haskell.org | Subject: Re: vectorisation code? | | Thanks for the offer, Geoff. | | Under these circumstances, I would also very much prefer for Geoff | getting the code in order and leaving it in GHC. | | Manuel | | > Geoffrey Mainland : | > | > I'm sorry I'm a bit late to the game here, but there is also the | > option of reconnecting DPH to the build. | > | > When I patched DPH for the new version of the vector library, I did | > not perform this step---now I'm sorry I didn't. | > | > I am willing to get DPH in working order again---I believe the | > required work will be minimal. However, that only makes sense if we | 1) | > re-enable DPH in the nightly builds (and also by default for | > validate?), and 2) folks will not object too strenuously to having | DPH stick around. | > | > My fear is that without making it part of the nightly builds, | > accumulated bitrot will make it extremely difficult to ever | > re-integrate DPH. I would hate to see that happen. | > | > Geoff | > | > On 01/21/2015 04:11 PM, Simon Peyton Jones wrote: | >> | >> I?ve had a chat to Manuel. 
He is content for us to remove DPH code | >> altogether (not just CPP/comment it out), provided we are careful | to | >> signpost what has gone and how to get it back. | >> | >> | >> | >> I am no Git expert, so can I leave it to you guys to work out what | to | >> do? The specification is: | >> | >> ? It should be clear how to revert the change; that is, to | >> re-introduce the deleted code. I guess that might be ?git revert | >> ? | >> | >> ? If someone trips over more DPH code later, and wants to | >> remove that too, it should be clear how to add it to the list of | >> things to be revertred. | >> | >> ? We should have a Trac ticket ?Resume work on DPH and | >> vectorisation? or something like that, which summarises the | reversion | >> process. | >> | >> | >> | >> Just to be clear, this does not indicate any lack of interest in | DPH | >> on my part. (Quite the reverse.) It?s just that while no one is | >> actually working on it, we should use our source code control | system | >> to move it out of the way, as others on this thread have | persuasively | >> argued. | >> | >> | >> | >> Manuel, yell if I got anything wrong. | >> | >> | >> | >> Thanks! | >> | >> | >> | >> Simon | >> | >> | >> | >> | >> | >> | >> | >> | >> | >> | >> | >> *From:*ghc-devs [mailto:ghc-devs-bounces at haskell.org] *On Behalf Of | >> *Carter Schonwald | >> *Sent:* 21 January 2015 03:32 | >> *To:* RodLogic | >> *Cc:* Manuel M T Chakravarty; ghc-devs at haskell.org | >> *Subject:* Re: vectorisation code? | >> | >> | >> | >> moving it to its own submodule is just a complicated version of | >> cutting a branch that has the code Right before deleting it from | master. | >> | >> afaik, the amount of love needed is roughly "one or more full time | >> grad students really owning it", though i could be wrong. | >> | >> | >> | >> | >> | >> On Tue, Jan 20, 2015 at 5:39 AM, RodLogic > > wrote: | >> | >> (disclaimer: I know nothing about the vectorization code) | >> | >> | >> | >> Now, is the vectorization code really dead code or it is code | that | >> needs love to come back to life? By removing it from the code | >> base, you are probably sealing it's fate as dead code as we are | >> limiting new or existing contributors to act on it (even if it's | a | >> commit hash away). If it is code that needs love to come back to | >> life, grep noise or conditional compilation is a small price to | >> pay here, imho. | >> | >> | >> | >> As a compromise, is it possible to move vectorization code into | >> it's own submodule in git or is it too intertwined with core | GHC? | >> So that it can be worked on independent of GHC? | >> | >> | >> | >> | >> | >> On Tue, Jan 20, 2015 at 6:47 AM, Herbert Valerio Riedel | >> > wrote: | >> | >> On 2015-01-20 at 09:37:25 +0100, Jan Stolarek wrote: | >>>> Here's an alternate suggestion: in SimplCore, keep the call | >> to vectorise | >>>> around, but commented out | >> | >>> Yuck. Carter and Brandon are right here - we have git, let | >> it do the | >>> job. I propose that we remove vectorization code, create a | >> Trac ticket | >>> about vectorization & DPH needing love and record the commit | >> hash in | >>> the ticket so that we can revert it easily in the future. | >> | >> I'm also against commenting out dead code in the presence of | a | >> VCS. 
| >> | >> Btw, here's two links discussing the issues related to | >> commenting out if | >> anyone's interested in knowing more: | >> | >> - | >> | >> http://programmers.stackexchange.com/questions/190096/can- | commented-o | >> ut-code-be-valuable-documentation | >> | >> - | >> | >> http://programmers.stackexchange.com/questions/45378/is-commented- | out | >> -code-really-always-bad | >> | >> | >> Cheers, | >> hvr | >> | >> | > | > | > _______________________________________________ | > ghc-devs mailing list | > ghc-devs at haskell.org | > http://www.haskell.org/mailman/listinfo/ghc-devs From m at tweag.io Thu Jan 22 11:40:05 2015 From: m at tweag.io (Boespflug, Mathieu) Date: Thu, 22 Jan 2015 12:40:05 +0100 Subject: Proposal for removing transformers dependency In-Reply-To: <1421915653.1910.2.camel@joachim-breitner.de> References: <87iog0un42.fsf@gmail.com> <877fwfqn9m.fsf@gmail.com> <1421915653.1910.2.camel@joachim-breitner.de> Message-ID: On 22 January 2015 at 09:34, Joachim Breitner wrote: > Hi, > > Am Donnerstag, den 22.01.2015, 08:37 +0100 schrieb Herbert Valerio > Riedel: >> One thing to keep in mind though is that then 'haskeline' (which is >> needed by GHCi) still remains a consumer of 'transformers', so we'd >> still have to bundle a 'transformers' package version with GHC even if >> `ghc` doesn't depend on it anymore. Somewhat related, the `ghc` -> >> `Cabal` dependency was broken up in GHC 7.10 but we'll still bundle >> `Cabal` with GHC 7.10. > > although there has been talk about only shipping the .so file with a > clash-free name or path... was this not done simply because noone > bothered enough to actually do it, or was there a bigger problem? There was an email thread about this very problem started by Michael Snoyman last month. As noted later in that thread, https://www.haskell.org/pipermail/ghc-devs/2014-December/007826.html the packages mentioned above and others were added to the ghc-pkg database in order to address https://ghc.haskell.org/trac/ghc/ticket/8919, hence making the problem worse. It sounds to me like Alexander's patch, plus the solution alluded by Joachim above for "invisible" packages that don't clash with ones registered in the ghc-pkg db, would allow us to avoid having any of the following packages leaking into all sandboxes for all users of GHC 7.10 and following: * haskeline * transformers * xhtml * terminfo Perhaps others. That would be a big win. Here's a more ambitious change: if we could make installation of the ghc library itself optional, then we could avoid forcing the installation of a whole swathe of other packages into the global db, including transformers. Best, Mathieu From simonpj at microsoft.com Thu Jan 22 12:09:47 2015 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 22 Jan 2015 12:09:47 +0000 Subject: Strange failures in directory/ Message-ID: <618BE556AADD624C9C918AA5D5911BEF562AF407@DB3PRD3001MB020.064d.mgd.msft.net> I'm getting these validate failures from "sh validate -fast": Unexpected failures: ../../libraries/directory/tests getHomeDirectory001 [exit code non-0] (normal) .. and about 20 more similar complaints ... The failure report is this: Compile failed (status 256) errors were: [1 of 1] Compiling Main ( getHomeDirectory001.hs, getHomeDirectory001.o ) *** unexpected failure for getHomeDirectory001(normal) BUT if I "cd libraries/directory/tests", and say "make TEST=getDirContents001", I get no failures. Here is a clue. 
If I "cd testsuite/tests" and say "make TEST=getDirContents001", I get this: =====> getHomeDirectory001(normal) 4288 of 4405 [0, 0, 13] cd ../../libraries/directory/tests && '/5playpen/simonpj/HEAD-2/inplace/bin/ghc-stage2' -fforce-recomp -dcore-lint -dcmm-lint -dno-debug-output -no-user-package-db -rtsopts -fno-warn-tabs -fno-ghci-history -o getHomeDirectory001 getHomeDirectory001.hs >getHomeDirectory001.comp.stderr 2>&1 cd ../../libraries/directory/tests && ./getHomeDirectory001 getHomeDirectory001.run.stdout 2>getHomeDirectory001.run.stderr =====> getHomeDirectory001(normal) 4301 of 4405 [0, 0, 13] cd ../../libraries/directory/tests && '/5playpen/simonpj/HEAD-2/inplace/bin/ghc-stage2' -fforce-recomp -dcore-lint -dcmm-lint -dno-debug-output -no-user-package-db -rtsopts -fno-warn-tabs -fno-ghci-history -o getHomeDirectory001 getHomeDirectory001.hs >getHomeDirectory001.comp.stderr 2>&1 cd ../../libraries/directory/tests && ./getHomeDirectory001 getHomeDirectory001.run.stdout 2>getHomeDirectory001.run.stderr Note that it gets compiled TWICE. This is likely to cause problems when multi-threading. Anyone have any ideas? Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexander.vershilov at gmail.com Thu Jan 22 12:43:16 2015 From: alexander.vershilov at gmail.com (Alexander V Vershilov) Date: Thu, 22 Jan 2015 16:43:16 +0400 Subject: Proposal for removing transformers dependency In-Reply-To: References: Message-ID: As a result of this thread I have created Phab Diff [1]. It contains required fixes in compiler and ghc-transformers-instance package. I have tested result on 2 following usecases: 1. I have installed in a sandox transformers-0.3 and ghc-heap-view that uses them and ghc-lib. (Doesn't require ghc-tf-instances) 2. I have installed in a sandbox ghc-tf-instaces and ghc-mtl that uses ghc-lib and ghc-tf-instances. Some fixes were required but they all were pretty straighforward to implement. [1] https://phabricator.haskell.org/D626 On 21 January 2015 at 18:51, Alexander V Vershilov wrote: > Hello. > > I'm coming with a proposal for removing transformers dependency > from ghc library. The reason for this proposal that it's not possible > to build consistent environment where a modern libraries (that depend > on a newer transformers or mtl-2.2) and libraries that use ghc API > are used together. And often people are tend to use version that is > bundled with ghc, even if newer are available. > > As transformers usage are quite limited in ghc, and it's really relevant > in ghc-bin, it's possible to duplicate the code, and provide required > fixes in ghc-bin.cabal. As a result ghc uses it's own MonadIO, MonadControl > and Strict State, and ghc-bin.cabal (ghc/*) uses ones from transformers. > > I have prepared a proof of concept [1], however it doesn't look very clean > and it's quite possible that will require some generalization, for example > introduction of the ghc-transformers-instances package that will have > all required instances. > > Should I continue doing this? Are there any things to consider and fix? > > [1] https://github.com/qnikst/ghc/compare/wip/remove-tf > -- > Alexander -- Alexander From mainland at apeiron.net Thu Jan 22 13:59:51 2015 From: mainland at apeiron.net (Geoffrey Mainland) Date: Thu, 22 Jan 2015 08:59:51 -0500 Subject: vectorisation code? 
In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF562AF251@DB3PRD3001MB020.064d.mgd.msft.net> References: <0CC751A6-A9A9-47AA-961E-A997F39844A2@cis.upenn.edu> <618BE556AADD624C9C918AA5D5911BEF562AAB21@DB3PRD3001MB020.064d.mgd.msft.net> <201501200937.25031.jan.stolarek@p.lodz.pl> <8761c1ltym.fsf@gmail.com> <618BE556AADD624C9C918AA5D5911BEF562AE247@DB3PRD3001MB020.064d.mgd.msft.net> <54C039D0.4020206@apeiron.net> <6AA9EDDD-2444-4ADE-8675-793745BBA374@cse.unsw.edu.au> <618BE556AADD624C9C918AA5D5911BEF562AF251@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <54C10257.5030907@apeiron.net> The current situation is that DPH is not being built or maintained at all. Given this state of affairs, it is hard to justify keeping it around---DPH is just bitrotting. I am proposing that we reconnect it to the build and keep it building, putting it in minimal maintenance mode. This will at least allow someone to pick it up again in the future without having to first re-integrate it into the then-current GHC. I recognize this imposes a larger ongoing burden than either just leaving it in the tree or purging it completely. Whether or not that burden is justified, I'm not sure. If we purge DPH, I am afraid it is likely we will have lost DPH forever. That would indeed be a loss. Geoff On 1/22/15 6:36 AM, Simon Peyton Jones wrote: > The issue that Richard Eisenberg raised is not > > DPH doesn't compile (which Geoff might fix) > > but rather > > no one is working on DPH, but having it all in the tree > imposes a small cost on a large number of people > (build/validate cycle time, grep hits, etc) > > So re-adding the DPH library would worsen the perceived problem, rather than make it better. > > | > My fear is that without making it part of the nightly builds, > | > accumulated bitrot will make it extremely difficult to ever > | > re-integrate DPH. I would hate to see that happen. > > This was the reason we originally put the DPH libraries in the build. Before we did so, changes to GHC often broke DPH, sometimes for superficial reasons, but sometimes because the DPH libraries really push the envelope of what GHC can do. > > But when literally no one is working on DPH, it's harder to justify imposing (small) costs on everyone without giving immediate benefits to anyone. That's the point that is being made here. I don?t have strong feelings myself. > > Simon > > > | -----Original Message----- > | From: Manuel M T Chakravarty [mailto:chak at cse.unsw.edu.au] > | Sent: 22 January 2015 04:08 > | To: Mainland Geoffrey > | Cc: Simon Peyton Jones; ghc-devs at haskell.org > | Subject: Re: vectorisation code? > | > | Thanks for the offer, Geoff. > | > | Under these circumstances, I would also very much prefer for Geoff > | getting the code in order and leaving it in GHC. > | > | Manuel > | > | > Geoffrey Mainland : > | > > | > I'm sorry I'm a bit late to the game here, but there is also the > | > option of reconnecting DPH to the build. > | > > | > When I patched DPH for the new version of the vector library, I did > | > not perform this step---now I'm sorry I didn't. > | > > | > I am willing to get DPH in working order again---I believe the > | > required work will be minimal. However, that only makes sense if we > | 1) > | > re-enable DPH in the nightly builds (and also by default for > | > validate?), and 2) folks will not object too strenuously to having > | DPH stick around. 
> | > > | > My fear is that without making it part of the nightly builds, > | > accumulated bitrot will make it extremely difficult to ever > | > re-integrate DPH. I would hate to see that happen. > | > > | > Geoff > | > > | > On 01/21/2015 04:11 PM, Simon Peyton Jones wrote: > | >> > | >> I?ve had a chat to Manuel. He is content for us to remove DPH code > | >> altogether (not just CPP/comment it out), provided we are careful > | to > | >> signpost what has gone and how to get it back. > | >> > | >> > | >> > | >> I am no Git expert, so can I leave it to you guys to work out what > | to > | >> do? The specification is: > | >> > | >> ? It should be clear how to revert the change; that is, to > | >> re-introduce the deleted code. I guess that might be ?git revert > | >> ? > | >> > | >> ? If someone trips over more DPH code later, and wants to > | >> remove that too, it should be clear how to add it to the list of > | >> things to be revertred. > | >> > | >> ? We should have a Trac ticket ?Resume work on DPH and > | >> vectorisation? or something like that, which summarises the > | reversion > | >> process. > | >> > | >> > | >> > | >> Just to be clear, this does not indicate any lack of interest in > | DPH > | >> on my part. (Quite the reverse.) It?s just that while no one is > | >> actually working on it, we should use our source code control > | system > | >> to move it out of the way, as others on this thread have > | persuasively > | >> argued. > | >> > | >> > | >> > | >> Manuel, yell if I got anything wrong. > | >> > | >> > | >> > | >> Thanks! > | >> > | >> > | >> > | >> Simon > | >> > | >> > | >> > | >> > | >> > | >> > | >> > | >> > | >> > | >> > | >> > | >> *From:*ghc-devs [mailto:ghc-devs-bounces at haskell.org] *On Behalf Of > | >> *Carter Schonwald > | >> *Sent:* 21 January 2015 03:32 > | >> *To:* RodLogic > | >> *Cc:* Manuel M T Chakravarty; ghc-devs at haskell.org > | >> *Subject:* Re: vectorisation code? > | >> > | >> > | >> > | >> moving it to its own submodule is just a complicated version of > | >> cutting a branch that has the code Right before deleting it from > | master. > | >> > | >> afaik, the amount of love needed is roughly "one or more full time > | >> grad students really owning it", though i could be wrong. > | >> > | >> > | >> > | >> > | >> > | >> On Tue, Jan 20, 2015 at 5:39 AM, RodLogic | >> > wrote: > | >> > | >> (disclaimer: I know nothing about the vectorization code) > | >> > | >> > | >> > | >> Now, is the vectorization code really dead code or it is code > | that > | >> needs love to come back to life? By removing it from the code > | >> base, you are probably sealing it's fate as dead code as we are > | >> limiting new or existing contributors to act on it (even if it's > | a > | >> commit hash away). If it is code that needs love to come back to > | >> life, grep noise or conditional compilation is a small price to > | >> pay here, imho. > | >> > | >> > | >> > | >> As a compromise, is it possible to move vectorization code into > | >> it's own submodule in git or is it too intertwined with core > | GHC? > | >> So that it can be worked on independent of GHC? > | >> > | >> > | >> > | >> > | >> > | >> On Tue, Jan 20, 2015 at 6:47 AM, Herbert Valerio Riedel > | >> > wrote: > | >> > | >> On 2015-01-20 at 09:37:25 +0100, Jan Stolarek wrote: > | >>>> Here's an alternate suggestion: in SimplCore, keep the call > | >> to vectorise > | >>>> around, but commented out > | >> > | >>> Yuck. 
Carter and Brandon are right here - we have git, let > | >> it do the > | >>> job. I propose that we remove vectorization code, create a > | >> Trac ticket > | >>> about vectorization & DPH needing love and record the commit > | >> hash in > | >>> the ticket so that we can revert it easily in the future. > | >> > | >> I'm also against commenting out dead code in the presence of > | a > | >> VCS. > | >> > | >> Btw, here's two links discussing the issues related to > | >> commenting out if > | >> anyone's interested in knowing more: > | >> > | >> - > | >> > | >> http://programmers.stackexchange.com/questions/190096/can- > | commented-o > | >> ut-code-be-valuable-documentation > | >> > | >> - > | >> > | >> http://programmers.stackexchange.com/questions/45378/is-commented- > | out > | >> -code-really-always-bad > | >> > | >> > | >> Cheers, > | >> hvr > | >> > | >> > | > > | > > | > _______________________________________________ > | > ghc-devs mailing list > | > ghc-devs at haskell.org > | > http://www.haskell.org/mailman/listinfo/ghc-devs > From hvriedel at gmail.com Thu Jan 22 15:50:35 2015 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Thu, 22 Jan 2015 16:50:35 +0100 Subject: vectorisation code? In-Reply-To: <54C10257.5030907@apeiron.net> (Geoffrey Mainland's message of "Thu, 22 Jan 2015 08:59:51 -0500") References: <0CC751A6-A9A9-47AA-961E-A997F39844A2@cis.upenn.edu> <618BE556AADD624C9C918AA5D5911BEF562AAB21@DB3PRD3001MB020.064d.mgd.msft.net> <201501200937.25031.jan.stolarek@p.lodz.pl> <8761c1ltym.fsf@gmail.com> <618BE556AADD624C9C918AA5D5911BEF562AE247@DB3PRD3001MB020.064d.mgd.msft.net> <54C039D0.4020206@apeiron.net> <6AA9EDDD-2444-4ADE-8675-793745BBA374@cse.unsw.edu.au> <618BE556AADD624C9C918AA5D5911BEF562AF251@DB3PRD3001MB020.064d.mgd.msft.net> <54C10257.5030907@apeiron.net> Message-ID: <87twziq0fo.fsf@gmail.com> On 2015-01-22 at 14:59:51 +0100, Geoffrey Mainland wrote: > The current situation is that DPH is not being built or maintained at > all. Given this state of affairs, it is hard to justify keeping it > around---DPH is just bitrotting. > > I am proposing that we reconnect it to the build and keep it building, > putting it in minimal maintenance mode. Ok, but how do we avoid issues like http://thread.gmane.org/gmane.comp.lang.haskell.ghc.devel/5645/ in the future then? DPH became painful back then, because we didn't know what to do with 'vector' (which as a package at the time also suffered from neglect of maintainership) Cheers, hvr From mainland at apeiron.net Thu Jan 22 16:02:46 2015 From: mainland at apeiron.net (Geoffrey Mainland) Date: Thu, 22 Jan 2015 11:02:46 -0500 Subject: vectorisation code? In-Reply-To: <87twziq0fo.fsf@gmail.com> References: <0CC751A6-A9A9-47AA-961E-A997F39844A2@cis.upenn.edu> <618BE556AADD624C9C918AA5D5911BEF562AAB21@DB3PRD3001MB020.064d.mgd.msft.net> <201501200937.25031.jan.stolarek@p.lodz.pl> <8761c1ltym.fsf@gmail.com> <618BE556AADD624C9C918AA5D5911BEF562AE247@DB3PRD3001MB020.064d.mgd.msft.net> <54C039D0.4020206@apeiron.net> <6AA9EDDD-2444-4ADE-8675-793745BBA374@cse.unsw.edu.au> <618BE556AADD624C9C918AA5D5911BEF562AF251@DB3PRD3001MB020.064d.mgd.msft.net> <54C10257.5030907@apeiron.net> <87twziq0fo.fsf@gmail.com> Message-ID: <54C11F26.9080201@apeiron.net> On 01/22/2015 10:50 AM, Herbert Valerio Riedel wrote: > On 2015-01-22 at 14:59:51 +0100, Geoffrey Mainland wrote: >> The current situation is that DPH is not being built or maintained at >> all. 
Given this state of affairs, it is hard to justify keeping it >> around---DPH is just bitrotting. >> >> I am proposing that we reconnect it to the build and keep it building, >> putting it in minimal maintenance mode. > Ok, but how do we avoid issues like > > http://thread.gmane.org/gmane.comp.lang.haskell.ghc.devel/5645/ > > in the future then? DPH became painful back then, because we didn't know > what to do with 'vector' (which as a package at the time also suffered > from neglect of maintainership) > > > Cheers, > hvr > That's part of "minimal maintenance mode." Yes, keeping DPH will impose some burden. I am not pretending that keeping DPH imposes no cost, but instead asking what cost we are willing to pay to keep DPH working---maybe the answer is "none." As for the particular issue you mentioned, I patched DPH to fix compatibility with the new vector. Those changes have been in the tree for some time, but DPH was never reconnected to the build, so it has bitrotted again. Note that vector *also* no longer builds with the other libraries in the tree, so if we excise DPH, we should excise vector. I am willing to put some effort into fixing these sorts of problems when they come up. That may still impose too much burden on the other developers. Geoff From hvriedel at gmail.com Thu Jan 22 16:29:24 2015 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Thu, 22 Jan 2015 17:29:24 +0100 Subject: Proposal for removing transformers dependency In-Reply-To: (Mathieu Boespflug's message of "Thu, 22 Jan 2015 12:40:05 +0100") References: <87iog0un42.fsf@gmail.com> <877fwfqn9m.fsf@gmail.com> <1421915653.1910.2.camel@joachim-breitner.de> Message-ID: <87ppa6pymz.fsf@gmail.com> On 2015-01-22 at 12:40:05 +0100, Boespflug, Mathieu wrote: [...] > It sounds to me like Alexander's patch, plus the > solution alluded by Joachim above for "invisible" packages that don't > clash with ones registered in the ghc-pkg db, would allow us to avoid > having any of the following packages leaking into all sandboxes for > all users of GHC 7.10 and following: > > * haskeline > * transformers > * xhtml > * terminfo > > Perhaps others. That would be a big win. Btw, there's an alternative hack that could be used. Since the remaining use of haskeline/terminfo/transformers would then be mostly in GHCi (if haven't missed anything), which is luckily not a library, we don't actually need those packages to be turned into DSOs. We could do something similiar to what `haddock` inside the GHC tree does w/ e.g. its attoparsec dependency: just embed those 4 packages into the ghci executable as if they weren't separate packages. Cheers, hvr From djsamperi at gmail.com Thu Jan 22 18:14:21 2015 From: djsamperi at gmail.com (Dominick Samperi) Date: Thu, 22 Jan 2015 13:14:21 -0500 Subject: Build failure under Fedora 21 In-Reply-To: References: <54B24AA9.3060503@ro-che.info> <54B38702.9030505@ro-che.info> Message-ID: The best way to move beyond the somewhat dated version of the Haskell Platform that is distributed with Fedora 21 is to install the version available at http://www.haskell.org/platform instead of the version installed by yum. This fixes all of the shared library issues that arise when installing pandoc. Thanks to John MacFarlane (pandoc author) for this tip. On Sat, Jan 17, 2015 at 7:15 AM, Peter Trommler wrote: > Dominick Samperi wrote: > >> It turns out that the undefined reference to libHSprimitive-0.5.4.0.so >> when installing pandoc is not related to the use of CentOS or Debian >> binaries. 
I get the same undefined reference when I try to use >> ghc-7.8.4 compiled from source under Fedora 21. Here is the output of >> 'locate libHSprimitive': >> >> > /home/dsamperi/.cabal/lib/primitive-0.5.4.0/ghc-7.8.4/libHSprimitive-0.5.4.0.a >> /usr/lib64/ghc-7.6.3/primitive-0.5.0.1/libHSprimitive-0.5.0.1-ghc7.6.3.so >> /usr/lib64/ghc-7.6.3/primitive-0.5.0.1/libHSprimitive-0.5.0.1.a >> /usr/lib64/ghc-7.6.3/primitive-0.5.0.1/libHSprimitive-0.5.0.1_p.a >> >> >> So there is a .a library, but no .so (shared) lib in the build from >> source. Can someone explain how to get the build process to create all >> necessary shared libs? > Try adding `--enable-shared` to your cabal command to build the shared > object libraries. You might have to tell cabal to reinstall your existing > primitive-0.5.4.0 or blow away your ~/.cabal and ~/.ghc directories. > > HTH > Peter > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs From ekmett at gmail.com Thu Jan 22 21:07:38 2015 From: ekmett at gmail.com (Edward Kmett) Date: Thu, 22 Jan 2015 16:07:38 -0500 Subject: GHC support for the new "record" package In-Reply-To: <54C0C357.4040209@well-typed.com> References: <54BECC45.6010906@gmail.com> <54BF7A07.50502@well-typed.com> <54BFD870.7070602@gmail.com> <54BFEA99.9090001@well-typed.com> <54C01B4D.6010606@well-typed.com> <54C0C357.4040209@well-typed.com> Message-ID: On Thu, Jan 22, 2015 at 4:31 AM, Adam Gundry wrote: > Actually, the simplifications I recently came up with could allow us to > make uses of the field work as van Laarhoven lenses, other lenses *and* > selector functions. In practice, however, I suspect this might lead to > somewhat confusing error messages, so it might not be desirable. Interesting. Have you actually tried this with a composition of your simplified form, because I don't see how that can work. When we tried this before we showed that there was a fundamental limitation in the way the functional dependencies had to flow information down the chain, also, "foo.bar.baz" has very different interpretations, between the lens and normal accessors, and both are producing functions, so its hard to see how this doesn't yield overlapping instance hell. -Edward -------------- next part -------------- An HTML attachment was scrubbed... URL: From johan.tibell at gmail.com Fri Jan 23 04:12:29 2015 From: johan.tibell at gmail.com (Johan Tibell) Date: Fri, 23 Jan 2015 05:12:29 +0100 Subject: GHC support for the new "record" package In-Reply-To: <54BFD870.7070602@gmail.com> References: <54BECC45.6010906@gmail.com> <54BF7A07.50502@well-typed.com> <54BFD870.7070602@gmail.com> Message-ID: On Wed, Jan 21, 2015 at 5:48 PM, Simon Marlow wrote: > On 21/01/2015 16:01, Johan Tibell wrote: > >> My thoughts mostly mirror those of Adam and Edward. >> >> 1) I want something that is backwards compatible. >> > > Backwards compatible in what sense? Extension flags provide backwards > compatibility, because you just don't turn on the extension until you want > to use it. That's how all the other extensions work; most of them change > syntax in some way or other that breaks existing code. In this case in the sense of avoiding splitting code into a new-Haskell vs old-Haskell. This means that existing records should work well (and ideally also get the improved name resolution when used in call sites that have the pragma enabled) in the new record system. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From joehillen at gmail.com Fri Jan 23 04:56:10 2015 From: joehillen at gmail.com (Joe Hillenbrand) Date: Thu, 22 Jan 2015 20:56:10 -0800 Subject: GHC support for the new "record" package In-Reply-To: References: <54BECC45.6010906@gmail.com> <54BF7A07.50502@well-typed.com> <54BFD870.7070602@gmail.com> Message-ID: On Jan 22, 2015 8:12 PM, "Johan Tibell" wrote: > > > > On Wed, Jan 21, 2015 at 5:48 PM, Simon Marlow wrote: >> >> On 21/01/2015 16:01, Johan Tibell wrote: >>> >>> My thoughts mostly mirror those of Adam and Edward. >>> >>> 1) I want something that is backwards compatible. >> >> >> Backwards compatible in what sense? Extension flags provide backwards compatibility, because you just don't turn on the extension until you want to use it. That's how all the other extensions work; most of them change syntax in some way or other that breaks existing code. > > > In this case in the sense of avoiding splitting code into a new-Haskell vs old-Haskell. This means that existing records should work well (and ideally also get the improved name resolution when used in call sites that have the pragma enabled) in the new record system. > Sorry to chime in since I am not an expert or ghc contributor, but I can't see how the new record system would break any existing valid Haskell code even if it was added wholesale without a language extension (and without special {|...|} syntax). I can see how expected behavior and error messages would change, but not any existing records or accessors. Would anyone mind explaining what would break? Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From karel.gardas at centrum.cz Fri Jan 23 06:38:16 2015 From: karel.gardas at centrum.cz (Karel Gardas) Date: Fri, 23 Jan 2015 07:38:16 +0100 Subject: GHC HEAD failure [was: Re: [commit: ghc] master: Update Haddock submodule (34d68d8)] In-Reply-To: <20150123001236.46BA83A300@ghc.haskell.org> References: <20150123001236.46BA83A300@ghc.haskell.org> Message-ID: <54C1EC58.8060803@centrum.cz> Hello, last night build on i386-freebsd, i386/amd64-solaris and smartos-x86 failed with problem on configuring haddock: "inplace/bin/ghc-cabal" check utils/haddock 'ghc-options: -O2' is rarely needed. Check that it is giving a real benefit and not just imposing longer compile times on your users. "inplace/bin/ghc-cabal" configure utils/haddock dist "" --with-ghc="/buildbot/gabor-ghc-head-builder/builder/tempbuild/build/inplace/bin/ghc-stage1" --with-ghc-pkg="/buildbot/gabor-ghc-head-builder/builder/tempbuild/build/inplace/bin/ghc-pkg" --flag in-ghc-tree --disable-library-for-ghci --disable-library-vanilla --disable-library-profiling --disable-shared --configure-option=CFLAGS=" -U__i686 -fno-stack-protector " --configure-option=LDFLAGS=" " --configure-option=CPPFLAGS=" " --gcc-options=" -U__i686 -fno-stack-protector " --configure-option=--with-gmp-includes="/usr/include/gmp" --configure-option=--with-gmp-libraries="/usr/lib" --with-gcc="/usr/bin/gcc" --with-ld="/usr/bin/ld" --configure-option=--with-cc="/usr/bin/gcc" --with-ar="/usr/xpg4/bin/ar" --with-alex="/buildbot//bin/alex" --with-happy="/buildbot//bin/happy" Configuring haddock-2.16.0... ghc-cabal: At least the following dependencies are missing: ghc >=7.9 && <7.11 gmake[1]: *** [utils/haddock/dist/package-data.mk] Error 1 gmake: *** [all] Error 2 At least i386-solaris is using ghc 7.8.2 as a bootsrap ghc and amd64-solaris is using ghc 7.10.0.20141222 as a bootstrap ghc. This is IIRC 7.10 rc1. 
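(One quick, untested way to check whether the submodule bump itself changed haddock's ghc bounds would be to compare the two submodule pointers from the commit quoted below, e.g.

    cd utils/haddock
    git log --oneline d61bbc7..bf77580 -- haddock.cabal

-- just a suggestion for narrowing it down.)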
I've thought the general policy is to keep GHC buildable by two last release generation in the past and if this policy still applies then haddock is probably a culprit here. Perhaps it was done by the commit below which is the reason I address you directly Mateusz. If not, and the failure is caused by something else, then please excuse me for this email. Thanks! Karel On 01/23/15 01:12 AM, git at git.haskell.org wrote: > Repository : ssh://git at git.haskell.org/ghc > > On branch : master > Link : http://ghc.haskell.org/trac/ghc/changeset/34d68d8e83676c5010e9bc5d4619f24879f222af/ghc > >> --------------------------------------------------------------- > > commit 34d68d8e83676c5010e9bc5d4619f24879f222af > Author: Mateusz Kowalczyk > Date: Fri Jan 23 00:14:00 2015 +0000 > > Update Haddock submodule > > >> --------------------------------------------------------------- > > 34d68d8e83676c5010e9bc5d4619f24879f222af > utils/haddock | 2 +- > 1 file changed, 1 insertion(+), 1 deletion(-) > > diff --git a/utils/haddock b/utils/haddock > index d61bbc7..bf77580 160000 > --- a/utils/haddock > +++ b/utils/haddock > @@ -1 +1 @@ > -Subproject commit d61bbc75890e4eb0ad508b9c2a27b91f691213e6 > +Subproject commit bf77580eb40fa960b701296ac828372d127a43dd > > _______________________________________________ > ghc-commits mailing list > ghc-commits at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-commits > From adam at well-typed.com Fri Jan 23 08:59:34 2015 From: adam at well-typed.com (Adam Gundry) Date: Fri, 23 Jan 2015 08:59:34 +0000 Subject: GHC support for the new "record" package In-Reply-To: References: <54BECC45.6010906@gmail.com> <54BF7A07.50502@well-typed.com> <54BFD870.7070602@gmail.com> <54BFEA99.9090001@well-typed.com> <54C01B4D.6010606@well-typed.com> <54C0C357.4040209@well-typed.com> Message-ID: <54C20D76.5020202@well-typed.com> On 22/01/15 21:07, Edward Kmett wrote: > On Thu, Jan 22, 2015 at 4:31 AM, Adam Gundry > wrote: > > Actually, the simplifications I recently came up with could allow us to > make uses of the field work as van Laarhoven lenses, other lenses *and* > selector functions. In practice, however, I suspect this might lead to > somewhat confusing error messages, so it might not be desirable. > > > Interesting. Have you actually tried this with a composition of your > simplified form, because I don't see how that can work. > > When we tried this before we showed that there was a fundamental > limitation in the way the functional dependencies had to flow > information down the chain, also, "foo.bar.baz" has very different > interpretations, between the lens and normal accessors, and both are > producing functions, so its hard to see how this doesn't yield > overlapping instance hell. Apparently, composition works fine with both interpretations. Have a look here: https://github.com/adamgundry/records-prototype/blob/master/CoherentPrototype.hs#L274-279 https://github.com/adamgundry/records-prototype/blob/master/CoherentPrototype.hs#L75-90 The key idea is to define class IsRecordField (n :: Symbol) p where field :: proxy n -> p as the interpretation of fields, and notice that the two instances we want IsRecordField n (r -> t ) IsRecordField n ((a -> f b) -> (s -> f t) do not "morally" overlap, because in the first case r will always be a record datatype, and in the second case it is a function. Thus we can distinguish them using a closed type family. 
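Roughly, the shape of the trick -- a simplified sketch with invented names, not the exact code in the prototype linked above -- is a closed type family (TypeFamilies plus DataKinds) that inspects the type at which the field is used:

    type family IsLensLike (p :: *) :: Bool where
      IsLensLike ((a -> f b) -> s -> f t) = 'True    -- lens-shaped use
      IsLensLike (r -> t)                 = 'False   -- selector-shaped use

and then, rather than two genuinely overlapping instances, a single IsRecordField instance can dispatch on the result of this family (the usual closed-family-instead-of-overlap pattern).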
However, now that I look back, I notice that I'm using a suspicious functional dependency that doesn't look as if it should satisfy the liberal coverage condition, even though it is accepted by 7.8.3. So perhaps I'm too optimistic; and in any case, the types involved may be too confusing for use in practice. Adam -- Adam Gundry, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From marlowsd at gmail.com Fri Jan 23 10:17:35 2015 From: marlowsd at gmail.com (Simon Marlow) Date: Fri, 23 Jan 2015 10:17:35 +0000 Subject: GHC support for the new "record" package In-Reply-To: References: <54BECC45.6010906@gmail.com> <54BF7A07.50502@well-typed.com> <54BFD870.7070602@gmail.com> Message-ID: <54C21FBF.4020809@gmail.com> On 23/01/2015 04:12, Johan Tibell wrote: > > > On Wed, Jan 21, 2015 at 5:48 PM, Simon Marlow > wrote: > > On 21/01/2015 16:01, Johan Tibell wrote: > > My thoughts mostly mirror those of Adam and Edward. > > 1) I want something that is backwards compatible. > > > Backwards compatible in what sense? Extension flags provide > backwards compatibility, because you just don't turn on the > extension until you want to use it. That's how all the other > extensions work; most of them change syntax in some way or other > that breaks existing code. > > > In this case in the sense of avoiding splitting code into a new-Haskell > vs old-Haskell. This means that existing records should work well (and > ideally also get the improved name resolution when used in call sites > that have the pragma enabled) in the new record system. I understand that position, but it does impose some pretty big constraints, which may mean the design has to make some compromises. It's probably not worth discussing this tradeoff until there's actually a concrete proposal so that we can quantify how much old code would fail to compile and the cost of any compromises. Cheers, Simon From adam at well-typed.com Fri Jan 23 10:25:33 2015 From: adam at well-typed.com (Adam Gundry) Date: Fri, 23 Jan 2015 10:25:33 +0000 Subject: GHC support for the new "record" package In-Reply-To: <54C21FBF.4020809@gmail.com> References: <54BECC45.6010906@gmail.com> <54BF7A07.50502@well-typed.com> <54BFD870.7070602@gmail.com> <54C21FBF.4020809@gmail.com> Message-ID: <54C2219D.2080103@well-typed.com> On 23/01/15 10:17, Simon Marlow wrote: > On 23/01/2015 04:12, Johan Tibell wrote: >> >> >> On Wed, Jan 21, 2015 at 5:48 PM, Simon Marlow > > wrote: >> >> On 21/01/2015 16:01, Johan Tibell wrote: >> >> My thoughts mostly mirror those of Adam and Edward. >> >> 1) I want something that is backwards compatible. >> >> >> Backwards compatible in what sense? Extension flags provide >> backwards compatibility, because you just don't turn on the >> extension until you want to use it. That's how all the other >> extensions work; most of them change syntax in some way or other >> that breaks existing code. >> >> >> In this case in the sense of avoiding splitting code into a new-Haskell >> vs old-Haskell. This means that existing records should work well (and >> ideally also get the improved name resolution when used in call sites >> that have the pragma enabled) in the new record system. > > I understand that position, but it does impose some pretty big > constraints, which may mean the design has to make some compromises. > It's probably not worth discussing this tradeoff until there's actually > a concrete proposal so that we can quantify how much old code would fail > to compile and the cost of any compromises. 
In this spirit, I've started to prepare a concrete proposal for a revised OverloadedRecordFields design, based on recent feedback: https://ghc.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields/Redesign This would not necessarily include anonymous records at first, but they do fit nicely as a potential later extension, and it would work well with a slightly amended version of the record library in the meantime. I'd be very interested to hear what you think of this. Also, if someone would be prepared to flesh out a proposal based on the anonymous records idea, that might be a useful point of comparison. Adam -- Adam Gundry, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From k-bx at k-bx.com Fri Jan 23 11:04:45 2015 From: k-bx at k-bx.com (Konstantine Rybnikov) Date: Fri, 23 Jan 2015 13:04:45 +0200 Subject: Put "Error:" before error output Message-ID: Hi! I'm bringing this up once again. Can we add "Error:" in the output of an error in a similar way ghc shows "Warning:" for warnings? Main reasoning is that, for example, on a build-server, where you have lots of cores to build your program, if you get an error, it gets lost somewhere in the middle of compiler's output in all other "Warning" messages you get, since error is not always shown last on multi-core build. Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From tuncer.ayaz at gmail.com Fri Jan 23 11:14:25 2015 From: tuncer.ayaz at gmail.com (Tuncer Ayaz) Date: Fri, 23 Jan 2015 12:14:25 +0100 Subject: Put "Error:" before error output In-Reply-To: References: Message-ID: On Fri, Jan 23, 2015 at 12:04 PM, Konstantine Rybnikov wrote: > Hi! > > I'm bringing this up once again. Can we add "Error:" in the output > of an error in a similar way ghc shows "Warning:" for warnings? Main > reasoning is that, for example, on a build-server, where you have > lots of cores to build your program, if you get an error, it gets > lost somewhere in the middle of compiler's output in all other > "Warning" messages you get, since error is not always shown last on > multi-core build. Isn't kind of a compiler convention that ""Warning:" is only prepended if an issue is treated as a warning. I mean, you can enable -Werror and treat all or specific warnings as errors as well. From jan.stolarek at p.lodz.pl Fri Jan 23 11:46:38 2015 From: jan.stolarek at p.lodz.pl (Jan Stolarek) Date: Fri, 23 Jan 2015 12:46:38 +0100 Subject: Put "Error:" before error output In-Reply-To: References: Message-ID: <201501231246.38974.jan.stolarek@p.lodz.pl> > error is not always shown last on multi-core build. Somewhat related: #9219 Janek From k-bx at k-bx.com Fri Jan 23 11:55:30 2015 From: k-bx at k-bx.com (Konstantine Rybnikov) Date: Fri, 23 Jan 2015 13:55:30 +0200 Subject: Put "Error:" before error output In-Reply-To: References: Message-ID: Tuncer, If warnings will be treated as errors it's fine to have "Error:" shown for them, I think. On Fri, Jan 23, 2015 at 1:14 PM, Tuncer Ayaz wrote: > On Fri, Jan 23, 2015 at 12:04 PM, Konstantine Rybnikov wrote: > > Hi! > > > > I'm bringing this up once again. Can we add "Error:" in the output > > of an error in a similar way ghc shows "Warning:" for warnings? Main > > reasoning is that, for example, on a build-server, where you have > > lots of cores to build your program, if you get an error, it gets > > lost somewhere in the middle of compiler's output in all other > > "Warning" messages you get, since error is not always shown last on > > multi-core build. 
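Concretely, I mean mirroring the form GHC already uses for warnings, so that the output looks something like this (illustrative only, not the exact wording):

    Foo.hs:10:30: Warning:
        Defined but not used: `x'

    Foo.hs:12:7: Error:
        Couldn't match expected type `Int' with actual type `Bool'

A simple grep for "Error:" over a noisy parallel build log would then be enough to locate the failure.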
> > Isn't kind of a compiler convention that ""Warning:" is only prepended > if an issue is treated as a warning. I mean, you can enable -Werror > and treat all or specific warnings as errors as well. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johannes.waldmann at htwk-leipzig.de Fri Jan 23 13:07:16 2015 From: johannes.waldmann at htwk-leipzig.de (Johannes Waldmann) Date: Fri, 23 Jan 2015 14:07:16 +0100 Subject: Proposal for removing transformers dependency Message-ID: <54C24784.9050905@htwk-leipzig.de> > ... a proposal for removing transformers dependency from ghc library. +1 - J.W. (I subscribed to this list just to send this one message ...) From iavor.diatchki at gmail.com Fri Jan 23 19:30:20 2015 From: iavor.diatchki at gmail.com (Iavor Diatchki) Date: Fri, 23 Jan 2015 11:30:20 -0800 Subject: GHC support for the new "record" package In-Reply-To: <54C2219D.2080103@well-typed.com> References: <54BECC45.6010906@gmail.com> <54BF7A07.50502@well-typed.com> <54BFD870.7070602@gmail.com> <54C21FBF.4020809@gmail.com> <54C2219D.2080103@well-typed.com> Message-ID: Hello, I just read through Adam's proposal and here is my take: 1. I like the general idea---in particular: - backwards compatibility is very important to me as I make extensive use of records in all my code, - for me, anonymous records are fairly low priority 2. I would propose that we simplify things further, and provide just one class for overloading: class Field (name :: Symbol) rec rec' field field' | name rec -> field , name rec' -> field' , name rec field' -> rec' , name rec' field -> rec where field :: Functor f => Proxy name -> (field -> f field') -> (rec -> f rec') I don't think we need to go into "lenses" at all, the `field` method simply provides a functorial update function similar to `mapM`. Of course, one could use the `lens` library to get more functionality but this is entirely up to the programmer. When the ORF extension is enabled, GHC should simply generate an instance of the class, in a similar way to what the lens library does. 3. I like the idea of `#x` desugaring into `field (Proxy :: Proxy "x")`, but I don't like the concrete symbol choice: - # is a valid operator and a bunch of libraries use it, so it won't be compatible with existing code. - @x might be a better choice; then you could write things like: view @x rec set @x 3 rec over @x (+2) rec - another nice idea (due to Eric Mertens, aka glguy), which allows us to avoid additional special syntax is as follows: - instead of using special syntax, reuse the module system - designate a "magic" module name (e.g., GHC.Records) - when the renamer sees a name imported from that module, it "resolves" the name by desugaring it into whatever we want - For example, if `GHC.Records.x` desugars into `field (Proxy :: Proxy "x")`, we could write things like this: import GHC.Records as R view R.x rec set R.x 3 rec over R.x (+2) rec -Iavor On Fri, Jan 23, 2015 at 2:25 AM, Adam Gundry wrote: > On 23/01/15 10:17, Simon Marlow wrote: > > On 23/01/2015 04:12, Johan Tibell wrote: > >> > >> > >> On Wed, Jan 21, 2015 at 5:48 PM, Simon Marlow >> > wrote: > >> > >> On 21/01/2015 16:01, Johan Tibell wrote: > >> > >> My thoughts mostly mirror those of Adam and Edward. > >> > >> 1) I want something that is backwards compatible. > >> > >> > >> Backwards compatible in what sense? Extension flags provide > >> backwards compatibility, because you just don't turn on the > >> extension until you want to use it. 
That's how all the other > >> extensions work; most of them change syntax in some way or other > >> that breaks existing code. > >> > >> > >> In this case in the sense of avoiding splitting code into a new-Haskell > >> vs old-Haskell. This means that existing records should work well (and > >> ideally also get the improved name resolution when used in call sites > >> that have the pragma enabled) in the new record system. > > > > I understand that position, but it does impose some pretty big > > constraints, which may mean the design has to make some compromises. > > It's probably not worth discussing this tradeoff until there's actually > > a concrete proposal so that we can quantify how much old code would fail > > to compile and the cost of any compromises. > > In this spirit, I've started to prepare a concrete proposal for a > revised OverloadedRecordFields design, based on recent feedback: > > > https://ghc.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields/Redesign > > This would not necessarily include anonymous records at first, but they > do fit nicely as a potential later extension, and it would work well > with a slightly amended version of the record library in the meantime. > I'd be very interested to hear what you think of this. > > Also, if someone would be prepared to flesh out a proposal based on the > anonymous records idea, that might be a useful point of comparison. > > Adam > > -- > Adam Gundry, Haskell Consultant > Well-Typed LLP, http://www.well-typed.com/ > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tuncer.ayaz at gmail.com Fri Jan 23 20:13:06 2015 From: tuncer.ayaz at gmail.com (Tuncer Ayaz) Date: Fri, 23 Jan 2015 21:13:06 +0100 Subject: Put "Error:" before error output In-Reply-To: References: Message-ID: On Fri, Jan 23, 2015 at 12:55 PM, Konstantine Rybnikov wrote: > Tuncer, > > If warnings will be treated as errors it's fine to have "Error:" > shown for them, I think. Yes, it will be printed the same way and have the same severity as any other error. I think I have misinterpreted your initial post, sorry about that. To correct myself, compilers do print "error:" prefixes, and for example your usual CC will print the following: filename:row:column: error: error-message filename:row:column: warning: warning-message So on second thought your suggestion makes sense :). > On Fri, Jan 23, 2015 at 1:14 PM, Tuncer Ayaz wrote: > > > > On Fri, Jan 23, 2015 at 12:04 PM, Konstantine Rybnikov wrote: > > > Hi! > > > > > > I'm bringing this up once again. Can we add "Error:" in the > > > output of an error in a similar way ghc shows "Warning:" for > > > warnings? Main reasoning is that, for example, on a > > > build-server, where you have lots of cores to build your > > > program, if you get an error, it gets lost somewhere in the > > > middle of compiler's output in all other "Warning" messages you > > > get, since error is not always shown last on multi-core build. > > > > Isn't kind of a compiler convention that ""Warning:" is only > > prepended if an issue is treated as a warning. I mean, you can > > enable -Werror and treat all or specific warnings as errors as > > well. 
From adam at well-typed.com Fri Jan 23 22:06:38 2015 From: adam at well-typed.com (Adam Gundry) Date: Fri, 23 Jan 2015 22:06:38 +0000 Subject: GHC support for the new "record" package In-Reply-To: References: <54BECC45.6010906@gmail.com> <54BF7A07.50502@well-typed.com> <54BFD870.7070602@gmail.com> <54C21FBF.4020809@gmail.com> <54C2219D.2080103@well-typed.com> Message-ID: <54C2C5EE.9030100@well-typed.com> Thanks for the feedback, Iavor! On 23/01/15 19:30, Iavor Diatchki wrote: > 2. I would propose that we simplify things further, and provide just one > class for overloading: > > class Field (name :: Symbol) > rec rec' > field field' > | name rec -> field > , name rec' -> field' > , name rec field' -> rec' > , name rec' field -> rec > where > field :: Functor f => Proxy name -> (field -> f field') -> > (rec -> f rec') > > I don't think we need to go into "lenses" at all, the `field` method > simply provides a functorial > update function similar to `mapM`. Of course, one could use the `lens` > library to > get more functionality but this is entirely up to the programmer. > > When the ORF extension is enabled, GHC should simply generate an > instance of the class, > in a similar way to what the lens library does. There's something to be said for the simplicity of this approach, provided we're happy to commit to this representation of lenses. I'm attracted to the extra flexibility of the IsRecordField class -- I just noticed that it effectively gives us a syntax for identifier-like Symbol singletons, which could be useful in completely different contexts -- and I'd like to understand the real costs of the additional complexity it imposes. > 3. I like the idea of `#x` desugaring into `field (Proxy :: Proxy "x")`, > but I don't like the concrete symbol choice: > - # is a valid operator and a bunch of libraries use it, so it won't > be compatible with existing code. Ah. I didn't realise that, but assumed it was safe behind -XMagicHash. Yes, that's no good. > - @x might be a better choice; then you could write things like: > view @x rec > set @x 3 rec > over @x (+2) rec This could work, though it has the downside that we've been informally using @ for explicit type application for a long time! Does anyone know what the status of the proposed ExplicitTypeApplication extension is? > - another nice idea (due to Eric Mertens, aka glguy), which allows us > to avoid additional special syntax is as follows: > - instead of using special syntax, reuse the module system > - designate a "magic" module name (e.g., GHC.Records) > - when the renamer sees a name imported from that module, it > "resolves" the name by desugaring it into whatever we want > - For example, if `GHC.Records.x` desugars into `field (Proxy :: > Proxy "x")`, we could write things like this: > > import GHC.Records as R > > view R.x rec > set R.x 3 rec > over R.x (+2) rec Interesting; I think Edward suggested something similar earlier in this thread. Avoiding a special syntax is a definite advantage, but the need for a qualified name makes composing the resulting lenses a bit tiresome (R.x.R.y.R.z or R.x . R.y . R.z). I suppose one could do import GHC.Records (x, y, z) import MyModule hiding (x, y, z) but having to manually hide the selector functions and bring into scope the lenses is also annoying. 
Adam -- Adam Gundry, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From ekmett at gmail.com Fri Jan 23 22:23:54 2015 From: ekmett at gmail.com (Edward Kmett) Date: Fri, 23 Jan 2015 17:23:54 -0500 Subject: GHC support for the new "record" package In-Reply-To: <54C2C5EE.9030100@well-typed.com> References: <54BECC45.6010906@gmail.com> <54BF7A07.50502@well-typed.com> <54BFD870.7070602@gmail.com> <54C21FBF.4020809@gmail.com> <54C2219D.2080103@well-typed.com> <54C2C5EE.9030100@well-typed.com> Message-ID: On Fri, Jan 23, 2015 at 5:06 PM, Adam Gundry wrote: > Thanks for the feedback, Iavor! > > On 23/01/15 19:30, Iavor Diatchki wrote: > > 2. I would propose that we simplify things further, and provide just one > > class for overloading: > > > > class Field (name :: Symbol) > > rec rec' > > field field' > > | name rec -> field > > , name rec' -> field' > > , name rec field' -> rec' > > , name rec' field -> rec > > where > > field :: Functor f => Proxy name -> (field -> f field') -> > > (rec -> f rec') > > > > I don't think we need to go into "lenses" at all, the `field` method > > simply provides a functorial > > update function similar to `mapM`. Of course, one could use the `lens` > > library to > > get more functionality but this is entirely up to the programmer. > > > > When the ORF extension is enabled, GHC should simply generate an > > instance of the class, > > in a similar way to what the lens library does > > 3. I like the idea of `#x` desugaring into `field (Proxy :: Proxy "x")`, > > but I don't like the concrete symbol choice: > > - # is a valid operator and a bunch of libraries use it, so it won't > > be compatible with existing code. > > Ah. I didn't realise that, but assumed it was safe behind -XMagicHash. > Yes, that's no good. > > > - @x might be a better choice; then you could write things like: > > view @x rec > > set @x 3 rec > > over @x (+2) rec > > This could work, though it has the downside that we've been informally > using @ for explicit type application for a long time! Does anyone know > what the status of the proposed ExplicitTypeApplication extension is? I'll confess I've been keen on stealing @foo for the purpose of (Proxy :: Proxy foo) or (Proxy :: Proxy "foo") from the type application stuff for a long time -- primarily because I remain rather dubious about how well the type application stuff can work, once you take a type and it goes through a usage/generalization cycle, the order of the types you can "apply" gets all jumbled up, making type application very difficult to actually use. Proxies on the other hand remain stable. I realize that I'm probably on the losing side of that debate, however. But I think it is fair to say that that little bit of dangling syntax will be a bone that is heavily fought over. ;) > - another nice idea (due to Eric Mertens, aka glguy), which allows us > > to avoid additional special syntax is as follows: > > - instead of using special syntax, reuse the module system > > - designate a "magic" module name (e.g., GHC.Records) > > - when the renamer sees a name imported from that module, it > > "resolves" the name by desugaring it into whatever we want > > - For example, if `GHC.Records.x` desugars into `field (Proxy :: > > Proxy "x")`, we could write things like this: > > > > import GHC.Records as R > > > > view R.x rec > > set R.x 3 rec > > over R.x (+2) rec > > Interesting; I think Edward suggested something similar earlier in this > thread. 
Avoiding a special syntax is a definite advantage, but the need > for a qualified name makes composing the resulting lenses a bit tiresome > (R.x.R.y.R.z or R.x . R.y . R.z). I suppose one could do > > import GHC.Records (x, y, z) > import MyModule hiding (x, y, z) > > but having to manually hide the selector functions and bring into scope > the lenses is also annoying. In the suggestion I made as a (c) option for how to proceed around field names a few posts back in this thread I was hinting towards having an explicit use of {| foo :: x |} somewhere in the module provide an implicit import of import Field (foo) then users can always reference Field.foo explicitly if they don't have such in local scope, and names all share a common source. Of course this was in the context a Nikita style {| ... |} rather than the ORF { .. }. If the Nikita records didn't make an accessor, because there's no way for them to really do so, then there'd be nothing to conflict with. Being able to use import and use them with ORF-style records would just be gravy then. Users would be able to get those out of the box. -Edward -------------- next part -------------- An HTML attachment was scrubbed... URL: From greg at gregweber.info Fri Jan 23 22:47:00 2015 From: greg at gregweber.info (Greg Weber) Date: Fri, 23 Jan 2015 14:47:00 -0800 Subject: GHC support for the new "record" package In-Reply-To: References: <54BECC45.6010906@gmail.com> <54BF7A07.50502@well-typed.com> <54BFD870.7070602@gmail.com> <54C21FBF.4020809@gmail.com> <54C2219D.2080103@well-typed.com> <54C2C5EE.9030100@well-typed.com> Message-ID: If we only add syntax when the language extension is used then we are not clobbering everyone. # is not that common of an operator. I would much rather upset a few people by taking that operator back when they opt-in to turning the extension on than having a worse records implementation. On Fri, Jan 23, 2015 at 2:23 PM, Edward Kmett wrote: > > On Fri, Jan 23, 2015 at 5:06 PM, Adam Gundry wrote: > >> Thanks for the feedback, Iavor! >> >> On 23/01/15 19:30, Iavor Diatchki wrote: >> > 2. I would propose that we simplify things further, and provide just one >> > class for overloading: >> > >> > class Field (name :: Symbol) >> > rec rec' >> > field field' >> > | name rec -> field >> > , name rec' -> field' >> > , name rec field' -> rec' >> > , name rec' field -> rec >> > where >> > field :: Functor f => Proxy name -> (field -> f field') -> >> > (rec -> f rec') >> > >> > I don't think we need to go into "lenses" at all, the `field` method >> > simply provides a functorial >> > update function similar to `mapM`. Of course, one could use the `lens` >> > library to >> > get more functionality but this is entirely up to the programmer. >> > >> > When the ORF extension is enabled, GHC should simply generate an >> > instance of the class, >> > in a similar way to what the lens library does >> > > > 3. I like the idea of `#x` desugaring into `field (Proxy :: Proxy "x")`, >> > but I don't like the concrete symbol choice: >> > - # is a valid operator and a bunch of libraries use it, so it won't >> > be compatible with existing code. >> >> Ah. I didn't realise that, but assumed it was safe behind -XMagicHash. >> Yes, that's no good. >> >> > - @x might be a better choice; then you could write things like: >> > view @x rec >> > set @x 3 rec >> > over @x (+2) rec >> >> This could work, though it has the downside that we've been informally >> using @ for explicit type application for a long time! 
Does anyone know >> what the status of the proposed ExplicitTypeApplication extension is? > > > I'll confess I've been keen on stealing @foo for the purpose of (Proxy :: > Proxy foo) or (Proxy :: Proxy "foo") from the type application stuff for a > long time -- primarily because I remain rather dubious about how well the > type application stuff can work, once you take a type and it goes through a > usage/generalization cycle, the order of the types you can "apply" gets all > jumbled up, making type application very difficult to actually use. Proxies > on the other hand remain stable. I realize that I'm probably on the losing > side of that debate, however. But I think it is fair to say that that > little bit of dangling syntax will be a bone that is heavily fought over. ;) > > > - another nice idea (due to Eric Mertens, aka glguy), which allows us >> > to avoid additional special syntax is as follows: >> > - instead of using special syntax, reuse the module system >> > - designate a "magic" module name (e.g., GHC.Records) >> > - when the renamer sees a name imported from that module, it >> > "resolves" the name by desugaring it into whatever we want >> > - For example, if `GHC.Records.x` desugars into `field (Proxy :: >> > Proxy "x")`, we could write things like this: >> > >> > import GHC.Records as R >> > >> > view R.x rec >> > set R.x 3 rec >> > over R.x (+2) rec >> >> Interesting; I think Edward suggested something similar earlier in this >> thread. Avoiding a special syntax is a definite advantage, but the need >> for a qualified name makes composing the resulting lenses a bit tiresome >> (R.x.R.y.R.z or R.x . R.y . R.z). I suppose one could do >> >> import GHC.Records (x, y, z) >> import MyModule hiding (x, y, z) >> >> but having to manually hide the selector functions and bring into scope >> the lenses is also annoying. > > > In the suggestion I made as a (c) option for how to proceed around field > names a few posts back in this thread I was hinting towards having an > explicit use of {| foo :: x |} somewhere in the module provide an implicit > import of > > import Field (foo) > > then users can always reference Field.foo explicitly if they don't have > such in local scope, and names all share a common source. > > Of course this was in the context a Nikita style {| ... |} rather than the > ORF { .. }. > > If the Nikita records didn't make an accessor, because there's no way for > them to really do so, then there'd be nothing to conflict with. > > Being able to use import and use them with ORF-style records would just be > gravy then. Users would be able to get those out of the box. > > -Edward > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ekmett at gmail.com Fri Jan 23 23:14:52 2015 From: ekmett at gmail.com (Edward Kmett) Date: Fri, 23 Jan 2015 18:14:52 -0500 Subject: GHC support for the new "record" package In-Reply-To: References: <54BECC45.6010906@gmail.com> <54BF7A07.50502@well-typed.com> <54BFD870.7070602@gmail.com> <54C21FBF.4020809@gmail.com> <54C2219D.2080103@well-typed.com> <54C2C5EE.9030100@well-typed.com> Message-ID: If the level of complaints I received when I stole (#) for use in lens is any indication, er.. it is in very wide use. It was by far the most contentious operator I grabbed. 
;) It seems to me that I'd not be in a hurry to both break existing code and pay a long term syntactic cost when we have options on the table that don't require either, the "magic Field module" approach that both Eric and I appear to have arrived at independently side-steps this issue nicely and appears to result in a better user experience. Keep in mind, one source of objections to operator-based sigils is that if you put an sigil at the start of a lens the tax isn't one character but two, there is a space you now need to avoid (.#) when chaining these things. "foo.bar" vs. "#foo . #bar" and the latter will always be uglier. The `import Field (...)` approach results in users never having to pay more syntactically than with options they have available to them now, and being class based is even beneficial to folks who don't use Nikita's records. -Edward On Fri, Jan 23, 2015 at 5:47 PM, Greg Weber wrote: > If we only add syntax when the language extension is used then we are not > clobbering everyone. # is not that common of an operator. I would much > rather upset a few people by taking that operator back when they opt-in to > turning the extension on than having a worse records implementation. > > On Fri, Jan 23, 2015 at 2:23 PM, Edward Kmett wrote: > >> >> On Fri, Jan 23, 2015 at 5:06 PM, Adam Gundry wrote: >> >>> Thanks for the feedback, Iavor! >>> >>> On 23/01/15 19:30, Iavor Diatchki wrote: >>> > 2. I would propose that we simplify things further, and provide just >>> one >>> > class for overloading: >>> > >>> > class Field (name :: Symbol) >>> > rec rec' >>> > field field' >>> > | name rec -> field >>> > , name rec' -> field' >>> > , name rec field' -> rec' >>> > , name rec' field -> rec >>> > where >>> > field :: Functor f => Proxy name -> (field -> f field') -> >>> > (rec -> f rec') >>> > >>> > I don't think we need to go into "lenses" at all, the `field` method >>> > simply provides a functorial >>> > update function similar to `mapM`. Of course, one could use the >>> `lens` >>> > library to >>> > get more functionality but this is entirely up to the programmer. >>> > >>> > When the ORF extension is enabled, GHC should simply generate an >>> > instance of the class, >>> > in a similar way to what the lens library does >>> >> >> > 3. I like the idea of `#x` desugaring into `field (Proxy :: Proxy "x")`, >>> > but I don't like the concrete symbol choice: >>> > - # is a valid operator and a bunch of libraries use it, so it won't >>> > be compatible with existing code. >>> >>> Ah. I didn't realise that, but assumed it was safe behind -XMagicHash. >>> Yes, that's no good. >>> >>> > - @x might be a better choice; then you could write things like: >>> > view @x rec >>> > set @x 3 rec >>> > over @x (+2) rec >>> >>> This could work, though it has the downside that we've been informally >>> using @ for explicit type application for a long time! Does anyone know >>> what the status of the proposed ExplicitTypeApplication extension is? >> >> >> I'll confess I've been keen on stealing @foo for the purpose of (Proxy :: >> Proxy foo) or (Proxy :: Proxy "foo") from the type application stuff for a >> long time -- primarily because I remain rather dubious about how well the >> type application stuff can work, once you take a type and it goes through a >> usage/generalization cycle, the order of the types you can "apply" gets all >> jumbled up, making type application very difficult to actually use. Proxies >> on the other hand remain stable. 
I realize that I'm probably on the losing >> side of that debate, however. But I think it is fair to say that that >> little bit of dangling syntax will be a bone that is heavily fought over. ;) >> >> > - another nice idea (due to Eric Mertens, aka glguy), which allows us >>> > to avoid additional special syntax is as follows: >>> > - instead of using special syntax, reuse the module system >>> > - designate a "magic" module name (e.g., GHC.Records) >>> > - when the renamer sees a name imported from that module, it >>> > "resolves" the name by desugaring it into whatever we want >>> > - For example, if `GHC.Records.x` desugars into `field (Proxy :: >>> > Proxy "x")`, we could write things like this: >>> > >>> > import GHC.Records as R >>> > >>> > view R.x rec >>> > set R.x 3 rec >>> > over R.x (+2) rec >>> >>> Interesting; I think Edward suggested something similar earlier in this >>> thread. Avoiding a special syntax is a definite advantage, but the need >>> for a qualified name makes composing the resulting lenses a bit tiresome >>> (R.x.R.y.R.z or R.x . R.y . R.z). I suppose one could do >>> >>> import GHC.Records (x, y, z) >>> import MyModule hiding (x, y, z) >>> >>> but having to manually hide the selector functions and bring into scope >>> the lenses is also annoying. >> >> >> In the suggestion I made as a (c) option for how to proceed around field >> names a few posts back in this thread I was hinting towards having an >> explicit use of {| foo :: x |} somewhere in the module provide an implicit >> import of >> >> import Field (foo) >> >> then users can always reference Field.foo explicitly if they don't have >> such in local scope, and names all share a common source. >> >> Of course this was in the context a Nikita style {| ... |} rather than >> the ORF { .. }. >> >> If the Nikita records didn't make an accessor, because there's no way for >> them to really do so, then there'd be nothing to conflict with. >> >> Being able to use import and use them with ORF-style records would just >> be gravy then. Users would be able to get those out of the box. >> >> -Edward >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Fri Jan 23 23:41:20 2015 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Fri, 23 Jan 2015 23:41:20 +0000 Subject: GHC support for the new "record" package In-Reply-To: <54C2C5EE.9030100@well-typed.com> References: <54BECC45.6010906@gmail.com> <54BF7A07.50502@well-typed.com> <54BFD870.7070602@gmail.com> <54C21FBF.4020809@gmail.com> <54C2219D.2080103@well-typed.com> <54C2C5EE.9030100@well-typed.com> Message-ID: <618BE556AADD624C9C918AA5D5911BEF562B4C2D@DB3PRD3001MB020.064d.mgd.msft.net> | I just | noticed that it effectively gives us a syntax for identifier-like Symbol | singletons, which could be useful in completely different contexts Indeed so. I have written a major increment to https://ghc.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields/Redesign which people reading this thread may find interesting. Look for "Plan B". For the first time I think I can see a nice, simple, elegant, orthogonal story. Simon From ezyang at mit.edu Fri Jan 23 23:58:54 2015 From: ezyang at mit.edu (Edward Z. 
Yang) Date: Fri, 23 Jan 2015 15:58:54 -0800 Subject: Strange failures in directory/ In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF562AF407@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF562AF407@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <1422057500-sup-9154@sabre> No idea. As a point of comparison, when I do the same procedure, getDirContents001 only shows up once. Edward Excerpts from Simon Peyton Jones's message of 2015-01-22 04:09:47 -0800: > I'm getting these validate failures from "sh validate -fast": > > > Unexpected failures: > > ../../libraries/directory/tests getHomeDirectory001 [exit code non-0] (normal) > .. and about 20 more similar complaints ... > > The failure report is this: > > Compile failed (status 256) errors were: > > [1 of 1] Compiling Main ( getHomeDirectory001.hs, getHomeDirectory001.o ) > > *** unexpected failure for getHomeDirectory001(normal) > > BUT if I "cd libraries/directory/tests", and say "make TEST=getDirContents001", I get no failures. > > Here is a clue. If I "cd testsuite/tests" and say "make TEST=getDirContents001", I get this: > > > =====> getHomeDirectory001(normal) 4288 of 4405 [0, 0, 13] > > cd ../../libraries/directory/tests && '/5playpen/simonpj/HEAD-2/inplace/bin/ghc-stage2' -fforce-recomp -dcore-lint -dcmm-lint -dno-debug-output -no-user-package-db -rtsopts -fno-warn-tabs -fno-ghci-history -o getHomeDirectory001 getHomeDirectory001.hs >getHomeDirectory001.comp.stderr 2>&1 > > cd ../../libraries/directory/tests && ./getHomeDirectory001 getHomeDirectory001.run.stdout 2>getHomeDirectory001.run.stderr > > =====> getHomeDirectory001(normal) 4301 of 4405 [0, 0, 13] > > cd ../../libraries/directory/tests && '/5playpen/simonpj/HEAD-2/inplace/bin/ghc-stage2' -fforce-recomp -dcore-lint -dcmm-lint -dno-debug-output -no-user-package-db -rtsopts -fno-warn-tabs -fno-ghci-history -o getHomeDirectory001 getHomeDirectory001.hs >getHomeDirectory001.comp.stderr 2>&1 > > cd ../../libraries/directory/tests && ./getHomeDirectory001 getHomeDirectory001.run.stdout 2>getHomeDirectory001.run.stderr > > Note that it gets compiled TWICE. This is likely to cause problems when multi-threading. > > Anyone have any ideas? > > Simon From johan.tibell at gmail.com Sat Jan 24 01:04:38 2015 From: johan.tibell at gmail.com (Johan Tibell) Date: Fri, 23 Jan 2015 17:04:38 -0800 Subject: GHC support for the new "record" package In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF562B4C2D@DB3PRD3001MB020.064d.mgd.msft.net> References: <54BECC45.6010906@gmail.com> <54BF7A07.50502@well-typed.com> <54BFD870.7070602@gmail.com> <54C21FBF.4020809@gmail.com> <54C2219D.2080103@well-typed.com> <54C2C5EE.9030100@well-typed.com> <618BE556AADD624C9C918AA5D5911BEF562B4C2D@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: I really like this proposal (except I would bike shed about the syntax for record selector to be dot, like in the majority of languages.) On Fri, Jan 23, 2015 at 3:41 PM, Simon Peyton Jones wrote: > | I just > | noticed that it effectively gives us a syntax for identifier-like Symbol > | singletons, which could be useful in completely different contexts > > Indeed so. I have written a major increment to > > https://ghc.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields/Redesign > which people reading this thread may find interesting. Look for "Plan B". > > For the first time I think I can see a nice, simple, elegant, orthogonal > story. 
> > Simon > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ndmitchell at gmail.com Sat Jan 24 07:45:57 2015 From: ndmitchell at gmail.com (Neil Mitchell) Date: Sat, 24 Jan 2015 07:45:57 +0000 Subject: GHC support for the new "record" package In-Reply-To: References: <54BECC45.6010906@gmail.com> <54BF7A07.50502@well-typed.com> <54BFD870.7070602@gmail.com> <54C21FBF.4020809@gmail.com> <54C2219D.2080103@well-typed.com> <54C2C5EE.9030100@well-typed.com> <618BE556AADD624C9C918AA5D5911BEF562B4C2D@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: Hi All, I fixed a missing "x" in one of the instances. I like the proposal, mostly because it has nothing to do with records, leaving people to experiment with records in libraries. I'm not keen on the use of Template Haskell to define lenses, and the fact that all base libraries are going to need custom makeLens definitions set apart from their definitions, plus IV is rather "wired" into record selectors, which can't be undone later. I think we can fix some of that by desugaring record definitions to: data T = MkT {x :: Int} instance FieldSelector "T" T Int where fieldSelector (MkT x) = x Then someone can, in a library, define: instance FieldSelector x r a => IV x (r -> a) where iv = fieldSelector Now that records don't mention IV, we are free to provide lots of different instances, each capturing some properties of each field, without committing to any one style of lens at this point. Therefore, we could have record desugaring also produce: instance FieldSetter "T" T Int where fieldSet v (T _) = T v And also: instance FieldSTAB "T" T Int where fieldSTAB = ... the stab lens ... (As we find new interesting types of operations over a field, with different levels of polymorphism etc, we can keep adding new ones without breaking compatibility, and most users won't care. Prototyping new ones in Template Haskell is still probably a good idea.) Now someone can define, in a record library: instance (FieldSelector x r a, FieldSetter x r a) => IV x (Lens r a) where iv = makeLens fieldSelector fieldSet Or, for people who want #x to be a STAB lens directly (without a Lens type wrapper), they can omit the IV x (r -> a) instance, and only have #x have instances producing the STAB lens. The one downside of this plan is orphan instances, which means if you are writing a library and use one type of IV declaration (the selector one), then anyone else building on your library won't be able to use a different type of IV (the stab one). One potential way to fix that is to parameterise IV, so you can say (warning, even more half-baked thoughts ahead): {-# LANGUAGE ImplicitValues=MyType #-} Where MyType is a type I've defined in one of my imports, and then desugar #x to: iv @ "x" @ MyType @ alpha And extend IV with an extra type parameter. Now all the Lens library IV instances can include LensType, and people can mix and match different record schemes in different modules. Separately, the pattern: data T = ...; $(makeLens 'T) crops up a lot, and is gently ugly. I wonder if there should be an extension that let's you write: data T = ... deriving ('makeLens), or even just deriving (Lens) which desugars to the same thing? 
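(A note on the pieces left implicit above: the class declarations behind FieldSelector, FieldSetter and FieldSTAB, and the Lens/makeLens pair used in the "IV x (Lens r a)" instance, are not spelled out. One plausible standalone shape for them -- with kinds, parameter order and the type-changing "stab" variant guessed rather than taken from the message -- is:)

{-# LANGUAGE DataKinds, KindSignatures, MultiParamTypeClasses,
             FunctionalDependencies, RankNTypes #-}
import GHC.TypeLits (Symbol)

-- Read a field's value from a record.
class FieldSelector (name :: Symbol) rec fld | name rec -> fld where
  fieldSelector :: rec -> fld

-- Overwrite a field's value (non-type-changing, for simplicity).
class FieldSetter (name :: Symbol) rec fld | name rec -> fld where
  fieldSet :: fld -> rec -> rec

-- A van Laarhoven, type-changing ("s t a b") lens for the field.
class FieldSTAB (name :: Symbol) s t a b
      | name s -> a, name t -> b, name s b -> t, name t a -> s where
  fieldSTAB :: Functor f => (a -> f b) -> s -> f t

-- A possible Lens wrapper and makeLens, as combined in the
-- "IV x (Lens r a)" instance above.
newtype Lens s a =
  Lens { runLens :: forall f. Functor f => (a -> f a) -> s -> f s }

makeLens :: (s -> a) -> (a -> s -> s) -> Lens s a
makeLens get set = Lens (\upd s -> fmap (\a -> set a s) (upd (get s)))

Composing two wrapped lenses is then no longer bare (.), but is easy to provide, e.g. composeL l1 l2 = Lens (runLens l1 . runLens l2).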
Thanks, Neil On Sat, Jan 24, 2015 at 1:04 AM, Johan Tibell wrote: > I really like this proposal (except I would bike shed about the syntax for > record selector to be dot, like in the majority of languages.) > > On Fri, Jan 23, 2015 at 3:41 PM, Simon Peyton Jones > wrote: >> >> | I just >> | noticed that it effectively gives us a syntax for identifier-like Symbol >> | singletons, which could be useful in completely different contexts >> >> Indeed so. I have written a major increment to >> >> https://ghc.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields/Redesign >> which people reading this thread may find interesting. Look for "Plan B". >> >> For the first time I think I can see a nice, simple, elegant, orthogonal >> story. >> >> Simon >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > From k-bx at k-bx.com Sat Jan 24 11:55:44 2015 From: k-bx at k-bx.com (Konstantine Rybnikov) Date: Sat, 24 Jan 2015 13:55:44 +0200 Subject: GHC support for the new "record" package In-Reply-To: References: <54BECC45.6010906@gmail.com> <54BF7A07.50502@well-typed.com> <54BFD870.7070602@gmail.com> <54C21FBF.4020809@gmail.com> <54C2219D.2080103@well-typed.com> <54C2C5EE.9030100@well-typed.com> Message-ID: May I suggest something for a syntax (as an option, sorry if it's silly or not related)? I really don't like neither "@" or "#" because they seem too hacky, meanwhile GHC already has an "accessor" syntax with braces { and }, so, might it be an option to have something like: ``` data Foo = Foo { val :: Int } data Bar = Bar { foo :: Foo } main = do let bar = Bar (Foo 10) print bar{foo{val}} let bar' = bar{foo{val}=10} return () ``` I think this syntax is 100% understandable for a "newbie". Not sure how is it related to lenses though. What do you think? If the level of complaints I received when I stole (#) for use in lens is any indication, er.. it is in very wide use. It was by far the most contentious operator I grabbed. ;) It seems to me that I'd not be in a hurry to both break existing code and pay a long term syntactic cost when we have options on the table that don't require either, the "magic Field module" approach that both Eric and I appear to have arrived at independently side-steps this issue nicely and appears to result in a better user experience. Keep in mind, one source of objections to operator-based sigils is that if you put an sigil at the start of a lens the tax isn't one character but two, there is a space you now need to avoid (.#) when chaining these things. "foo.bar" vs. "#foo . #bar" and the latter will always be uglier. The `import Field (...)` approach results in users never having to pay more syntactically than with options they have available to them now, and being class based is even beneficial to folks who don't use Nikita's records. -Edward On Fri, Jan 23, 2015 at 5:47 PM, Greg Weber wrote: > If we only add syntax when the language extension is used then we are not > clobbering everyone. # is not that common of an operator. I would much > rather upset a few people by taking that operator back when they opt-in to > turning the extension on than having a worse records implementation. 
> > On Fri, Jan 23, 2015 at 2:23 PM, Edward Kmett wrote: > >> >> On Fri, Jan 23, 2015 at 5:06 PM, Adam Gundry wrote: >> >>> Thanks for the feedback, Iavor! >>> >>> On 23/01/15 19:30, Iavor Diatchki wrote: >>> > 2. I would propose that we simplify things further, and provide just >>> one >>> > class for overloading: >>> > >>> > class Field (name :: Symbol) >>> > rec rec' >>> > field field' >>> > | name rec -> field >>> > , name rec' -> field' >>> > , name rec field' -> rec' >>> > , name rec' field -> rec >>> > where >>> > field :: Functor f => Proxy name -> (field -> f field') -> >>> > (rec -> f rec') >>> > >>> > I don't think we need to go into "lenses" at all, the `field` method >>> > simply provides a functorial >>> > update function similar to `mapM`. Of course, one could use the >>> `lens` >>> > library to >>> > get more functionality but this is entirely up to the programmer. >>> > >>> > When the ORF extension is enabled, GHC should simply generate an >>> > instance of the class, >>> > in a similar way to what the lens library does >>> >> >> > 3. I like the idea of `#x` desugaring into `field (Proxy :: Proxy "x")`, >>> > but I don't like the concrete symbol choice: >>> > - # is a valid operator and a bunch of libraries use it, so it won't >>> > be compatible with existing code. >>> >>> Ah. I didn't realise that, but assumed it was safe behind -XMagicHash. >>> Yes, that's no good. >>> >>> > - @x might be a better choice; then you could write things like: >>> > view @x rec >>> > set @x 3 rec >>> > over @x (+2) rec >>> >>> This could work, though it has the downside that we've been informally >>> using @ for explicit type application for a long time! Does anyone know >>> what the status of the proposed ExplicitTypeApplication extension is? >> >> >> I'll confess I've been keen on stealing @foo for the purpose of (Proxy :: >> Proxy foo) or (Proxy :: Proxy "foo") from the type application stuff for a >> long time -- primarily because I remain rather dubious about how well the >> type application stuff can work, once you take a type and it goes through a >> usage/generalization cycle, the order of the types you can "apply" gets all >> jumbled up, making type application very difficult to actually use. Proxies >> on the other hand remain stable. I realize that I'm probably on the losing >> side of that debate, however. But I think it is fair to say that that >> little bit of dangling syntax will be a bone that is heavily fought over. ;) >> >> > - another nice idea (due to Eric Mertens, aka glguy), which allows us >>> > to avoid additional special syntax is as follows: >>> > - instead of using special syntax, reuse the module system >>> > - designate a "magic" module name (e.g., GHC.Records) >>> > - when the renamer sees a name imported from that module, it >>> > "resolves" the name by desugaring it into whatever we want >>> > - For example, if `GHC.Records.x` desugars into `field (Proxy :: >>> > Proxy "x")`, we could write things like this: >>> > >>> > import GHC.Records as R >>> > >>> > view R.x rec >>> > set R.x 3 rec >>> > over R.x (+2) rec >>> >>> Interesting; I think Edward suggested something similar earlier in this >>> thread. Avoiding a special syntax is a definite advantage, but the need >>> for a qualified name makes composing the resulting lenses a bit tiresome >>> (R.x.R.y.R.z or R.x . R.y . R.z). 
I suppose one could do >>> >>> import GHC.Records (x, y, z) >>> import MyModule hiding (x, y, z) >>> >>> but having to manually hide the selector functions and bring into scope >>> the lenses is also annoying. >> >> >> In the suggestion I made as a (c) option for how to proceed around field >> names a few posts back in this thread I was hinting towards having an >> explicit use of {| foo :: x |} somewhere in the module provide an implicit >> import of >> >> import Field (foo) >> >> then users can always reference Field.foo explicitly if they don't have >> such in local scope, and names all share a common source. >> >> Of course this was in the context a Nikita style {| ... |} rather than >> the ORF { .. }. >> >> If the Nikita records didn't make an accessor, because there's no way for >> them to really do so, then there'd be nothing to conflict with. >> >> Being able to use import and use them with ORF-style records would just >> be gravy then. Users would be able to get those out of the box. >> >> -Edward >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> >> > _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://www.haskell.org/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From daniel.trstenjak at gmail.com Sat Jan 24 13:07:05 2015 From: daniel.trstenjak at gmail.com (Daniel Trstenjak) Date: Sat, 24 Jan 2015 14:07:05 +0100 Subject: GHC support for the new "record" package In-Reply-To: References: <54BFD870.7070602@gmail.com> <54C21FBF.4020809@gmail.com> <54C2219D.2080103@well-typed.com> <54C2C5EE.9030100@well-typed.com> Message-ID: <20150124130705.GA2878@machine> Hi Konstantine, > let bar' = bar{foo{val}=10} If you're inside a record context you might just have something like: let bar' = bar { foo.val = 10 } and let val = bar { foo.val } or even let bar' = bar { foo.val %= someFunction } This just seems to be some kind of syntactic sugar, so it's most likely less powerful than real lenses. Greetings, Daniel From marlowsd at gmail.com Sat Jan 24 20:37:04 2015 From: marlowsd at gmail.com (Simon Marlow) Date: Sat, 24 Jan 2015 20:37:04 +0000 Subject: GHC support for the new "record" package In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF562B4C2D@DB3PRD3001MB020.064d.mgd.msft.net> References: <54BECC45.6010906@gmail.com> <54BF7A07.50502@well-typed.com> <54BFD870.7070602@gmail.com> <54C21FBF.4020809@gmail.com> <54C2219D.2080103@well-typed.com> <54C2C5EE.9030100@well-typed.com> <618BE556AADD624C9C918AA5D5911BEF562B4C2D@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <54C40270.8060103@gmail.com> On 23/01/15 23:41, Simon Peyton Jones wrote: > | I just > | noticed that it effectively gives us a syntax for identifier-like Symbol > | singletons, which could be useful in completely different contexts > > Indeed so. I have written a major increment to > https://ghc.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields/Redesign > which people reading this thread may find interesting. Look for "Plan B". > > For the first time I think I can see a nice, simple, elegant, orthogonal story. Cunning, and very general. I like it. Cheers, Simon From karel.gardas at centrum.cz Sat Jan 24 21:54:43 2015 From: karel.gardas at centrum.cz (Karel Gardas) Date: Sat, 24 Jan 2015 22:54:43 +0100 Subject: SPARC NCG, how to debug load isn issue. 
Message-ID: <54C414A3.4090607@centrum.cz> Folks, from time to time I'm attempting to resurrect SPARC NCG. It looks like it's off by default since 7.4? release and I feel it's kind of pity. I've been able to hack it on 7.6.x and make it functional. I failed on 7.8 and later. Double float load was broken there. Now, I'm attempting on fairly recent GHC HEAD as of Jan 17 and I do have problem with illegal isn generated into the binary. This is caused by LD II64 ... Instr to be translated to SPARC ldd ,g1 where g1 reg is not even, but odd and this fails as spec. says: " The load doubleword integer instructions (LDD, LDDA) move a doubleword from memory into an r register pair. The more significant word at the effective memory address is moved into the even r register. The less significant word (at the effective memory address + 4) is moved into the following odd r register. (Note that a load doubleword with rd = 0 modifies only r[1].) The least significant bit of the rd field is unused and should be set to zero by software. An attempt to execute a load doubleword instruction that refers to a mis-aligned (odd) destination register number may cause an illegal_instruction trap. " I've found out that the problematic source code is HeapStackCheck.cmm and the problematic piece is: if (Capability_context_switch(MyCapability()) != 0 :: CInt || Capability_interrupt(MyCapability()) != 0 :: CInt || (StgTSO_alloc_limit(CurrentTSO) `lt` (0::I64) && (TO_W_(StgTSO_flags(CurrentTSO)) & TSO_ALLOC_LIMIT) != 0)) { ret = ThreadYielding; goto sched; This "(0::I64)" causes it. So that's the problem description. Now I'm attempting to debug it a little bit to find out where the LD II64 Instr is generated and I'm not able to find single place which would looks familiar with asm I get here: .Lcq: ld [%i1+812],%g1 ldd [%g1+64],%g1 cmp %g1,0 bge .Lcs nop b .Lcr nop more importantly when I look into sparc's version on mkLoadInstr, I don't see any way how it may generate LD II64: sparc_mkLoadInstr dflags reg _ slot = let platform = targetPlatform dflags off = spillSlotToOffset dflags slot off_w = 1 + (off `div` 4) sz = case targetClassOfReg platform reg of RcInteger -> II32 RcFloat -> FF32 RcDouble -> FF64 _ -> panic "sparc_mkLoadInstr" in LD sz (fpRel (- off_w)) reg In whole SPARC NCG I've found the only place which clearly uses LD II64 and this is in Gen32.hs for loading literal float into reg: getRegister (CmmLit (CmmFloat d W64)) = do lbl <- getNewLabelNat tmp <- getNewRegNat II32 let code dst = toOL [ LDATA ReadOnlyData $ Statics lbl [CmmStaticLit (CmmFloat d W64)], SETHI (HI (ImmCLbl lbl)) tmp, LD II64 (AddrRegImm tmp (LO (ImmCLbl lbl))) dst] return (Any FF64 code) It's interesting but also iselExpr64 which should be probably here for manipulating 64bit data on 32bit platform, so even this is using pairs of LD II32 Instrs instead of single LD II64.... So I'm kind of out of idea where the LD II64 gets in the flow and is later translated into ldd with problematic reg. Do you have any idea how to debug this issue? Or do you have any idea where to read more about general structure of NCG, I've already seen https://ghc.haskell.org/trac/ghc/wiki/Commentary/Compiler/Backends/NCG -- but this is kind of dated... Thanks for any idea how to proceed! 
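For what it's worth, a purely illustrative sketch of the alternative mentioned above -- expressing a 64-bit integer load as a pair of 32-bit loads into two independent registers, so the even/odd destination-pair rule of ldd never applies. The types and names below are invented for illustration and are not the real NCG data types:

-- Hypothetical types, not GHC's SPARC NCG representation.
data Reg   = Reg Int             deriving Show
data Addr  = AddrRegOff Reg Int  deriving Show   -- base register plus offset
data Instr = LD32 Addr Reg       deriving Show   -- one 32-bit integer load

-- Load the 64-bit value at (base + off) into two registers: the more
-- significant word from the effective address, the less significant word
-- from effective address + 4, mirroring LDD's semantics without its
-- register-pair constraint.
load64 :: Reg -> Int -> Reg -> Reg -> [Instr]
load64 base off rHi rLo =
  [ LD32 (AddrRegOff base off)       rHi
  , LD32 (AddrRegOff base (off + 4)) rLo
  ]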
Karel From juhpetersen at gmail.com Sun Jan 25 10:36:46 2015 From: juhpetersen at gmail.com (Jens Petersen) Date: Sun, 25 Jan 2015 19:36:46 +0900 Subject: Build failure under Fedora 21 In-Reply-To: References: <54B24AA9.3060503@ro-che.info> <54B38702.9030505@ro-che.info> Message-ID: Hi Dominic, On 23 January 2015 at 03:14, Dominick Samperi wrote: > The best way to move beyond the somewhat dated version of the Haskell > Platform that is distributed with Fedora 21 is to install the version > available at http://www.haskell.org/platform instead of the version > installed by yum. Okay, or if you only need a pandoc executable you can use my pandoc Fedora Copr repo: https://copr.fedoraproject.org/coprs/petersen/pandoc/ It is built with ghc-7.6.3 and a cabal-install-1.20 sandbox. We are currently working on updating the Fedora Rawhide development tree to ghc-7.8.4, haskell-platform-2014.2, pandoc-1.13, etc. Jens From svenpanne at gmail.com Sun Jan 25 11:27:46 2015 From: svenpanne at gmail.com (Sven Panne) Date: Sun, 25 Jan 2015 12:27:46 +0100 Subject: Put "Error:" before error output In-Reply-To: References: Message-ID: 2015-01-23 12:55 GMT+01:00 Konstantine Rybnikov : > If warnings will be treated as errors it's fine to have "Error:" shown for > them, I think. +1 for this, it is how e.g. gcc behaves with -Werror, too. So unless there is a compelling reason to do things differently (which I don't see here), I would just follow conventional behavior instead of being creative. :-) From adam at well-typed.com Mon Jan 26 08:52:45 2015 From: adam at well-typed.com (Adam Gundry) Date: Mon, 26 Jan 2015 08:52:45 +0000 Subject: GHC support for the new "record" package In-Reply-To: References: <54BECC45.6010906@gmail.com> <54BF7A07.50502@well-typed.com> <54BFD870.7070602@gmail.com> <54C21FBF.4020809@gmail.com> <54C2219D.2080103@well-typed.com> <54C2C5EE.9030100@well-typed.com> Message-ID: <54C6005D.6000807@well-typed.com> Hi Konstantine, On 24/01/15 11:55, Konstantine Rybnikov wrote: > May I suggest something for a syntax (as an option, sorry if it's silly > or not related)? I really don't like neither "@" or "#" because they > seem too hacky, meanwhile GHC already has an "accessor" syntax with > braces { and }, so, might it be an option to have something like: > > ``` > data Foo = Foo { val :: Int } > data Bar = Bar { foo :: Foo } > > main = do > let bar = Bar (Foo 10) > print bar{foo{val}} > let bar' = bar{foo{val}=10} > return () > > ``` > > I think this syntax is 100% understandable for a "newbie". Not sure how > is it related to lenses though. > > What do you think? Thanks for thinking about this problem (we certainly need fresh ideas!) but unfortunately this syntax is already taken by NamedFieldPuns, which interprets foo{val} ==> foo{val = val} so I don't think we can easily use it. I'm still keen to find a better solution than # or magic imports, though! 
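For reference, a small standalone example of that NamedFieldPuns reading (the type and function names are just for illustration):

{-# LANGUAGE NamedFieldPuns #-}

data Foo = Foo { val :: Int }

-- In an update, foo { val } abbreviates foo { val = val }: the new value
-- comes from the like-named variable in scope, so the syntax is already
-- spoken for.
setVal :: Int -> Foo -> Foo
setVal val foo = foo { val }

-- In a pattern, Foo { val } likewise binds the field to a variable val.
getVal :: Foo -> Int
getVal (Foo { val }) = val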
Adam -- Adam Gundry, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From adam at well-typed.com Mon Jan 26 09:20:25 2015 From: adam at well-typed.com (Adam Gundry) Date: Mon, 26 Jan 2015 09:20:25 +0000 Subject: GHC support for the new "record" package In-Reply-To: References: <54BECC45.6010906@gmail.com> <54BF7A07.50502@well-typed.com> <54BFD870.7070602@gmail.com> <54C21FBF.4020809@gmail.com> <54C2219D.2080103@well-typed.com> <54C2C5EE.9030100@well-typed.com> <618BE556AADD624C9C918AA5D5911BEF562B4C2D@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <54C606D9.9030902@well-typed.com> Thanks, Simon, I think we're starting to find a nice story indeed. Your implicit values idea is what I was starting to grope towards with IsRecordField, but I hadn't spotted the similarity to implicit parameters. Like Neil, I still think it's worth separating the magically-solved typeclass that describes when a field belongs to a record (aka HasField) from the typeclass that explains how to interpret the #x syntax (aka IsRecordField/IV). In particular, with your plan the composition of two implicit values ends up having an ambiguous type: #foo . #bar :: (IV "foo" (b -> c), IV "bar" (a -> b)) => a -> c We need to be able to express that the codomain is functionally dependent on the domain (or vice versa, for van Laarhoven lenses), which I think entails having some instance ... => IV n (r -> a). Moreover, seeing as we're so close, it would be nice if we come out of this with a way to get lenses "for free" from fields, rather than needing TH. All this brings us back to the question of which instance(s) to have for (->), if any. I think Neil's suggestion for avoiding orphans is feasible... in fact, I believe the only real conflict is between instance IV n (r -> a ) instance IV n ((a -> f b) -> s -> f t) so we could probably just have two extensions to fix an additional parameter at one of two values. Although I'm not keen on the aesthetics. Perhaps we should just vote on whether preferential treatment is given to selectors or lenses, and pick one? Adam On 24/01/15 07:45, Neil Mitchell wrote: > Hi All, > > I fixed a missing "x" in one of the instances. I like the proposal, > mostly because it has nothing to do with records, leaving people to > experiment with records in libraries. > > I'm not keen on the use of Template Haskell to define lenses, and the > fact that all base libraries are going to need custom makeLens > definitions set apart from their definitions, plus IV is rather > "wired" into record selectors, which can't be undone later. I think we > can fix some of that by desugaring record definitions to: > > data T = MkT {x :: Int} > > instance FieldSelector "T" T Int where > fieldSelector (MkT x) = x > > Then someone can, in a library, define: > > instance FieldSelector x r a => IV x (r -> a) where > iv = fieldSelector > > Now that records don't mention IV, we are free to provide lots of > different instances, each capturing some properties of each field, > without committing to any one style of lens at this point. Therefore, > we could have record desugaring also produce: > > instance FieldSetter "T" T Int where > fieldSet v (T _) = T v > > And also: > > instance FieldSTAB "T" T Int where > fieldSTAB = ... the stab lens ... > > (As we find new interesting types of operations over a field, with > different levels of polymorphism etc, we can keep adding new ones > without breaking compatibility, and most users won't care. Prototyping > new ones in Template Haskell is still probably a good idea.) 
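A rough sketch of the separation argued for here: a compiler-solved HasField-style class carrying the functional dependency, with the selector reading of #n layered on top as a single IV instance. The names and exact formulation are my guesses for illustration, not a worked-out design:

{-# LANGUAGE DataKinds, KindSignatures, MultiParamTypeClasses,
             FunctionalDependencies, FlexibleInstances, ScopedTypeVariables #-}
import Data.Proxy (Proxy(..))
import GHC.TypeLits (Symbol)

-- "Record type r has a field n of type a"; instances would be produced
-- (or solved magically) by the compiler for every record field.
class HasField (n :: Symbol) r a | n r -> a where
  getField :: Proxy n -> r -> a

-- The class behind the #n syntax: #n stands for (iv :: t) at whatever
-- type t the context demands.
class IV (n :: Symbol) t where
  iv :: t

-- Interpreting #n as a plain selector.  The functional dependency on
-- HasField lets the result type be determined from the argument type,
-- addressing the ambiguity described above for #foo . #bar.
instance HasField n r a => IV n (r -> a) where
  iv = getField (Proxy :: Proxy n)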
Now > someone can define, in a record library: > > instance (FieldSelector x r a, FieldSetter x r a) => IV x (Lens r a) where > iv = makeLens fieldSelector fieldSet > > Or, for people who want #x to be a STAB lens directly (without a Lens > type wrapper), they can omit the IV x (r -> a) instance, and only have > #x have instances producing the STAB lens. > > The one downside of this plan is orphan instances, which means if you > are writing a library and use one type of IV declaration (the selector > one), then anyone else building on your library won't be able to use a > different type of IV (the stab one). One potential way to fix that is > to parameterise IV, so you can say (warning, even more half-baked > thoughts ahead): > > {-# LANGUAGE ImplicitValues=MyType #-} > > Where MyType is a type I've defined in one of my imports, and then > desugar #x to: > > iv @ "x" @ MyType @ alpha > > And extend IV with an extra type parameter. Now all the Lens library > IV instances can include LensType, and people can mix and match > different record schemes in different modules. > > Separately, the pattern: data T = ...; $(makeLens 'T) crops up a lot, > and is gently ugly. I wonder if there should be an extension that > let's you write: data T = ... deriving ('makeLens), or even just > deriving (Lens) which desugars to the same thing? > > Thanks, Neil -- Adam Gundry, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From simonpj at microsoft.com Mon Jan 26 12:24:37 2015 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 26 Jan 2015 12:24:37 +0000 Subject: Put "Error:" before error output In-Reply-To: References: Message-ID: <618BE556AADD624C9C918AA5D5911BEF562B697D@DB3PRD3001MB020.064d.mgd.msft.net> There has been quite a long email trail about this. Would someone like to summarise in a Trac ticket feature request? Assuming there's a consensus, would anyone like to offer a patch? Should not hard. There will be an annoying cutover problem, because many regression tests will start failing. Anyone offering the patch would need to include all of those. Of course it'll affect other packages too -- hence the need for consensus. Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Sven | Panne | Sent: 25 January 2015 11:28 | To: Konstantine Rybnikov | Cc: ghc-devs at haskell.org | Subject: Re: Put "Error:" before error output | | 2015-01-23 12:55 GMT+01:00 Konstantine Rybnikov : | > If warnings will be treated as errors it's fine to have "Error:" shown | for | > them, I think. | | +1 for this, it is how e.g. gcc behaves with -Werror, too. So unless | there is a compelling reason to do things differently (which I don't | see here), I would just follow conventional behavior instead of being | creative. 
:-) | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | http://www.haskell.org/mailman/listinfo/ghc-devs From simonpj at microsoft.com Mon Jan 26 12:24:40 2015 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 26 Jan 2015 12:24:40 +0000 Subject: GHC support for the new "record" package In-Reply-To: References: <54BECC45.6010906@gmail.com> <54BF7A07.50502@well-typed.com> <54BFD870.7070602@gmail.com> <54C21FBF.4020809@gmail.com> <54C2219D.2080103@well-typed.com> <54C2C5EE.9030100@well-typed.com> <618BE556AADD624C9C918AA5D5911BEF562B4C2D@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <618BE556AADD624C9C918AA5D5911BEF562B69AA@DB3PRD3001MB020.064d.mgd.msft.net> I really like this proposal (except I would bike shed about the syntax for record selector to be dot, like in the majority of languages.) In other languages dot is a binary operator, so that r.x selects the x field from record r. In this proposal #x is a unary operator (more of a lexical modifier really), which returns a first-class composable value for the field. Only later does it meet a record. This is a big difference. (Moreover, debating dot has proved a graveyard for previous discussions.) I?m glad you like the base proposal. Simon From: Johan Tibell [mailto:johan.tibell at gmail.com] Sent: 24 January 2015 01:05 To: Simon Peyton Jones Cc: Adam Gundry; Iavor Diatchki; Simon Marlow; ghc-devs at haskell.org Subject: Re: GHC support for the new "record" package I really like this proposal (except I would bike shed about the syntax for record selector to be dot, like in the majority of languages.) On Fri, Jan 23, 2015 at 3:41 PM, Simon Peyton Jones > wrote: | I just | noticed that it effectively gives us a syntax for identifier-like Symbol | singletons, which could be useful in completely different contexts Indeed so. I have written a major increment to https://ghc.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields/Redesign which people reading this thread may find interesting. Look for "Plan B". For the first time I think I can see a nice, simple, elegant, orthogonal story. Simon _______________________________________________ ghc-devs mailing list ghc-devs at haskell.org http://www.haskell.org/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From k-bx at k-bx.com Mon Jan 26 13:42:32 2015 From: k-bx at k-bx.com (Konstantine Rybnikov) Date: Mon, 26 Jan 2015 15:42:32 +0200 Subject: Put "Error:" before error output In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF562B697D@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF562B697D@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: I'd like to try and do both, since this shouldn't be hard and doesn't look like top-priority. On Mon, Jan 26, 2015 at 2:24 PM, Simon Peyton Jones wrote: > There has been quite a long email trail about this. Would someone like to > summarise in a Trac ticket feature request? > > Assuming there's a consensus, would anyone like to offer a patch? Should > not hard. > > There will be an annoying cutover problem, because many regression tests > will start failing. Anyone offering the patch would need to include all of > those. Of course it'll affect other packages too -- hence the need for > consensus. 
> > Simon > > | -----Original Message----- > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Sven > | Panne > | Sent: 25 January 2015 11:28 > | To: Konstantine Rybnikov > | Cc: ghc-devs at haskell.org > | Subject: Re: Put "Error:" before error output > | > | 2015-01-23 12:55 GMT+01:00 Konstantine Rybnikov : > | > If warnings will be treated as errors it's fine to have "Error:" shown > | for > | > them, I think. > | > | +1 for this, it is how e.g. gcc behaves with -Werror, too. So unless > | there is a compelling reason to do things differently (which I don't > | see here), I would just follow conventional behavior instead of being > | creative. :-) > | _______________________________________________ > | ghc-devs mailing list > | ghc-devs at haskell.org > | http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Mon Jan 26 13:50:44 2015 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 26 Jan 2015 13:50:44 +0000 Subject: GHC support for the new "record" package In-Reply-To: References: <54BECC45.6010906@gmail.com> <54BF7A07.50502@well-typed.com> <54BFD870.7070602@gmail.com> <54C21FBF.4020809@gmail.com> <54C2219D.2080103@well-typed.com> <54C2C5EE.9030100@well-typed.com> <618BE556AADD624C9C918AA5D5911BEF562B4C2D@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <618BE556AADD624C9C918AA5D5911BEF562B6EA9@DB3PRD3001MB020.064d.mgd.msft.net> | "wired" into record selectors, which can't be undone later. I think we | can fix some of that by desugaring record definitions to: | | data T = MkT {x :: Int} | | instance FieldSelector "T" T Int where | fieldSelector (MkT x) = x | | Then someone can, in a library, define: | | instance FieldSelector x r a => IV x (r -> a) where | iv = fieldSelector | | Now that records don't mention IV, we are free to provide lots of | different instances, each capturing some properties of each field, | without committing to any one style of lens at this point. Therefore, | we could have record desugaring also produce: | | instance FieldSetter "T" T Int where | fieldSet v (T _) = T v | | And also: | | instance FieldSTAB "T" T Int where | fieldSTAB = ... the stab lens ... OK, I buy this. We generate FieldSelector instances where possible, and FieldSetter instances where possible (fewer cases). Fine. Cutting to the chase, if we are beginning to converge, could someone (Adam, Neil?) modify the Redesign page https://ghc.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields/Redesign to focus on plan B only; and add this FieldGetter/Setter stuff? It's confusing when we have too many things in play. I'm sick at the moment, so I'm going home to bed -- hence handing off in a hopeful way to you two. I have added Edwards "import Field(x)" suggestion under syntax, although I don't really like it. One last thing: Edward, could you live with lenses coming from #x being of a newtype (Lens a b), or stab variant, rather than actually being a higher rank function etc? Of course lens composition would no longer be function composition, but that might not be so terrible; ".." perhaps. It would make error messages vastly more perspicuous. And, much as I love lenses, I think it's a mistake not to abstraction; it dramatically limits your future wiggle room. I really think we are finally converging. Simon From gale at sefer.org Mon Jan 26 16:42:38 2015 From: gale at sefer.org (Yitzchak Gale) Date: Mon, 26 Jan 2015 18:42:38 +0200 Subject: American vs. 
British English In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF562A78FF@DB3PRD3001MB020.064d.mgd.msft.net> References: <201501161119.07912.jan.stolarek@p.lodz.pl> <618BE556AADD624C9C918AA5D5911BEF562A78FF@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: Even though my native English is the U.S. variety, I still haven't gotten used to writing {-# LANGUAGE GeneralizedNewtypeDeriving #-} It's a constant compiler error for me. I'm just so accustomed to the idea that in the Haskell world, U.K. spelling and usage are the norm. Would it be difficult to add the other spelling as an alias? Just my two cents, err, tuppence, err, whatever. -Yitz On Fri, Jan 16, 2015 at 12:26 PM, Simon Peyton Jones wrote: > We don't have a solid policy. Personally I prefer English, but then I would. > > Simon > > | -----Original Message----- > | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Jan > | Stolarek > | Sent: 16 January 2015 10:19 > | To: ghc-devs at haskell.org > | Subject: American vs. British English > | > | I just realized GHC has data types named FamFlavor and FamFlavour. > | That said, is there a policy that says which English should be used in > | the source code? > | > | Janek > | > | _______________________________________________ > | ghc-devs mailing list > | ghc-devs at haskell.org > | http://www.haskell.org/mailman/listinfo/ghc-devs > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs From m at tweag.io Mon Jan 26 18:17:28 2015 From: m at tweag.io (Boespflug, Mathieu) Date: Mon, 26 Jan 2015 19:17:28 +0100 Subject: American vs. British English In-Reply-To: References: <201501161119.07912.jan.stolarek@p.lodz.pl> <618BE556AADD624C9C918AA5D5911BEF562A78FF@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: FWIW, even the British can't entirely make up their mind about whether to -ize or to -ise: http://blog.oxforddictionaries.com/2011/03/ize-or-ise/ The advantage of *not* introducing aliases is that it makes it that much easier to exhaustively test whether some extension is turned on - it means extensions have a canonical name that everyone uses. On 26 January 2015 at 17:42, Yitzchak Gale wrote: > Even though my native English is the U.S. > variety, I still haven't gotten used to writing > > {-# LANGUAGE GeneralizedNewtypeDeriving #-} > > It's a constant compiler error for me. I'm just so accustomed > to the idea that in the Haskell world, U.K. spelling and usage > are the norm. > > Would it be difficult to add the other spelling as an alias? > > Just my two cents, err, tuppence, err, whatever. > -Yitz > > On Fri, Jan 16, 2015 at 12:26 PM, Simon Peyton Jones > wrote: >> We don't have a solid policy. Personally I prefer English, but then I would. >> >> Simon >> >> | -----Original Message----- >> | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of Jan >> | Stolarek >> | Sent: 16 January 2015 10:19 >> | To: ghc-devs at haskell.org >> | Subject: American vs. British English >> | >> | I just realized GHC has data types named FamFlavor and FamFlavour. >> | That said, is there a policy that says which English should be used in >> | the source code? 
>> | >> | Janek >> | >> | _______________________________________________ >> | ghc-devs mailing list >> | ghc-devs at haskell.org >> | http://www.haskell.org/mailman/listinfo/ghc-devs >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs From greg at gregweber.info Mon Jan 26 18:57:12 2015 From: greg at gregweber.info (Greg Weber) Date: Mon, 26 Jan 2015 10:57:12 -0800 Subject: American vs. British English In-Reply-To: References: <201501161119.07912.jan.stolarek@p.lodz.pl> <618BE556AADD624C9C918AA5D5911BEF562A78FF@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: Best would be spelling suggestion for language extensions much like we get for other names. On Mon, Jan 26, 2015 at 10:17 AM, Boespflug, Mathieu wrote: > FWIW, even the British can't entirely make up their mind about whether > to -ize or to -ise: > > http://blog.oxforddictionaries.com/2011/03/ize-or-ise/ > > The advantage of *not* introducing aliases is that it makes it that > much easier to exhaustively test whether some extension is turned on - > it means extensions have a canonical name that everyone uses. > > On 26 January 2015 at 17:42, Yitzchak Gale wrote: > > Even though my native English is the U.S. > > variety, I still haven't gotten used to writing > > > > {-# LANGUAGE GeneralizedNewtypeDeriving #-} > > > > It's a constant compiler error for me. I'm just so accustomed > > to the idea that in the Haskell world, U.K. spelling and usage > > are the norm. > > > > Would it be difficult to add the other spelling as an alias? > > > > Just my two cents, err, tuppence, err, whatever. > > -Yitz > > > > On Fri, Jan 16, 2015 at 12:26 PM, Simon Peyton Jones > > wrote: > >> We don't have a solid policy. Personally I prefer English, but then I > would. > >> > >> Simon > >> > >> | -----Original Message----- > >> | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of > Jan > >> | Stolarek > >> | Sent: 16 January 2015 10:19 > >> | To: ghc-devs at haskell.org > >> | Subject: American vs. British English > >> | > >> | I just realized GHC has data types named FamFlavor and FamFlavour. > >> | That said, is there a policy that says which English should be used > in > >> | the source code? > >> | > >> | Janek > >> | > >> | _______________________________________________ > >> | ghc-devs mailing list > >> | ghc-devs at haskell.org > >> | http://www.haskell.org/mailman/listinfo/ghc-devs > >> _______________________________________________ > >> ghc-devs mailing list > >> ghc-devs at haskell.org > >> http://www.haskell.org/mailman/listinfo/ghc-devs > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://www.haskell.org/mailman/listinfo/ghc-devs > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ekmett at gmail.com Mon Jan 26 20:22:16 2015 From: ekmett at gmail.com (Edward Kmett) Date: Mon, 26 Jan 2015 15:22:16 -0500 Subject: GHC support for the new "record" package In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF562B6EA9@DB3PRD3001MB020.064d.mgd.msft.net> References: <54BECC45.6010906@gmail.com> <54BF7A07.50502@well-typed.com> <54BFD870.7070602@gmail.com> <54C21FBF.4020809@gmail.com> <54C2219D.2080103@well-typed.com> <54C2C5EE.9030100@well-typed.com> <618BE556AADD624C9C918AA5D5911BEF562B4C2D@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF562B6EA9@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: Personally, I don't like the sigil mangled version at all. If it is then further encumbered by a combinator it is now several symbols longer at every single use site than other alternatives put forth in this thread. =( xx #bar . xx #baz or xx @bar . xx @baz compares badly enough against bar.baz for some as yet unnamed combinator xx and is a big enough tax for all users to unavoidably pay that I fear it would greatly hinder adoption. The former also has the disadvantage of stealing an operator that is already in wide use. Even assuming the fixity issues can be worked out for some other set of operators to glue these tother we're still looking at x^!? #bar!? #baz vs. x^.bar.baz with another set of arcane rules to switch back and forth out of this to deal with the lenses/traversals/prisms/etc that many folks have in their code today. It is something like 3 extra sets of symbols to memorize plus a tax of 3 characters per lens use site. I know that I for one would hesitate to throw over my template haskell generated lenses for something that was noisier at every use site. For all that lenses are complex internally, they are a lot less arbitrary than that. The import Field trick is magic, yes, but it has the benefit of being the first approach I've seen where the resulting syntax can be as light as what the user can generate by hand today. -Edward On Mon, Jan 26, 2015 at 8:50 AM, Simon Peyton Jones wrote: > | "wired" into record selectors, which can't be undone later. I think we > | can fix some of that by desugaring record definitions to: > | > | data T = MkT {x :: Int} > | > | instance FieldSelector "T" T Int where > | fieldSelector (MkT x) = x > | > | Then someone can, in a library, define: > | > | instance FieldSelector x r a => IV x (r -> a) where > | iv = fieldSelector > | > | Now that records don't mention IV, we are free to provide lots of > | different instances, each capturing some properties of each field, > | without committing to any one style of lens at this point. Therefore, > | we could have record desugaring also produce: > | > | instance FieldSetter "T" T Int where > | fieldSet v (T _) = T v > | > | And also: > | > | instance FieldSTAB "T" T Int where > | fieldSTAB = ... the stab lens ... > > OK, I buy this. > > We generate FieldSelector instances where possible, and FieldSetter > instances where possible (fewer cases). > > Fine. > > > > Cutting to the chase, if we are beginning to converge, could someone > (Adam, Neil?) modify the Redesign page > https://ghc.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields/Redesign > to focus on plan B only; and add this FieldGetter/Setter stuff? > > It's confusing when we have too many things in play. I'm sick at the > moment, so I'm going home to bed -- hence handing off in a hopeful way to > you two. > > I have added Edwards "import Field(x)" suggestion under syntax, although I > don't really like it. 
> > One last thing: Edward, could you live with lenses coming from #x being of > a newtype (Lens a b), or stab variant, rather than actually being a higher > rank function etc? Of course lens composition would no longer be function > composition, but that might not be so terrible; ".." perhaps. It would > make error messages vastly more perspicuous. And, much as I love lenses, I > think it's a mistake not to abstraction; it dramatically limits your future > wiggle room. > > > > I really think we are finally converging. > > Simon > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nikita.y.volkov at mail.ru Mon Jan 26 21:18:13 2015 From: nikita.y.volkov at mail.ru (Nikita Volkov) Date: Tue, 27 Jan 2015 00:18:13 +0300 Subject: GHC support for the new "record" package In-Reply-To: References: <54BECC45.6010906@gmail.com> <54BF7A07.50502@well-typed.com> <54BFD870.7070602@gmail.com> <54C21FBF.4020809@gmail.com> <54C2219D.2080103@well-typed.com> <54C2C5EE.9030100@well-typed.com> <618BE556AADD624C9C918AA5D5911BEF562B4C2D@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF562B6EA9@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: Edward, I think the point that Simon was making was that sweetened expressions like `#bar` could directly instantiate lenses if only Lens was a distinct type, instead of an alias to a function. I.e., it would be `#bar . #baz`. Introducing such a change would, of course, exclude the possibility of making "lens"-compatible libraries without depending on "lens", like I did with "record". However the most basic functionality of "lens" could be extracted into a separate library with a minimum of transitive dependencies, then the reasons for not depending on it would simply dissolve. 2015-01-26 23:22 GMT+03:00 Edward Kmett : > Personally, I don't like the sigil mangled version at all. > > If it is then further encumbered by a combinator it is now several symbols > longer at every single use site than other alternatives put forth in this > thread. =( > > xx #bar . xx #baz > > or > > xx @bar . xx @baz > > compares badly enough against > > bar.baz > > for some as yet unnamed combinator xx and is a big enough tax for all > users to unavoidably pay that I fear it would greatly hinder adoption. > > The former also has the disadvantage of stealing an operator that is > already in wide use. > > Even assuming the fixity issues can be worked out for some other set of > operators to glue these tother we're still looking at > > x^!? #bar!? #baz > > vs. > > x^.bar.baz > > with another set of arcane rules to switch back and forth out of this to > deal with the lenses/traversals/prisms/etc that many folks have in their > code today. > > It is something like 3 extra sets of symbols to memorize plus a tax of 3 > characters per lens use site. > > I know that I for one would hesitate to throw over my template haskell > generated lenses for something that was noisier at every use site. For all > that lenses are complex internally, they are a lot less arbitrary than that. > > The import Field trick is magic, yes, but it has the benefit of being the > first approach I've seen where the resulting syntax can be as light as what > the user can generate by hand today. 
> > -Edward > > On Mon, Jan 26, 2015 at 8:50 AM, Simon Peyton Jones > wrote: > >> | "wired" into record selectors, which can't be undone later. I think we >> | can fix some of that by desugaring record definitions to: >> | >> | data T = MkT {x :: Int} >> | >> | instance FieldSelector "T" T Int where >> | fieldSelector (MkT x) = x >> | >> | Then someone can, in a library, define: >> | >> | instance FieldSelector x r a => IV x (r -> a) where >> | iv = fieldSelector >> | >> | Now that records don't mention IV, we are free to provide lots of >> | different instances, each capturing some properties of each field, >> | without committing to any one style of lens at this point. Therefore, >> | we could have record desugaring also produce: >> | >> | instance FieldSetter "T" T Int where >> | fieldSet v (T _) = T v >> | >> | And also: >> | >> | instance FieldSTAB "T" T Int where >> | fieldSTAB = ... the stab lens ... >> >> OK, I buy this. >> >> We generate FieldSelector instances where possible, and FieldSetter >> instances where possible (fewer cases). >> >> Fine. >> >> >> >> Cutting to the chase, if we are beginning to converge, could someone >> (Adam, Neil?) modify the Redesign page >> https://ghc.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields/Redesign >> to focus on plan B only; and add this FieldGetter/Setter stuff? >> >> It's confusing when we have too many things in play. I'm sick at the >> moment, so I'm going home to bed -- hence handing off in a hopeful way to >> you two. >> >> I have added Edwards "import Field(x)" suggestion under syntax, although >> I don't really like it. >> >> One last thing: Edward, could you live with lenses coming from #x being >> of a newtype (Lens a b), or stab variant, rather than actually being a >> higher rank function etc? Of course lens composition would no longer be >> function composition, but that might not be so terrible; ".." perhaps. It >> would make error messages vastly more perspicuous. And, much as I love >> lenses, I think it's a mistake not to abstraction; it dramatically limits >> your future wiggle room. >> >> >> >> I really think we are finally converging. >> >> Simon >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nikita.y.volkov at mail.ru Mon Jan 26 21:23:18 2015 From: nikita.y.volkov at mail.ru (Nikita Volkov) Date: Tue, 27 Jan 2015 00:23:18 +0300 Subject: GHC support for the new "record" package In-Reply-To: References: <54BECC45.6010906@gmail.com> <54BF7A07.50502@well-typed.com> <54BFD870.7070602@gmail.com> <54C21FBF.4020809@gmail.com> <54C2219D.2080103@well-typed.com> <54C2C5EE.9030100@well-typed.com> <618BE556AADD624C9C918AA5D5911BEF562B4C2D@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF562B6EA9@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: Hello everyone! I of course will be honoured to participate in the work on anonymous records, if hopefully we get to it. However as I've seen there's been some debate going on and confusion about whether they are at all worth implementing. That is why I am writing an article titled ?Why anonymous records matter?, which I plan to publish this week. Meanwhile I've updated "record" to support type-changing updates. 
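A minimal, self-contained sketch of what a type-changing update means, written in the functional-dependency style of the lens library's tuple fields (the class and instance below are illustrative only, not the actual "record" API):

    {-# LANGUAGE RankNTypes, MultiParamTypeClasses, FunctionalDependencies,
                 FlexibleInstances #-}
    import Data.Functor.Identity (Identity (..))

    type Lens s t a b = forall f. Functor f => (a -> f b) -> s -> f t

    -- The record type s determines the field type a; supplying a new field
    -- type b determines the updated record type t.
    class Field1 s t a b | s -> a, t -> b, s b -> t, t a -> s where
      _1 :: Lens s t a b

    instance Field1 (a, x) (b, x) a b where
      _1 f (a, x) = fmap (\b -> (b, x)) f a

    -- A type-changing update: the first component goes from Int to String.
    example :: (String, Bool)
    example = runIdentity (_1 (Identity . show) (3 :: Int, True))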
Adam's proposal unfortunately wouldn't work for anonymous records due to type family instance conflicts. What I did was implement something in the spirit of a Functional Dependencies based solution for tuples from Edward's "lens" library. Thanks for inspiration! Also I've decided to redo the "record" project as a full-blown preprocessor with full Haskell syntax support using hacks around "haskell-src-exts", as was once suggested by Chris Done on Reddit. I'm planning to stick to Simon's insightful proposal concerning Implicit Values as much as possible. Hopefully this all will turn the project into a playground for a GHC extension, which people will be able to use with existing GHC releases. However I'm only beginning to work on this. Also, as Adam's already aware, I am exploring anonymous tagged unions. Turns out they share quite a lot of properties with anonymous records and in a similar style solve the constructor namespacing issue. Looks like Implicit Values could come in handy for this as well. Best regards, Nikita 2015-01-27 0:18 GMT+03:00 Nikita Volkov : > Edward, I think the point that Simon was making was that sweetened > expressions like `#bar` could directly instantiate lenses if only Lens was > a distinct type, instead of an alias to a function. I.e., it would be `#bar > . #baz`. > > Introducing such a change would, of course, exclude the possibility of > making "lens"-compatible libraries without depending on "lens", like I did > with "record". However the most basic functionality of "lens" could be > extracted into a separate library with a minimum of transitive > dependencies, then the reasons for not depending on it would simply > dissolve. > > > 2015-01-26 23:22 GMT+03:00 Edward Kmett : > >> Personally, I don't like the sigil mangled version at all. >> >> If it is then further encumbered by a combinator it is now several >> symbols longer at every single use site than other alternatives put forth >> in this thread. =( >> >> xx #bar . xx #baz >> >> or >> >> xx @bar . xx @baz >> >> compares badly enough against >> >> bar.baz >> >> for some as yet unnamed combinator xx and is a big enough tax for all >> users to unavoidably pay that I fear it would greatly hinder adoption. >> >> The former also has the disadvantage of stealing an operator that is >> already in wide use. >> >> Even assuming the fixity issues can be worked out for some other set of >> operators to glue these together we're still looking at >> >> x^!? #bar!? #baz >> >> vs. >> >> x^.bar.baz >> >> with another set of arcane rules to switch back and forth out of this to >> deal with the lenses/traversals/prisms/etc that many folks have in their >> code today. >> >> It is something like 3 extra sets of symbols to memorize plus a tax of 3 >> characters per lens use site. >> >> I know that I for one would hesitate to throw over my template haskell >> generated lenses for something that was noisier at every use site. For all >> that lenses are complex internally, they are a lot less arbitrary than that. >> >> The import Field trick is magic, yes, but it has the benefit of being the >> first approach I've seen where the resulting syntax can be as light as what >> the user can generate by hand today. >> >> -Edward >> >> On Mon, Jan 26, 2015 at 8:50 AM, Simon Peyton Jones < >> simonpj at microsoft.com> wrote: >> >>> | "wired" into record selectors, which can't be undone later.
I think we >>> | can fix some of that by desugaring record definitions to: >>> | >>> | data T = MkT {x :: Int} >>> | >>> | instance FieldSelector "T" T Int where >>> | fieldSelector (MkT x) = x >>> | >>> | Then someone can, in a library, define: >>> | >>> | instance FieldSelector x r a => IV x (r -> a) where >>> | iv = fieldSelector >>> | >>> | Now that records don't mention IV, we are free to provide lots of >>> | different instances, each capturing some properties of each field, >>> | without committing to any one style of lens at this point. Therefore, >>> | we could have record desugaring also produce: >>> | >>> | instance FieldSetter "T" T Int where >>> | fieldSet v (T _) = T v >>> | >>> | And also: >>> | >>> | instance FieldSTAB "T" T Int where >>> | fieldSTAB = ... the stab lens ... >>> >>> OK, I buy this. >>> >>> We generate FieldSelector instances where possible, and FieldSetter >>> instances where possible (fewer cases). >>> >>> Fine. >>> >>> >>> >>> Cutting to the chase, if we are beginning to converge, could someone >>> (Adam, Neil?) modify the Redesign page >>> https://ghc.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields/Redesign >>> to focus on plan B only; and add this FieldGetter/Setter stuff? >>> >>> It's confusing when we have too many things in play. I'm sick at the >>> moment, so I'm going home to bed -- hence handing off in a hopeful way to >>> you two. >>> >>> I have added Edwards "import Field(x)" suggestion under syntax, although >>> I don't really like it. >>> >>> One last thing: Edward, could you live with lenses coming from #x being >>> of a newtype (Lens a b), or stab variant, rather than actually being a >>> higher rank function etc? Of course lens composition would no longer be >>> function composition, but that might not be so terrible; ".." perhaps. It >>> would make error messages vastly more perspicuous. And, much as I love >>> lenses, I think it's a mistake not to abstraction; it dramatically limits >>> your future wiggle room. >>> >>> >>> >>> I really think we are finally converging. >>> >>> Simon >>> _______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://www.haskell.org/mailman/listinfo/ghc-devs >>> >> >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ekmett at gmail.com Mon Jan 26 21:34:37 2015 From: ekmett at gmail.com (Edward Kmett) Date: Mon, 26 Jan 2015 16:34:37 -0500 Subject: GHC support for the new "record" package In-Reply-To: References: <54BECC45.6010906@gmail.com> <54BF7A07.50502@well-typed.com> <54BFD870.7070602@gmail.com> <54C21FBF.4020809@gmail.com> <54C2219D.2080103@well-typed.com> <54C2C5EE.9030100@well-typed.com> <618BE556AADD624C9C918AA5D5911BEF562B4C2D@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF562B6EA9@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: On Mon, Jan 26, 2015 at 4:18 PM, Nikita Volkov wrote: > > I.e., it would be `#bar . #baz`. > Note: once you start using a data-type then (.) necessarily fails to allow you to ever do type changing assignment, due to the shape of Category, or you have to use yet another operator, so that snippet cannot work without giving up something we can do today. 
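A small sketch of the two shapes being contrasted here (the Getter/Setter names are made up for illustration; Lens is the usual van Laarhoven alias):

    {-# LANGUAGE RankNTypes #-}
    import Control.Category (Category (..))

    -- An accessor wrapped in a data type composes with Category's (.) ...
    newtype Getter s a = Getter { runGetter :: s -> a }

    instance Category Getter where
      id = Getter (\s -> s)
      Getter f . Getter g = Getter (\s -> f (g s))

    -- ... but Category relates exactly two indices, so a setter built the
    -- same way can only replace an 'a' with another 'a' inside the same 's':
    newtype Setter s a = Setter { runSetter :: s -> a -> s }
    -- there is no 't' or 'b' anywhere in the type, hence no type-changing
    -- update.

    -- The van Laarhoven form keeps all four indices and still composes with
    -- ordinary function composition, because a lens is itself a function:
    type Lens s t a b = forall f. Functor f => (a -> f b) -> s -> f t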
OTOH: Using the lens-style story, no types are needed here that isn't already available in base and, done right, no existing operators need be stolen from the user, and type changing assignment is trivial. I'm confess, I, like many in this thread, am less than comfortable with the notion of bringing chunks of lens into base. Frankly, I'd casually dismissed such concerns as a job for Haskell 2025. ;) However, I've been trying to consider it with an open mind here, because the alternatives proposed thus far lock in uglier code than the status quo with more limitations while simultaneously being harder to explain. -Edward Introducing such a change would, of course, exclude the possibility of > making "lens"-compatible libraries without depending on "lens", like I did > with "record". However the most basic functionality of "lens" could be > extracted into a separate library with a minimum of transitive > dependencies, then the reasons for not depending on it would simply > dissolve. > > > 2015-01-26 23:22 GMT+03:00 Edward Kmett : > >> Personally, I don't like the sigil mangled version at all. >> >> If it is then further encumbered by a combinator it is now several >> symbols longer at every single use site than other alternatives put forth >> in this thread. =( >> >> xx #bar . xx #baz >> >> or >> >> xx @bar . xx @baz >> >> compares badly enough against >> >> bar.baz >> >> for some as yet unnamed combinator xx and is a big enough tax for all >> users to unavoidably pay that I fear it would greatly hinder adoption. >> >> The former also has the disadvantage of stealing an operator that is >> already in wide use. >> >> Even assuming the fixity issues can be worked out for some other set of >> operators to glue these tother we're still looking at >> >> x^!? #bar!? #baz >> >> vs. >> >> x^.bar.baz >> >> with another set of arcane rules to switch back and forth out of this to >> deal with the lenses/traversals/prisms/etc that many folks have in their >> code today. >> >> It is something like 3 extra sets of symbols to memorize plus a tax of 3 >> characters per lens use site. >> >> I know that I for one would hesitate to throw over my template haskell >> generated lenses for something that was noisier at every use site. For all >> that lenses are complex internally, they are a lot less arbitrary than that. >> >> The import Field trick is magic, yes, but it has the benefit of being the >> first approach I've seen where the resulting syntax can be as light as what >> the user can generate by hand today. >> >> -Edward >> >> On Mon, Jan 26, 2015 at 8:50 AM, Simon Peyton Jones < >> simonpj at microsoft.com> wrote: >> >>> | "wired" into record selectors, which can't be undone later. I think we >>> | can fix some of that by desugaring record definitions to: >>> | >>> | data T = MkT {x :: Int} >>> | >>> | instance FieldSelector "T" T Int where >>> | fieldSelector (MkT x) = x >>> | >>> | Then someone can, in a library, define: >>> | >>> | instance FieldSelector x r a => IV x (r -> a) where >>> | iv = fieldSelector >>> | >>> | Now that records don't mention IV, we are free to provide lots of >>> | different instances, each capturing some properties of each field, >>> | without committing to any one style of lens at this point. Therefore, >>> | we could have record desugaring also produce: >>> | >>> | instance FieldSetter "T" T Int where >>> | fieldSet v (T _) = T v >>> | >>> | And also: >>> | >>> | instance FieldSTAB "T" T Int where >>> | fieldSTAB = ... the stab lens ... >>> >>> OK, I buy this. 
>>> >>> We generate FieldSelector instances where possible, and FieldSetter >>> instances where possible (fewer cases). >>> >>> Fine. >>> >>> >>> >>> Cutting to the chase, if we are beginning to converge, could someone >>> (Adam, Neil?) modify the Redesign page >>> https://ghc.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields/Redesign >>> to focus on plan B only; and add this FieldGetter/Setter stuff? >>> >>> It's confusing when we have too many things in play. I'm sick at the >>> moment, so I'm going home to bed -- hence handing off in a hopeful way to >>> you two. >>> >>> I have added Edwards "import Field(x)" suggestion under syntax, although >>> I don't really like it. >>> >>> One last thing: Edward, could you live with lenses coming from #x being >>> of a newtype (Lens a b), or stab variant, rather than actually being a >>> higher rank function etc? Of course lens composition would no longer be >>> function composition, but that might not be so terrible; ".." perhaps. It >>> would make error messages vastly more perspicuous. And, much as I love >>> lenses, I think it's a mistake not to abstraction; it dramatically limits >>> your future wiggle room. >>> >>> >>> >>> I really think we are finally converging. >>> >>> Simon >>> _______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://www.haskell.org/mailman/listinfo/ghc-devs >>> >> >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Mon Jan 26 21:41:54 2015 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 26 Jan 2015 21:41:54 +0000 Subject: GHC support for the new "record" package In-Reply-To: References: <54BECC45.6010906@gmail.com> <54BF7A07.50502@well-typed.com> <54BFD870.7070602@gmail.com> <54C21FBF.4020809@gmail.com> <54C2219D.2080103@well-typed.com> <54C2C5EE.9030100@well-typed.com> <618BE556AADD624C9C918AA5D5911BEF562B4C2D@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF562B6EA9@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <618BE556AADD624C9C918AA5D5911BEF562B75B2@DB3PRD3001MB020.064d.mgd.msft.net> Personally, I don't like the sigil mangled version at all. You don?t comment on the relationship with implicit parameters. Are they ok with sigils? If it is then further encumbered by a combinator it is now several symbols longer at every single use site than other alternatives put forth in this thread. =( No, as Nikita says, under the ?Redesign? proposal it would be #bar . #baz The import Field trick is magic, yes, but it has the benefit of being the first approach I've seen where the resulting syntax can be as light as what the user can generate by hand today. That?s why I added it to the ?Redesign? page. It seems viable to me; a clever idea, thank you. Still, personally I prefer #x because of the link with implicit parameters. It would be interesting to know what others think. I'm confess, I, like many in this thread, am less than comfortable with the notion of bringing chunks of lens into base. Frankly, I'd casually dismissed such concerns as a job for Haskell 2025. ;) However, I've been trying to consider it with an open mind here, because the alternatives proposed thus far lock in uglier code than the status quo with more limitations while simultaneously being harder to explain. 
I don?t think anyone is suggesting adding any of lens are they? Which bits did you think were being suggested for addition? Note: once you start using a data-type then (.) necessarily fails to allow you to ever do type changing assignment, due to the shape of Category, or you have to use yet another operator, so that snippet cannot work without giving up something we can do today. OTOH: Using the lens-style story, no types are needed here that isn't already available in base and, done right, no existing operators need be stolen from the user, and type changing assignment is trivial. I?m afraid I couldn?t understand this paragraph at all. Perhaps some examples would help, to illustrate what you mean? Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From austin at well-typed.com Tue Jan 27 00:13:39 2015 From: austin at well-typed.com (Austin Seipp) Date: Mon, 26 Jan 2015 18:13:39 -0600 Subject: ANNOUNCE: GHC 7.10.1 Release Candidate 2 Message-ID: We are pleased to announce the second release candidate for GHC 7.10.1: https://downloads.haskell.org/~ghc/7.10.1-rc2/ This includes the source tarball and bindists for 64bit/32bit Linux and Windows. Binary builds for other platforms will be available shortly. (CentOS 6.5 binaries are not available at this time like they were for 7.8.x). These binaries and tarballs have an accompanying SHA256SUMS file signed by my GPG key id (0x3B58D86F). We plan to make the 7.10.1 release sometime in February of 2015. Please test as much as possible; bugs are much cheaper if we find them before the release! -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From ekmett at gmail.com Tue Jan 27 00:43:44 2015 From: ekmett at gmail.com (Edward Kmett) Date: Mon, 26 Jan 2015 19:43:44 -0500 Subject: GHC support for the new "record" package In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF562B75B2@DB3PRD3001MB020.064d.mgd.msft.net> References: <54BECC45.6010906@gmail.com> <54BF7A07.50502@well-typed.com> <54BFD870.7070602@gmail.com> <54C21FBF.4020809@gmail.com> <54C2219D.2080103@well-typed.com> <54C2C5EE.9030100@well-typed.com> <618BE556AADD624C9C918AA5D5911BEF562B4C2D@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF562B6EA9@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF562B75B2@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: On Mon, Jan 26, 2015 at 4:41 PM, Simon Peyton Jones wrote: > Personally, I don't like the sigil mangled version at all. > > You don?t comment on the relationship with implicit parameters. Are they > ok with sigils? > I don't have too many opinions about implicit parameters, but they don't really see a lot of use, which makes me somewhat leery of copying the pattern. ;) If it is then further encumbered by a combinator it is now several > symbols longer at every single use site than other alternatives put forth > in this thread. =( > > No, as Nikita says, under the ?Redesign? proposal it would be #bar . #baz > The problem is that if you make #bar an instance of Category so that it can use (.) then it will fail to allow type changing re-assignment. > The import Field trick is magic, yes, but it has the benefit of being > the first approach I've seen where the resulting syntax can be as light as > what the user can generate by hand today. > > That?s why I added it to the ?Redesign? page. It seems viable to me; a > clever idea, thank you. Still, personally I prefer #x because of the link > with implicit parameters. 
It would be interesting to know what others > think. > Admittedly @bar . @baz has the benefit that it introduces no namespacing conflicts at all. If we really had to go with some kind of sigil based solution, I _could_ rally behind that one, but I do like it a lot less than the import trick, if only because the import trick gets rid of that last '@' and space we have on every accessor and you have to admit that the existing foo^.bar.baz.quux idiom reads a lot more cleanly than foo ^. @bar . @baz . @quux ever can. (I used @foo above because it avoids any potential conflict with existing user code as @ isn't a legal operator) I'm confess, I, like many in this thread, am less than comfortable with > the notion of bringing chunks of lens into base. Frankly, I'd casually > dismissed such concerns as a job for Haskell 2025. ;) However, I've been > trying to consider it with an open mind here, because the alternatives > proposed thus far lock in uglier code than the status quo with more > limitations while simultaneously being harder to explain. > > I don?t think anyone is suggesting adding any of lens are they? Which > bits did you think were being suggested for addition? > I was mostly referring to the use of the (a -> f b) -> s -> f t form. > Note: once you start using a data-type then (.) necessarily fails to > allow you to ever do type changing assignment, due to the shape of > Category, or you have to use yet another operator, so that snippet cannot > work without giving up something we can do today. OTOH: Using the > lens-style story, no types are needed here that isn't already available in > base and, done right, no existing operators need be stolen from the user, > and type changing assignment is trivial. > > I?m afraid I couldn?t understand this paragraph at all. Perhaps some > examples would help, to illustrate > what you mean? > I was writing that paragraph in response to your query if it'd make sense to have the @foo return some data type: It comes at a rather high cost. Lens gets away with using (.) to compose because its really using functions, with a funny mapM-like shape (a -> f b) -> (s -> f t) is still a function on the outside, it just happens to have a (co)domain that also looks like a function (a -> f b). If we make the outside type constructor a data type with its own Category instance, and just go `Accessor s a` then it either loses its ability to change out types in s -- e.g. change out the type of the second member in a pair, or it loses its ability to compose. We gave up the latter to make Gundry's proposal work as we were forced into that shape by trying to return a combinators that could be overloaded to act as an existing accessor function. To keep categorical composition for the accessor, you might at first think we can use a product kind or something to get Accessor '(s,t) '(a,b) with both indices but that gets stuck when you go to define `id`, so necessarily such a version of things winds up needing its own set of combinators. -Edward -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ekmett at gmail.com Tue Jan 27 00:59:20 2015 From: ekmett at gmail.com (Edward Kmett) Date: Mon, 26 Jan 2015 19:59:20 -0500 Subject: GHC support for the new "record" package In-Reply-To: References: <54BECC45.6010906@gmail.com> <54BF7A07.50502@well-typed.com> <54BFD870.7070602@gmail.com> <54C21FBF.4020809@gmail.com> <54C2219D.2080103@well-typed.com> <54C2C5EE.9030100@well-typed.com> <618BE556AADD624C9C918AA5D5911BEF562B4C2D@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF562B6EA9@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF562B75B2@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: I'm also rather worried, looking over the IV proposal, that it just doesn't actually work. We actually tried the code under "Haskell 98 records" back when Gundry first started his proposal and it fell apart when you went to compose them. A fundep/class associated type in the class is a stronger constraint that a type equality defined on an individual instance. I don't see how @foo . @bar . @baz (or #foo . #bar . #baz as would be written under the concrete proposal on the wiki) is ever supposed to figure out the intermediate types when working polymorphically in the data type involved. What happens when the type of that chain of accessors is left to inference? You get stuck wallowing in AllowAmbiguousTypes territory: (#foo . #bar . #baz) :: (IV "foo" (c -> d), IV "bar" (b -> c), IV "baz" (a -> b)) => a -> d has a variables 'b' and 'c' that don't occur on the right hand side, and which are only determinable by knowing that the instances you expect to see look something like: instance (a ~ Bool) => IV "x" (S -> a) where iv (MkS x) = x but that is too weak to figure out that "S" determines "a" unless S is already known, even if we just limit ourselves to field accessors as functions. -Edward On Mon, Jan 26, 2015 at 7:43 PM, Edward Kmett wrote: > On Mon, Jan 26, 2015 at 4:41 PM, Simon Peyton Jones > wrote: > >> Personally, I don't like the sigil mangled version at all. >> >> You don?t comment on the relationship with implicit parameters. Are they >> ok with sigils? >> > > I don't have too many opinions about implicit parameters, but they don't > really see a lot of use, which makes me somewhat leery of copying the > pattern. ;) > > If it is then further encumbered by a combinator it is now several >> symbols longer at every single use site than other alternatives put forth >> in this thread. =( >> >> No, as Nikita says, under the ?Redesign? proposal it would be #bar . #baz >> > The problem is that if you make #bar an instance of Category so that it > can use (.) then it will fail to allow type changing re-assignment. > > >> The import Field trick is magic, yes, but it has the benefit of being >> the first approach I've seen where the resulting syntax can be as light as >> what the user can generate by hand today. >> >> That?s why I added it to the ?Redesign? page. It seems viable to me; a >> clever idea, thank you. Still, personally I prefer #x because of the link >> with implicit parameters. It would be interesting to know what others >> think. >> > Admittedly @bar . @baz has the benefit that it introduces no namespacing > conflicts at all. 
> > If we really had to go with some kind of sigil based solution, I _could_ > rally behind that one, but I do like it a lot less than the import trick, > if only because the import trick gets rid of that last '@' and space we > have on every accessor and you have to admit that the existing > > foo^.bar.baz.quux > > idiom reads a lot more cleanly than > > foo ^. @bar . @baz . @quux > > ever can. > > (I used @foo above because it avoids any potential conflict with existing > user code as @ isn't a legal operator) > > I'm confess, I, like many in this thread, am less than comfortable with >> the notion of bringing chunks of lens into base. Frankly, I'd casually >> dismissed such concerns as a job for Haskell 2025. ;) However, I've been >> trying to consider it with an open mind here, because the alternatives >> proposed thus far lock in uglier code than the status quo with more >> limitations while simultaneously being harder to explain. >> >> I don?t think anyone is suggesting adding any of lens are they? Which >> bits did you think were being suggested for addition? >> > I was mostly referring to the use of the (a -> f b) -> s -> f t form. > >> Note: once you start using a data-type then (.) necessarily fails to >> allow you to ever do type changing assignment, due to the shape of >> Category, or you have to use yet another operator, so that snippet cannot >> work without giving up something we can do today. OTOH: Using the >> lens-style story, no types are needed here that isn't already available in >> base and, done right, no existing operators need be stolen from the user, >> and type changing assignment is trivial. >> >> I?m afraid I couldn?t understand this paragraph at all. Perhaps some >> examples would help, to illustrate >> > what you mean? >> > I was writing that paragraph in response to your query if it'd make sense > to have the @foo return some data type: It comes at a rather high cost. > > Lens gets away with using (.) to compose because its really using > functions, with a funny mapM-like shape (a -> f b) -> (s -> f t) is still a > function on the outside, it just happens to have a (co)domain that also > looks like a function (a -> f b). > > If we make the outside type constructor a data type with its own Category > instance, and just go `Accessor s a` then it either loses its ability to > change out types in s -- e.g. change out the type of the second member in > a pair, or it loses its ability to compose. > > We gave up the latter to make Gundry's proposal work as we were forced > into that shape by trying to return a combinators that could be overloaded > to act as an existing accessor function. > > To keep categorical composition for the accessor, you might at first think > we can use a product kind or something to get Accessor '(s,t) '(a,b) with > both indices but that gets stuck when you go to define `id`, so necessarily > such a version of things winds up needing its own set of combinators. > > -Edward > -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.feuer at gmail.com Tue Jan 27 02:40:19 2015 From: david.feuer at gmail.com (David Feuer) Date: Mon, 26 Jan 2015 21:40:19 -0500 Subject: GHC support for the new "record" package Message-ID: >> I don?t think anyone is suggesting adding any of lens are they? Which >> bits did you think were being suggested for addition? >> > I was mostly referring to the use of the (a -> f b) -> s -> f t form. All right. If nobody's suggesting it, I'll suggest it. Is it really that evil? 
Why does it occupy such a strange place off to the side of the rest of the Haskell ecosystem? David From mark.lentczner at gmail.com Tue Jan 27 05:46:51 2015 From: mark.lentczner at gmail.com (Mark Lentczner) Date: Mon, 26 Jan 2015 21:46:51 -0800 Subject: GHC support for the new "record" package In-Reply-To: References: Message-ID: My 2? on this topic are solely about syntax: ? I actually like the @ sigil: It is somewhat mnemonic: @age is like roughly "at the age field..." ? The module import hacks are horrid for something so important to the evolution of the language. And it makes me cringe for every writer of a programmer tool in the future! ? I disagree with Edward's assessment: I find foo^.bar.baz.quux awful because a) I dislike the ^. and the copious lens operators, b) I dislike the attempt to mimic member access in other languages. ? To amplify the second point, I see little value in attempting to mimic the dot of other languages. So what if the lens (or lens-like-thing) composition operator is something else? For heaven's sake, why not double slash? @bar // @baz // @quux Or perhaps @bar |> @baz |> @quux Or even (I'm a Unicode nut) @bar ? @baz ? @quux ? If the dot implies we can't have a data type and type changing (thanks to Category) then skip it and using something else that will let us have a data type and type changing. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mietek at bak.io Tue Jan 27 06:20:54 2015 From: mietek at bak.io (=?iso-8859-1?Q?Mi=EBtek_Bak?=) Date: Tue, 27 Jan 2015 06:20:54 +0000 Subject: ANNOUNCE: GHC 7.10.1 Release Candidate 2 In-Reply-To: References: Message-ID: It appears GHC 7.10.1-rc2 doesn?t support glibc 2.11 ? specifically, 2.11.1 (Ubuntu 10.04 LTS) and 2.11.3 (Debian 6). glibc 2.12 (CentOS 6) seems to work fine. Symptoms include: Installing library in /app/ghc/lib/ghc-7.10.0.20150123/ghc_0kOYffGYd794400D7yvIjm "/app/ghc/lib/ghc-7.10.0.20150123/bin/ghc-pkg" --force --global-package-db "/app/ghc/lib/ghc-7.10.0.20150123/package.conf.d" update rts/dist/package.conf.install Reading package info from "rts/dist/package.conf.install" ... done. "utils/ghc-cabal/dist-install/build/tmp/ghc-cabal-bindist" register libraries/ghc-prim dist-install "/app/ghc/lib/ghc-7.10.0.20150123/bin/ghc" "/app/ghc/lib/ghc-7.10.0.20150123/bin/ghc-pkg" "/app/ghc/lib/ghc-7.10.0.20150123" '' '/app/ghc' '/app/ghc/lib/ghc-7.10.0.20150123' '/app/ghc/share/doc/ghc/html/libraries' NO Warning: cannot determine version of /app/ghc/lib/ghc-7.10.0.20150123/bin/ghc : "" ghc-cabal: '/app/ghc/lib/ghc-7.10.0.20150123/bin/ghc' exited with an error: /app/ghc/lib/ghc-7.10.0.20150123/bin/ghc: symbol lookup error: /app/ghc/lib/ghc-7.10.0.20150123/bin/../rts/libHSrts_thr-ghc7.10.0.20150123.so: undefined symbol: pthread_setname_np The bindist name does mention 'deb7', so perhaps this is all working as intended. However, similarly named bindists for GHC 7.8.* work fine with glibc 2.11. In other news, I?m happy to say Halcyon now supports GHC 7.10.1-rc2 on CentOS 6 and 7, Debian 7, Fedora 19, 20, and 21, and Ubuntu 12 and 14. https://halcyon.sh/ $ halcyon install --ghc-version=7.10.1-rc2 --cabal-version=1.22.0.0 Best, -- Mi?tek On 2015-01-27, at 00:13, Austin Seipp wrote: > We are pleased to announce the second release candidate for GHC 7.10.1: > > https://downloads.haskell.org/~ghc/7.10.1-rc2/ > > This includes the source tarball and bindists for 64bit/32bit Linux > and Windows. Binary builds for other platforms will be available > shortly. 
(CentOS 6.5 binaries are not available at this time like they > were for 7.8.x). These binaries and tarballs have an accompanying > SHA256SUMS file signed by my GPG key id (0x3B58D86F). > > We plan to make the 7.10.1 release sometime in February of 2015. > > Please test as much as possible; bugs are much cheaper if we find them > before the release! > > -- > Regards, > > Austin Seipp, Haskell Consultant > Well-Typed LLP, http://www.well-typed.com/ > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4203 bytes Desc: not available URL: From adam at well-typed.com Tue Jan 27 09:07:25 2015 From: adam at well-typed.com (Adam Gundry) Date: Tue, 27 Jan 2015 09:07:25 +0000 Subject: GHC support for the new "record" package In-Reply-To: References: <54BECC45.6010906@gmail.com> <54BFD870.7070602@gmail.com> <54C21FBF.4020809@gmail.com> <54C2219D.2080103@well-typed.com> <54C2C5EE.9030100@well-typed.com> <618BE556AADD624C9C918AA5D5911BEF562B4C2D@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF562B6EA9@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF562B75B2@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <54C7554D.9080104@well-typed.com> Yes, we can't make IV the magic class for which instances are generated. As I pointed out earlier in the thread, we need to give an instance for the function space that enforces the functional dependency (either with an actual fundep or a type family), and keep a distinguished HasField class. AFAICS it's still an open question as to whether that instance should provide (a) selector functions r -> a (b) lenses (a -> f b) -> s -> f t (c) both (d) neither but I'm starting to think (b) is the sanest option. Otherwise, I think we've more or less converged on the issues (apart from the syntax question) and I'll update the wiki page appropriately. On the syntax question, Edward, could you say more about how you would expect the magic imports to work? If a module both declares (or imports) a record field `x` and magically imports `x`, what does a use of `x` mean? (In the original ORF, we didn't have the magic module, but just said that record fields were automatically polymorphic... that works but is a bit fiddly in the renamer, and isn't a conservative extension.) Adam On 27/01/15 00:59, Edward Kmett wrote: > I'm also rather worried, looking over the IV proposal, that it just > doesn't actually work. > > We actually tried the code under "Haskell 98 records" back when Gundry > first started his proposal and it fell apart when you went to compose them. > > A fundep/class associated type in the class is a stronger constraint > that a type equality defined on an individual instance. > > I don't see how > > @foo . @bar . @baz > > (or #foo . #bar . #baz as would be written under the concrete proposal > on the wiki) > > is ever supposed to figure out the intermediate types when working > polymorphically in the data type involved. > > What happens when the type of that chain of accessors is left to > inference? You get stuck wallowing in AllowAmbiguousTypes territory: > > (#foo . #bar . 
#baz) :: (IV "foo" (c -> d), IV "bar" (b -> c), IV "baz" > (a -> b)) => a -> d > > has a variables 'b' and 'c' that don't occur on the right hand side, and > which are only determinable by knowing that the instances you expect to > see look something like: > > instance (a ~ Bool) => IV "x" (S -> a) where > iv (MkS x) = x > > but that is too weak to figure out that "S" determines "a" unless S is > already known, even if we just limit ourselves to field accessors as > functions. > > -Edward -- Adam Gundry, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From simonpj at microsoft.com Tue Jan 27 09:16:16 2015 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 27 Jan 2015 09:16:16 +0000 Subject: GHC support for the new "record" package In-Reply-To: <54C7554D.9080104@well-typed.com> References: <54BECC45.6010906@gmail.com> <54BFD870.7070602@gmail.com> <54C21FBF.4020809@gmail.com> <54C2219D.2080103@well-typed.com> <54C2C5EE.9030100@well-typed.com> <618BE556AADD624C9C918AA5D5911BEF562B4C2D@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF562B6EA9@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF562B75B2@DB3PRD3001MB020.064d.mgd.msft.net> <54C7554D.9080104@well-typed.com> Message-ID: <618BE556AADD624C9C918AA5D5911BEF562B7A9A@DB3PRD3001MB020.064d.mgd.msft.net> Adam, are you willing to update the wiki page to reflect the latest state of the conversation, identifying remaining choices? That would be v helpful. Simon | -----Original Message----- | From: Adam Gundry [mailto:adam at well-typed.com] | Sent: 27 January 2015 09:07 | To: Edward Kmett; Simon Peyton Jones | Cc: Simon Marlow; ghc-devs at haskell.org | Subject: Re: GHC support for the new "record" package | | Yes, we can't make IV the magic class for which instances are | generated. | As I pointed out earlier in the thread, we need to give an instance | for the function space that enforces the functional dependency (either | with an actual fundep or a type family), and keep a distinguished | HasField class. AFAICS it's still an open question as to whether that | instance should provide | | (a) selector functions r -> a | (b) lenses (a -> f b) -> s -> f t | (c) both | (d) neither | | but I'm starting to think (b) is the sanest option. | | Otherwise, I think we've more or less converged on the issues (apart | from the syntax question) and I'll update the wiki page appropriately. | | On the syntax question, Edward, could you say more about how you would | expect the magic imports to work? If a module both declares (or | imports) a record field `x` and magically imports `x`, what does a use | of `x` mean? (In the original ORF, we didn't have the magic module, | but just said that record fields were automatically polymorphic... | that works but is a bit fiddly in the renamer, and isn't a | conservative extension.) | | Adam | | | On 27/01/15 00:59, Edward Kmett wrote: | > I'm also rather worried, looking over the IV proposal, that it just | > doesn't actually work. | > | > We actually tried the code under "Haskell 98 records" back when | Gundry | > first started his proposal and it fell apart when you went to | compose them. | > | > A fundep/class associated type in the class is a stronger constraint | > that a type equality defined on an individual instance. | > | > I don't see how | > | > @foo . @bar . @baz | > | > (or #foo . #bar . 
#baz as would be written under the concrete | proposal | > on the wiki) | > | > is ever supposed to figure out the intermediate types when working | > polymorphically in the data type involved. | > | > What happens when the type of that chain of accessors is left to | > inference? You get stuck wallowing in AllowAmbiguousTypes territory: | > | > (#foo . #bar . #baz) :: (IV "foo" (c -> d), IV "bar" (b -> c), IV | "baz" | > (a -> b)) => a -> d | > | > has a variables 'b' and 'c' that don't occur on the right hand side, | > and which are only determinable by knowing that the instances you | > expect to see look something like: | > | > instance (a ~ Bool) => IV "x" (S -> a) where | > iv (MkS x) = x | > | > but that is too weak to figure out that "S" determines "a" unless S | is | > already known, even if we just limit ourselves to field accessors as | > functions. | > | > -Edward | | | -- | Adam Gundry, Haskell Consultant | Well-Typed LLP, http://www.well-typed.com/ From adam at well-typed.com Tue Jan 27 09:19:16 2015 From: adam at well-typed.com (Adam Gundry) Date: Tue, 27 Jan 2015 09:19:16 +0000 Subject: GHC support for the new "record" package In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF562B7A9A@DB3PRD3001MB020.064d.mgd.msft.net> References: <54BECC45.6010906@gmail.com> <54BFD870.7070602@gmail.com> <54C21FBF.4020809@gmail.com> <54C2219D.2080103@well-typed.com> <54C2C5EE.9030100@well-typed.com> <618BE556AADD624C9C918AA5D5911BEF562B4C2D@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF562B6EA9@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF562B75B2@DB3PRD3001MB020.064d.mgd.msft.net> <54C7554D.9080104@well-typed.com> <618BE556AADD624C9C918AA5D5911BEF562B7A9A@DB3PRD3001MB020.064d.mgd.msft.net > Message-ID: <54C75814.4010105@well-typed.com> On 27/01/15 09:16, Simon Peyton Jones wrote: > Adam, are you willing to update the wiki page to reflect the latest state of the conversation, identifying remaining choices? That would be v helpful. I'm on it now. It'll take a little while because I'm merging plans A and B into a single coherent story. Adam -- Adam Gundry, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From ndmitchell at gmail.com Tue Jan 27 09:30:42 2015 From: ndmitchell at gmail.com (Neil Mitchell) Date: Tue, 27 Jan 2015 09:30:42 +0000 Subject: GHC support for the new "record" package In-Reply-To: <54C75814.4010105@well-typed.com> References: <54BECC45.6010906@gmail.com> <54BFD870.7070602@gmail.com> <54C21FBF.4020809@gmail.com> <54C2219D.2080103@well-typed.com> <54C2C5EE.9030100@well-typed.com> <618BE556AADD624C9C918AA5D5911BEF562B4C2D@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF562B6EA9@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF562B75B2@DB3PRD3001MB020.064d.mgd.msft.net> <54C7554D.9080104@well-typed.com> <618BE556AADD624C9C918AA5D5911BEF562B7A9A@DB3PRD3001MB020.064d.mgd.msft.net> <54C75814.4010105@well-typed.com> Message-ID: Edward: Note that x = #x is a perfectly legal definition, and now you can have your lenses exactly as before. When discussing this with Simon, I actually proposed that x = #x be automatically generated by the data definitions, and then nub'd after. Not sure it's a good idea or not, but it's certainly possible. I was also of the opinion that data should produce FieldSelector classes etc, but _not_ link them to IV, specifically to avoid problems with the stab lenses. 
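As a sketch of that separation (class names and the Proxy encoding are just for this illustration, not an agreed design): the compiler would only ever emit the per-field instances, and the bridge to IV, or to any particular lens type, would live in a library:

    {-# LANGUAGE DataKinds, KindSignatures, MultiParamTypeClasses,
                 FunctionalDependencies, FlexibleInstances, ScopedTypeVariables #-}
    import Data.Proxy (Proxy (..))
    import GHC.TypeLits (Symbol)

    -- Generated from  data T = MkT { x :: Int }:  selection only, no IV.
    class FieldSelector (n :: Symbol) r a | n r -> a where
      selectField :: Proxy n -> r -> a

    data T = MkT Int

    instance FieldSelector "x" T Int where
      selectField _ (MkT v) = v

    -- Added once, in a library, by whoever wants #x-style values to behave
    -- as plain selector functions (another library could offer lenses instead):
    class IV (n :: Symbol) t where
      iv :: Proxy n -> t

    instance FieldSelector n r a => IV n (r -> a) where
      iv _ = selectField (Proxy :: Proxy n)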
I expect that if you only wanted stab lenses, and never selectors, you could (probably) tie them up in a way that did the resolution nicely without ambiguity problems. With those two pieces I think you can still have your: foo^.bar.baz.quux Thanks, Neil On Tue, Jan 27, 2015 at 9:19 AM, Adam Gundry wrote: > On 27/01/15 09:16, Simon Peyton Jones wrote: >> Adam, are you willing to update the wiki page to reflect the latest state of the conversation, identifying remaining choices? That would be v helpful. > > I'm on it now. It'll take a little while because I'm merging plans A and > B into a single coherent story. > > Adam > > > -- > Adam Gundry, Haskell Consultant > Well-Typed LLP, http://www.well-typed.com/ > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs From simonpj at microsoft.com Tue Jan 27 10:13:10 2015 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 27 Jan 2015 10:13:10 +0000 Subject: American vs. British English In-Reply-To: References: <201501161119.07912.jan.stolarek@p.lodz.pl> <618BE556AADD624C9C918AA5D5911BEF562A78FF@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <618BE556AADD624C9C918AA5D5911BEF562B7C88@DB3PRD3001MB020.064d.mgd.msft.net> | The advantage of *not* introducing aliases is that it makes it that | much easier to exhaustively test whether some extension is turned on - | it means extensions have a canonical name that everyone uses. It's too late. We have aliases for lots of pragmas and language extensions, and probably should have one for this too, if only for consistency. (If someone wants to be consistent the other way, and builds a consensus for taking them all out, that's fine by me too.) Meanwhile, would someone like to add the alias for GeneralisedNewtypeDeriving? Simon | -----Original Message----- | From: Boespflug, Mathieu [mailto:m at tweag.io] | Sent: 26 January 2015 18:17 | To: gale at sefer.org | Cc: Simon Peyton Jones; ghc-devs at haskell.org | Subject: Re: American vs. British English | | FWIW, even the British can't entirely make up their mind about whether | to -ize or to -ise: | | http://blog.oxforddictionaries.com/2011/03/ize-or-ise/ | | The advantage of *not* introducing aliases is that it makes it that | much easier to exhaustively test whether some extension is turned on - | it means extensions have a canonical name that everyone uses. | | On 26 January 2015 at 17:42, Yitzchak Gale wrote: | > Even though my native English is the U.S. | > variety, I still haven't gotten used to writing | > | > {-# LANGUAGE GeneralizedNewtypeDeriving #-} | > | > It's a constant compiler error for me. I'm just so accustomed to the | > idea that in the Haskell world, U.K. spelling and usage are the | norm. | > | > Would it be difficult to add the other spelling as an alias? | > | > Just my two cents, err, tuppence, err, whatever. | > -Yitz | > | > On Fri, Jan 16, 2015 at 12:26 PM, Simon Peyton Jones | > wrote: | >> We don't have a solid policy. Personally I prefer English, but | then I would. | >> | >> Simon | >> | >> | -----Original Message----- | >> | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf | Of | >> | Jan Stolarek | >> | Sent: 16 January 2015 10:19 | >> | To: ghc-devs at haskell.org | >> | Subject: American vs. British English | >> | | >> | I just realized GHC has data types named FamFlavor and | FamFlavour. | >> | That said, is there a policy that says which English should be | >> | used in the source code? 
| >> | | >> | Janek | >> | | >> | _______________________________________________ | >> | ghc-devs mailing list | >> | ghc-devs at haskell.org | >> | http://www.haskell.org/mailman/listinfo/ghc-devs | >> _______________________________________________ | >> ghc-devs mailing list | >> ghc-devs at haskell.org | >> http://www.haskell.org/mailman/listinfo/ghc-devs | > _______________________________________________ | > ghc-devs mailing list | > ghc-devs at haskell.org | > http://www.haskell.org/mailman/listinfo/ghc-devs From adam at well-typed.com Tue Jan 27 10:25:32 2015 From: adam at well-typed.com (Adam Gundry) Date: Tue, 27 Jan 2015 10:25:32 +0000 Subject: GHC support for the new "record" package In-Reply-To: <54C75814.4010105@well-typed.com> References: <54BECC45.6010906@gmail.com> <54C21FBF.4020809@gmail.com> <54C2219D.2080103@well-typed.com> <54C2C5EE.9030100@well-typed.com> <618BE556AADD624C9C918AA5D5911BEF562B4C2D@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF562B6EA9@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF562B75B2@DB3PRD3001MB020.064d.mgd.msft.net> <54C7554D.9080104@well-typed.com> <618BE556AADD624C9C918AA5D5911BEF562B7A9A@DB3PRD3001MB020.064d.mgd.msft.net > <54C75814.4010105@well-typ ed.com> Message-ID: <54C7679C.2080708@well-typed.com> On 27/01/15 09:19, Adam Gundry wrote: > On 27/01/15 09:16, Simon Peyton Jones wrote: >> Adam, are you willing to update the wiki page to reflect the latest state of the conversation, identifying remaining choices? That would be v helpful. > > I'm on it now. It'll take a little while because I'm merging plans A and > B into a single coherent story. Done. As I understand it, the key remaining choices (flagged up with the phrase "Design question" are): 1. What are the IV instances provided in base? These could give selector functions, lenses, both or neither. 2. How do we identify implicit values? Either we have a syntactic cue (like `#` or `@`) or we do some magic in the renamer. - If the former, are normal unambiguous record selectors available as well? Or do we allow/generate definitions like x = #x, as Neil suggests? - If the latter, what happens when a record field and an implicit value are both in scope? Adam -- Adam Gundry, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From alan.zimm at gmail.com Tue Jan 27 10:55:09 2015 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Tue, 27 Jan 2015 12:55:09 +0200 Subject: [GHC] #9988: Remove fun_id, is_infix from FunBind, as they are now in Match In-Reply-To: <059.d428d37f962b86c62fe36de80414ec84@haskell.org> References: <044.a60b224ba7da7f43c70f7f0475934826@haskell.org> <059.d428d37f962b86c62fe36de80414ec84@haskell.org> Message-ID: No problem, I will work around it, I originally did not want to do it because of potential destabilisation, this still holds. On Tue, Jan 27, 2015 at 12:08 PM, GHC wrote: > #9988: Remove fun_id, is_infix from FunBind, as they are now in Match > -------------------------------------+------------------------------------- > Reporter: alanz | Owner: alanz > Type: task | Status: new > Priority: normal | Milestone: 7.12.1 > Component: Compiler | Version: 7.10.1-rc1 > Resolution: | Keywords: > Operating System: Unknown/Multiple | Architecture: > Type of failure: None/Unknown | Unknown/Multiple > Blocked By: | Test Case: > Related Tickets: | Blocking: > | Differential Revisions: > -------------------------------------+------------------------------------- > > Comment (by simonpj): > > 7.10 is in RC2. 
We can't keep modifying it otherwise we will never get it > out. I suppose that if you have strong reason to believe that you are not > introducing new bugs, and having it in is going to transform your life for > the better, then you can make the case. > > -- > Ticket URL: > GHC > The Glasgow Haskell Compiler > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Tue Jan 27 11:12:09 2015 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Tue, 27 Jan 2015 11:12:09 +0000 Subject: GHC support for the new "record" package In-Reply-To: <54C7679C.2080708@well-typed.com> References: <54BECC45.6010906@gmail.com> <54C21FBF.4020809@gmail.com> <54C2219D.2080103@well-typed.com> <54C2C5EE.9030100@well-typed.com> <618BE556AADD624C9C918AA5D5911BEF562B4C2D@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF562B6EA9@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF562B75B2@DB3PRD3001MB020.064d.mgd.msft.net> <54C7554D.9080104@well-typed.com> <618BE556AADD624C9C918AA5D5911BEF562B7A9A@DB3PRD3001MB020.064d.mgd.msft.net > <54C75814.4010105@well-typ ed.com> <54C7679C.2080708@well-typed.com> Message-ID: <618BE556AADD624C9C918AA5D5911BEF562B7EC9@DB3PRD3001MB020.064d.mgd.msft.net> | 1. What are the IV instances provided in base? These could give | selector functions, lenses, both or neither. My instinct: just selector functions. Leave lenses for a lens package. I still have not understood the argument for lenses being a function rather that a newtype wrapping that function; apart from the (valuable) ability to re-use ordinary (.), which is cute. Edward has explained this several time, but I have failed to understand. Simon From johan.tibell at gmail.com Tue Jan 27 15:53:02 2015 From: johan.tibell at gmail.com (Johan Tibell) Date: Tue, 27 Jan 2015 07:53:02 -0800 Subject: GHC support for the new "record" package In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF562B7EC9@DB3PRD3001MB020.064d.mgd.msft.net> References: <54BECC45.6010906@gmail.com> <54C21FBF.4020809@gmail.com> <54C2219D.2080103@well-typed.com> <54C2C5EE.9030100@well-typed.com> <618BE556AADD624C9C918AA5D5911BEF562B4C2D@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF562B6EA9@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF562B75B2@DB3PRD3001MB020.064d.mgd.msft.net> <54C7554D.9080104@well-typed.com> <54C7679C.2080708@well-typed.com> <618BE556AADD624C9C918AA5D5911BEF562B7EC9@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: On Tue, Jan 27, 2015 at 3:12 AM, Simon Peyton Jones wrote: > | 1. What are the IV instances provided in base? These could give > | selector functions, lenses, both or neither. > > My instinct: just selector functions. Leave lenses for a lens package. > +1 -------------- next part -------------- An HTML attachment was scrubbed... URL: From gale at sefer.org Tue Jan 27 16:49:13 2015 From: gale at sefer.org (Yitzchak Gale) Date: Tue, 27 Jan 2015 18:49:13 +0200 Subject: Put "Error:" before error output In-Reply-To: References: Message-ID: -1 There are common idioms that rely on the current behavior, so I think this would break a lot code. Examples: In command line programs, it is very common to use "error" for printing the usage message. Many programs use "error" as a general way to exit from pure code with a message. I'm not commenting about whether or not those are good practice, just reporting that they are out there. 
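To be concrete, the sort of thing I have in mind looks like this
(a made-up but typical little program; the names are invented):

    module Main where

    import System.Environment (getArgs)

    main :: IO ()
    main = do
      args <- getArgs
      case args of
        [input, output] -> run input output
        _               -> error "Usage: frobnicate INPUT OUTPUT"

    -- 'error' used to bail out with a message from code that is
    -- otherwise more or less pure
    run :: FilePath -> FilePath -> IO ()
    run input _output
      | null input = error "empty input path"
      | otherwise  = putStrLn ("processing " ++ input)

Both of those print their message via 'error' today, and that is the
output I would not want to change under programs' feet.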
I would be in favor of this though if it is off by default and is turned on by an option or pragma. But not just -Werror, though, except for messages that would otherwise have been prefixed by "Warning", like the current behavior. Thanks, Yitz On Fri, Jan 23, 2015 at 1:04 PM, Konstantine Rybnikov wrote: > Hi! > > I'm bringing this up once again. Can we add "Error:" in the output of an > error in a similar way ghc shows "Warning:" for warnings? Main reasoning is > that, for example, on a build-server, where you have lots of cores to build > your program, if you get an error, it gets lost somewhere in the middle of > compiler's output in all other "Warning" messages you get, since error is > not always shown last on multi-core build. > > Thanks. > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > From k-bx at k-bx.com Tue Jan 27 17:02:32 2015 From: k-bx at k-bx.com (Konstantine Rybnikov) Date: Tue, 27 Jan 2015 19:02:32 +0200 Subject: Put "Error:" before error output In-Reply-To: References: Message-ID: Yitzchak, Sorry, I didn't get what you mean. Do you mean `error` [0] function from Prelude? The discussion is currently not regarding runtime program behavior, nor it is about `error` function. It's rather regarding compiler output message on compilation failure, so it shouldn't get mixed with your program's runtime behavior in any way. See ticket #10021 [1] for examples of what I'm talking about (I'm adding "motivation" section right now). [0]: http://hackage.haskell.org/package/base-4.7.0.2/docs/Prelude.html#v:error [1]: https://ghc.haskell.org/trac/ghc/ticket/10021#modify On Tue, Jan 27, 2015 at 6:49 PM, Yitzchak Gale wrote: > -1 > > There are common idioms that rely on the current behavior, > so I think this would break a lot code. > > Examples: > > In command line programs, it is very common to use > "error" for printing the usage message. > > Many programs use "error" as a general way to exit > from pure code with a message. > > I'm not commenting about whether or not those > are good practice, just reporting that they are out there. > > I would be in favor of this though if it is off by default > and is turned on by an option or pragma. But not just > -Werror, though, except for messages that would > otherwise have been prefixed by "Warning", like > the current behavior. > > Thanks, > Yitz > > > > On Fri, Jan 23, 2015 at 1:04 PM, Konstantine Rybnikov > wrote: > > Hi! > > > > I'm bringing this up once again. Can we add "Error:" in the output of an > > error in a similar way ghc shows "Warning:" for warnings? Main reasoning > is > > that, for example, on a build-server, where you have lots of cores to > build > > your program, if you get an error, it gets lost somewhere in the middle > of > > compiler's output in all other "Warning" messages you get, since error is > > not always shown last on multi-core build. > > > > Thanks. > > > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://www.haskell.org/mailman/listinfo/ghc-devs > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From allbery.b at gmail.com Tue Jan 27 17:05:58 2015 From: allbery.b at gmail.com (Brandon Allbery) Date: Tue, 27 Jan 2015 12:05:58 -0500 Subject: Put "Error:" before error output In-Reply-To: References: Message-ID: On Tue, Jan 27, 2015 at 12:02 PM, Konstantine Rybnikov wrote: > Sorry, I didn't get what you mean. 
Do you mean `error` [0] function from > Prelude? The discussion is currently not regarding runtime program > behavior, nor it is about `error` function. It's rather regarding compiler > output message on compilation failure, so it shouldn't get mixed with your > program's runtime behavior in any way ...unless using runhaskell/runghc. -- brandon s allbery kf8nh sine nomine associates allbery.b at gmail.com ballbery at sinenomine.net unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From k-bx at k-bx.com Tue Jan 27 17:35:46 2015 From: k-bx at k-bx.com (Konstantine Rybnikov) Date: Tue, 27 Jan 2015 19:35:46 +0200 Subject: Put "Error:" before error output In-Reply-To: References: Message-ID: Yes, correct. But I think ghci's errors are already distinguishable from `error`, e.g. this: ``` Prelude> :set -Wall Prelude> :load test [1 of 1] Compiling Main ( test.hs, interpreted ) test.hs:1:1: Warning: Top-level binding with no type signature: main :: IO () Ok, modules loaded: Main. Prelude> let main = asdasdasd :6:12: Not in scope: ?asdasdasd? *Main> :load test2 [1 of 1] Compiling Main ( test2.hs, interpreted ) test2.hs:1:8: Not in scope: ?foo? Failed, modules loaded: none. Prelude> error "foo" *** Exception: foo ``` will change to: ``` Prelude> :set -Wall Prelude> :load test [1 of 1] Compiling Main ( test.hs, interpreted ) test.hs:1:1: Warning: Top-level binding with no type signature: main :: IO () Ok, modules loaded: Main. Prelude> let main = asdasdasd :6:12: Not in scope: ?asdasdasd? *Main> :load test2 [1 of 1] Compiling Main ( test2.hs, interpreted ) test2.hs:1:8: Error: Not in scope: ?foo? Failed, modules loaded: none. Prelude> error "foo" *** Exception: foo ``` Don't think this will cause any trouble (well, except for current tests that check for output, which was mentioned in ticket #10021, but I hope it's not that big of a problem). On Tue, Jan 27, 2015 at 7:05 PM, Brandon Allbery wrote: > On Tue, Jan 27, 2015 at 12:02 PM, Konstantine Rybnikov > wrote: > >> Sorry, I didn't get what you mean. Do you mean `error` [0] function from >> Prelude? The discussion is currently not regarding runtime program >> behavior, nor it is about `error` function. It's rather regarding compiler >> output message on compilation failure, so it shouldn't get mixed with your >> program's runtime behavior in any way > > > ...unless using runhaskell/runghc. > > -- > brandon s allbery kf8nh sine nomine > associates > allbery.b at gmail.com > ballbery at sinenomine.net > unix, openafs, kerberos, infrastructure, xmonad > http://sinenomine.net > -------------- next part -------------- An HTML attachment was scrubbed... URL: From moritz at lichtzwerge.de Tue Jan 27 20:55:08 2015 From: moritz at lichtzwerge.de (Moritz Angermann) Date: Tue, 27 Jan 2015 21:55:08 +0100 Subject: Looking up importDecl and Unique inequality Message-ID: <6D2CF7C2-9398-4292-BB4D-4414E56B1B23@lichtzwerge.de> Hi *, I'm still trying to get a stage1 compiler with TH support. After some experimenting, I managed to adjust ghc-7.8 sufficiently to build and not complain about compiling TH code right away. I'm now stuck with importDecl trying to import Language.Haskell.TH.Lib.ExpQ (as that is referenced in my sample code I try to compile), but even after loading the TH/Lib.hi file unable to find it due to a missmatch in the Uniques. I have a Stage 2, ghc-7.8 (A) compiler for the host, which I use to compile a Stage 1 ghc-7.8 (B) compiler. 
Now I want to inject some code (a plugin) into (B), which therefore is compiled with (A) and hence depends on the packages of (A). But as A and B are the identical version, I hope that I should be able to feed (B) the same package db. Given that the plugin was compiled into a cabal package with (A) at (Y) and the package db of (A) is at (X), I try to compile my sample code with $ cabal exec B -- path/to/Sample.hs -package-db X -package-db Y -package plugin -fplugin MyPlugin -- Sample.hs -- {-# LANGUAGE TemplateHaskell #-} module Main where main :: IO () main = do let e = $([|Just "Splice!"|]) :: Maybe String putStrLn . show $ e -- Sample.hs -- The type checker tries to find Language.Haskell.TH.Lib.ExpQ and loads the lib/TH.hi from (X) with the declaration, but fails to find it in the eps_PTE after loading the interface. What I was able to figure out was that Language.Haskell.TH.Lib.ExpQ loaded from lib/TH.hi in (X) obtains the unique key "rCM", while the unique key that is being looked up is "39S". Can someone shed some light onto where I am misunderstanding how something is supposed to work? Cheers, Moritz From austin at well-typed.com Tue Jan 27 21:06:05 2015 From: austin at well-typed.com (Austin Seipp) Date: Tue, 27 Jan 2015 15:06:05 -0600 Subject: GHC Weekly News - 2015/01/27 Message-ID: Hi *, It's time for the weekly GHC news. Over at GHC HQ, we discussed some things this week including: - Austin took the time the past week to check `./validate --slow` failures, and is planning on filing bugs and fixes for the remaining failures soon. Afterwords, we'll immediately begin enabling `--slow` on Phabricator, so developers get their patches tested more thoroughly. - The 7.10 release looks like it will likely not have a 3rd Release Candidate, and will be released in late Feburary of 2015, as we originally expected. - The 7.10 branch currently has two showstopping bugs we plan on hitting before the final release. And we'd really like for users to test so we can catch more! - Austin Seipp will likely be gone for the coming week in a trip to New York City from the 28th to the 4th, meaning (much to the dismay of cheering crowds) you'd better catch him beforehand if you need him! (Alternatively Austin will be held back due to an intense snowstorm developing in NYC. So, we'll see!) - Austin is planning on helping the LLVM support in HEAD soon; after coordinating with Ben Gamari, we're hoping to ship GHC 7.12 with (at least) LLVM 3.6 as an officially supported backend, based on the documentation described in https://ghc.haskell.org/trac/ghc/wiki/ImprovedLLVMBackend - lots of thanks to Ben for working with upstream to file bugs and improve things! And in other news, through chatter on the mailing list and Phabricator, we have: - Austin Seipp announced GHC 7.10.1 RC2: https://www.haskell.org/pipermail/ghc-devs/2015-January/008140.html - Peter Trommler posted his first version of a native Linux/PowerPC 64bit code generator! There's still a lot more work to do, but this is a significantly improved situation over the unregisterised C backend. Curious developers can see the patch at Phab:D629. - A long, ongoing thread started by Richard Eisenberg about the long-term plans for the vectorisation code have been posted. The worry is that the vectoriser as well as DPH have stagnated in development, which costs other developers any time they need to build GHC, make larger changes, or keep code clean. 
There have been a lot of varied proposals in the thread from removing the code to commenting it out, to keeping it. It's unclear what the future holds, but the discussion still rages on. https://www.haskell.org/pipermail/ghc-devs/2015-January/007986.html - Karel Gardas is working on reviving the SPARC native code generator, but has hit a snag where double float load instructions were broken. https://www.haskell.org/pipermail/ghc-devs/2015-January/008123.html - Alexander Vershilov made a proposal to the GHC team: can we remove the `transformers` dependency? It turns out to be a rather painful dependency for users of the GHC API and of packages depending on `transformers`, as you cannot link against any version other than the one GHC ships, causing pain. The alternative proposal involves splitting off the `transformers` dependency into a package of Orphan instances. The final decision isn't yet clear, nor is a winner in clear sight yet! https://www.haskell.org/pipermail/ghc-devs/2015-January/008058.html - Konstantine Rybnikov has a simple question about GHC's error messages: can they say `Error:` before anything else, to be more consistent with warnings? It seems like a positive change - and it looks like Konstantine is on the job to fix it, too. https://www.haskell.org/pipermail/ghc-devs/2015-January/008105.html - Simon Marlow has started a long thread about the fate of records in future GHC versions. Previously, Adam Gundry had worked on `OverloadedRecordFields`. And now Nikita Volkov has introduced his `records` library which sits in a slightly different spot in the design space. But now the question is - how do we proceed? Despite all prior historical precedent, it looks like there's finally some convergence on a reasonable design that can hit GHC in the future. https://www.haskell.org/pipermail/ghc-devs/2015-January/008049.html Closed tickets the past two weeks include: #9889, #9384, #8624, #9922, #9878, #9999, #9957, #7298, #9836, #10008, #9856, #9975, #10013, #9949, #9953, #9856, #9955, #9867, #10015, #9961, #5364, #9928, and #10028. -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From chak at cse.unsw.edu.au Tue Jan 27 23:19:56 2015 From: chak at cse.unsw.edu.au (Manuel M T Chakravarty) Date: Wed, 28 Jan 2015 10:19:56 +1100 Subject: vectorisation code? In-Reply-To: <54C11F26.9080201@apeiron.net> References: <0CC751A6-A9A9-47AA-961E-A997F39844A2@cis.upenn.edu> <618BE556AADD624C9C918AA5D5911BEF562AAB21@DB3PRD3001MB020.064d.mgd.msft.net> <201501200937.25031.jan.stolarek@p.lodz.pl> <8761c1ltym.fsf@gmail.com> <618BE556AADD624C9C918AA5D5911BEF562AE247@DB3PRD3001MB020.064d.mgd.msft.net> <54C039D0.4020206@apeiron.net> <6AA9EDDD-2444-4ADE-8675-793745BBA374@cse.unsw.edu.au> <618BE556AADD624C9C918AA5D5911BEF562AF251@DB3PRD3001MB020.064d.mgd.msft.net> <54C10257.5030907@apeiron.net> <87twziq0fo.fsf@gmail.com> <54C11F26.9080201@apeiron.net> Message-ID: The way I see it, the main cost of keeping DPH around is to handle breakages such as that with vector. I can?t promise to address those in a timely manner, which is why I agreed to disable/remove DPH. However, as Geoff stepped forward, this issue is solved. As for the overhead in compile time etc, I don?t think, it is that much of a deal. During development, most compiles runs are incremental anyway. Concerning handling of the DPH libraries, since Austin was looking at putting some love into the ?slow test suite. Maybe we could re-active the DPH tests, and hence, DPH library builds for ?slow? testing. 
The DPH libraries have shaken out many GHC bugs in the past. Adding them to ?slow? would ensure they don?t bit rot, improve the test suite, but keep it out of the main path for dev builds. Manuel > Geoffrey Mainland : > > On 01/22/2015 10:50 AM, Herbert Valerio Riedel wrote: >> On 2015-01-22 at 14:59:51 +0100, Geoffrey Mainland wrote: >>> The current situation is that DPH is not being built or maintained at >>> all. Given this state of affairs, it is hard to justify keeping it >>> around---DPH is just bitrotting. >>> >>> I am proposing that we reconnect it to the build and keep it building, >>> putting it in minimal maintenance mode. >> Ok, but how do we avoid issues like >> >> http://thread.gmane.org/gmane.comp.lang.haskell.ghc.devel/5645/ >> >> in the future then? DPH became painful back then, because we didn't know >> what to do with 'vector' (which as a package at the time also suffered >> from neglect of maintainership) >> >> >> Cheers, >> hvr >> > > That's part of "minimal maintenance mode." Yes, keeping DPH will impose > some burden. I am not pretending that keeping DPH imposes no cost, but > instead asking what cost we are willing to pay to keep DPH > working---maybe the answer is "none." > > As for the particular issue you mentioned, I patched DPH to fix > compatibility with the new vector. Those changes have been in the tree > for some time, but DPH was never reconnected to the build, so it has > bitrotted again. > > Note that vector *also* no longer builds with the other libraries in the > tree, so if we excise DPH, we should excise vector. > > I am willing to put some effort into fixing these sorts of problems when > they come up. That may still impose too much burden on the other developers. > > Geoff > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs From ekmett at gmail.com Tue Jan 27 23:29:50 2015 From: ekmett at gmail.com (Edward Kmett) Date: Tue, 27 Jan 2015 18:29:50 -0500 Subject: GHC support for the new "record" package In-Reply-To: <54C7554D.9080104@well-typed.com> References: <54BECC45.6010906@gmail.com> <54BFD870.7070602@gmail.com> <54C21FBF.4020809@gmail.com> <54C2219D.2080103@well-typed.com> <54C2C5EE.9030100@well-typed.com> <618BE556AADD624C9C918AA5D5911BEF562B4C2D@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF562B6EA9@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF562B75B2@DB3PRD3001MB020.064d.mgd.msft.net> <54C7554D.9080104@well-typed.com> Message-ID: On Tue, Jan 27, 2015 at 4:07 AM, Adam Gundry wrote: > > AFAICS it's still an open question as to whether that instance > should provide > > (a) selector functions r -> a > (b) lenses (a -> f b) -> s -> f t > (c) both > (d) neither > > but I'm starting to think (b) is the sanest option. > Glad I'm not the only voice in the wilderness ;) On the syntax question, Edward, could you say more about how you would > expect the magic imports to work? If a module both declares (or imports) > a record field `x` and magically imports `x`, what does a use of `x` > mean? (In the original ORF, we didn't have the magic module, but just > said that record fields were automatically polymorphic... that works but > is a bit fiddly in the renamer, and isn't a conservative extension.) > The straw man I was offering when this was just about {| foo :: .., ... 
|} -style records would be to have those bring into scope the Field.foo lenses by default as a courtesy, since there is nothing involved in that that necessarily ever defines a normal field accessor. I'm very much not convinced one way or the other if such a courtesy import would be better than requiring the user to do it by hand. It is when we start mixing this with ORF that things get confusing, which is of course why we're having this nice big discussion. Having definitions we bring from that module able to be used with normal records via something like the ORF makes sense. It invites some headaches though, as higher-rank fields seem to be a somewhat insurmountable obstacle to the latter, whereas they can be unceremoniously ignored in anonymous records, since they didn't exist before. As Neil noted, you _can_ write `foo = @foo` to make such an accessor have the lighter weight syntax. Of course, once folks start using template haskell to do so, we get right back to where we are today. It also invites the question of where such exports should be made. I'm less sanguine about the proposed IV class, as it doesn't actually work in its current incarnation in the proposal as mentioned above. Assuming it has been modified to actually compose and infer, the benefit of the `import Field (...)` or naked @foo approach is that if two modules bring in the same field they are both compatible when imported into a third module. One half-way serious option might be to have that Field or Lens or whatever module just export `foo = @foo` definitions from a canonical place so they can be shared, and to decide if folks have to import it explicitly to use it. Then @foo could be the lens to get at the contents of the field, can do type changing assignment, and users can import the fields to avoid the noise. It confess, the solution there feels quite heavy, though. -Edward Adam > > > On 27/01/15 00:59, Edward Kmett wrote: > > I'm also rather worried, looking over the IV proposal, that it just > > doesn't actually work. > > > > We actually tried the code under "Haskell 98 records" back when Gundry > > first started his proposal and it fell apart when you went to compose > them. > > > > A fundep/class associated type in the class is a stronger constraint > > that a type equality defined on an individual instance. > > > > I don't see how > > > > @foo . @bar . @baz > > > > (or #foo . #bar . #baz as would be written under the concrete proposal > > on the wiki) > > > > is ever supposed to figure out the intermediate types when working > > polymorphically in the data type involved. > > > > What happens when the type of that chain of accessors is left to > > inference? You get stuck wallowing in AllowAmbiguousTypes territory: > > > > (#foo . #bar . #baz) :: (IV "foo" (c -> d), IV "bar" (b -> c), IV "baz" > > (a -> b)) => a -> d > > > > has a variables 'b' and 'c' that don't occur on the right hand side, and > > which are only determinable by knowing that the instances you expect to > > see look something like: > > > > instance (a ~ Bool) => IV "x" (S -> a) where > > iv (MkS x) = x > > > > but that is too weak to figure out that "S" determines "a" unless S is > > already known, even if we just limit ourselves to field accessors as > > functions. > > > > -Edward > > > -- > Adam Gundry, Haskell Consultant > Well-Typed LLP, http://www.well-typed.com/ > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ekmett at gmail.com Tue Jan 27 23:47:37 2015 From: ekmett at gmail.com (Edward Kmett) Date: Tue, 27 Jan 2015 18:47:37 -0500 Subject: GHC support for the new "record" package In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF562B7EC9@DB3PRD3001MB020.064d.mgd.msft.net> References: <54BECC45.6010906@gmail.com> <54C21FBF.4020809@gmail.com> <54C2219D.2080103@well-typed.com> <54C2C5EE.9030100@well-typed.com> <618BE556AADD624C9C918AA5D5911BEF562B4C2D@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF562B6EA9@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF562B75B2@DB3PRD3001MB020.064d.mgd.msft.net> <54C7554D.9080104@well-typed.com> <54C7679C.2080708@well-typed.com> <618BE556AADD624C9C918AA5D5911BEF562B7EC9@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: On Tue, Jan 27, 2015 at 6:12 AM, Simon Peyton Jones wrote: > | 1. What are the IV instances provided in base? These could give > | selector functions, lenses, both or neither. > > My instinct: just selector functions. Leave lenses for a lens package. > How do these selectors actually typecheck when composed? Ignoring lenses all together for the moment, I don't see how IV works. > I still have not understood the argument for lenses being a function > rather that a newtype wrapping that function; apart from the (valuable) > ability to re-use ordinary (.), which is cute. Edward has explained this > several time, but I have failed to understand. You can make a data type data Lens s a = Lens (s -> a) (a -> s -> s) or newtype Lens s a = Lens (s -> (a, a -> s)) The latter is basically the approach I used to take in my old data-lens library. This works great for lenses that don't let you change types. You can write a Category instance for this notion of lens. You can make it compose the way functions normally compose (or you can flip the arguments and make it compose the way lenses in the lens library do, here you have an option.) Now, expand it to let you do type changing assignment. newtype Lens s t a b = Lens (s -> a) (s -> b -> t) Now we have 4 arguments, but Category wants 2. I've punted a way-too-messy aside about why 4 arguments are used to the end. [*] You can come up with a horrible way in which you can encode a GADT data Lens :: (*,*) -> (*,*) -> * where Lens :: (s -> a) -> (s -> b -> t) -> Lens '(s,t) '(a,b) but when you go to define instance Category Lens where id = ... you'd get stuck, because we can't prove that all inhabitants of (*,*) look like '(a,b) for some types a and b. On the other hand, you can make the data type too big data Lens :: * -> * -> * where Lens :: (s -> a) -> (s -> b -> t) -> Lens (s,t) (a,b) Id :: Lens a a but now you can distinguish too much information, GHC is doing case analysis everywhere, etc. Performance drops like a stone and it doesn't fit the abstraction. In short, using a dedicated data type costs you access to (.) for composition or costs you the ability to let the types change. -Edward [*] Why 4 arguments? We can make up our own combinators for putting these things together, but we can't use (.) from the Prelude or even from Control.Category. There are lots of ways to motivate the 4 argument version: Logically there are two type families involved the 'inner' family and the 'outer' one and the lens type looks like outer i is isomorphic to the pair of some 'complement' that doesn't depend on the index i, and some inner i. 
outer i <-> (complement, inner i) We can't talk about such families in Haskell though, we need them to compose by pullback/unification, so we fake it by using two instantiations of the schema outer i -> (inner i, inner j -> outer j) which is enough for 99% of the things a user wants to say with a lens or field accessor. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dan.doel at gmail.com Wed Jan 28 00:26:52 2015 From: dan.doel at gmail.com (Dan Doel) Date: Tue, 27 Jan 2015 19:26:52 -0500 Subject: GHC support for the new "record" package In-Reply-To: References: <54BECC45.6010906@gmail.com> <54C21FBF.4020809@gmail.com> <54C2219D.2080103@well-typed.com> <54C2C5EE.9030100@well-typed.com> <618BE556AADD624C9C918AA5D5911BEF562B4C2D@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF562B6EA9@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF562B75B2@DB3PRD3001MB020.064d.mgd.msft.net> <54C7554D.9080104@well-typed.com> <54C7679C.2080708@well-typed.com> <618BE556AADD624C9C918AA5D5911BEF562B7EC9@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: On Tue, Jan 27, 2015 at 6:47 PM, Edward Kmett wrote: > > This works great for lenses that don't let you change types. > ?This is not the only restriction required for this to be an acceptable solution. As soon as you have a distinct Lens type, and use something Category-like for composition, you are limiting yourself to composing two lenses to get back a lens (barring a terrible mptc 'solution'). And that is weak. The only reason I (personally) think lens pulls its weight, and is worth using (unlike every prior lens library, which I never bothered with), is the ability for lenses, prisms, ismorphisms, traversals, folds, etc. to properly degrade to one another and compose automatically. So if we're settling on a nominal Lens type in a proposal, then it is automatically only good for one thing to me: defining values of the better lens type.? -- Dan -------------- next part -------------- An HTML attachment was scrubbed... URL: From george.colpitts at gmail.com Wed Jan 28 01:39:26 2015 From: george.colpitts at gmail.com (George Colpitts) Date: Tue, 27 Jan 2015 21:39:26 -0400 Subject: ANNOUNCE: GHC 7.10.1 Release Candidate 2 In-Reply-To: References: Message-ID: Has anybody successfully build and used this on the Mac on 10.10 using the latest XCode? While it is better than RC1 I am still seeing the following two issues: - programs compiled with llvm fail at runtime with illegal instruction - calling main from the ghci inerpreter after loading compiled code results in - Prelude Main> main Too late for parseStaticFlags: call it before runGhc or runGhcT *** Exception: ExitFailure 1 Instead of solving the above, I'd be happy to switch to a Mac OS bindist and see if I have the same problems there. Do we have an ETA for a Mac OS bindist? Thanks George On Mon, Jan 26, 2015 at 8:13 PM, Austin Seipp wrote: > We are pleased to announce the second release candidate for GHC 7.10.1: > > https://downloads.haskell.org/~ghc/7.10.1-rc2/ > > This includes the source tarball and bindists for 64bit/32bit Linux > and Windows. Binary builds for other platforms will be available > shortly. (CentOS 6.5 binaries are not available at this time like they > were for 7.8.x). These binaries and tarballs have an accompanying > SHA256SUMS file signed by my GPG key id (0x3B58D86F). > > We plan to make the 7.10.1 release sometime in February of 2015. 
> > Please test as much as possible; bugs are much cheaper if we find them > before the release! > > -- > Regards, > > Austin Seipp, Haskell Consultant > Well-Typed LLP, http://www.well-typed.com/ > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Wed Jan 28 01:52:12 2015 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Tue, 27 Jan 2015 20:52:12 -0500 Subject: ANNOUNCE: GHC 7.10.1 Release Candidate 2 In-Reply-To: References: Message-ID: George, what version of llvm are you using? afaik, only llvm 3.5 is supported for 7.10 (though I could be wrong) On Tue, Jan 27, 2015 at 8:39 PM, George Colpitts wrote: > Has anybody successfully build and used this on the Mac on 10.10 using the > latest XCode? While it is better than RC1 I am still seeing the following > two issues: > > > - programs compiled with llvm fail at runtime with illegal instruction > - calling main from the ghci inerpreter after loading compiled code > results in > - Prelude Main> main > Too late for parseStaticFlags: call it before runGhc or runGhcT > *** Exception: ExitFailure 1 > > Instead of solving the above, I'd be happy to switch to a Mac OS bindist > and see if I have the same problems there. Do we have an ETA for a Mac OS > bindist? > > > Thanks > > George > > > On Mon, Jan 26, 2015 at 8:13 PM, Austin Seipp > wrote: > >> We are pleased to announce the second release candidate for GHC 7.10.1: >> >> https://downloads.haskell.org/~ghc/7.10.1-rc2/ >> >> This includes the source tarball and bindists for 64bit/32bit Linux >> and Windows. Binary builds for other platforms will be available >> shortly. (CentOS 6.5 binaries are not available at this time like they >> were for 7.8.x). These binaries and tarballs have an accompanying >> SHA256SUMS file signed by my GPG key id (0x3B58D86F). >> >> We plan to make the 7.10.1 release sometime in February of 2015. >> >> Please test as much as possible; bugs are much cheaper if we find them >> before the release! >> >> -- >> Regards, >> >> Austin Seipp, Haskell Consultant >> Well-Typed LLP, http://www.well-typed.com/ >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From george.colpitts at gmail.com Wed Jan 28 02:11:06 2015 From: george.colpitts at gmail.com (George Colpitts) Date: Tue, 27 Jan 2015 22:11:06 -0400 Subject: ANNOUNCE: GHC 7.10.1 Release Candidate 2 In-Reply-To: References: Message-ID: I have llvm 3.4.2. Not sure why I thought that was the supported version. Where would that be documented? There doesn't seem to be anything on this in https://downloads.haskell.org/~ghc/7.10.1-rc1/docs/html/users_guide/release-7-10-1.html There is lots of mail about llvm. I guess the following from Ben Gamari on 11/28 implies llvm 3.5. I couldn't find anything more definitive. 
Once I get a definitive answer I will try again assuming the answer is not 3.4.2 To summarize, * it seems like LLVM 3.4 chokes on the code produced by my 3.5 rework when the `$def` symbols are marked as internal * ARM is broken (again) due to a bug in the GHC calling convention implementation; an LLVM fix is waiting to be merged * I have code reworking TNTC for LLVM 3.6; unfortunately LLVM 3.6 support will likely need to wait until 7.12 * Austin's LLVM packaging proposal seems very much like the right way forward * Anticipating this proposal, I have started collecting [2] optimization passes Cheers, On Tue, Jan 27, 2015 at 9:52 PM, Carter Schonwald < carter.schonwald at gmail.com> wrote: > George, what version of llvm are you using? afaik, only llvm 3.5 is > supported for 7.10 (though I could be wrong) > > On Tue, Jan 27, 2015 at 8:39 PM, George Colpitts < > george.colpitts at gmail.com> wrote: > >> Has anybody successfully build and used this on the Mac on 10.10 using >> the latest XCode? While it is better than RC1 I am still seeing the >> following two issues: >> >> >> - programs compiled with llvm fail at runtime with illegal instruction >> - calling main from the ghci inerpreter after loading compiled code >> results in >> - Prelude Main> main >> Too late for parseStaticFlags: call it before runGhc or runGhcT >> *** Exception: ExitFailure 1 >> >> Instead of solving the above, I'd be happy to switch to a Mac OS bindist >> and see if I have the same problems there. Do we have an ETA for a Mac OS >> bindist? >> >> >> Thanks >> >> George >> >> >> On Mon, Jan 26, 2015 at 8:13 PM, Austin Seipp >> wrote: >> >>> We are pleased to announce the second release candidate for GHC 7.10.1: >>> >>> https://downloads.haskell.org/~ghc/7.10.1-rc2/ >>> >>> This includes the source tarball and bindists for 64bit/32bit Linux >>> and Windows. Binary builds for other platforms will be available >>> shortly. (CentOS 6.5 binaries are not available at this time like they >>> were for 7.8.x). These binaries and tarballs have an accompanying >>> SHA256SUMS file signed by my GPG key id (0x3B58D86F). >>> >>> We plan to make the 7.10.1 release sometime in February of 2015. >>> >>> Please test as much as possible; bugs are much cheaper if we find them >>> before the release! >>> >>> -- >>> Regards, >>> >>> Austin Seipp, Haskell Consultant >>> Well-Typed LLP, http://www.well-typed.com/ >>> _______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://www.haskell.org/mailman/listinfo/ghc-devs >>> >> >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark.lentczner at gmail.com Wed Jan 28 03:31:29 2015 From: mark.lentczner at gmail.com (Mark Lentczner) Date: Tue, 27 Jan 2015 19:31:29 -0800 Subject: ANNOUNCE: GHC 7.10.1 Release Candidate 2 In-Reply-To: References: Message-ID: I've just built a bindist under 10.10, but just normal not expressly llvm. I'll test this in a bit then post it -- but might be sometime tomorrow before it is up. - Mark ? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mark.lentczner at gmail.com Wed Jan 28 03:41:52 2015 From: mark.lentczner at gmail.com (Mark Lentczner) Date: Tue, 27 Jan 2015 19:41:52 -0800 Subject: GHC support for the new "record" package In-Reply-To: References: <54BECC45.6010906@gmail.com> <54C21FBF.4020809@gmail.com> <54C2219D.2080103@well-typed.com> <54C2C5EE.9030100@well-typed.com> <618BE556AADD624C9C918AA5D5911BEF562B4C2D@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF562B6EA9@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF562B75B2@DB3PRD3001MB020.064d.mgd.msft.net> <54C7554D.9080104@well-typed.com> <54C7679C.2080708@well-typed.com> <618BE556AADD624C9C918AA5D5911BEF562B7EC9@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: On Tue, Jan 27, 2015 at 3:47 PM, Edward Kmett wrote: > We can make up our own combinators for putting these things together, but > we can't use (.) from the Prelude or even from Control.Category. > Is this the only reason *not* to have a data type? (Sorry, I wasn't totally following the GADT-nastics!) That is, if, for a moment, we just assume a different operator for composing lenses, then will a data/newtype work? Now, *if* (as I understand it), under IV (assuming it work), it works for lens libraries iff they use a data/newtype for the lens (so that their instance is *the* instance for ->, I'm guessing)...... *then*, I say using a different operator for compose is a small price to pay. (Well, as I said before, I'd actually prefer a different compose operator!) Mind you, I might be totally mis-understanding the arguments and reasoning! -------------- next part -------------- An HTML attachment was scrubbed... URL: From george.colpitts at gmail.com Wed Jan 28 04:05:22 2015 From: george.colpitts at gmail.com (George Colpitts) Date: Wed, 28 Jan 2015 00:05:22 -0400 Subject: ANNOUNCE: GHC 7.10.1 Release Candidate 2 In-Reply-To: References: Message-ID: On Tuesday, January 27, 2015, George Colpitts wrote: > Thanks Carter!! llvm 3.5.1 fixed my llvm problem > That leaves one issue with ghci and the normal compiler > > - calling main from the ghci interpreter after loading *compiled* code > results in > - Prelude Main> main > Too late for parseStaticFlags: call it before runGhc or runGhcT > *** Exception: ExitFailure 1 > > > > On Tue, Jan 27, 2015 at 10:11 PM, George Colpitts < > george.colpitts at gmail.com > > wrote: > >> I have llvm 3.4.2. Not sure why I thought that was the supported version. >> Where would that be documented? There doesn't seem to be anything on this >> in >> https://downloads.haskell.org/~ghc/7.10.1-rc1/docs/html/users_guide/release-7-10-1.html >> >> There is lots of mail about llvm. I guess the following from Ben Gamari >> on 11/28 implies llvm 3.5. I couldn't find anything more definitive. 
>> >> Once I get a definitive answer I will try again assuming the answer is >> not 3.4.2 >> >> To summarize, >> >> * it seems like LLVM 3.4 chokes on the code produced by my 3.5 rework >> when the `$def` symbols are marked as internal >> >> * ARM is broken (again) due to a bug in the GHC calling convention >> implementation; an LLVM fix is waiting to be merged >> >> * I have code reworking TNTC for LLVM 3.6; unfortunately LLVM 3.6 >> support will likely need to wait until 7.12 >> >> * Austin's LLVM packaging proposal seems very much like the right way >> forward >> >> * Anticipating this proposal, I have started collecting [2] >> optimization passes >> >> Cheers, >> >> >> >> On Tue, Jan 27, 2015 at 9:52 PM, Carter Schonwald < >> carter.schonwald at gmail.com >> > wrote: >> >>> George, what version of llvm are you using? afaik, only llvm 3.5 is >>> supported for 7.10 (though I could be wrong) >>> >>> On Tue, Jan 27, 2015 at 8:39 PM, George Colpitts < >>> george.colpitts at gmail.com >>> > wrote: >>> >>>> Has anybody successfully build and used this on the Mac on 10.10 using >>>> the latest XCode? While it is better than RC1 I am still seeing the >>>> following two issues: >>>> >>>> >>>> - programs compiled with llvm fail at runtime with illegal >>>> instruction >>>> - calling main from the ghci inerpreter after loading compiled code >>>> results in >>>> - Prelude Main> main >>>> Too late for parseStaticFlags: call it before runGhc or runGhcT >>>> *** Exception: ExitFailure 1 >>>> >>>> Instead of solving the above, I'd be happy to switch to a Mac OS >>>> bindist and see if I have the same problems there. Do we have an ETA for a >>>> Mac OS bindist? >>>> >>>> >>>> Thanks >>>> >>>> George >>>> >>>> >>>> On Mon, Jan 26, 2015 at 8:13 PM, Austin Seipp >>> > wrote: >>>> >>>>> We are pleased to announce the second release candidate for GHC 7.10.1: >>>>> >>>>> https://downloads.haskell.org/~ghc/7.10.1-rc2/ >>>>> >>>>> This includes the source tarball and bindists for 64bit/32bit Linux >>>>> and Windows. Binary builds for other platforms will be available >>>>> shortly. (CentOS 6.5 binaries are not available at this time like they >>>>> were for 7.8.x). These binaries and tarballs have an accompanying >>>>> SHA256SUMS file signed by my GPG key id (0x3B58D86F). >>>>> >>>>> We plan to make the 7.10.1 release sometime in February of 2015. >>>>> >>>>> Please test as much as possible; bugs are much cheaper if we find them >>>>> before the release! >>>>> >>>>> -- >>>>> Regards, >>>>> >>>>> Austin Seipp, Haskell Consultant >>>>> Well-Typed LLP, http://www.well-typed.com/ >>>>> _______________________________________________ >>>>> ghc-devs mailing list >>>>> ghc-devs at haskell.org >>>>> >>>>> http://www.haskell.org/mailman/listinfo/ghc-devs >>>>> >>>> >>>> >>>> _______________________________________________ >>>> ghc-devs mailing list >>>> ghc-devs at haskell.org >>>> >>>> http://www.haskell.org/mailman/listinfo/ghc-devs >>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From simonpj at microsoft.com Wed Jan 28 10:02:34 2015 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 28 Jan 2015 10:02:34 +0000 Subject: GHC support for the new "record" package In-Reply-To: References: <54BECC45.6010906@gmail.com> <54C21FBF.4020809@gmail.com> <54C2219D.2080103@well-typed.com> <54C2C5EE.9030100@well-typed.com> <618BE556AADD624C9C918AA5D5911BEF562B4C2D@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF562B6EA9@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF562B75B2@DB3PRD3001MB020.064d.mgd.msft.net> <54C7554D.9080104@well-typed.com> <54C7679C.2080708@well-typed.com> <618BE556AADD624C9C918AA5D5911BEF562B7EC9@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <618BE556AADD624C9C918AA5D5911BEF562B8B87@DB3PRD3001MB020.064d.mgd.msft.net> Ignoring lenses all together for the moment, I don't see how IV works. Could you take a look at the current version of https://ghc.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields/Redesign and the give an example of something problematic. You may well be right, but it?s hard to know without something specific to bite on. You can make a data type data Lens s a = Lens (s -> a) (a -> s -> s) You could, but that would be very different to your lovely lenses today, and it is certainly not what I was suggesting. All I was suggesting was newtype Lens s t a b = L { unwrap :: forall f. Functor f => (a -> f b) -> s -> f t } Just a wrapper around the precise type you use today. So presumably anything whatsoever that you can do today, you can also do by saying (unwrap l). Yes, that means you can?t use ordinary function composition (.) for these wrapped lenses. I agree that?s a pity. Perhaps this single point is so important that it justifies breaking abstraction. But breaking abstractions comes with costs, to error messages, and to future evolution. Are there other costs to having the abstraction, or is it just (.)? After all, the lens combinators themselves can wrap and unwrap to their heart?s content; it?s just the clients of the library that we care about here. Simon From: Edward Kmett [mailto:ekmett at gmail.com] Sent: 27 January 2015 23:48 To: Simon Peyton Jones Cc: Adam Gundry; ghc-devs at haskell.org Subject: Re: GHC support for the new "record" package On Tue, Jan 27, 2015 at 6:12 AM, Simon Peyton Jones > wrote: | 1. What are the IV instances provided in base? These could give | selector functions, lenses, both or neither. My instinct: just selector functions. Leave lenses for a lens package. How do these selectors actually typecheck when composed? Ignoring lenses all together for the moment, I don't see how IV works. I still have not understood the argument for lenses being a function rather that a newtype wrapping that function; apart from the (valuable) ability to re-use ordinary (.), which is cute. Edward has explained this several time, but I have failed to understand. You can make a data type data Lens s a = Lens (s -> a) (a -> s -> s) or newtype Lens s a = Lens (s -> (a, a -> s)) The latter is basically the approach I used to take in my old data-lens library. This works great for lenses that don't let you change types. You can write a Category instance for this notion of lens. You can make it compose the way functions normally compose (or you can flip the arguments and make it compose the way lenses in the lens library do, here you have an option.) Now, expand it to let you do type changing assignment. 
newtype Lens s t a b = Lens (s -> a) (s -> b -> t) Now we have 4 arguments, but Category wants 2. I've punted a way-too-messy aside about why 4 arguments are used to the end. [*] You can come up with a horrible way in which you can encode a GADT data Lens :: (*,*) -> (*,*) -> * where Lens :: (s -> a) -> (s -> b -> t) -> Lens '(s,t) '(a,b) but when you go to define instance Category Lens where id = ... you'd get stuck, because we can't prove that all inhabitants of (*,*) look like '(a,b) for some types a and b. On the other hand, you can make the data type too big data Lens :: * -> * -> * where Lens :: (s -> a) -> (s -> b -> t) -> Lens (s,t) (a,b) Id :: Lens a a but now you can distinguish too much information, GHC is doing case analysis everywhere, etc. Performance drops like a stone and it doesn't fit the abstraction. In short, using a dedicated data type costs you access to (.) for composition or costs you the ability to let the types change. -Edward [*] Why 4 arguments? We can make up our own combinators for putting these things together, but we can't use (.) from the Prelude or even from Control.Category. There are lots of ways to motivate the 4 argument version: Logically there are two type families involved the 'inner' family and the 'outer' one and the lens type looks like outer i is isomorphic to the pair of some 'complement' that doesn't depend on the index i, and some inner i. outer i <-> (complement, inner i) We can't talk about such families in Haskell though, we need them to compose by pullback/unification, so we fake it by using two instantiations of the schema outer i -> (inner i, inner j -> outer j) which is enough for 99% of the things a user wants to say with a lens or field accessor. -------------- next part -------------- An HTML attachment was scrubbed... URL: From benno.fuenfstueck at gmail.com Wed Jan 28 10:18:29 2015 From: benno.fuenfstueck at gmail.com (=?UTF-8?B?QmVubm8gRsO8bmZzdMO8Y2s=?=) Date: Wed, 28 Jan 2015 10:18:29 +0000 Subject: GHC support for the new "record" package References: <54BECC45.6010906@gmail.com> <54C21FBF.4020809@gmail.com> <54C2219D.2080103@well-typed.com> <54C2C5EE.9030100@well-typed.com> <618BE556AADD624C9C918AA5D5911BEF562B4C2D@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF562B6EA9@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF562B75B2@DB3PRD3001MB020.064d.mgd.msft.net> <54C7554D.9080104@well-typed.com> <54C7679C.2080708@well-typed.com> <618BE556AADD624C9C918AA5D5911BEF562B7EC9@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF562B8B87@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: Hi Simon, One problem with the newtype approach is that you can no longer write a single function that can work with all of lenses, traversals or other optics. For example, in the current lens library, set can be used for prism, lens, iso and traversal. That would just not be possible when using a newtype. Regards, Benno Simon Peyton Jones schrieb am Mi., 28. Jan. 2015 11:03: > Ignoring lenses all together for the moment, I don't see how IV works. > > > > Could you take a look at the *current* version of > https://ghc.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields/Redesign > > > > and the give an example of something problematic. You may well be right, > but it?s hard to know without something specific to bite on. 
> > > > You can make a data type > > data Lens s a = Lens (s -> a) (a -> s -> s) > > > > You could, but that would be very different to your lovely lenses today, > and it is certainly not what I was suggesting. All I was suggesting was > > > > newtype Lens s t a b = L { unwrap :: forall f. Functor > > f => (a -> f b) -> s -> f t } > > > > Just a wrapper around the precise type you use today. So presumably > anything whatsoever that you can do today, you can also do by saying > (unwrap l). > > > > Yes, that means you can?t use ordinary function composition (.) for these > wrapped lenses. I agree that?s a pity. Perhaps this single point is so > important that it justifies breaking abstraction. But breaking > abstractions comes with costs, to error messages, and to future evolution. > > > > Are there other costs to having the abstraction, or is it just (.)? After > all, the lens combinators themselves can wrap and unwrap to their heart?s > content; it?s just the clients of the library that we care about here. > > > > Simon > > > > *From:* Edward Kmett [mailto:ekmett at gmail.com] > *Sent:* 27 January 2015 23:48 > *To:* Simon Peyton Jones > *Cc:* Adam Gundry; ghc-devs at haskell.org > > > *Subject:* Re: GHC support for the new "record" package > > > > On Tue, Jan 27, 2015 at 6:12 AM, Simon Peyton Jones > wrote: > > | 1. What are the IV instances provided in base? These could give > | selector functions, lenses, both or neither. > > My instinct: just selector functions. Leave lenses for a lens package. > > > > How do these selectors actually typecheck when composed? > > > > Ignoring lenses all together for the moment, I don't see how IV works. > > > > > > I still have not understood the argument for lenses being a function > rather that a newtype wrapping that function; apart from the (valuable) > ability to re-use ordinary (.), which is cute. Edward has explained this > several time, but I have failed to understand. > > > > You can make a data type > > > > data Lens s a = Lens (s -> a) (a -> s -> s) > > > > or > > > > newtype Lens s a = Lens (s -> (a, a -> s)) > > > > The latter is basically the approach I used to take in my old data-lens > library. > > > > This works great for lenses that don't let you change types. > > > > You can write a Category instance for this notion of lens. > > > > You can make it compose the way functions normally compose (or you can > flip the arguments and make it compose the way lenses in the lens library > do, here you have an option.) > > > > Now, expand it to let you do type changing assignment. > > > > newtype Lens s t a b = Lens (s -> a) (s -> b -> t) > > > > Now we have 4 arguments, but Category wants 2. > > > > I've punted a way-too-messy aside about why 4 arguments are used to the > end. [*] > > > > You can come up with a horrible way in which you can encode a GADT > > > > data Lens :: (*,*) -> (*,*) -> * where > > Lens :: (s -> a) -> (s -> b -> t) -> Lens '(s,t) '(a,b) > > > > but when you go to define > > > > instance Category Lens where > > id = ... > > > > you'd get stuck, because we can't prove that all inhabitants of (*,*) look > like '(a,b) for some types a and b. > > > > On the other hand, you can make the data type too big > > > > data Lens :: * -> * -> * where > > Lens :: (s -> a) -> (s -> b -> t) -> Lens (s,t) (a,b) > > Id :: Lens a a > > > > but now you can distinguish too much information, GHC is doing case > analysis everywhere, etc. > > > > Performance drops like a stone and it doesn't fit the abstraction. 
> > > > In short, using a dedicated data type costs you access to (.) for > composition or costs you the ability to let the types change. > > > > -Edward > > > > [*] Why 4 arguments? > > > > We can make up our own combinators for putting these things together, but > we can't use (.) from the Prelude or even from Control.Category. > > > > There are lots of ways to motivate the 4 argument version: > > > > Logically there are two type families involved the 'inner' family and the > 'outer' one and the lens type looks like > > > > outer i is isomorphic to the pair of some 'complement' that doesn't depend > on the index i, and some inner i. > > > > outer i <-> (complement, inner i) > > > > We can't talk about such families in Haskell though, we need them to > compose by pullback/unification, so we fake it by using two instantiations > of the schema > > > > outer i -> (inner i, inner j -> outer j) > > > > which is enough for 99% of the things a user wants to say with a lens or > field accessor. > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Wed Jan 28 10:32:47 2015 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 28 Jan 2015 10:32:47 +0000 Subject: GHC support for the new "record" package In-Reply-To: References: <54BECC45.6010906@gmail.com> <54C21FBF.4020809@gmail.com> <54C2219D.2080103@well-typed.com> <54C2C5EE.9030100@well-typed.com> <618BE556AADD624C9C918AA5D5911BEF562B4C2D@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF562B6EA9@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF562B75B2@DB3PRD3001MB020.064d.mgd.msft.net> <54C7554D.9080104@well-typed.com> <54C7679C.2080708@well-typed.com> <618BE556AADD624C9C918AA5D5911BEF562B7EC9@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <618BE556AADD624C9C918AA5D5911BEF562B8C6F@DB3PRD3001MB020.064d.mgd.msft.net> As soon as you have a distinct Lens type, and use something Category-like for composition, you are limiting yourself to composing two lenses to get back a lens (barring a terrible mptc 'solution'). And that is weak. The only reason I (personally) think lens pulls its weight, and is worth using (unlike every prior lens library, which I never bothered with), is the ability for lenses, prisms, ismorphisms, traversals, folds, etc. to properly degrade to one another and compose automatically.? Aha. I keep asking whether it?s just the cute ability to re-use (.) that justifies the lack of abstraction in the Lens type. But Dan?s comment has made me remember something from my own talk on the subject. Here are the types of lenses and traversals (2-parameter versions): type Lens? s a = forall f. Functor f => (a -> f a) -> (s -> f s) type Traversal? s a = forall f. Applicative f => (a -> f a) -> (s -> f s) Suppose we have ln1 :: Lens' s1 s2 tr1 :: Traversal' s1 s2 ln2 :: Lens' s2 a tr2 :: Traversal' s2 a Now these compositions are all well typed ln1 . ln2 :: Lens' s1 a tr1 . tr2 :: Traversal' s1 a tr1 . ln2 :: Traversal' s1 a ln1 . tr2 :: Traversal' s1 a which is quite remarkable. If Lens? and Traversal? were newtypes, you?d need four different operators. (I think that what Dan means by ?a terrible mptc solution? is trying to overload those four operators into one.) I don?t know if this exhausts the reasons that lenses are not abstract. 
I would love to know more, explained in a smilar style. Incidentally has anyone explored this? newtype PolyLens c s a = PL (forall f. c f => (a -> f a) -> s -> f s) I?ve just abstracted over the Functor/Applicative part, so that Lens? and Traversal? are both PolyLenses. Now perhaps we can do (.), with a type like (.) :: PolyLens c1 s1 s2 -> PolyLens c2 s2 a -> PolyLens (And c1 c2) s1 a where And is a type function type instance And Functor Applicative = Applicative etc I have no idea whether this could be made to work out, but it seems like an obvious avenue so I wonder if anyone has explored it. Simon From: Dan Doel [mailto:dan.doel at gmail.com] Sent: 28 January 2015 00:27 To: Edward Kmett Cc: Simon Peyton Jones; ghc-devs at haskell.org Subject: Re: GHC support for the new "record" package On Tue, Jan 27, 2015 at 6:47 PM, Edward Kmett > wrote: This works great for lenses that don't let you change types. ?This is not the only restriction required for this to be an acceptable solution. As soon as you have a distinct Lens type, and use something Category-like for composition, you are limiting yourself to composing two lenses to get back a lens (barring a terrible mptc 'solution'). And that is weak. The only reason I (personally) think lens pulls its weight, and is worth using (unlike every prior lens library, which I never bothered with), is the ability for lenses, prisms, ismorphisms, traversals, folds, etc. to properly degrade to one another and compose automatically. So if we're settling on a nominal Lens type in a proposal, then it is automatically only good for one thing to me: defining values of the better lens type.? -- Dan -------------- next part -------------- An HTML attachment was scrubbed... URL: From tuncer.ayaz at gmail.com Wed Jan 28 10:45:24 2015 From: tuncer.ayaz at gmail.com (Tuncer Ayaz) Date: Wed, 28 Jan 2015 11:45:24 +0100 Subject: ANNOUNCE: GHC 7.10.1 Release Candidate 2 In-Reply-To: References: Message-ID: With all three listed build.mk settings, I'm unable to make install: arch: linux-amd64 I had no problem building rc1 (7.10.0.20141222). # config 1 BuildFlavour = perf V = 0 GhcLibWays = v DYNAMIC_GHC_PROGRAMS = NO DYNAMIC_BY_DEFAULT = NO GhcHcOpts= # config 2 BuildFlavour = perf V = 0 DYNAMIC_GHC_PROGRAMS = NO DYNAMIC_BY_DEFAULT = NO GhcHcOpts= # config 3 BuildFlavour = perf V = 0 DYNAMIC_GHC_PROGRAMS = NO DYNAMIC_BY_DEFAULT = NO # copy config to mk/build.mk $ perl boot $ ./configure --prefix=/usr/local/ghc/7.10.0.20150123 $ make install [...] Installing library in /usr/local/ghc/7.10.0.20150123/lib/ghc-7.10.0.20150123/ghcpr_FgrV6cgh2JHBlbcx1OSlwt ghc-cabal: dist-install/build/HSghcpr_FgrV6cgh2JHBlbcx1OSlwt.o: does not exist ghc.mk:918: recipe for target 'install_packages' failed make[1]: *** [install_packages] Error 1 Makefile:71: recipe for target 'install' failed make: *** [install] Error 2 From simonpj at microsoft.com Wed Jan 28 11:01:27 2015 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Wed, 28 Jan 2015 11:01:27 +0000 Subject: Put "Error:" before error output In-Reply-To: References: Message-ID: <618BE556AADD624C9C918AA5D5911BEF562B8D88@DB3PRD3001MB020.064d.mgd.msft.net> There's a ticket for this now. https://ghc.haskell.org/trac/ghc/ticket/10021 Do add comments there, or they'll get lost. But I think you are misunderstanding the proposal (which admittedly is not stated clearly). It only affects the error messages produced by GHC itself. There is no proposal to change the behaviour of the 'error' function. 
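To illustrate what is intended (module name and message made up, and the layout only from memory): today GHC reports

    Foo.hs:12:7:
        Couldn't match expected type 'Int' with actual type 'Bool'

while warnings already carry a tag:

    Foo.hs:3:1: Warning:
        Defined but not used: 'unused'

The proposal is just to give the first kind of message the analogous "Error:" tag, so that build logs can be filtered mechanically; a call to the 'error' function at run time would keep printing exactly what it prints today.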
Simon | -----Original Message----- | From: ghc-devs [mailto:ghc-devs-bounces at haskell.org] On Behalf Of | Yitzchak Gale | Sent: 27 January 2015 16:49 | To: Konstantine Rybnikov | Cc: ghc-devs at haskell.org | Subject: Re: Put "Error:" before error output | | -1 | | There are common idioms that rely on the current behavior, so I think | this would break a lot code. | | Examples: | | In command line programs, it is very common to use "error" for | printing the usage message. | | Many programs use "error" as a general way to exit from pure code with | a message. | | I'm not commenting about whether or not those are good practice, just | reporting that they are out there. | | I would be in favor of this though if it is off by default and is | turned on by an option or pragma. But not just -Werror, though, except | for messages that would otherwise have been prefixed by "Warning", | like the current behavior. | | Thanks, | Yitz | | | | On Fri, Jan 23, 2015 at 1:04 PM, Konstantine Rybnikov | wrote: | > Hi! | > | > I'm bringing this up once again. Can we add "Error:" in the output | of | > an error in a similar way ghc shows "Warning:" for warnings? Main | > reasoning is that, for example, on a build-server, where you have | lots | > of cores to build your program, if you get an error, it gets lost | > somewhere in the middle of compiler's output in all other "Warning" | > messages you get, since error is not always shown last on multi-core | build. | > | > Thanks. | > | > _______________________________________________ | > ghc-devs mailing list | > ghc-devs at haskell.org | > http://www.haskell.org/mailman/listinfo/ghc-devs | > | _______________________________________________ | ghc-devs mailing list | ghc-devs at haskell.org | http://www.haskell.org/mailman/listinfo/ghc-devs From dan.doel at gmail.com Wed Jan 28 16:51:42 2015 From: dan.doel at gmail.com (Dan Doel) Date: Wed, 28 Jan 2015 11:51:42 -0500 Subject: GHC support for the new "record" package In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF562B8C6F@DB3PRD3001MB020.064d.mgd.msft.net> References: <54BECC45.6010906@gmail.com> <54C21FBF.4020809@gmail.com> <54C2219D.2080103@well-typed.com> <54C2C5EE.9030100@well-typed.com> <618BE556AADD624C9C918AA5D5911BEF562B4C2D@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF562B6EA9@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF562B75B2@DB3PRD3001MB020.064d.mgd.msft.net> <54C7554D.9080104@well-typed.com> <54C7679C.2080708@well-typed.com> <618BE556AADD624C9C918AA5D5911BEF562B7EC9@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF562B8C6F@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: The MPTC solution I was thinking of was similar to the approach one would take to making arithmetic work like it does in most other languages. I.E. class Arithmetic a b c | a b -> c where (+) :: a -> b -> c ... One takes that class and then enumerates instances for all combinations of auto-promotions that should take place. The And type function is similar to this, although it's cleaner than having separate types and an MPTC (I think). Maybe it could be made to work, but I think it's still difficult, and worry that it might never work as well as what lens currently does. For instance, when we go to write (.), how do we prove that: forall f. (c1 f, c2 f) => (a -> f b) -> s -> f t (which is what we'll have) is the same type as: forall f. And c1 c2 f => (a -> f b) -> s -> f t (which is what we'll need)? 
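Spelling that out, the kind of thing I have in mind is roughly this, using a type-changing variant of the PolyLens newtype (a sketch I have not pushed through a type checker, so the details may well be off):

    {-# LANGUAGE ConstraintKinds, KindSignatures, RankNTypes, TypeFamilies #-}
    import GHC.Exts (Constraint)

    type family And (c :: (* -> *) -> Constraint)
                    (d :: (* -> *) -> Constraint)
                 :: (* -> *) -> Constraint where
      And Functor     Functor     = Functor
      And Functor     Applicative = Applicative
      And Applicative Functor     = Applicative
      And Applicative Applicative = Applicative
      -- ... one equation per pair of classes we care about

    newtype PolyLens c s t a b =
      PL (forall f. c f => (a -> f b) -> s -> f t)

    -- The composition we would want:
    --
    --   compose :: PolyLens c1 s t u v -> PolyLens c2 u v a b
    --           -> PolyLens (And c1 c2) s t a b
    --   compose (PL f) (PL g) = PL (f . g)
    --
    -- Inside 'compose', f . g naturally has the first type above
    -- (forall f. (c1 f, c2 f) => ...), but PL wants the second
    -- (forall f. And c1 c2 f => ...), and nothing connects the two.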
If we define And by cases, it will not reduce when c1 and c2 are opaque. But we must define And by cases or it won't be able to reduce things like 'And Applicative Functor' to just 'Applicative' automatically, which is also important (and we probably wouldn't be able to partially apply it). We could also define (.) by cases, but then we are back to enumerating all the combinations of lens-like things, Arithmetic style. And all with identical implementations. Perhaps this style solution is acceptable due to lens having a closed set of interacting types (or does it). But it seems a lot messier at first blush. -- Dan On Wed, Jan 28, 2015 at 5:32 AM, Simon Peyton Jones wrote: > As soon as you have a distinct Lens type, and use something > Category-like for composition, you are limiting yourself to composing two > lenses to get back a lens (barring a terrible mptc 'solution'). And that is > weak. The only reason I (personally) think lens pulls its weight, and is > worth using (unlike every prior lens library, which I never bothered with), > is the ability for lenses, prisms, ismorphisms, traversals, folds, etc. to > properly degrade to one another and compose automatically.? > > Aha. I keep asking whether it?s just the cute ability to re-use (.) that > justifies the lack of abstraction in the Lens type. But Dan?s comment has > made me remember something from my own talk on the subject. Here are the > types of lenses and traversals (2-parameter versions): > > > > type Lens? s a = forall f. Functor f > > => (a -> f a) -> (s -> f s) > > type Traversal? s a = forall f. Applicative f > > => (a -> f a) -> (s -> f s) > > > > Suppose we have > > ln1 :: Lens' s1 s2 > > tr1 :: Traversal' s1 s2 > > ln2 :: Lens' s2 a > > tr2 :: Traversal' s2 a > > > > Now these compositions are all well typed > > ln1 . ln2 :: Lens' s1 a > > tr1 . tr2 :: Traversal' s1 a > > tr1 . ln2 :: Traversal' s1 a > > ln1 . tr2 :: Traversal' s1 a > > > > which is quite remarkable. If Lens? and Traversal? were newtypes, you?d > need four different operators. (I think that what Dan means by ?a terrible > mptc solution? is trying to overload those four operators into one.) > > > > I don?t know if this exhausts the reasons that lenses are not abstract. I > would love to know more, explained in a smilar style. > > > > Incidentally has anyone explored this? > > > > newtype PolyLens c s a = PL (forall f. c f => (a -> f a) -> s -> f s) > > > > I?ve just abstracted over the Functor/Applicative part, so that Lens? and > Traversal? are both PolyLenses. Now perhaps we can do (.), with a type like > > > > (.) :: PolyLens c1 s1 s2 -> PolyLens c2 s2 a -> PolyLens (And c1 c2) s1 a > > > > where And is a type function > > > > type instance And Functor Applicative = Applicative > > etc > > > > I have no idea whether this could be made to work out, but it seems like > an obvious avenue so I wonder if anyone has explored it. > > > > Simon > > > > *From:* Dan Doel [mailto:dan.doel at gmail.com] > *Sent:* 28 January 2015 00:27 > *To:* Edward Kmett > *Cc:* Simon Peyton Jones; ghc-devs at haskell.org > *Subject:* Re: GHC support for the new "record" package > > > > On Tue, Jan 27, 2015 at 6:47 PM, Edward Kmett wrote: > > > > This works great for lenses that don't let you change types. > > > > ?This is not the only restriction required for this to be an acceptable > solution. 
> > As soon as you have a distinct Lens type, and use something Category-like > for composition, you are limiting yourself to composing two lenses to get > back a lens (barring a terrible mptc 'solution'). And that is weak. The > only reason I (personally) think lens pulls its weight, and is worth using > (unlike every prior lens library, which I never bothered with), is the > ability for lenses, prisms, ismorphisms, traversals, folds, etc. to > properly degrade to one another and compose automatically. So if we're > settling on a nominal Lens type in a proposal, then it is automatically > only good for one thing to me: defining values of the better lens type.? > > -- Dan > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chrisdone at gmail.com Wed Jan 28 21:26:19 2015 From: chrisdone at gmail.com (Christopher Done) Date: Wed, 28 Jan 2015 22:26:19 +0100 Subject: GHC support for the new "record" package In-Reply-To: <54BECC45.6010906@gmail.com> References: <54BECC45.6010906@gmail.com> Message-ID: There?s too much to absorb in this discussion at the moment and I?m late to the party anyway, but I would like to make a small note on syntax. Given that this is very similar to TRex both in behaviour and syntactic means of construction, why not just take TRex?s actual syntax? http://en.wikipedia.org/wiki/Hugs#Extensible_records type Point2D = Rec (x::Coord, y::Coord) point2D = (x=1, y=1) :: Point2D (#x point) It seems like it wouldn?t create any syntactical ambiguities (which is probably why the Hugs developers chose it). Ciao On 20 January 2015 at 22:44, Simon Marlow wrote: > For those who haven't seen this, Nikita Volkov proposed a new approach to > anonymous records, which can be found in the "record" package on Hackage: > http://hackage.haskell.org/package/record > > It had a *lot* of attention on Reddit: > http://nikita-volkov.github.io/record/ > > Now, the solution is very nice and lightweight, but because it is > implemented outside GHC it relies on quasi-quotation (amazing that it can be > done at all!). It has some limitations because it needs to parse Haskell > syntax, and Haskell is big. So we could make this a lot smoother, both for > the implementation and the user, by directly supporting anonymous record > syntax in GHC. Obviously we'd have to move the library code into base too. > > This message is by way of kicking off the discussion, since nobody else > seems to have done so yet. Can we agree that this is the right thing and > should be directly supported by GHC? At this point we'd be aiming for 7.12. > > Who is interested in working on this? Nikita? > > There are various design decisions to think about. For example, when the > quasi-quote brackets are removed, the syntax will conflict with the existing > record syntax. The syntax ends up being similar to Simon's 2003 proposal > http://research.microsoft.com/en-us/um/people/simonpj/Haskell/records.html > (there are major differences though, notably the use of lenses for selection > and update). 
> > I created a template wiki page: > https://ghc.haskell.org/trac/ghc/wiki/Records/Volkov > > Cheers, > Simon > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs From nikita.y.volkov at mail.ru Wed Jan 28 21:48:02 2015 From: nikita.y.volkov at mail.ru (Nikita Volkov) Date: Thu, 29 Jan 2015 00:48:02 +0300 Subject: GHC support for the new "record" package In-Reply-To: References: <54BECC45.6010906@gmail.com> Message-ID: Chris, this is great! Looks like we can even get rid of the Rec prefix! - A phrase in round braces and with :: is itself unambiguous in the type context. - A phrase in round braces with = symbols is unambiguous in the expression context. Concerning the pattern context a solution needs to be found though. But the two points above are enough for me to fall in love with this direction! The {| braces had a too icky of a touch to them and the plain { required the user to choose whether to use the standard record syntax or anonymous one on the module scale, but not both. ? 2015-01-29 0:26 GMT+03:00 Christopher Done : > There?s too much to absorb in this discussion at the moment and I?m > late to the party anyway, but I would like to make a small note on > syntax. Given that this is very similar to TRex both in behaviour and > syntactic means of construction, why not just take TRex?s actual > syntax? http://en.wikipedia.org/wiki/Hugs#Extensible_records > > type Point2D = Rec (x::Coord, y::Coord) > point2D = (x=1, y=1) :: Point2D > (#x point) > > It seems like it wouldn?t create any syntactical ambiguities (which is > probably why the Hugs developers chose it). > > Ciao > > On 20 January 2015 at 22:44, Simon Marlow wrote: > > For those who haven't seen this, Nikita Volkov proposed a new approach to > > anonymous records, which can be found in the "record" package on Hackage: > > http://hackage.haskell.org/package/record > > > > It had a *lot* of attention on Reddit: > > http://nikita-volkov.github.io/record/ > > > > Now, the solution is very nice and lightweight, but because it is > > implemented outside GHC it relies on quasi-quotation (amazing that it > can be > > done at all!). It has some limitations because it needs to parse Haskell > > syntax, and Haskell is big. So we could make this a lot smoother, both > for > > the implementation and the user, by directly supporting anonymous record > > syntax in GHC. Obviously we'd have to move the library code into base > too. > > > > This message is by way of kicking off the discussion, since nobody else > > seems to have done so yet. Can we agree that this is the right thing and > > should be directly supported by GHC? At this point we'd be aiming for > 7.12. > > > > Who is interested in working on this? Nikita? > > > > There are various design decisions to think about. For example, when the > > quasi-quote brackets are removed, the syntax will conflict with the > existing > > record syntax. The syntax ends up being similar to Simon's 2003 proposal > > > http://research.microsoft.com/en-us/um/people/simonpj/Haskell/records.html > > (there are major differences though, notably the use of lenses for > selection > > and update). 
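To spell my two points out with the hypothetical Rec-less syntax (nothing below parses today; it is only meant to show where the ambiguity would and would not arise):

    type Point2D = (x :: Coord, y :: Coord)   -- round braces plus '::' can only be a record type
    point2D = (x = 1, y = 1) :: Point2D       -- round braces plus '=' can only be a record expression

The pattern context is the part that still needs an idea, as I said above.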
> > > > I created a template wiki page: > > https://ghc.haskell.org/trac/ghc/wiki/Records/Volkov > > > > Cheers, > > Simon > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://www.haskell.org/mailman/listinfo/ghc-devs > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ekmett at gmail.com Wed Jan 28 23:40:58 2015 From: ekmett at gmail.com (Edward Kmett) Date: Wed, 28 Jan 2015 18:40:58 -0500 Subject: GHC support for the new "record" package In-Reply-To: References: <54BECC45.6010906@gmail.com> Message-ID: There is a problem with the old TRex syntax. In a world with kind signatures and rank-2 types, it would appear that type Point2D = Rec ( x :: Coord, y :: Coord) is ambiguous. Is Coord a kind signature being applied to x and y which are type variables brought into scope implicitly as type Point2D = forall (x :: Coord, y :: Coord) => Rec (x, y) would make more explicit? e.g. type Lens s t a b = Functor f => (a -> f b) -> s -> f t works today in ghc, even though f isn't explicitly scoped and elaborates to: type Lens s t a b = forall f. Functor f => (a -> f b) -> s -> f t -Edward On Wed, Jan 28, 2015 at 4:48 PM, Nikita Volkov wrote: > Chris, this is great! Looks like we can even get rid of the Rec prefix! > > - > > A phrase in round braces and with :: is itself unambiguous in the type > context. > - > > A phrase in round braces with = symbols is unambiguous in the > expression context. > > Concerning the pattern context a solution needs to be found though. But > the two points above are enough for me to fall in love with this direction! > The {| braces had a too icky of a touch to them and the plain { required > the user to choose whether to use the standard record syntax or anonymous > one on the module scale, but not both. > ? > > > 2015-01-29 0:26 GMT+03:00 Christopher Done : > >> There?s too much to absorb in this discussion at the moment and I?m >> late to the party anyway, but I would like to make a small note on >> syntax. Given that this is very similar to TRex both in behaviour and >> syntactic means of construction, why not just take TRex?s actual >> syntax? http://en.wikipedia.org/wiki/Hugs#Extensible_records >> >> type Point2D = Rec (x::Coord, y::Coord) >> point2D = (x=1, y=1) :: Point2D >> (#x point) >> >> It seems like it wouldn?t create any syntactical ambiguities (which is >> probably why the Hugs developers chose it). >> >> Ciao >> >> On 20 January 2015 at 22:44, Simon Marlow wrote: >> > For those who haven't seen this, Nikita Volkov proposed a new approach >> to >> > anonymous records, which can be found in the "record" package on >> Hackage: >> > http://hackage.haskell.org/package/record >> > >> > It had a *lot* of attention on Reddit: >> > http://nikita-volkov.github.io/record/ >> > >> > Now, the solution is very nice and lightweight, but because it is >> > implemented outside GHC it relies on quasi-quotation (amazing that it >> can be >> > done at all!). It has some limitations because it needs to parse >> Haskell >> > syntax, and Haskell is big. So we could make this a lot smoother, both >> for >> > the implementation and the user, by directly supporting anonymous record >> > syntax in GHC. Obviously we'd have to move the library code into base >> too. >> > >> > This message is by way of kicking off the discussion, since nobody else >> > seems to have done so yet. Can we agree that this is the right thing >> and >> > should be directly supported by GHC? 
At this point we'd be aiming for >> 7.12. >> > >> > Who is interested in working on this? Nikita? >> > >> > There are various design decisions to think about. For example, when >> the >> > quasi-quote brackets are removed, the syntax will conflict with the >> existing >> > record syntax. The syntax ends up being similar to Simon's 2003 >> proposal >> > >> http://research.microsoft.com/en-us/um/people/simonpj/Haskell/records.html >> > (there are major differences though, notably the use of lenses for >> selection >> > and update). >> > >> > I created a template wiki page: >> > https://ghc.haskell.org/trac/ghc/wiki/Records/Volkov >> > >> > Cheers, >> > Simon >> > _______________________________________________ >> > ghc-devs mailing list >> > ghc-devs at haskell.org >> > http://www.haskell.org/mailman/listinfo/ghc-devs >> > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chrisdone at gmail.com Thu Jan 29 02:40:35 2015 From: chrisdone at gmail.com (Christopher Done) Date: Thu, 29 Jan 2015 03:40:35 +0100 Subject: GHC support for the new "record" package In-Reply-To: References: <54BECC45.6010906@gmail.com> Message-ID: On 29 January 2015 at 00:40, Edward Kmett ekmett at gmail.com wrote: There is a problem with the old TRex syntax. In a world with kind signatures and rank-2 types, it would appear that type Point2D = Rec ( x :: Coord, y :: Coord ) is ambiguous. The kind-signature resemblance had occurred to me, but I?d assumed Hugs treated it as syntactical sugar like [record|{ x :: Coord, y :: Coord }|]. Apparently not. From chrisdone at gmail.com Thu Jan 29 02:45:48 2015 From: chrisdone at gmail.com (Christopher Done) Date: Thu, 29 Jan 2015 03:45:48 +0100 Subject: GHC support for the new "record" package In-Reply-To: References: <54BECC45.6010906@gmail.com> Message-ID: > The latter {| ... |} might serve as a solid syntax suggestion for the anonymous row type syntax. I like this well enough. My Hugs TRex suggestion comes from not particularly caring much what characters are used to delimit some fields, but that using an existing implementation's design decisions allows one to avoid going through bikesheddery. #x like in TRex also WFM too. Personally I could care less whether it matches OO languages x.y or not. From ekmett at gmail.com Thu Jan 29 03:52:05 2015 From: ekmett at gmail.com (Edward Kmett) Date: Wed, 28 Jan 2015 22:52:05 -0500 Subject: GHC support for the new "record" package In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF562B8C6F@DB3PRD3001MB020.064d.mgd.msft.net> References: <54BECC45.6010906@gmail.com> <54C21FBF.4020809@gmail.com> <54C2219D.2080103@well-typed.com> <54C2C5EE.9030100@well-typed.com> <618BE556AADD624C9C918AA5D5911BEF562B4C2D@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF562B6EA9@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF562B75B2@DB3PRD3001MB020.064d.mgd.msft.net> <54C7554D.9080104@well-typed.com> <54C7679C.2080708@well-typed.com> <618BE556AADD624C9C918AA5D5911BEF562B7EC9@DB3PRD3001MB020.064d.mgd.msft.net> <618BE556AADD624C9C918AA5D5911BEF562B8C6F@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: Alas, the 'f' isn't the only thing in the lens library signatures that gets overloaded in practice. 
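Concretely, the current definitions look roughly like this (quoting them from memory, so the exact class names may be slightly off):

    type Lens s t a b      = forall f. Functor f     => (a -> f b) -> s -> f t
    type Traversal s t a b = forall f. Applicative f => (a -> f b) -> s -> f t
    type Iso s t a b       = forall p f. (Profunctor p, Functor f) => p a (f b) -> p s (f t)
    type Prism s t a b     = forall p f. (Choice p, Applicative f) => p a (f b) -> p s (f t)
    type IndexedTraversal i s t a b =
      forall p f. (Indexable i p, Applicative f) => p a (f b) -> s -> f t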
Isomorphisms and prisms overload the shape to look more like p a (f b) -> p s (f t), rather than (a -> f b) -> s -> f t Indexing, which matters for folding and traversing over containers with keys overloads with a shape like p a (f b) -> s -> f t By the time you reach that level of generality, and add type-changing, the newtype is sort of just dangling there actively getting in the way and providing no actual encapsulation. Now, you could make up a bunch of individual ad hoc data types for all the different lens types we happen to know about today. However, it is deeply insightful that it is the form that lenses take that let us _find_ all the different lens-likes that we use today. Half of them we had no idea were out there until we spent time exploring the impact of the design we have. Switching to a representation where these things arise from O(n^2) ad-hoc rules rather than the existing relationships between mostly "common sense" classes seems like a poor trade. In scala Julien Truffaut has a library called Monocle, which aspires to be a port of the ideas of lens to Scala. Due to the vagaries of the language the only option they have open to them is to implement things the way you are looking at exploring here. It doesn't work out well. Vastly more effort yields a library full of boilerplate that handles a much smaller scope and yields no insight into why these things are related. -Edward On Wed, Jan 28, 2015 at 5:32 AM, Simon Peyton Jones wrote: > As soon as you have a distinct Lens type, and use something > Category-like for composition, you are limiting yourself to composing two > lenses to get back a lens (barring a terrible mptc 'solution'). And that is > weak. The only reason I (personally) think lens pulls its weight, and is > worth using (unlike every prior lens library, which I never bothered with), > is the ability for lenses, prisms, ismorphisms, traversals, folds, etc. to > properly degrade to one another and compose automatically.? > > Aha. I keep asking whether it?s just the cute ability to re-use (.) that > justifies the lack of abstraction in the Lens type. But Dan?s comment has > made me remember something from my own talk on the subject. Here are the > types of lenses and traversals (2-parameter versions): > > > > type Lens? s a = forall f. Functor f > > => (a -> f a) -> (s -> f s) > > type Traversal? s a = forall f. Applicative f > > => (a -> f a) -> (s -> f s) > > > > Suppose we have > > ln1 :: Lens' s1 s2 > > tr1 :: Traversal' s1 s2 > > ln2 :: Lens' s2 a > > tr2 :: Traversal' s2 a > > > > Now these compositions are all well typed > > ln1 . ln2 :: Lens' s1 a > > tr1 . tr2 :: Traversal' s1 a > > tr1 . ln2 :: Traversal' s1 a > > ln1 . tr2 :: Traversal' s1 a > > > > which is quite remarkable. If Lens? and Traversal? were newtypes, you?d > need four different operators. (I think that what Dan means by ?a terrible > mptc solution? is trying to overload those four operators into one.) > > > > I don?t know if this exhausts the reasons that lenses are not abstract. I > would love to know more, explained in a smilar style. > > > > Incidentally has anyone explored this? > > > > newtype PolyLens c s a = PL (forall f. c f => (a -> f a) -> s -> f s) > > > > I?ve just abstracted over the Functor/Applicative part, so that Lens? and > Traversal? are both PolyLenses. Now perhaps we can do (.), with a type like > > > > (.) 
:: PolyLens c1 s1 s2 -> PolyLens c2 s2 a -> PolyLens (And c1 c2) s1 a > > > > where And is a type function > > > > type instance And Functor Applicative = Applicative > > etc > > > > I have no idea whether this could be made to work out, but it seems like > an obvious avenue so I wonder if anyone has explored it. > > > > Simon > > > > *From:* Dan Doel [mailto:dan.doel at gmail.com] > *Sent:* 28 January 2015 00:27 > *To:* Edward Kmett > *Cc:* Simon Peyton Jones; ghc-devs at haskell.org > *Subject:* Re: GHC support for the new "record" package > > > > On Tue, Jan 27, 2015 at 6:47 PM, Edward Kmett wrote: > > > > This works great for lenses that don't let you change types. > > > > ?This is not the only restriction required for this to be an acceptable > solution. > > As soon as you have a distinct Lens type, and use something Category-like > for composition, you are limiting yourself to composing two lenses to get > back a lens (barring a terrible mptc 'solution'). And that is weak. The > only reason I (personally) think lens pulls its weight, and is worth using > (unlike every prior lens library, which I never bothered with), is the > ability for lenses, prisms, ismorphisms, traversals, folds, etc. to > properly degrade to one another and compose automatically. So if we're > settling on a nominal Lens type in a proposal, then it is automatically > only good for one thing to me: defining values of the better lens type.? > > -- Dan > -------------- next part -------------- An HTML attachment was scrubbed... URL: From iavor.diatchki at gmail.com Thu Jan 29 05:15:08 2015 From: iavor.diatchki at gmail.com (Iavor Diatchki) Date: Wed, 28 Jan 2015 21:15:08 -0800 Subject: A simpler ORF Message-ID: Hello, I've been following the various discussions about changes to Haskell's record system, and I find that most of the current proposals are fairly complex, especially for the benefit they provide. I use Haskell records a lot, and I've been wondering if there might be a simpler alternative, one that: 1. will allow us to reuse field names across records 2. does not require any fancy types 3. it will not preclude continued research on "the right" way to get more type-based record resolution. Based on designs I've seen in the past, my experience with Haskell records, and discussions with colleagues, I put together a document describing a potential design that, I think, satisfies goals 1 to 3: https://ghc.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields/Simple I think the proposal should be fairly simple to implement, and I'd be willing to do it, if there is enough support from the community. Let me know what you think! -Iavor -------------- next part -------------- An HTML attachment was scrubbed... URL: From allbery.b at gmail.com Thu Jan 29 06:38:24 2015 From: allbery.b at gmail.com (Brandon Allbery) Date: Thu, 29 Jan 2015 01:38:24 -0500 Subject: A simpler ORF In-Reply-To: References: Message-ID: On Thu, Jan 29, 2015 at 12:15 AM, Iavor Diatchki wrote: > https://ghc.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields/Simple Immediate reaction: is there a conflict if the update-with-a-function syntax uses $= instead of :=? Idea being to imply that application is happening. -- brandon s allbery kf8nh sine nomine associates allbery.b at gmail.com ballbery at sinenomine.net unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From roma at ro-che.info Thu Jan 29 08:14:21 2015 From: roma at ro-che.info (Roman Cheplyaka) Date: Thu, 29 Jan 2015 10:14:21 +0200 Subject: A simpler ORF In-Reply-To: References: Message-ID: <54C9EBDD.7090603@ro-che.info> 1. I *love* the idea of not generating selector functions. It's worth implementing it as a separate extension regardless of this proposal's fate. 2. In H98, it is possible to do constructor-agnostic updates: data T = A { n :: Int } | B { n :: Int } f :: T -> T f x = x { n = 2 } It's not possible in your proposal. One could even argue that it was a bad idea in the first place, as it may lead to partiality. 3. Now that fields are not tied to selectors, do we need a separate mechanism/set of rules for exporting fields? Roman On 29/01/15 07:15, Iavor Diatchki wrote: > Hello, > > I've been following the various discussions about changes to Haskell's > record system, and I find that most of the current proposals are fairly > complex, especially for the benefit they provide. > > I use Haskell records a lot, and I've been wondering if there might be a > simpler alternative, one that: > 1. will allow us to reuse field names across records > 2. does not require any fancy types > 3. it will not preclude continued research on "the right" way to get > more type-based record resolution. > > Based on designs I've seen in the past, my experience with Haskell > records, and discussions with colleagues, I put together a document > describing a potential design that, I think, satisfies goals 1 to 3: > > https://ghc.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields/Simple > > I think the proposal should be fairly simple to implement, and I'd be > willing to do it, if there is enough support from the community. > > Let me know what you think! > > -Iavor > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > From roma at ro-che.info Thu Jan 29 08:19:07 2015 From: roma at ro-che.info (Roman Cheplyaka) Date: Thu, 29 Jan 2015 10:19:07 +0200 Subject: A simpler ORF In-Reply-To: <54C9EBDD.7090603@ro-che.info> References: <54C9EBDD.7090603@ro-che.info> Message-ID: <54C9ECFB.7000007@ro-che.info> > It's not possible in your proposal. One could even argue that it was a > bad idea in the first place, as it may lead to partiality. On a second thought, isn't your update partial as well? In C { e | f1 = e1, f2 = e2 } what happens if e's constructor is not in fact C? This makes me somewhat uncomfortable. Roman On 29/01/15 10:14, Roman Cheplyaka wrote: > 1. I *love* the idea of not generating selector functions. It's worth > implementing it as a separate extension regardless of this proposal's fate. > > 2. In H98, it is possible to do constructor-agnostic updates: > > data T = A { n :: Int } | B { n :: Int } > f :: T -> T > f x = x { n = 2 } > > It's not possible in your proposal. One could even argue that it was a > bad idea in the first place, as it may lead to partiality. > > 3. Now that fields are not tied to selectors, do we need a separate > mechanism/set of rules for exporting fields? > > Roman > > On 29/01/15 07:15, Iavor Diatchki wrote: >> Hello, >> >> I've been following the various discussions about changes to Haskell's >> record system, and I find that most of the current proposals are fairly >> complex, especially for the benefit they provide. >> >> I use Haskell records a lot, and I've been wondering if there might be a >> simpler alternative, one that: >> 1. 
will allow us to reuse field names across records >> 2. does not require any fancy types >> 3. it will not preclude continued research on "the right" way to get >> more type-based record resolution. >> >> Based on designs I've seen in the past, my experience with Haskell >> records, and discussions with colleagues, I put together a document >> describing a potential design that, I think, satisfies goals 1 to 3: >> >> https://ghc.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields/Simple >> >> I think the proposal should be fairly simple to implement, and I'd be >> willing to do it, if there is enough support from the community. >> >> Let me know what you think! >> >> -Iavor >> >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> > From mail at joachim-breitner.de Thu Jan 29 08:22:16 2015 From: mail at joachim-breitner.de (Joachim Breitner) Date: Thu, 29 Jan 2015 09:22:16 +0100 Subject: A simpler ORF In-Reply-To: References: Message-ID: <1422519736.1905.1.camel@joachim-breitner.de> Hi, Am Mittwoch, den 28.01.2015, 21:15 -0800 schrieb Iavor Diatchki: > https://ghc.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields/Simple > > Let me know what you think! a small Optional Extension. If you allow Node { n | left = l, right = r} as an expression, then it would make sense to allow Node { n | left = l, right = r} as a pattern as well, meaning the same thing as n at Node {left = l, right = r} now. (Hardly important, but nice for consistency.) Greetings, Joachim -- Joachim ?nomeata? Breitner mail at joachim-breitner.de ? http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From adam at well-typed.com Thu Jan 29 08:44:28 2015 From: adam at well-typed.com (Adam Gundry) Date: Thu, 29 Jan 2015 08:44:28 +0000 Subject: A simpler ORF In-Reply-To: References: Message-ID: <54C9F2EC.3060901@well-typed.com> Thanks Iavor, It's been suggested before that the compiler should simply stop generating record selectors (see Trac #5972). I'm unconvinced that the benefit (being able to use the same name repeatedly) is worth the cost (being unable to use selector functions at all, without writing them out by hand or using TH). In particular, as an application programmer I would be dependent on what my libraries choose to do about records: they might not give me any selectors at all, they might give me lenses with the same names, etc. Given a piece of code that uses field names in expressions, I would no longer know what it means without finding out how it chooses to define fields. Consider, however, the current version of the redesigned ORF proposal: https://ghc.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields/Redesign If you ignore all the stuff about the #x syntax (and hence are free to ignore all the types stuff), you end up with something extremely simple: * record field names can be reused * selectors and updates can be used only if unambiguous * construction and pattern-matching can be used freely An unambiguous update syntax could be useful, but that's mostly orthogonal to the ORF proposal (and an easy extension of it). I think this satisfies your goals 1 and 2. 
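To spell out what those three bullets mean in practice, here is a tiny example of my own (not taken from the wiki page), with the extension switched on:

    data Person  = Person  { name :: String }
    data Company = Company { name :: String }   -- reusing the field name is fine

    ada = Person { name = "Ada" }               -- construction is always fine
    getName (Person { name = n }) = n           -- pattern matching is always fine

    -- A bare 'name', used as a selector or in an update, is accepted only when
    -- it is unambiguous, e.g. when just one of the two types is in scope at the
    -- use site.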
With the proposed design, however, we also gain: * the ability to use overloaded field names, either as selectors or as lenses, without using TH; * a syntax which generalises to other uses, completely separate from records (e.g. we can turn #x into a Symbol singleton); * interoperability with anonymous records proposals. Moreover, we have an implementation that shows the design works: the implementation needs some revision to use the #x syntax rather than renamer magic, and adapt to some finer points of the design, but the core ideas are all there. We've been putting this problem off for literally decades on the basis that more experimentation is needed. Now that we have a plausible-looking solution in the form of ORF, let's get it into GHC so people can try it. Adam On 29/01/15 05:15, Iavor Diatchki wrote: > Hello, > > I've been following the various discussions about changes to Haskell's > record system, and I find that most of the current proposals are fairly > complex, especially for the benefit they provide. > > I use Haskell records a lot, and I've been wondering if there might be a > simpler alternative, one that: > 1. will allow us to reuse field names across records > 2. does not require any fancy types > 3. it will not preclude continued research on "the right" way to get > more type-based record resolution. > > Based on designs I've seen in the past, my experience with Haskell > records, and discussions with colleagues, I put together a document > describing a potential design that, I think, satisfies goals 1 to 3: > > https://ghc.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields/Simple > > I think the proposal should be fairly simple to implement, and I'd be > willing to do it, if there is enough support from the community. > > Let me know what you think! > > -Iavor -- Adam Gundry, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From simonpj at microsoft.com Thu Jan 29 17:58:34 2015 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 29 Jan 2015 17:58:34 +0000 Subject: Delaying 7.10? Message-ID: <618BE556AADD624C9C918AA5D5911BEF562BA460@DB3PRD3001MB020.064d.mgd.msft.net> Friends In a call with a bunch of type hackers, we were discussing https://ghc.haskell.org/trac/ghc/ticket/9858 This is a pretty serious bug. It allows a malicious person to construct his own unsafeCoerce, and so completely subverts Safe Haskell. Actually there are two bugs (see comment:19). The first is easily fixed. But the second is not. We explored various quick fixes, but the real solution is not far out of reach. It amounts to this: * Every data type is automatically in Typeable. No need to say "deriving(Typeable)" or "AutoDeriveTypeable" (which would become deprecated) * In implementation terms, the constraint solver treats Typeable specially, much as it already treats Coercible specially. It's not a huge job. It'd probably take a couple of days of implementation work, and some time for shaking out bugs and consequential changes. The biggest thing might be simply working out implementation design choices. (For example, there is a modest code-size cost to making everything Typeable, esp because that includes the data constructors of the type (which can be used in types, with DataKinds). Does that matter? Should we provide a way to suppress it? If so, we'd also need a way to express whether or not the Typable instance exists in the interface file.) But it is a substantial change that will touch a lot of lines of code. 
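To be concrete about the first bullet, the user-visible difference is roughly this (T is just a made-up example type):

    {-# LANGUAGE DeriveDataTypeable #-}
    import Data.Typeable (Typeable)
    data T = MkT Int deriving Typeable
    -- ... or one switches on AutoDeriveTypeable for the whole module.
    --
    -- After the change, a plain
    --    data T = MkT Int
    -- would already come with its Typeable instance, and both the deriving
    -- clause and AutoDeriveTypeable would presumably become deprecated no-ops.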
Moreover, someone has to do it, and Iavor (who heroically volunteered) happens to be travelling next week. So it's really not the kind of thing we would usually do after RC2. But (a) it's serious and, as it happens, (b) there is also the BBP Prelude debate going on. Hence the question: should we simply delay 7.10 by, say, a month? After all, the timetable is up to us. Doing so might give a bit more breathing space to the BBP debate, which might allow time for reflection and/or implementation of modest features to help the transition. (I know that several are under discussion.) Plus, anyone waiting for 7.10 can simply use RC2, which is pretty good. Would that be a relief to the BBP debate? Or any other opinions. Simon PS: I know, I know: there is endless pressure to delay releases to get stuff in. If we give in to that pressure, we never make a release. But we should know when to break our own rules. Perhaps this is such an occasion. -------------- next part -------------- An HTML attachment was scrubbed... URL: From johan.tibell at gmail.com Thu Jan 29 18:08:31 2015 From: johan.tibell at gmail.com (Johan Tibell) Date: Thu, 29 Jan 2015 10:08:31 -0800 Subject: Delaying 7.10? In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF562BA460@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF562BA460@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: I think delaying is OK, but we should probably say something like "we're delaying for X and Y, but that doesn't mean that you can not sneak in Z*". * Unless Z is the StrictData language pragma and your name is Johan. ;) On Thu, Jan 29, 2015 at 9:58 AM, Simon Peyton Jones wrote: > Friends > > In a call with a bunch of type hackers, we were discussing > > https://ghc.haskell.org/trac/ghc/ticket/9858 > > This is a pretty serious bug. It allows a malicious person to construct > his own unsafeCoerce, and so completely subverts Safe Haskell. > > Actually there are two bugs (see comment:19). The first is easily fixed. > But the second is not. > > We explored various quick fixes, but the real solution is not far out of > reach. It amounts to this: > > ? Every data type is automatically in Typeable. No need to say > ?deriving(Typeable)? or ?AutoDeriveTypeable? (which would become deprecated) > > ? In implementation terms, the constraint solver treats Typeable > specially, much as it already treats Coercible specially. > > It?s not a huge job. It?d probably take a couple of days of > implementation work, and some time for shaking out bugs and consequential > changes. The biggest thing might be simply working out implementation > design choices. (For example, there is a modest code-size cost to making > everything Typeable, esp because that includes the data constructors of the > type (which can be used in types, with DataKinds). Does that matter? > Should we provide a way to suppress it? If so, we?d also need a way to > express whether or not the Typable instance exists in the interface file.) > > But it is a substantial change that will touch a lot of lines of code. > Moreover, someone has to do it, and Iavor (who heroically volunteered) > happens to be travelling next week. > > So it?s really not the kind of thing we would usually do after RC2. > > But (a) it?s serious and, as it happens, (b) there is also the BBP Prelude > debate going on. > > Hence the question: should we simply delay 7.10 by, say, a month? After > all, the timetable is up to us. 
Doing so might give a bit more breathing > space to the BBP debate, which might allow time for reflection and/or > implementation of modest features to help the transition. (I know that > several are under discussion.) Plus, anyone waiting for 7.10 can simply > use RC2, which is pretty good. > > Would that be a relief to the BBP debate? Or any other opinions. > > Simon > > PS: I know, I know: there is endless pressure to delay releases to get > stuff in. If we give in to that pressure, we never make a release. But we > should know when to break our own rules. Perhaps this is such an occasion. > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From george.colpitts at gmail.com Thu Jan 29 18:18:46 2015 From: george.colpitts at gmail.com (George Colpitts) Date: Thu, 29 Jan 2015 14:18:46 -0400 Subject: Delaying 7.10? In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF562BA460@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: I agree with Johan although my Z is different and will be left unspecifed :). In the meantime we should get out bindists for RC2 for all platforms that we intend to do so. On Thu, Jan 29, 2015 at 2:08 PM, Johan Tibell wrote: > I think delaying is OK, but we should probably say something like "we're > delaying for X and Y, but that doesn't mean that you can not sneak in Z*". > > * Unless Z is the StrictData language pragma and your name is Johan. ;) > > On Thu, Jan 29, 2015 at 9:58 AM, Simon Peyton Jones > wrote: > >> Friends >> >> In a call with a bunch of type hackers, we were discussing >> >> https://ghc.haskell.org/trac/ghc/ticket/9858 >> >> This is a pretty serious bug. It allows a malicious person to construct >> his own unsafeCoerce, and so completely subverts Safe Haskell. >> >> Actually there are two bugs (see comment:19). The first is easily >> fixed. But the second is not. >> >> We explored various quick fixes, but the real solution is not far out of >> reach. It amounts to this: >> >> ? Every data type is automatically in Typeable. No need to say >> ?deriving(Typeable)? or ?AutoDeriveTypeable? (which would become deprecated) >> >> ? In implementation terms, the constraint solver treats Typeable >> specially, much as it already treats Coercible specially. >> >> It?s not a huge job. It?d probably take a couple of days of >> implementation work, and some time for shaking out bugs and consequential >> changes. The biggest thing might be simply working out implementation >> design choices. (For example, there is a modest code-size cost to making >> everything Typeable, esp because that includes the data constructors of the >> type (which can be used in types, with DataKinds). Does that matter? >> Should we provide a way to suppress it? If so, we?d also need a way to >> express whether or not the Typable instance exists in the interface file.) >> >> But it is a substantial change that will touch a lot of lines of code. >> Moreover, someone has to do it, and Iavor (who heroically volunteered) >> happens to be travelling next week. >> >> So it?s really not the kind of thing we would usually do after RC2. >> >> But (a) it?s serious and, as it happens, (b) there is also the BBP >> Prelude debate going on. >> >> Hence the question: should we simply delay 7.10 by, say, a month? After >> all, the timetable is up to us. 
Doing so might give a bit more breathing >> space to the BBP debate, which might allow time for reflection and/or >> implementation of modest features to help the transition. (I know that >> several are under discussion.) Plus, anyone waiting for 7.10 can simply >> use RC2, which is pretty good. >> >> Would that be a relief to the BBP debate? Or any other opinions. >> >> Simon >> >> PS: I know, I know: there is endless pressure to delay releases to get >> stuff in. If we give in to that pressure, we never make a release. But we >> should know when to break our own rules. Perhaps this is such an occasion. >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> >> > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spam at scientician.net Thu Jan 29 18:27:26 2015 From: spam at scientician.net (Bardur Arantsson) Date: Thu, 29 Jan 2015 19:27:26 +0100 Subject: Delaying 7.10? In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF562BA460@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF562BA460@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: On 01/29/2015 06:58 PM, Simon Peyton Jones wrote: > Friends > In a call with a bunch of type hackers, we were discussing > https://ghc.haskell.org/trac/ghc/ticket/9858 > This is a pretty serious bug. It allows a malicious person to construct his own unsafeCoerce, and so completely subverts Safe Haskell. > Actually there are two bugs (see comment:19). The first is easily fixed. But the second is not. > We explored various quick fixes, but the real solution is not far out of reach. It amounts to this: > I'm definitely not qualified to "vote" on this, but out of curiosity is this something which will affect *existing* and *deployed* (or, I guess, soon-to-be-deployed-after-being-recompiled-with-7.10-without-changes) code? It it something which will "just" affect Try Haskell and similar initiatives which must use Safe Haskell to avoid trivial DoS and exploitation? Would the "do not derive Typeable for polykinded type constructors" break huge amounts of existing pre-7.10 code, etc.? It's pretty hard to evaluate *consequences* of available choices from the Trac thread, so maybe a little write-up of what the current choices (and consequences) are would be in order. > But (a) it's serious and, as it happens, (b) there is also the BBP Prelude debate going on. > Hence the question: should we simply delay 7.10 by, say, a month? After all, the timetable is up to us. Doing so might give a bit more breathing space to the BBP debate, which might allow time for reflection and/or implementation of modest features to help the transition. (I know that several are under discussion.) Plus, anyone waiting for 7.10 can simply use RC2, which is pretty good. > Would that be a relief to the BBP debate? Or any other opinions. > Simon > PS: I know, I know: there is endless pressure to delay releases to get stuff in. If we give in to that pressure, we never make a release. But we should know when to break our own rules. Perhaps this is such an occasion. 
As a mostly disinterested observer of the BBP debate, I'd say letting that influence a decision on this matter is veering somewhat close to "endless pressure to delay releases to get stuff in" -- either the issue is serious enough on its own or it isn't. I understand and acknowledge that there are valid arguments on either side and that reasonable people can disagree on these matters :). I'm just offering an opinion. Regards, From dreixel at gmail.com Thu Jan 29 18:32:33 2015 From: dreixel at gmail.com (=?UTF-8?Q?Jos=C3=A9_Pedro_Magalh=C3=A3es?=) Date: Thu, 29 Jan 2015 18:32:33 +0000 Subject: Delaying 7.10? In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF562BA460@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: On Thu, Jan 29, 2015 at 6:27 PM, Bardur Arantsson wrote: > On 01/29/2015 06:58 PM, Simon Peyton Jones wrote: > > Friends > > In a call with a bunch of type hackers, we were discussing > > https://ghc.haskell.org/trac/ghc/ticket/9858 > > This is a pretty serious bug. It allows a malicious person to construct > his own unsafeCoerce, and so completely subverts Safe Haskell. > > Actually there are two bugs (see comment:19). The first is easily > fixed. But the second is not. > > We explored various quick fixes, but the real solution is not far out of > reach. It amounts to this: > > > > I'm definitely not qualified to "vote" on this, but out of curiosity is > this something which will affect *existing* and *deployed* (or, I guess, > soon-to-be-deployed-after-being-recompiled-with-7.10-without-changes) > code? It it something which will "just" affect Try Haskell and similar > initiatives which must use Safe Haskell to avoid trivial DoS and > exploitation? > > Would the "do not derive Typeable for polykinded type constructors" > break huge amounts of existing pre-7.10 code, etc.? > I am particularly afraid of that, yes. Not being able to derive Typeable for polykinded type constructors also means no SYB for those types. And many libraries have moved to generalise their type constructors to be polykinded whenever possible. Cheers, Pedro -------------- next part -------------- An HTML attachment was scrubbed... URL: From george.colpitts at gmail.com Thu Jan 29 18:33:24 2015 From: george.colpitts at gmail.com (George Colpitts) Date: Thu, 29 Jan 2015 14:33:24 -0400 Subject: Delaying 7.10? In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF562BA460@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: Just wanted to add that I'm not really qualified to vote either and that we can use this time to ask package maintainers to make sure their stuff at least compiles on 7.10.RC2 On Thu, Jan 29, 2015 at 2:18 PM, George Colpitts wrote: > I agree with Johan although my Z is different and will be left unspecifed > :). In the meantime we should get out bindists for RC2 for all platforms > that we intend to do so. > > On Thu, Jan 29, 2015 at 2:08 PM, Johan Tibell > wrote: > >> I think delaying is OK, but we should probably say something like "we're >> delaying for X and Y, but that doesn't mean that you can not sneak in Z*". >> >> * Unless Z is the StrictData language pragma and your name is Johan. ;) >> >> On Thu, Jan 29, 2015 at 9:58 AM, Simon Peyton Jones < >> simonpj at microsoft.com> wrote: >> >>> Friends >>> >>> In a call with a bunch of type hackers, we were discussing >>> >>> https://ghc.haskell.org/trac/ghc/ticket/9858 >>> >>> This is a pretty serious bug. It allows a malicious person to construct >>> his own unsafeCoerce, and so completely subverts Safe Haskell. 
>>> >>> Actually there are two bugs (see comment:19). The first is easily >>> fixed. But the second is not. >>> >>> We explored various quick fixes, but the real solution is not far out of >>> reach. It amounts to this: >>> >>> ? Every data type is automatically in Typeable. No need to say >>> ?deriving(Typeable)? or ?AutoDeriveTypeable? (which would become deprecated) >>> >>> ? In implementation terms, the constraint solver treats Typeable >>> specially, much as it already treats Coercible specially. >>> >>> It?s not a huge job. It?d probably take a couple of days of >>> implementation work, and some time for shaking out bugs and consequential >>> changes. The biggest thing might be simply working out implementation >>> design choices. (For example, there is a modest code-size cost to making >>> everything Typeable, esp because that includes the data constructors of the >>> type (which can be used in types, with DataKinds). Does that matter? >>> Should we provide a way to suppress it? If so, we?d also need a way to >>> express whether or not the Typable instance exists in the interface file.) >>> >>> But it is a substantial change that will touch a lot of lines of code. >>> Moreover, someone has to do it, and Iavor (who heroically volunteered) >>> happens to be travelling next week. >>> >>> So it?s really not the kind of thing we would usually do after RC2. >>> >>> But (a) it?s serious and, as it happens, (b) there is also the BBP >>> Prelude debate going on. >>> >>> Hence the question: should we simply delay 7.10 by, say, a month? >>> After all, the timetable is up to us. Doing so might give a bit more >>> breathing space to the BBP debate, which might allow time for reflection >>> and/or implementation of modest features to help the transition. (I know >>> that several are under discussion.) Plus, anyone waiting for 7.10 can >>> simply use RC2, which is pretty good. >>> >>> Would that be a relief to the BBP debate? Or any other opinions. >>> >>> Simon >>> >>> PS: I know, I know: there is endless pressure to delay releases to get >>> stuff in. If we give in to that pressure, we never make a release. But we >>> should know when to break our own rules. Perhaps this is such an occasion. >>> >>> _______________________________________________ >>> ghc-devs mailing list >>> ghc-devs at haskell.org >>> http://www.haskell.org/mailman/listinfo/ghc-devs >>> >>> >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail at joachim-breitner.de Thu Jan 29 19:44:09 2015 From: mail at joachim-breitner.de (Joachim Breitner) Date: Thu, 29 Jan 2015 20:44:09 +0100 Subject: Delaying 7.10? In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF562BA460@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF562BA460@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <1422560649.32710.7.camel@joachim-breitner.de> Hi, Am Donnerstag, den 29.01.2015, 17:58 +0000 schrieb Simon Peyton Jones: > Hence the question: should we simply delay 7.10 by, say, a month? am I right that the bug is also in 7.8, i.e. it is not a regression? In that case, on its own, it is not a good reason to delay the release if we want to release. (There were releases while we knew about the GND unsoundness bug.) 
But if it looks like there is a big benefit from having a release a bit later, such as a not-too-ugly work-around or even a fix, then of course we _can_ delay it. Personally, I'm not urgently waiting for this release - Debian is too much behind anyways :-) How well would a work-around or fix fit in a 7.8.2 release? Will the fix practically affect what code compiles and what code does not? If the fix would not break reasonable code, then it might be better aimed for 7.8.2. Greetings, Joachim -- Joachim "nomeata" Breitner mail at joachim-breitner.de • http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de • GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From austin at well-typed.com Thu Jan 29 19:54:58 2015 From: austin at well-typed.com (Austin Seipp) Date: Thu, 29 Jan 2015 13:54:58 -0600 Subject: Delaying 7.10? In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF562BA460@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF562BA460@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: After thinking about it a little, I'm fine with pushing the release out to March. I think #9858 is the more serious of our concerns vs a raging debate, too. My only concern really is dealing with the merging of such a patch. For example, if the patch to fix this is actually as wide ranging as we believe to the type hacker, I can definitely foresee a merge conflict, with, say, the recent -fwarn-redundant-constraints, which I've managed to leave out of 7.10 so far. In any case, with some more time, we can work those details out. On Thursday, January 29, 2015, Simon Peyton Jones wrote: > Friends > > In a call with a bunch of type hackers, we were discussing > > https://ghc.haskell.org/trac/ghc/ticket/9858 > > This is a pretty serious bug. It allows a malicious person to construct > his own unsafeCoerce, and so completely subverts Safe Haskell. > > Actually there are two bugs (see comment:19). The first is easily fixed. > But the second is not. > > We explored various quick fixes, but the real solution is not far out of > reach. It amounts to this: > > - Every data type is automatically in Typeable. No need to say > "deriving(Typeable)" or "AutoDeriveTypeable" (which would become deprecated) > > - In implementation terms, the constraint solver treats Typeable > specially, much as it already treats Coercible specially. > > It's not a huge job. It'd probably take a couple of days of > implementation work, and some time for shaking out bugs and consequential > changes. The biggest thing might be simply working out implementation > design choices. (For example, there is a modest code-size cost to making > everything Typeable, esp because that includes the data constructors of the > type (which can be used in types, with DataKinds). Does that matter? > Should we provide a way to suppress it? If so, we'd also need a way to > express whether or not the Typeable instance exists in the interface file.) > > But it is a substantial change that will touch a lot of lines of code. > Moreover, someone has to do it, and Iavor (who heroically volunteered) > happens to be travelling next week. > > So it's really not the kind of thing we would usually do after RC2. > > But (a) it's serious and, as it happens, (b) there is also the BBP Prelude > debate going on.
> > Hence the question: should we simply delay 7.10 by, say, a month? After > all, the timetable is up to us. Doing so might give a bit more breathing > space to the BBP debate, which might allow time for reflection and/or > implementation of modest features to help the transition. (I know that > several are under discussion.) Plus, anyone waiting for 7.10 can simply > use RC2, which is pretty good. > > Would that be a relief to the BBP debate? Or any other opinions. > > Simon > > PS: I know, I know: there is endless pressure to delay releases to get > stuff in. If we give in to that pressure, we never make a release. But we > should know when to break our own rules. Perhaps this is such an occasion. > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From spam at scientician.net Thu Jan 29 20:20:44 2015 From: spam at scientician.net (Bardur Arantsson) Date: Thu, 29 Jan 2015 21:20:44 +0100 Subject: Delaying 7.10? In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF562BA460@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: On 01/29/2015 08:54 PM, Austin Seipp wrote: > After thinking about it a little, I'm fine with pushing the release out to > March. I think #9858 is the more serious of our concerns vs a raging > debate, too. > > My only concern really is dealing with the merging of such a patch. For > example, if the patch to fix this is actually as wide ranging as we believe > to the type hacker, I can definitely foresee a merge conflict, with, say, > the recent -fwarn-redundant-constraints, which I've managed to leave out of > 7.10 so far. > > In any case, with some more time, we can work those details out. Oh, you silly implementers, you! :D Regards, From ekmett at gmail.com Thu Jan 29 22:07:10 2015 From: ekmett at gmail.com (Edward Kmett) Date: Thu, 29 Jan 2015 17:07:10 -0500 Subject: Delaying 7.10? In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF562BA460@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: I personally would rather see this issue given the time to be resolved correctly than rush to release 7.10 now because of a self-imposed deadline. An unsafeCoerce bug, especially one which affects SafeHaskell, pretty much trumps all in my eyes. -Edward On Thu, Jan 29, 2015 at 2:54 PM, Austin Seipp wrote: > After thinking about it a little, I'm fine with pushing the release out to > March. I think #9858 is the more serious of our concerns vs a raging > debate, too. > > My only concern really is dealing with the merging of such a patch. For > example, if the patch to fix this is actually as wide ranging as we believe > to the type hacker, I can definitely foresee a merge conflict, with, say, > the recent -fwarn-redundant-constraints, which I've managed to leave out of > 7.10 so far. > > In any case, with some more time, we can work those details out. > > On Thursday, January 29, 2015, Simon Peyton Jones > wrote: > >> Friends >> >> In a call with a bunch of type hackers, we were discussing >> >> https://ghc.haskell.org/trac/ghc/ticket/9858 >> >> This is a pretty serious bug. It allows a malicious person to construct >> his own unsafeCoerce, and so completely subverts Safe Haskell. >> >> Actually there are two bugs (see comment:19). The first is easily >> fixed. But the second is not. >> >> We explored various quick fixes, but the real solution is not far out of >> reach. It amounts to this: >> >> ? Every data type is automatically in Typeable. 
No need to say >> ?deriving(Typeable)? or ?AutoDeriveTypeable? (which would become deprecated) >> >> ? In implementation terms, the constraint solver treats Typeable >> specially, much as it already treats Coercible specially. >> >> It?s not a huge job. It?d probably take a couple of days of >> implementation work, and some time for shaking out bugs and consequential >> changes. The biggest thing might be simply working out implementation >> design choices. (For example, there is a modest code-size cost to making >> everything Typeable, esp because that includes the data constructors of the >> type (which can be used in types, with DataKinds). Does that matter? >> Should we provide a way to suppress it? If so, we?d also need a way to >> express whether or not the Typable instance exists in the interface file.) >> >> But it is a substantial change that will touch a lot of lines of code. >> Moreover, someone has to do it, and Iavor (who heroically volunteered) >> happens to be travelling next week. >> >> So it?s really not the kind of thing we would usually do after RC2. >> >> But (a) it?s serious and, as it happens, (b) there is also the BBP >> Prelude debate going on. >> >> Hence the question: should we simply delay 7.10 by, say, a month? After >> all, the timetable is up to us. Doing so might give a bit more breathing >> space to the BBP debate, which might allow time for reflection and/or >> implementation of modest features to help the transition. (I know that >> several are under discussion.) Plus, anyone waiting for 7.10 can simply >> use RC2, which is pretty good. >> >> Would that be a relief to the BBP debate? Or any other opinions. >> >> Simon >> >> PS: I know, I know: there is endless pressure to delay releases to get >> stuff in. If we give in to that pressure, we never make a release. But we >> should know when to break our own rules. Perhaps this is such an occasion. >> > > > -- > Regards, > > Austin Seipp, Haskell Consultant > Well-Typed LLP, http://www.well-typed.com/ > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Thu Jan 29 22:30:12 2015 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Thu, 29 Jan 2015 17:30:12 -0500 Subject: Delaying 7.10? In-Reply-To: References: <618BE556AADD624C9C918AA5D5911BEF562BA460@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: additionally, the bug seems to work in GHC 7.8.4 too. On Thu, Jan 29, 2015 at 5:07 PM, Edward Kmett wrote: > I personally would rather see this issue given the time to be resolved > correctly than rush to release 7.10 now because of a self-imposed deadline. > > An unsafeCoerce bug, especially one which affects SafeHaskell, pretty much > trumps all in my eyes. > > -Edward > > On Thu, Jan 29, 2015 at 2:54 PM, Austin Seipp > wrote: > >> After thinking about it a little, I'm fine with pushing the release out >> to March. I think #9858 is the more serious of our concerns vs a raging >> debate, too. >> >> My only concern really is dealing with the merging of such a patch. For >> example, if the patch to fix this is actually as wide ranging as we believe >> to the type hacker, I can definitely foresee a merge conflict, with, say, >> the recent -fwarn-redundant-constraints, which I've managed to leave out of >> 7.10 so far. >> >> In any case, with some more time, we can work those details out. 
>> >> On Thursday, January 29, 2015, Simon Peyton Jones >> wrote: >> >>> Friends >>> >>> In a call with a bunch of type hackers, we were discussing >>> >>> https://ghc.haskell.org/trac/ghc/ticket/9858 >>> >>> This is a pretty serious bug. It allows a malicious person to construct >>> his own unsafeCoerce, and so completely subverts Safe Haskell. >>> >>> Actually there are two bugs (see comment:19). The first is easily >>> fixed. But the second is not. >>> >>> We explored various quick fixes, but the real solution is not far out of >>> reach. It amounts to this: >>> >>> ? Every data type is automatically in Typeable. No need to say >>> ?deriving(Typeable)? or ?AutoDeriveTypeable? (which would become deprecated) >>> >>> ? In implementation terms, the constraint solver treats Typeable >>> specially, much as it already treats Coercible specially. >>> >>> It?s not a huge job. It?d probably take a couple of days of >>> implementation work, and some time for shaking out bugs and consequential >>> changes. The biggest thing might be simply working out implementation >>> design choices. (For example, there is a modest code-size cost to making >>> everything Typeable, esp because that includes the data constructors of the >>> type (which can be used in types, with DataKinds). Does that matter? >>> Should we provide a way to suppress it? If so, we?d also need a way to >>> express whether or not the Typable instance exists in the interface file.) >>> >>> But it is a substantial change that will touch a lot of lines of code. >>> Moreover, someone has to do it, and Iavor (who heroically volunteered) >>> happens to be travelling next week. >>> >>> So it?s really not the kind of thing we would usually do after RC2. >>> >>> But (a) it?s serious and, as it happens, (b) there is also the BBP >>> Prelude debate going on. >>> >>> Hence the question: should we simply delay 7.10 by, say, a month? >>> After all, the timetable is up to us. Doing so might give a bit more >>> breathing space to the BBP debate, which might allow time for reflection >>> and/or implementation of modest features to help the transition. (I know >>> that several are under discussion.) Plus, anyone waiting for 7.10 can >>> simply use RC2, which is pretty good. >>> >>> Would that be a relief to the BBP debate? Or any other opinions. >>> >>> Simon >>> >>> PS: I know, I know: there is endless pressure to delay releases to get >>> stuff in. If we give in to that pressure, we never make a release. But we >>> should know when to break our own rules. Perhaps this is such an occasion. >>> >> >> >> -- >> Regards, >> >> Austin Seipp, Haskell Consultant >> Well-Typed LLP, http://www.well-typed.com/ >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://www.haskell.org/mailman/listinfo/ghc-devs >> >> > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kazu at iij.ad.jp Fri Jan 30 00:44:57 2015 From: kazu at iij.ad.jp (Kazu Yamamoto (=?iso-2022-jp?B?GyRCOzNLXE9CSScbKEI=?=)) Date: Fri, 30 Jan 2015 09:44:57 +0900 (JST) Subject: Make one-shot a per-registration property Message-ID: <20150130.094457.600215294012753010.kazu@iij.ad.jp> Hi, This is just confirmation. Ben's one-shot patch (*1) is included in master but not included in the ghc-7.10 branch. Is this intentional? 
Is it supposed to be merged in GHC 7.12? (*1) 023439980f6ef6ec051f676279ed2be5f031efe6 https://phabricator.haskell.org/D347 --Kazu From kazu at iij.ad.jp Fri Jan 30 01:08:23 2015 From: kazu at iij.ad.jp (Kazu Yamamoto (=?iso-2022-jp?B?GyRCOzNLXE9CSScbKEI=?=)) Date: Fri, 30 Jan 2015 10:08:23 +0900 (JST) Subject: the performance of IO manager in GHC 7.10.1rc2 Message-ID: <20150130.100823.2080113533284093257.kazu@iij.ad.jp> Hi, As usual, I took benchmark of IO manager to check if performance regression is introduced. Fortunately, I don't see any performance regression.

+RTS -N          1        2        4        8       16
---------------------------------------------------------
GHC7.8.4     81,413  153,478  270,178  406,448  503,203
GHC7.10.1rc2 88,247  148,471  265,392  409,514  493,890

Environment: two real 20 cores machines directly connected by 20G
- Xeon E5-2650Lv2 (1.70GHz/10cores/25MB) x 2, 8GB, no HT
- CentOS 7.0
- Two 10Gs are aggregated

Server: witty -a -m -r 8080 +RTS -A32m -N

Benchmark: weighttp -n 100000 -c 1000 -k -t 16 http:///

P.S. If Ben's patch is included in GHC 7.10, I will try this again. --Kazu From ben at smart-cactus.org Fri Jan 30 02:14:05 2015 From: ben at smart-cactus.org (Ben Gamari) Date: Thu, 29 Jan 2015 21:14:05 -0500 Subject: Make one-shot a per-registration property In-Reply-To: <20150130.094457.600215294012753010.kazu@iij.ad.jp> References: <20150130.094457.600215294012753010.kazu@iij.ad.jp> Message-ID: <87twz9uiaa.fsf@gmail.com> Kazu Yamamoto writes: > Hi, > > This is just confirmation. Ben's one-shot patch (*1) is included in > master but not included in the ghc-7.10 branch. Is this intentional? > Is it supposed to be merged in GHC 7.12? > I merged it to master, asked thoughtpolice on #ghc how I should merge it to the 7.10 branch and promptly forgot about it, sadly. Thanks for mentioning this, Kazu. Austin, should I just cherry-pick it now? Cheers, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From spam at scientician.net Fri Jan 30 02:45:46 2015 From: spam at scientician.net (Bardur Arantsson) Date: Fri, 30 Jan 2015 03:45:46 +0100 Subject: the performance of IO manager in GHC 7.10.1rc2 In-Reply-To: <20150130.100823.2080113533284093257.kazu@iij.ad.jp> References: <20150130.100823.2080113533284093257.kazu@iij.ad.jp> Message-ID: On 01/30/2015 02:08 AM, Kazu Yamamoto (????) wrote: > Hi, > > As usual, I took benchmark of IO manager to check if performance > regression is introduced. Fortunately, I don't see any performance > regression.
>
> +RTS -N          1        2        4        8       16
> ---------------------------------------------------------
> GHC7.8.4     81,413  153,478  270,178  406,448  503,203
> GHC7.10.1rc2 88,247  148,471  265,392  409,514  493,890
>
> Environment: two real 20 cores machines directly connected by 20G
> - Xeon E5-2650Lv2 (1.70GHz/10cores/25MB) x 2, 8GB, no HT
> - CentOS 7.0
> - Two 10Gs are aggregated
>
> Server: witty -a -m -r 8080 +RTS -A32m -N
>
> Benchmark: weighttp -n 100000 -c 1000 -k -t 16 http:///
Out of curiosity: What are the units for numbers in the table? Requests per second per core?
Regards, From kazu at iij.ad.jp Fri Jan 30 03:36:48 2015 From: kazu at iij.ad.jp (Kazu Yamamoto (=?iso-2022-jp?B?GyRCOzNLXE9CSScbKEI=?=)) Date: Fri, 30 Jan 2015 12:36:48 +0900 (JST) Subject: the performance of IO manager in GHC 7.10.1rc2 In-Reply-To: References: <20150130.100823.2080113533284093257.kazu@iij.ad.jp> Message-ID: <20150130.123648.1474089521629644731.kazu@iij.ad.jp> Bardur, > Out of curiosity: What are the units for numbers in the table? Requests > per second per core? Yes. reqs/s --Kazu From austin at well-typed.com Fri Jan 30 04:36:21 2015 From: austin at well-typed.com (Austin Seipp) Date: Thu, 29 Jan 2015 22:36:21 -0600 Subject: Make one-shot a per-registration property In-Reply-To: <87twz9uiaa.fsf@gmail.com> References: <20150130.094457.600215294012753010.kazu@iij.ad.jp> <87twz9uiaa.fsf@gmail.com> Message-ID: You won't have permissions to push it to 7.10. I can try to get to it soon, but I make no guarantees until next week (out of town atm). CC Herbert, who can probably get to it more promptly than I can. On Thursday, January 29, 2015, Ben Gamari wrote: > Kazu Yamamoto > writes: > > > Hi, > > > > This is just confirmation. Ben's one-shot patch (*1) is included in > > master but not included in the ghc-7.10 branch. Is this intentional? > > Is it supposed to be merged in GHC 7.12? > > > I merged it to master, asked thoughtpolice on #ghc how I should merge it > to the 7.10 branch and promptly forgot about it, sadly. Thanks for > mentioning this, Kazu. > > Austin, should I just cherry-pick it now? > > Cheers, > > - Ben > > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From hvr at gnu.org Fri Jan 30 08:09:45 2015 From: hvr at gnu.org (Herbert Valerio Riedel) Date: Fri, 30 Jan 2015 09:09:45 +0100 Subject: Make one-shot a per-registration property In-Reply-To: (Austin Seipp's message of "Thu, 29 Jan 2015 22:36:21 -0600") References: <20150130.094457.600215294012753010.kazu@iij.ad.jp> <87twz9uiaa.fsf@gmail.com> Message-ID: <87r3uc665y.fsf@gnu.org> On 2015-01-30 at 05:36:21 +0100, Austin Seipp wrote: > You won't have permissions to push it to 7.10. I can try to get to it soon, > but I make no guarantees until next week (out of town atm). > > CC Herbert, who can probably get to it more promptly than I can. I'll look into it later today ...for the future: Please set the respective Trac tickets to which a commit belongs into the 'fixed in HEAD, please merge to STABLE'-ticket-state, as well as setting the proper milestone-value. That makes sure we don't miss it when preparing releases. 
Cheers, hvr From hvriedel at gmail.com Fri Jan 30 11:02:30 2015 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Fri, 30 Jan 2015 12:02:30 +0100 Subject: 76-fold regression GHC 7.10->7.11 in T9961 byte-allocation Message-ID: <87mw505y61.fsf@gmail.com> Hello *, I noticed something odd while validating the GHC 7.10 branch: bytes allocated value is too low:

(If this is because you have improved GHC, please update the test so that GHC doesn't regress again)
Expected    T9961(normal) bytes allocated: 772510192 +/-5%
Lower bound T9961(normal) bytes allocated: 733884682
Upper bound T9961(normal) bytes allocated: 811135702
Actual      T9961(normal) bytes allocated: 9766160
Deviation   T9961(normal) bytes allocated: -98.7 %
*** unexpected stat test failure for T9961(normal)

...then I also ran ./validate against today's GHC HEAD, and re-ran the T9961 test:

Expected    T9961(normal) bytes allocated: 772510192 +/-5%
Lower bound T9961(normal) bytes allocated: 733884682
Upper bound T9961(normal) bytes allocated: 811135702
Actual      T9961(normal) bytes allocated: 748225848
Deviation   T9961(normal) bytes allocated: -3.1 %

I'm not sure if it's just the test-case being broken, or there's a real regression between 7.10 and HEAD... However, I don't have time to investigate this. Cheers, hvr From ben at smart-cactus.org Fri Jan 30 14:28:53 2015 From: ben at smart-cactus.org (Ben Gamari) Date: Fri, 30 Jan 2015 09:28:53 -0500 Subject: Make one-shot a per-registration property In-Reply-To: <87r3uc665y.fsf@gnu.org> References: <20150130.094457.600215294012753010.kazu@iij.ad.jp> <87twz9uiaa.fsf@gmail.com> <87r3uc665y.fsf@gnu.org> Message-ID: <87pp9wuyu2.fsf@gmail.com> Herbert Valerio Riedel writes: > On 2015-01-30 at 05:36:21 +0100, Austin Seipp wrote: >> You won't have permissions to push it to 7.10. I can try to get to it soon, >> but I make no guarantees until next week (out of town atm). >> >> CC Herbert, who can probably get to it more promptly than I can. > > I'll look into it later today > Thanks Herbert! > ...for the future: > > Please set the respective Trac tickets to which a commit belongs into > the 'fixed in HEAD, please merge to STABLE'-ticket-state, as well as > setting the proper milestone-value. That makes sure we don't miss it > when preparing releases. > Hmm, alright. In this case I only opened a Differential so there was no Trac ticket to flag; I'll open a ticket in the future to ensure that things don't slip through the cracks. Thanks, - Ben -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From howard_b_golden at yahoo.com Fri Jan 30 20:19:27 2015 From: howard_b_golden at yahoo.com (Howard B. Golden) Date: Fri, 30 Jan 2015 20:19:27 +0000 (UTC) Subject: A straw-man compatibility proposal Message-ID: <1991750832.1942026.1422649167182.JavaMail.yahoo@mail.yahoo.com> (This is intended to be a serious, though partially-complete, proposal. I offer it in the hope it starts a constructive discussion.) The rapid changes in GHC have both good and bad aspects. I am specifically discussing proposals which break existing code. My goal is to offer ground rules to make changes easier to implement by easing the pain they cause to current programmers, their existing code and language learners. _DESIRABLE_ FEATURES OF GHC CHANGES: 1. Breaking existing code should be avoided UNLESS a simple method is provided to continue to compile the old code for a reasonable transition period. 1a.
Corollary: When changes are made to the Prelude, provision should be made to allow the older Prelude(s) for a reasonable transition period, so existing code can be used during a reasonable transition period. 1b. Corollary: Since GHC changes (especially to the Prelude) will require changes in commonly used libraries, which will take time to complete and stabilize, the transition period should be long enough to incorporate this as well as changes to programs using these libraries. 1c. Corollary: A suitable subset of code on Hackage should compile either under the changed feature or (by a simple method) the prior feature for a reasonable transition period. Any failure of this subset to compile should be an absolute stop to releasing a new version of GHC. 1d. Corollary: Haskell 98 and Haskell 2010 standard-conforming code should compile either under the changed feature or (by a simple method) the prior feature for a reasonable transition period. Any failure of standard-conforming code to compile should be an absolute stop to releasing a new version of GHC. (Reason: Currently available books and other teaching material still generally are written to conform to Haskell 98. It is desirable to allow language learners to compile code included in their existing learning materials to reduce their learning curve.) 2. One method of satisfying (1) is to include compiler flag(s) to select various versions of Haskell Preludes and commonly used libraries compatible with those Preludes. Alternatively, other simple methods are acceptable. 3. It is desirable to provide automated tools that will update existing code to the newer Prelude and libraries. The existence of such tools will influence the length of the transition period. DISCUSSION 1. Adoption of Haskell is retarded by difficulty learning the language, which is increased if learning materials don't compile. 2. Adoption of Haskell for production use is retarded by GHC changes that break existing code, necessitating ongoing maintenance. This can be ameliorated by automated tools to update existing code. 3. If GHC developers and library maintainers consider the impact of their changes and seek to minimize their immediate and longer-term impact, this will enable greater adoption of Haskell which will expand the Haskell community to everyone's benefit. From marlowsd at gmail.com Fri Jan 30 20:25:20 2015 From: marlowsd at gmail.com (Simon Marlow) Date: Fri, 30 Jan 2015 20:25:20 +0000 Subject: Delaying 7.10? In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF562BA460@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF562BA460@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <54CBE8B0.70907@gmail.com> I'm worried about the code-size regression. We should definitely measure how bad it is before making a decision on whether to enable Typeable by default. +1 to delaying the release. Cheers, Simon On 29/01/2015 17:58, Simon Peyton Jones wrote: > Friends > > In a call with a bunch of type hackers, we were discussing > > https://ghc.haskell.org/trac/ghc/ticket/9858 > > This is a pretty serious bug. It allows a malicious person to construct > his own unsafeCoerce, and so completely subverts Safe Haskell. > > Actually there are two bugs (see comment:19). The first is easily > fixed. But the second is not. > > We explored various quick fixes, but the real solution is not far out of > reach. It amounts to this: > > ?Every data type is automatically in Typeable. No need to say > ?deriving(Typeable)? or ?AutoDeriveTypeable? 
(which would become deprecated) > > ?In implementation terms, the constraint solver treats Typeable > specially, much as it already treats Coercible specially. > > It?s not a huge job. It?d probably take a couple of days of > implementation work, and some time for shaking out bugs and > consequential changes. The biggest thing might be simply working out > implementation design choices. (For example, there is a modest > code-size cost to making everything Typeable, esp because that includes > the data constructors of the type (which can be used in types, with > DataKinds). Does that matter? Should we provide a way to suppress it? > If so, we?d also need a way to express whether or not the Typable > instance exists in the interface file.) > > But it is a substantial change that will touch a lot of lines of code. > Moreover, someone has to do it, and Iavor (who heroically volunteered) > happens to be travelling next week. > > So it?s really not the kind of thing we would usually do after RC2. > > But (a) it?s serious and, as it happens, (b) there is also the BBP > Prelude debate going on. > > Hence the question: should we simply delay 7.10 by, say, a month? > After all, the timetable is up to us. Doing so might give a bit more > breathing space to the BBP debate, which might allow time for reflection > and/or implementation of modest features to help the transition. (I > know that several are under discussion.) Plus, anyone waiting for 7.10 > can simply use RC2, which is pretty good. > > Would that be a relief to the BBP debate? Or any other opinions. > > Simon > > PS: I know, I know: there is endless pressure to delay releases to get > stuff in. If we give in to that pressure, we never make a release. But > we should know when to break our own rules. Perhaps this is such an > occasion. > > > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs > From greg at gregweber.info Fri Jan 30 23:39:27 2015 From: greg at gregweber.info (Greg Weber) Date: Fri, 30 Jan 2015 15:39:27 -0800 Subject: Restricted Template Haskell Message-ID: Hello GHC friends! I am starting up a proposal for variants of Template Haskell that restrict what operations are available. The goal is to make TH easier for users to reason about and to allow for an easier compilation story. Here is the proposal page: https://ghc.haskell.org/trac/ghc/wiki/TemplateHaskell/Restricted Right now the proposal does not have any details and the goal is to write out a clear specification. If this sounds interesting to you, let me know or leave some feedback on the wiki. Thanks, Greg Weber -------------- next part -------------- An HTML attachment was scrubbed... URL: From eir at cis.upenn.edu Sat Jan 31 04:58:01 2015 From: eir at cis.upenn.edu (Richard Eisenberg) Date: Fri, 30 Jan 2015 23:58:01 -0500 Subject: integer-gmp-1.0.0.0 ? Message-ID: <67F36853-BAFF-45BA-8D6B-99749DBF414E@cis.upenn.edu> Hi devs, I've just hit on a strange bug, and I don't know where to start looking. In my branch where I'm building support for dependent types (github.com/goldfirere/ghc.git; branch: nokinds.... but you don't need to look there) I'm going through the testsuite and picking off bugs one at a time. I'm very puzzled by the output I'm getting from typecheck/should_compile/tc231, which does a -ddump-tc. At the end, I see ... Dependent modules: [] Dependent packages: [base-4.8.0.0, ghc-prim-0.3.1.0, integer-gmp-1.0.0.0] Notice the version number of integer-gmp. 
I haven't touched anything near there! And, my integer-gmp.cabal file says 0.5.1.0, which is what I'd expect. Sure enough, when I use the inplace `ghc-pkg list`, I see that I have integer-gmp-1.0.0.0 installed, along with all the other boot packages with their correct version numbers. Any hints here? I can surely work around this, but it's very strange! Thanks! Richard From stegeman at gmail.com Sat Jan 31 07:35:58 2015 From: stegeman at gmail.com (Luite Stegeman) Date: Sat, 31 Jan 2015 20:35:58 +1300 Subject: integer-gmp-1.0.0.0 ? In-Reply-To: <67F36853-BAFF-45BA-8D6B-99749DBF414E@cis.upenn.edu> References: <67F36853-BAFF-45BA-8D6B-99749DBF414E@cis.upenn.edu> Message-ID: That's the version number of /libraries/integer-gmp2, which is installed by default now instead of the old integer-gmp package (it still uses the integer-gmp package name). luite On Sat, Jan 31, 2015 at 5:58 PM, Richard Eisenberg wrote: > Hi devs, > > I've just hit on a strange bug, and I don't know where to start looking. > > In my branch where I'm building support for dependent types (github.com/goldfirere/ghc.git; branch: nokinds.... but you don't need to look there) I'm going through the testsuite and picking off bugs one at a time. I'm very puzzled by the output I'm getting from typecheck/should_compile/tc231, which does a -ddump-tc. At the end, I see > > ... > Dependent modules: [] > Dependent packages: [base-4.8.0.0, ghc-prim-0.3.1.0, > integer-gmp-1.0.0.0] > > > Notice the version number of integer-gmp. I haven't touched anything near there! And, my integer-gmp.cabal file says 0.5.1.0, which is what I'd expect. Sure enough, when I use the inplace `ghc-pkg list`, I see that I have integer-gmp-1.0.0.0 installed, along with all the other boot packages with their correct version numbers. > > Any hints here? I can surely work around this, but it's very strange! > > Thanks! > Richard > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs From chak at cse.unsw.edu.au Sat Jan 31 10:16:05 2015 From: chak at cse.unsw.edu.au (Manuel M T Chakravarty) Date: Sat, 31 Jan 2015 21:16:05 +1100 Subject: Delaying 7.10? In-Reply-To: <618BE556AADD624C9C918AA5D5911BEF562BA460@DB3PRD3001MB020.064d.mgd.msft.net> References: <618BE556AADD624C9C918AA5D5911BEF562BA460@DB3PRD3001MB020.064d.mgd.msft.net> Message-ID: <359B51D8-AA54-4A49-BFE8-8CA5C0F8649C@cse.unsw.edu.au> A safety issue of that magnitude is a valid reason to hold up a release at the last minute IMHO. Manuel > Simon Peyton Jones : > > Friends > > In a call with a bunch of type hackers, we were discussing > > https://ghc.haskell.org/trac/ghc/ticket/9858 > This is a pretty serious bug. It allows a malicious person to construct his own unsafeCoerce, and so completely subverts Safe Haskell. > > Actually there are two bugs (see comment:19). The first is easily fixed. But the second is not. > > We explored various quick fixes, but the real solution is not far out of reach. It amounts to this: > > ? Every data type is automatically in Typeable. No need to say ?deriving(Typeable)? or ?AutoDeriveTypeable? (which would become deprecated) > > ? In implementation terms, the constraint solver treats Typeable specially, much as it already treats Coercible specially. > > It?s not a huge job. It?d probably take a couple of days of implementation work, and some time for shaking out bugs and consequential changes. The biggest thing might be simply working out implementation design choices. 
(For example, there is a modest code-size cost to making everything Typeable, esp because that includes the data constructors of the type (which can be used in types, with DataKinds). Does that matter? Should we provide a way to suppress it? If so, we?d also need a way to express whether or not the Typable instance exists in the interface file.) > > But it is a substantial change that will touch a lot of lines of code. Moreover, someone has to do it, and Iavor (who heroically volunteered) happens to be travelling next week. > > So it?s really not the kind of thing we would usually do after RC2. > > But (a) it?s serious and, as it happens, (b) there is also the BBP Prelude debate going on. > > Hence the question: should we simply delay 7.10 by, say, a month? After all, the timetable is up to us. Doing so might give a bit more breathing space to the BBP debate, which might allow time for reflection and/or implementation of modest features to help the transition. (I know that several are under discussion.) Plus, anyone waiting for 7.10 can simply use RC2, which is pretty good. > > Would that be a relief to the BBP debate? Or any other opinions. > > Simon > > PS: I know, I know: there is endless pressure to delay releases to get stuff in. If we give in to that pressure, we never make a release. But we should know when to break our own rules. Perhaps this is such an occasion. > > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://www.haskell.org/mailman/listinfo/ghc-devs -------------- next part -------------- An HTML attachment was scrubbed... URL: From george.colpitts at gmail.com Sat Jan 31 13:34:35 2015 From: george.colpitts at gmail.com (George Colpitts) Date: Sat, 31 Jan 2015 09:34:35 -0400 Subject: 7.10.1RC2 compile problem? ghc: internal error: PAP object entered! Message-ID: Maybe this is something I shouldn't be doing, but I thought it was worth mentioning in case I have found a compiler bug. Should I file a bug for this? cabal install *--allow-newer=base* accelerate ... [10 of 10] Compiling Data.Label.Base ( src/Data/Label/Base.hs, dist/build/Data/Label/Base.o ) ghc: internal error: PAP object entered! (GHC version 7.10.0.20150123 for x86_64_apple_darwin) Please report this as a GHC bug: http://www.haskell.org/ghc/reportabug -------------- next part -------------- An HTML attachment was scrubbed... URL: From allbery.b at gmail.com Sat Jan 31 15:16:12 2015 From: allbery.b at gmail.com (Brandon Allbery) Date: Sat, 31 Jan 2015 10:16:12 -0500 Subject: 7.10.1RC2 compile problem? ghc: internal error: PAP object entered! In-Reply-To: References: Message-ID: On Sat, Jan 31, 2015 at 8:34 AM, George Colpitts wrote: > cabal install *--allow-newer=base* accelerate > Never safe, because base contains the runtime and the runtime and the compiler are very tightly tied together. Crashes are not surprising. -- brandon s allbery kf8nh sine nomine associates allbery.b at gmail.com ballbery at sinenomine.net unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From rwbarton at gmail.com Sat Jan 31 15:47:58 2015 From: rwbarton at gmail.com (Reid Barton) Date: Sat, 31 Jan 2015 10:47:58 -0500 Subject: 7.10.1RC2 compile problem? ghc: internal error: PAP object entered! 
In-Reply-To: References: Message-ID: On Sat, Jan 31, 2015 at 10:16 AM, Brandon Allbery wrote: > > On Sat, Jan 31, 2015 at 8:34 AM, George Colpitts < > george.colpitts at gmail.com> wrote: > >> cabal install *--allow-newer=base* accelerate >> > > Never safe, because base contains the runtime and the runtime and the > compiler are very tightly tied together. Crashes are not surprising. > Actually it should always be safe: --allow-newer=base is essentially the equivalent of removing the upper bound on base from the .cabal file (of every package that was installed during that run). However, I'm quite confused about something, namely that as far as I can tell, neither accelerate nor any of its dependencies contain a module Data.Label.Base. What package was GHC trying to build when it crashed? Regards, Reid Barton -------------- next part -------------- An HTML attachment was scrubbed... URL: From rwbarton at gmail.com Sat Jan 31 15:52:16 2015 From: rwbarton at gmail.com (Reid Barton) Date: Sat, 31 Jan 2015 10:52:16 -0500 Subject: 7.10.1RC2 compile problem? ghc: internal error: PAP object entered! In-Reply-To: References: Message-ID: On Sat, Jan 31, 2015 at 10:47 AM, Reid Barton wrote: > On Sat, Jan 31, 2015 at 10:16 AM, Brandon Allbery > wrote: > >> >> On Sat, Jan 31, 2015 at 8:34 AM, George Colpitts < >> george.colpitts at gmail.com> wrote: >> >>> cabal install *--allow-newer=base* accelerate >>> >> >> Never safe, because base contains the runtime and the runtime and the >> compiler are very tightly tied together. Crashes are not surprising. >> > > Actually it should always be safe: --allow-newer=base is essentially the > equivalent of removing the upper bound on base from the .cabal file (of > every package that was installed during that run). > > However, I'm quite confused about something, namely that as far as I can > tell, neither accelerate nor any of its dependencies contain a module > Data.Label.Base. What package was GHC trying to build when it crashed? > Oops, I was running the wrong command: it's in fclabels. Please file a bug report and attach the output of `cabal install --ghc-options=-v fclabels`, thanks! Regards, Reid Barton -------------- next part -------------- An HTML attachment was scrubbed... URL: From hvriedel at gmail.com Sat Jan 31 16:09:54 2015 From: hvriedel at gmail.com (Herbert Valerio Riedel) Date: Sat, 31 Jan 2015 17:09:54 +0100 Subject: 7.10.1RC2 compile problem? ghc: internal error: PAP object entered! In-Reply-To: (George Colpitts's message of "Sat, 31 Jan 2015 09:34:35 -0400") References: Message-ID: <87egqbx771.fsf@gmail.com> On 2015-01-31 at 14:34:35 +0100, George Colpitts wrote: > Maybe this is something I shouldn't be doing, but I thought it was worth > mentioning in case I have found a compiler bug. > Should I file a bug for this? > > cabal install *--allow-newer=base* accelerate > ... > [10 of 10] Compiling Data.Label.Base ( src/Data/Label/Base.hs, > dist/build/Data/Label/Base.o ) > ghc: internal error: PAP object entered! > (GHC version 7.10.0.20150123 for x86_64_apple_darwin) > Please report this as a GHC bug: http://www.haskell.org/ghc/reportabug I suspect this has to do with `--allow-newer=base` allowing template-haskell-2.9.0.0 to be re-installed (which then becomes a build-dependency of `fclabels`). GHC 7.10, however, comes with template-haskell-2.10.0.0 e.g. 
try the following in an empty sandbox:

cabal install fclabels --allow-newer=base --constraint 'template-haskell == 2.9.*'

(whereas a plain `cabal install fclabels` in a fresh sandbox should succeed w/o panics with GHC 7.10) From rwbarton at gmail.com Sat Jan 31 16:21:10 2015 From: rwbarton at gmail.com (Reid Barton) Date: Sat, 31 Jan 2015 11:21:10 -0500 Subject: 7.10.1RC2 compile problem? ghc: internal error: PAP object entered! In-Reply-To: <87egqbx771.fsf@gmail.com> References: <87egqbx771.fsf@gmail.com> Message-ID: On Sat, Jan 31, 2015 at 11:09 AM, Herbert Valerio Riedel wrote: > On 2015-01-31 at 14:34:35 +0100, George Colpitts wrote: > > Maybe this is something I shouldn't be doing, but I thought it was worth > > mentioning in case I have found a compiler bug. > > Should I file a bug for this? > > > > cabal install *--allow-newer=base* accelerate > > ... > > [10 of 10] Compiling Data.Label.Base ( src/Data/Label/Base.hs, > > dist/build/Data/Label/Base.o ) > > ghc: internal error: PAP object entered! > > (GHC version 7.10.0.20150123 for x86_64_apple_darwin) > > Please report this as a GHC bug: > http://www.haskell.org/ghc/reportabug > > I suspect this has to do with `--allow-newer=base` allowing > template-haskell-2.9.0.0 to be re-installed (which then becomes a > build-dependency of `fclabels`). GHC 7.10, however, comes with > template-haskell-2.10.0.0 > Ah yes, you're exactly right. I didn't encounter this when I tried to reproduce the issue because I have a bunch of lines like "constraint: template-haskell installed" in my .cabal/config file. Regards, Reid Barton -------------- next part -------------- An HTML attachment was scrubbed... URL: From greg at gregweber.info Sat Jan 31 16:33:49 2015 From: greg at gregweber.info (Greg Weber) Date: Sat, 31 Jan 2015 08:33:49 -0800 Subject: Restricted Template Haskell In-Reply-To: References: Message-ID: On Fri, Jan 30, 2015 at 7:05 PM, adam vogt wrote: > Hi Greg, > > Perhaps a less-invasive way to implement the -XSafe part of your > proposal would be to provide a module like:
>
> module Language.Haskell.TH.Safe (
>     module Language.Haskell.TH,
>     reifyWithoutNameG,
>   ) where
> import Language.Haskell.TH hiding (runIO, reify*)
>
> where reifyWithoutNameG is the same as reify, except definitions that > are out of scope are either missing or modified such that they use > NameQ instead of NameG for out-of-scope names. > Thanks, I added this concept to the wiki. > That way there is no new syntax needed, and safe TH can be called by > unsafe TH without any conversions. > > I think defining another monad like Q that can do less is too > inconvenient because you have to disambiguate between Safe.listE and > Unsafe.listE, or make those functions more polymorphic (which makes > type errors worse). Another option would be if there were > Oh, you are getting into more concrete details now than I have even thought about! For the restricted monad route, we might look into a more capable method of using capabilities that would end up looking like this:

reify :: Name -> Restrict (TH :+: Reify) Info
runIO :: IO a -> Restrict (TH :+: RunIO) a

There are still a lot of details to work out, thanks for getting things started. -------------- next part -------------- An HTML attachment was scrubbed... URL:
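To make the capability-indexed signatures above a little more concrete, here is one possible shape such an API could take. This is only an illustrative sketch resting on assumptions of its own: the names Capability, Restrict, quoted and runRestricted are invented here, are not part of the template-haskell package, and the design on the wiki may combine capabilities quite differently (for instance with the :+: sum Greg mentions rather than a type-level list).

{-# LANGUAGE DataKinds, KindSignatures #-}
module RestrictedTH where

import qualified Language.Haskell.TH as TH
import Language.Haskell.TH (Q, Name, Info, Exp)

-- The effects a splice may be granted (illustrative; not a real TH type).
data Capability = Reify | RunIO

-- A Template Haskell computation tagged with the capabilities it needs.
newtype Restrict (caps :: [Capability]) a = Restrict { unRestrict :: Q a }

instance Functor (Restrict caps) where
  fmap f (Restrict q) = Restrict (fmap f q)

-- Plain syntax construction needs no capabilities at all.
quoted :: Q Exp -> Restrict '[] Exp
quoted = Restrict

-- reify is only available to splices granted the 'Reify capability.
reify :: Name -> Restrict '[ 'Reify ] Info
reify = Restrict . TH.reify

-- runIO likewise demands its own capability.
runIO :: IO a -> Restrict '[ 'RunIO ] a
runIO = Restrict . TH.runIO

-- A trusted wrapper (ultimately the compiler) decides which capability
-- sets it is willing to splice.
runRestricted :: Restrict caps a -> Q a
runRestricted = unRestrict

The only point of the sketch is that ordinary quoting stays capability-free while reify and runIO each advertise the extra power they need, so a compiler flag or Safe Haskell could refuse to splice anything whose capability set includes RunIO; a Monad instance would additionally need a way to merge capability sets, which is exactly the sort of detail the wiki page is meant to pin down.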