From gracjanpolak at gmail.com Fri Jan 1 09:19:04 2016 From: gracjanpolak at gmail.com (Gracjan Polak) Date: Fri, 1 Jan 2016 10:19:04 +0100 Subject: [Haskell-cafe] Month in Haskell Mode December 2015 Message-ID: Welcome Haskell Mode users, Haskell Mode progress report for December 2015. For previous issue see November 2015 . Reddit discussion . What is Haskell Mode? Haskell Mode is an umbrella project for multiple Emacs tools for efficient Haskell development. Haskell Mode is an open source project developed by a group of volunteers constantly looking for contributions. For more information see https://github.com/haskell/haskell-mode. Important developments Font locking now knows how to use language specific syntax coloring for quasi quotes. {-# LANGUAGE QuasiQuotes #-} query conn [sql| SELECT column_a, column_b FROM table1 NATURAL JOIN table2 WHERE ? <= time AND time < ? AND name LIKE ? ORDER BY size DESC LIMIT 100 |] (beginTime,endTime,string) Indentation learned how to indent multiline strings with continuation escapes, for example: main = putStrLn "Multiline\n\ \Hello\n\ \World!\n" Current project focus Current project focus is to lower entry barrier for newcomers by defining bite-sized tasks. Get 50 'well-defined-tasks' done as by the metric: https://github.com/haskell/haskell-mode/issues?q=is%3Aissue+label%3Awell-defined-task+is%3Aclosed A 'well-defined-task' is a category of tasks that have the field cleared for them, questions already sorted out and detailed information how to get them done. So you can just sit, search for 'well-defined-task' label and enjoy the coding! The point is to lower the entry barrier for new users, new issue reporters and advanced programmers but Emacs lisp beginners to contribute to the project. Current status: 14 well-defined-tasks closed plus 13 more open . If only you can help with reaching our targets please do so! Issues closed in December - Flymake produce temp file without cleaning it up #130 - haskell-indentation's behaviour is slightly different from haskell-indent #208 - Support shallow indentation #366 - flymake-init uses nil haskell-saved-check-command #384 - haskell-mode prompts for a new session every emacs session #407 - On opening a file, haskell-doc opens several files defined in import section #742 - haskell-indentation-indent-region and smartparens interact poorly #796 - Is it possible to use ghci-ng with "stack-ghci" haskell-process-type? #889 - Use caching docker TravisCI infrastructure #910 - Makefile doesn't work in a shell under emacs #972 - Emacs hangs when typing behind whitespace #980 - Make haskell-indentation-phrase-rest non-recursive #998 - Case indentation error #1000 - Stray ^H characters appear in haskell-process-log #1009 - Broken indentation in Emacs 24.5 #1013 - Haskell indentation weirdly re-indents lines after sp-kill-sexp #1031 Pull requests merged in December - Remove horizontal whitespace based smart indentation mode haskell-simple-indent #958 - Remove haskell-bot.el #960 - Bump version to 13.17-git #1007 - Guard stack overflow, introduce a test #1008 - Failing testcase for bug #981: M-j to continue a comment on the next lines indents the next line #1010 - Non-recursive haskell-indentation-phrase-rest #1011 - Add test for a case expression with multiple paths on their own lines. 
#1016 - Check indentation per line #1017 - Show expected result first in haskell indentation tests #1018 - Support shallow indentation #1019 - Use vanilla buffer file name for hlint command #1020 - Use Trusty platform for TravisCI #1021 - Find Emacs once #1023 - Case expression indentation fix #1024 - Simpler apt-get #1025 - Fix align imports for modules named "Instance" #1028 - Add some common extensions to haskell-rgrep #1029 - Remove haskell-indentation-dyn-first-position #1030 - Cleanup tests with with-temp-switch-to-buffer #1033 - Indent multiline strings #1035 - Implement font-lock for quasi quoted XML, HTML and JavaScript #1036 Contributors active in December Emmanuel Touzery, Gracjan Polak, Sergey Vinokurov, Wayne Lewis, vwyu Contributing Haskell Mode needs volunteers like any other open source project. For more information see: https://github.com/haskell/haskell-mode/wiki Also drop by our IRC channel: #haskell-emacs at irc.freenode.net. Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From alan.zimm at gmail.com Fri Jan 1 19:13:44 2016 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Fri, 1 Jan 2016 21:13:44 +0200 Subject: [Haskell-cafe] Month in haskell-ide-engine December 2015 Message-ID: Welcome Haskell IDE Engine (future) users, [also available online at https://github.com/haskell/haskell-ide-engine/blob/master/docs/Report-2015-12.md] Haskell IDE Engine progress report for December 2015. What is Haskell IDE Engine? Not an IDE, still no release, it is a work in progress. It is a common point to join together the effort of integrating tools into an IDE and the effort of writing tools for an IDE, by providing an API in the middle that each of these parties can work from and to. Important developments This month has been mostly about turning ideas into practice, and dealing with the details that come out when things become concrete. The more concrete things we have are: 1. Auto-generated API documentation (@cocreature) One of the principles of HIE is that each plugin/command provides a PluginDescriptor documenting the commands it provides, parameters they take and return types. Moritz Kiefer has created a module[1] to take this information and generate API documentation[2]. The information is present, but help with styling / presentation will be gladly accepted. [1] https://github.com/haskell/haskell-ide-engine/tree/master/hie-docs-generator [2] https://haskell.github.io/haskell-ide-engine/ 2. Leksah context menu (@jpmoresmau) JP Moresmau has extended the Leksah integration[3][4] to now provide a context-specific menu providing available HIE commands. This includes the ability to invoke HaRe to do refactorings. To enable this we now have a specific hie-base[5] module with just the basic types which can be included in the IDE side if it is fortunate enough to use haskell. [3] https://github.com/JPMoresmau/leksah/tree/hie_integration [4] https://github.com/JPMoresmau/leksah-server/tree/hie_integration [5] https://github.com/haskell/haskell-ide-engine/tree/master/hie-base 3. Deeper Emacs integration (@cocreature) The Emacs integration uses the PluginDescriptor to generate a set of elisp functions corresponding to each plugin command. The macro generating these has been extended to populate the docstrings for these. It also handles the HieDiff return type which means it can apply the changes from a HaRe refactoring, or any other plugin generating a diff. It prompts interactively for any additional parameters required. 4. 
New plugins applyrefact is a wrapper around Matthew Pickering's library to apply hlint changes to source code. egasync is an example wrapper showing how a plugin can launch its own process in the server and stream output from it back to the IDE, if the transport used supports this. Otherwise the transport will batch it up when a request is received for the output. Current project focus The current project focus is still on getting our collective heads straight on what actually needs to be done, and providing working integrations to at least 2 IDEs to get a better feel for what is needed. Both of these are well in hand, and if anyone would like to join in the discussion happens via the github issue tracker and docs section of the project, as well as IRC at #haskell-ide-engine on freenode. Issues closed in December Decide on matching an existing IDE protocol or designing a new one #2 Representation of arbitrary types for a plugin #14 Plugin function definition #16 Add QuickCheck tests for FromJSON / ToJSON instances #48 Check PluginDescriptor on loading #52 Add a signal handler to flush logs on exit #58 Plugin startup / private data #83 Create a diff type #95 Replace logging package #112 Split between haskell-ide-engine and haskell-plugin-api #118 Document existance of module-management package and ghc-vis #124 Provide ghc-modi transport? #125 Sort out ghc-mod session management #127 Git integration #128 apply-refact plugin #131 emacs: how to run hie with arguments / logging #134 Change HieDiff to represent a standard patch format #138 Pull requests merged in December Detect plugin param name collisions #115 Bring in a diff type, and introduce semantic types #116 New logger #120 Rework api split #122 (elisp) Add log of hie process input / output #123 Ghc mod session #130 Apply refact plugin #133 Ghc mod find #135 Plugin state #137 Change HieDiff to use a patch format #139 Hare context #141 Initial example async process plugin #143 Base split #144 Support more commands #145 Add tests for hie-create-command #146 Add integration test for hie-hare-rename #147 Move to a plugin/command url scheme #148 Documentation generator #149 Contributors active in December Alan Zimmerman, Daniel Gr?ber, JP Moresmau, Michael Sloan, Moritz Kiefer, Tobias G. Waaler Contributing Haskell IDE Engine needs volunteers like any other open source project. For more information see: https://github.com/haskell/haskell-ide-engine Also drop by our IRC channel: #haskell-ide-engine at irc.freenode.net. Thanks! From hjgtuyl at chello.nl Fri Jan 1 23:34:16 2016 From: hjgtuyl at chello.nl (Henk-Jan van Tuyl) Date: Sat, 02 Jan 2016 00:34:16 +0100 Subject: [Haskell-cafe] wxHaskell + GHCi Message-ID: L.S., For people who gave up on wxHaskell in the past, because wxHaskell didn't run (properly) in GHCi: I just found out, that wxHaskell programs run properly in GHCi on Windows, if you use the newest wxHaskell plus GHC 7.10.3 (both the 32 bit and the 64 bit version). Regards, Henk-Jan van Tuyl -- Folding at home What if you could share your unused computer power to help find a cure? In just 5 minutes you can join the world's biggest networked computer and get us closer sooner. Watch the video. http://folding.stanford.edu/ http://Van.Tuyl.eu/ http://members.chello.nl/hjgtuyl/tourdemonad.html Haskell programming -- From conal at conal.net Sat Jan 2 05:08:21 2016 From: conal at conal.net (Conal Elliott) Date: Fri, 1 Jan 2016 21:08:21 -0800 Subject: [Haskell-cafe] wxHaskell + GHCi In-Reply-To: References: Message-ID: Fantastic news! 
Thanks for the update. Does wxHaskell work (non-fatally) with GHCi on Mac OS also? Are there sample programs for an easy test run? -- Conal

On Fri, Jan 1, 2016 at 3:34 PM, Henk-Jan van Tuyl wrote: > > L.S., > > For people who gave up on wxHaskell in the past, because wxHaskell didn't > run (properly) in GHCi: > I just found out, that wxHaskell programs run properly in GHCi on Windows, > if you use the newest wxHaskell plus GHC 7.10.3 (both the 32 bit and the > 64 bit version). > > Regards, > Henk-Jan van Tuyl > > > -- > Folding at home > What if you could share your unused computer power to help find a cure? In > just 5 minutes you can join the world's biggest networked computer and get > us closer sooner. Watch the video. > http://folding.stanford.edu/ > > > http://Van.Tuyl.eu/ > http://members.chello.nl/hjgtuyl/tourdemonad.html > Haskell programming > -- > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > -------------- next part -------------- An HTML attachment was scrubbed... URL:

From hjgtuyl at chello.nl Sat Jan 2 12:58:55 2016 From: hjgtuyl at chello.nl (Henk-Jan van Tuyl) Date: Sat, 02 Jan 2016 13:58:55 +0100 Subject: [Haskell-cafe] wxHaskell + GHCi In-Reply-To: References: Message-ID:

I don't have a Mac OS computer available, I hope someone else will tell us. There are a lot of sample programs at https://github.com/wxHaskell/wxHaskell/tree/master/samples

Regards, Henk-Jan van Tuyl

On Sat, 02 Jan 2016 06:08:21 +0100, Conal Elliott wrote: > Fantastic news! Thanks for the update. Does wxHaskell work (non-fatally) > with GHCi on Mac OS also? Are there sample programs for an easy test run? > -- Conal > > On Fri, Jan 1, 2016 at 3:34 PM, Henk-Jan van Tuyl > wrote: > >> >> L.S., >> >> For people who gave up on wxHaskell in the past, because wxHaskell >> didn't >> run (properly) in GHCi: >> I just found out, that wxHaskell programs run properly in GHCi on >> Windows, >> if you use the newest wxHaskell plus GHC 7.10.3 (both the 32 bit and the >> 64 bit version). >> >> Regards, >> Henk-Jan van Tuyl

-- Folding at home What if you could share your unused computer power to help find a cure? In just 5 minutes you can join the world's biggest networked computer and get us closer sooner. Watch the video. http://folding.stanford.edu/ http://Van.Tuyl.eu/ http://members.chello.nl/hjgtuyl/tourdemonad.html Haskell programming --

From robstewart57 at gmail.com Sat Jan 2 17:46:45 2016 From: robstewart57 at gmail.com (Rob Stewart) Date: Sat, 2 Jan 2016 17:46:45 +0000 Subject: [Haskell-cafe] For one cabal'ised library: running specific benchmarks & tests, and obtaining multiple Travis reports Message-ID:

Hi,

Three questions: 1) running specific criterion benchmark groups, 2) running specific test-framework test groups, and 3) getting multiple Travis CI reports for one cabal'ised library.

1. Running specific criterion benchmark groups

Given this benchmark harness:

    main = defaultMain
      [ bgroup "fib" [ bench "1" $ whnf fib 1
                     , bench "5" $ whnf fib 5
                     , bench "9" $ whnf fib 9
                     , bench "11" $ whnf fib 11
                     ]
      ]

How do I run one specific benchmark group, or a specific benchmark in a specific group? Can I run:

    $ cabal bench "fib"

or

    $ cabal bench "fib/1"
2. Running specific test-framework test groups

Given this test harness with test-framework:

    main = defaultMain tests

    tests =
      [ testGroup "Sorting-Group-1"
          [ testProperty "sort1" prop_sort1
          , testProperty "sort2" prop_sort2
          , testProperty "sort3" prop_sort3
          ]
      , testGroup "Sorting-Group-2"
          [ testProperty "sort4" prop_sort4
          , testProperty "sort5" prop_sort5
          , testProperty "sort6" prop_sort6
          , testCase "sort7" test_sort7
          , testCase "sort8" test_sort8
          ]
      ]

Can I run:

    $ cabal test "Sorting-Group-1"

or

    $ cabal test "Sorting-Group-1/sort3"

3. Separating a cabal'ised project into separate Travis CI badges

For my current project, I get one Travis CI report for all of the HUnit and QuickCheck tests I write for a cabal'ised library expressed with test-framework. For example, if 108 tests fail I get one Travis CI report saying #108 failing tests, and I get one Travis CI badge of the form: https://travis-ci.org//.png?branch=master

What I'd really like is to retrieve multiple Travis CI reports and multiple Travis CI badges for one Haskell library. E.g. API tests: passing (with a URL for a green badge). Parser tests: #108 failing (with a URL for a red badge). Messaging tests: #16 failing (with a URL for a red badge).

How could I refactor my test groups so that Travis CI will provide me multiple reports for one cabal'ised project?

Thanks!

-- Rob Stewart -------------- next part -------------- An HTML attachment was scrubbed... URL:

From hjgtuyl at chello.nl Sat Jan 2 23:24:35 2016 From: hjgtuyl at chello.nl (Henk-Jan van Tuyl) Date: Sun, 03 Jan 2016 00:24:35 +0100 Subject: [Haskell-cafe] Haskell popularity Message-ID:

L.S.,

Haskell scores quite high on GitHub and Stack Overflow, see: http://redmonk.com/sogrady/2015/07/01/language-rankings-6-15/

Regards, Henk-Jan van Tuyl

-- Folding at home What if you could share your unused computer power to help find a cure? In just 5 minutes you can join the world's biggest networked computer and get us closer sooner. Watch the video. http://folding.stanford.edu/ http://Van.Tuyl.eu/ http://members.chello.nl/hjgtuyl/tourdemonad.html Haskell programming --

From conal at conal.net Sun Jan 3 00:43:51 2016 From: conal at conal.net (Conal Elliott) Date: Sat, 2 Jan 2016 16:43:51 -0800 Subject: [Haskell-cafe] wxHaskell + GHCi In-Reply-To: References: Message-ID:

Thanks for the pointer!

I was able to compile and run the sample program Resize.hs ('ghc --make Resize.hs'), but when I load that module into GHCi and run main, I get a run-time error:

*Main> main 2016-01-02 16:24:42.245 ghc[52790:1003] *** Assertion failure in +[NSUndoManager _endTopLevelGroupings], /SourceCache/Foundation/Foundation-1056.17/Misc.subproj/NSUndoManager.m:328 2016-01-02 16:24:42.245 ghc[52790:1003] +[NSUndoManager(NSInternal) _endTopLevelGroupings] is only safe to invoke on the main thread.
2016-01-02 16:24:42.246 ghc[52790:1003] ( 0 CoreFoundation 0x00007fff913b225c __exceptionPreprocess + 172 1 libobjc.A.dylib 0x00007fff93581e75 objc_exception_throw + 43 2 CoreFoundation 0x00007fff913b2038 +[NSException raise:format:arguments:] + 104 3 Foundation 0x00007fff90bb9361 -[NSAssertionHandler handleFailureInMethod:object:file:lineNumber:description:] + 189 4 Foundation 0x00007fff90b238ac +[NSUndoManager(NSPrivate) _endTopLevelGroupings] + 156 5 AppKit 0x00007fff914a7a23 -[NSApplication run] + 688 6 libwx_osx_cocoau_core-3.0.0.2.0.dylib 0x0000000113101ce3 _ZN5wxApp10CallOnInitEv + 143 7 libwx_baseu-3.0.0.2.0.dylib 0x00000001135eb396 _Z7wxEntryRiPPw + 47 8 libwxc.dylib 0x0000000115dfbd3c ELJApp_InitializeC + 124 9 libHSwxcore-0.92.2.0-14assQ7lWYy0vwBRqyjk7D-ghc7.10.3.dylib 0x0000000114e8a5cc cc2eh_info + 132 ) 2016-01-02 16:24:42.321 ghc[52790:1003] *** Assertion failure in +[NSUndoManager _endTopLevelGroupings], /SourceCache/Foundation/Foundation-1056.17/Misc.subproj/NSUndoManager.m:328 *Main> A little googling found an issue: Crash when running a sample program in ghci on OSX . One comment recommends the following in ghci: :set -fno-ghci-sandbox When I use this command in a fresh ghci process (not after a crash), the sample works. However, when I run ?main? a second time, the window doesn?t appear. Instead, I get some sort of undead process (called ?ghc?) that I have to kill manually. I installed wxWidgets via ?brew update && brew install wxWidgets? and wxHaskell via ?cabal update && cabal install wx?. I?m running Mac OS 10.9.5. Has anyone gotten this latest wxHaskell to play well with ghci on Mac OS? -- Conal On Sat, Jan 2, 2016 at 4:58 AM, Henk-Jan van Tuyl wrote: > > I don't have a Mac OS computer available, I hope someone else will tell > us. There are a lot of sample programs at > https://github.com/wxHaskell/wxHaskell/tree/master/samples > > Regards, > Henk-Jan van Tuyl > > > On Sat, 02 Jan 2016 06:08:21 +0100, Conal Elliott wrote: > > Fantastic news! Thanks for the update. Does wxHaskell work (non-fatally) >> with GHCi on Mac OS also? Are there sample programs for an easy test run? >> -- Conal >> >> On Fri, Jan 1, 2016 at 3:34 PM, Henk-Jan van Tuyl >> wrote: >> >> >>> L.S., >>> >>> For people who gave up on wxHaskell in the past, because wxHaskell didn't >>> run (properly) in GHCi: >>> I just found out, that wxHaskell programs run properly in GHCi on >>> Windows, >>> if you use the newest wxHaskell plus GHC 7.10.3 (both the 32 bit and the >>> 64 bit version). >>> >>> Regards, >>> Henk-Jan van Tuyl >>> >> > > > > -- > Folding at home > What if you could share your unused computer power to help find a cure? In > just 5 minutes you can join the world's biggest networked computer and get > us closer sooner. Watch the video. > http://folding.stanford.edu/ > > > http://Van.Tuyl.eu/ > http://members.chello.nl/hjgtuyl/tourdemonad.html > Haskell programming > -- > -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From magnus at therning.org Sun Jan 3 01:29:25 2016 From: magnus at therning.org (Magnus Therning) Date: Sun, 03 Jan 2016 02:29:25 +0100 Subject: [Haskell-cafe] Haskell popularity In-Reply-To: References: Message-ID: <87oad3zc0a.fsf@therning.org>

Henk-Jan van Tuyl writes: > L.S., > > Haskell scores quite high on GitHub and Stack Overflow, see: > http://redmonk.com/sogrady/2015/07/01/language-rankings-6-15/

Time to write a bug report then I suppose ;)

/M

-- Magnus Therning OpenPGP: 0x927912051716CE39 email: magnus at therning.org jabber: magnus at therning.org twitter: magthe http://therning.org/magnus Some operating systems are called 'user friendly', Linux however is 'expert friendly'. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 800 bytes Desc: not available URL:

From rf at rufflewind.com Sun Jan 3 06:19:21 2016 From: rf at rufflewind.com (Phil Ruffwind) Date: Sun, 3 Jan 2016 01:19:21 -0500 Subject: [Haskell-cafe] For one cabal'ised library: running specific benchmarks & tests, and obtaining multiple Travis reports In-Reply-To: References: Message-ID:

> $ cabal bench "fib"

Cabal adds a layer of indirection to the workflow. In order to pass arguments to your benchmark program, you must prefix each argument with '--benchmark-option=', leading to this:

    $ cabal bench --benchmark-option="fib"

The quotes are optional in this case. For more information, run 'cabal bench --help'.

> $ cabal test "Sorting-Group-1"

Same here. To pass the argument "Sorting-Group-1" to your test program, you must prefix each argument with '--test-option=', leading to this:

    $ cabal test --test-option="Sorting-Group-1"

For more information, run 'cabal test --help'.

From eric at erickow.com Sun Jan 3 07:12:41 2016 From: eric at erickow.com (Eric Kow) Date: Sun, 03 Jan 2016 07:12:41 +0000 Subject: [Haskell-cafe] [wxhaskell-users] wxHaskell + GHCi In-Reply-To: References: Message-ID:

Hi Conal,

Does the EnableGui trick help? https://wiki.haskell.org/WxHaskell/Mac

If so, I think the source should be put in the repos if not already

Cheers,

On Sun, 3 Jan 2016 at 08:45, Conal Elliott wrote: > Thanks for the pointer! > > I was able to compile and run the sample program Resize.hs ('ghc --make > Resize.hs'), but when I load that module into GHCi and run main, I get a > run-time error: > > *Main> main > 2016-01-02 16:24:42.245 ghc[52790:1003] *** Assertion failure in +[NSUndoManager _endTopLevelGroupings], /SourceCache/Foundation/Foundation-1056.17/Misc.subproj/NSUndoManager.m:328 > 2016-01-02 16:24:42.245 ghc[52790:1003] +[NSUndoManager(NSInternal) _endTopLevelGroupings] is only safe to invoke on the main thread.
> 2016-01-02 16:24:42.246 ghc[52790:1003] ( > 0 CoreFoundation 0x00007fff913b225c __exceptionPreprocess + 172 > 1 libobjc.A.dylib 0x00007fff93581e75 objc_exception_throw + 43 > 2 CoreFoundation 0x00007fff913b2038 +[NSException raise:format:arguments:] + 104 > 3 Foundation 0x00007fff90bb9361 -[NSAssertionHandler handleFailureInMethod:object:file:lineNumber:description:] + 189 > 4 Foundation 0x00007fff90b238ac +[NSUndoManager(NSPrivate) _endTopLevelGroupings] + 156 > 5 AppKit 0x00007fff914a7a23 -[NSApplication run] + 688 > 6 libwx_osx_cocoau_core-3.0.0.2.0.dylib 0x0000000113101ce3 _ZN5wxApp10CallOnInitEv + 143 > 7 libwx_baseu-3.0.0.2.0.dylib 0x00000001135eb396 _Z7wxEntryRiPPw + 47 > 8 libwxc.dylib 0x0000000115dfbd3c ELJApp_InitializeC + 124 > 9 libHSwxcore-0.92.2.0-14assQ7lWYy0vwBRqyjk7D-ghc7.10.3.dylib 0x0000000114e8a5cc cc2eh_info + 132 > ) > 2016-01-02 16:24:42.321 ghc[52790:1003] *** Assertion failure in +[NSUndoManager _endTopLevelGroupings], /SourceCache/Foundation/Foundation-1056.17/Misc.subproj/NSUndoManager.m:328 > *Main> > > A little googling found an issue: Crash when running a sample program in > ghci on OSX . One comment > recommends the following in ghci: > > :set -fno-ghci-sandbox > > When I use this command in a fresh ghci process (not after a crash), the > sample works. However, when I run ?main? a second time, the window doesn?t > appear. Instead, I get some sort of undead process (called ?ghc?) that I > have to kill manually. > > I installed wxWidgets via ?brew update && brew install wxWidgets? and > wxHaskell via ?cabal update && cabal install wx?. I?m running Mac OS 10.9.5. > > Has anyone gotten this latest wxHaskell to play well with ghci on Mac OS? > > -- Conal > > > On Sat, Jan 2, 2016 at 4:58 AM, Henk-Jan van Tuyl > wrote: > >> >> I don't have a Mac OS computer available, I hope someone else will tell >> us. There are a lot of sample programs at >> https://github.com/wxHaskell/wxHaskell/tree/master/samples >> >> Regards, >> Henk-Jan van Tuyl >> >> >> On Sat, 02 Jan 2016 06:08:21 +0100, Conal Elliott >> wrote: >> >> Fantastic news! Thanks for the update. Does wxHaskell work (non-fatally) >>> with GHCi on Mac OS also? Are there sample programs for an easy test run? >>> -- Conal >>> >>> On Fri, Jan 1, 2016 at 3:34 PM, Henk-Jan van Tuyl >>> wrote: >>> >>> >>>> L.S., >>>> >>>> For people who gave up on wxHaskell in the past, because wxHaskell >>>> didn't >>>> run (properly) in GHCi: >>>> I just found out, that wxHaskell programs run properly in GHCi on >>>> Windows, >>>> if you use the newest wxHaskell plus GHC 7.10.3 (both the 32 bit and the >>>> 64 bit version). >>>> >>>> Regards, >>>> Henk-Jan van Tuyl >>>> >>> >> >> >> >> -- >> Folding at home >> What if you could share your unused computer power to help find a cure? >> In just 5 minutes you can join the world's biggest networked computer and >> get us closer sooner. Watch the video. >> http://folding.stanford.edu/ >> >> >> http://Van.Tuyl.eu/ >> http://members.chello.nl/hjgtuyl/tourdemonad.html >> Haskell programming >> -- >> > > > ------------------------------------------------------------------------------ > _______________________________________________ > wxhaskell-users mailing list > wxhaskell-users at lists.sourceforge.net > https://lists.sourceforge.net/lists/listinfo/wxhaskell-users > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From stephen.tetley at gmail.com Sun Jan 3 12:28:15 2016 From: stephen.tetley at gmail.com (Stephen Tetley) Date: Sun, 3 Jan 2016 12:28:15 +0000 Subject: [Haskell-cafe] Haskell popularity In-Reply-To: <87oad3zc0a.fsf@therning.org> References: <87oad3zc0a.fsf@therning.org> Message-ID: I wouldn't lose any sleep over it if you put some figures back in. Stack overflow tags as of this morning: javascript x 1017064 java x 987353 haskell x 25719 I don't know where RedMonk pulled that plot from. On 3 January 2016 at 01:29, Magnus Therning wrote: > > Time to write a bug report then I suppose ;) From hjgtuyl at chello.nl Sun Jan 3 15:14:17 2016 From: hjgtuyl at chello.nl (Henk-Jan van Tuyl) Date: Sun, 03 Jan 2016 16:14:17 +0100 Subject: [Haskell-cafe] [wxhaskell-users] wxHaskell + GHCi In-Reply-To: References: Message-ID: The EnableGui trick is already implemented in wxcore 0.90.0.1 Regards, Henk-Jan van Tuyl On Sun, 03 Jan 2016 08:12:41 +0100, Eric Kow wrote: > Hi Conal, > > Does the EnableGui trick help? https://wiki.haskell.org/WxHaskell/Mac > > If so, I think the source should be put in the reps if not already > > > Cheers, > > On Sun, 3 Jan 2016 at 08:45, Conal Elliott wrote: > >> Thanks for the pointer! >> >> I was able to compile and run the sample program Resize.hs (?ghc ?make >> Resize.hs?), but when I load that module into GHCi and run main, I get a >> run-time error: >> >> *Main> main >> 2016-01-02 16:24:42.245 ghc[52790:1003] *** Assertion failure in >> +[NSUndoManager _endTopLevelGroupings], >> /SourceCache/Foundation/Foundation-1056.17/Misc.subproj/NSUndoManager.m:328 >> 2016-01-02 16:24:42.245 ghc[52790:1003] +[NSUndoManager(NSInternal) >> _endTopLevelGroupings] is only safe to invoke on the main thread. >> 2016-01-02 16:24:42.246 ghc[52790:1003] ( >> 0 CoreFoundation 0x00007fff913b225c >> __exceptionPreprocess + 172 >> 1 libobjc.A.dylib 0x00007fff93581e75 >> objc_exception_throw + 43 >> 2 CoreFoundation 0x00007fff913b2038 >> +[NSException raise:format:arguments:] + 104 >> 3 Foundation 0x00007fff90bb9361 >> -[NSAssertionHandler >> handleFailureInMethod:object:file:lineNumber:description:] + 189 >> 4 Foundation 0x00007fff90b238ac >> +[NSUndoManager(NSPrivate) _endTopLevelGroupings] + 156 >> 5 AppKit 0x00007fff914a7a23 >> -[NSApplication run] + 688 >> 6 libwx_osx_cocoau_core-3.0.0.2.0.dylib 0x0000000113101ce3 >> _ZN5wxApp10CallOnInitEv + 143 >> 7 libwx_baseu-3.0.0.2.0.dylib 0x00000001135eb396 >> _Z7wxEntryRiPPw + 47 >> 8 libwxc.dylib 0x0000000115dfbd3c >> ELJApp_InitializeC + 124 >> 9 libHSwxcore-0.92.2.0-14assQ7lWYy0vwBRqyjk7D-ghc7.10.3.dylib >> 0x0000000114e8a5cc cc2eh_info + 132 >> ) >> 2016-01-02 16:24:42.321 ghc[52790:1003] *** Assertion failure in >> +[NSUndoManager _endTopLevelGroupings], >> /SourceCache/Foundation/Foundation-1056.17/Misc.subproj/NSUndoManager.m:328 >> *Main> >> >> A little googling found an issue: Crash when running a sample program in >> ghci on OSX . One comment >> recommends the following in ghci: >> >> :set -fno-ghci-sandbox >> >> When I use this command in a fresh ghci process (not after a crash), the >> sample works. However, when I run ?main? a second time, the window >> doesn?t >> appear. Instead, I get some sort of undead process (called ?ghc?) that I >> have to kill manually. >> >> I installed wxWidgets via ?brew update && brew install wxWidgets? and >> wxHaskell via ?cabal update && cabal install wx?. I?m running Mac OS >> 10.9.5. >> >> Has anyone gotten this latest wxHaskell to play well with ghci on Mac >> OS? 
>> >> -- Conal >> >> >> On Sat, Jan 2, 2016 at 4:58 AM, Henk-Jan van Tuyl >> wrote: >> >>> >>> I don't have a Mac OS computer available, I hope someone else will tell >>> us. There are a lot of sample programs at >>> https://github.com/wxHaskell/wxHaskell/tree/master/samples >>> >>> Regards, >>> Henk-Jan van Tuyl >>> >>> >>> On Sat, 02 Jan 2016 06:08:21 +0100, Conal Elliott >>> wrote: >>> >>> Fantastic news! Thanks for the update. Does wxHaskell work >>> (non-fatally) >>>> with GHCi on Mac OS also? Are there sample programs for an easy test >>>> run? >>>> -- Conal >>>> >>>> On Fri, Jan 1, 2016 at 3:34 PM, Henk-Jan van Tuyl >>>> wrote: >>>> >>>> >>>>> L.S., >>>>> >>>>> For people who gave up on wxHaskell in the past, because wxHaskell >>>>> didn't >>>>> run (properly) in GHCi: >>>>> I just found out, that wxHaskell programs run properly in GHCi on >>>>> Windows, >>>>> if you use the newest wxHaskell plus GHC 7.10.3 (both the 32 bit and >>>>> the >>>>> 64 bit version). >>>>> >>>>> Regards, >>>>> Henk-Jan van Tuyl >>>>> >>>> >>> >>> >>> >>> -- >>> Folding at home >>> What if you could share your unused computer power to help find a cure? >>> In just 5 minutes you can join the world's biggest networked computer >>> and >>> get us closer sooner. Watch the video. >>> http://folding.stanford.edu/ -- Folding at home What if you could share your unused computer power to help find a cure? In just 5 minutes you can join the world's biggest networked computer and get us closer sooner. Watch the video. http://folding.stanford.edu/ http://Van.Tuyl.eu/ http://members.chello.nl/hjgtuyl/tourdemonad.html Haskell programming -- From m.farkasdyck at gmail.com Sun Jan 3 17:20:44 2016 From: m.farkasdyck at gmail.com (M Farkas-Dyck) Date: Sun, 3 Jan 2016 09:20:44 -0800 Subject: [Haskell-cafe] Haskell popularity In-Reply-To: References: <87oad3zc0a.fsf@therning.org> Message-ID: On 03/01/2016, Stephen Tetley wrote: > I wouldn't lose any sleep over it if you put some figures back in. > Stack overflow tags as of this morning: > > javascript x 1017064 > java x 987353 > haskell x 25719 > > I don't know where RedMonk pulled that plot from. It says "popularity rank" so it may not be linear in number of tags. From will.yager at gmail.com Sun Jan 3 20:27:01 2016 From: will.yager at gmail.com (Will Yager) Date: Sun, 3 Jan 2016 14:27:01 -0600 Subject: [Haskell-cafe] Haskell popularity In-Reply-To: References: <87oad3zc0a.fsf@therning.org> Message-ID: <1795E7B7-3E0C-48FB-9646-132E19D4A896@gmail.com> IIRC, it has something to do with number of stars, forks, etc. I bet 99.9% of JavaScript projects are someone's personal webpage project or "hello world". On the other hand, I suspect a relatively large portion of Haskell projects are noteworthy. -Will > On Jan 3, 2016, at 06:28, Stephen Tetley wrote: > > I wouldn't lose any sleep over it if you put some figures back in. > Stack overflow tags as of this morning: > > javascript x 1017064 > java x 987353 > haskell x 25719 > > I don't know where RedMonk pulled that plot from. From cma at bitemyapp.com Mon Jan 4 14:26:33 2016 From: cma at bitemyapp.com (Christopher Allen) Date: Mon, 4 Jan 2016 08:26:33 -0600 Subject: [Haskell-cafe] Spurious memory leak example from the wiki Message-ID: I can't get the two examples (sum/product) in: https://wiki.haskell.org/Memory_leak#Holding_a_reference_for_a_too_long_time to behave differently under O2 or O0. Same profile report each time. Have changes to prelude (FTP? Rewrite rules/build stuff for folds?) made these do the same thing? 
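For context, here is a minimal sketch of the two shapes being compared (not the exact wiki code; the list sizes are picked arbitrarily):

    import Data.List (foldl')

    main :: IO ()
    main = do
      let xs = [1 .. 1000000] :: [Integer]
      -- xs is used twice, so the whole list is retained on the heap
      -- between the two traversals ("holding a reference for too long").
      print (sum xs)
      print (product xs)
      -- foldl builds a long chain of thunks in the accumulator (unless
      -- the optimiser makes it strict); foldl' forces the accumulator at
      -- each step and runs in constant space.
      print (foldl  (+) 0 [1 .. 1000000 :: Integer])
      print (foldl' (+) 0 [1 .. 1000000 :: Integer])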
The foldl/foldl' examples behave the same as you might expect. (216mb vs. 1mb of heap) --- Chris Allen -------------- next part -------------- An HTML attachment was scrubbed... URL: From svenpanne at gmail.com Mon Jan 4 16:18:08 2016 From: svenpanne at gmail.com (Sven Panne) Date: Mon, 4 Jan 2016 17:18:08 +0100 Subject: [Haskell-cafe] Merging the OpenGLRaw and gl packages In-Reply-To: References: Message-ID: FYI: I've released a new OpenGLRaw version 3.0.0.0 which is now quite close to the gl package. The changes: * Use pattern synonyms for OpenGL enums. * Changed module name prefix from Graphics.Rendering.OpenGL.Raw to Graphics.GL. * Use slightly different type synonyms for GL type (introducing Fixed on the way): * CDouble => Double (for GLclampd, GLdouble) * CFloat => Float (for GLclampf, GLfloat) * CInt => Fixed (for GLclampx, GLfixed) * CInt => Int32 (for GLint, GLsizei) * CSChar => Int8 (for GLbyte) * CShort => Int16 (for GLshort) * CUChar => Word8 (for GLboolean, GLubyte) * CUInt => Word32 (for GLbitfield, GLenum, GLhandleARB, GLuint) * CUShort => Word16 (for GLushort) There are still a few minor differences between OpenGLRaw and gl (see https://github.com/haskell-opengl/OpenGLRaw/wiki/Merging-OpenGLRaw-and-gl), but nothing serious: As a test, I modified the luminance package to make it compatible with the new OpenGLRaw, and the diff is really small (see https://github.com/phaazon/luminance/pull/39). So I think that the gl package can be retired, but that's of course totally up to Edward and Gabr?el. A few remarks: * Using pattern synonyms means losing support for GHC < 7.8, which I consider OK now that 8.0 is coming soon. But to be sure, there is a branch ("classic") for the previous OpenGLRaw API if the need for minor changes/bug fixes arises. * To stay consistent, GLURaw has been changed in a similar way. * The OpenGL package has been adapted to use the new APIs internally, but its external API is still the same. Cheers, S. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gershomb at gmail.com Mon Jan 4 18:50:07 2016 From: gershomb at gmail.com (Gershom B) Date: Mon, 4 Jan 2016 13:50:07 -0500 Subject: [Haskell-cafe] Compose Conference Call for Participation [NYC, Feb 4-5] Message-ID: =============================================== Call for Participation Compose Conference 2016 February 4-5, 2016 New York, NY http://www.composeconference.org/ =============================================== The practice and craft of functional programming :: Conference Compose is a conference for typed functional programmers, focused specifically on Haskell, OCaml, F#, SML, and related technologies. Typed functional programming has been taken up widely, by industry and hobbyists alike. For many of us it has renewed our belief that code should be beautiful, and that programming can be as enjoyable as it is practical. Compose is about bringing together functional programmers of all levels of skill and experience ? from technical leads to novices, and from long-time hackers to students just getting started. It will feature a keynote by Eugenia Cheng on her work popularizing mathematics, two days of great talks, and plans are underway for a weekend hackathon/unconference. * Invited Talks: Eugenia Cheng: How to Bake 'How to Bake Pi': reflections on making abstract mathematics palatable * Local Information (venue): http://www.composeconference.org/2016/ * Accepted Talks and Tutorials Aditya Siram: FLTKHS - Easy Native GUIs in Haskell, Today! 
Austin Seipp: Cryptography and Verification with Cryptol Kenneth Foner: 'There and Back Again' and What Happened After Krzysztof Cieslak: Ionide and state of F# open source environment Leonid Rozenberg: The Intersection of Machine Learning, Types and Testing Luite Stegeman: Fun with GHCJSi Markus Mottl: AD-OCaml - Parallel Algorithmic Differentiation for OCaml Mindy Preston: Composing Network Operating Systems Niki Vazou: Liquid Types for Haskell Paulmichael Blasucci: (Nearly) Everything You Ever Wanted to Know About F# Active Patterns but were Afraid to Ask Rachel Reese: Chaos Testing at Jet Riccardo Terrell: Functional Reactive Programming for Natural User Interface Stephen Compall: Add a type parameter! One 'simple' design change, a panoply of outcomes Stephanie Weirich: Dynamic Typing in GHC Tikhon Jelvis: Analyzing Programs with Z3 Zvonimir Pavlinovic, Tim King and Thomas Wies: Improving Type Error Localization for Languages with Type Inference * Full abstracts: http://www.composeconference.org/2016/speakers/ * Registration: http://composeconference.eventbrite.com * Follow @composeconf on twitter for news: https://twitter.com/composeconf * On freenode irc, chat will fellow attendees at #composeconference * Corporate sponsorships are welcome. Current sponsors list forthcoming. * Policies (diversity and anti-harassment): http://www.composeconference.org/conduct * Email us with any questions at info at composeconference.org * Please forward this announcement to interested parties and lists. From mail at joachim-breitner.de Mon Jan 4 20:21:08 2016 From: mail at joachim-breitner.de (Joachim Breitner) Date: Mon, 04 Jan 2016 21:21:08 +0100 Subject: [Haskell-cafe] Spurious memory leak example from the wiki In-Reply-To: References: Message-ID: <1451938868.27196.4.camel@joachim-breitner.de> Hi, Am Montag, den 04.01.2016, 08:26 -0600 schrieb Christopher Allen: > I can't get the two examples (sum/product) in: > > https://wiki.haskell.org/Memory_leak#Holding_a_reference_for_a_too_lo > ng_time > > to behave differently under O2 or O0. > > Same profile report each time. do you observe the leaking or the non-leaking behavior? Gru?, Joachim -- Joachim ?nomeata? Breitner mail at joachim-breitner.de ? http://www.joachim-breitner.de/ Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F Debian Developer: nomeata at debian.org -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From cma at bitemyapp.com Tue Jan 5 17:35:46 2016 From: cma at bitemyapp.com (Christopher Allen) Date: Tue, 5 Jan 2016 11:35:46 -0600 Subject: [Haskell-cafe] Spurious memory leak example from the wiki In-Reply-To: <1451938868.27196.4.camel@joachim-breitner.de> References: <1451938868.27196.4.camel@joachim-breitner.de> Message-ID: Here's the profiling report: 11,371,237,400 bytes allocated in the heap 6,880,974,648 bytes copied during GC 17,531,336 bytes maximum residency (848 sample(s)) 4,580,080 bytes maximum slop 51 MB total memory in use (5 MB lost due to fragmentation) Tot time (elapsed) Avg pause Max pause Gen 0 18716 colls, 0 par 0.140s 0.137s 0.0000s 0.0003s Gen 1 848 colls, 0 par 4.324s 4.219s 0.0050s 0.0095s TASKS: 4 (1 bound, 3 peak workers (3 total), using -N1) SPARKS: 0 (0 converted, 0 overflowed, 0 dud, 0 GC'd, 0 fizzled) INIT time 0.000s ( 0.001s elapsed) MUT time 1.680s ( 1.884s elapsed) GC time 4.464s ( 4.356s elapsed) RP time 0.000s ( 0.000s elapsed) PROF time 0.000s ( 0.000s elapsed) EXIT time 0.004s ( 0.000s elapsed) Total time 6.280s ( 6.241s elapsed) Alloc rate 6,768,593,690 bytes per MUT second Productivity 28.9% of total user, 29.1% of total elapsed gc_alloc_block_sync: 0 whitehole_spin: 0 gen[0].sync: 0 gen[1].sync: 0 6.30user 0.10system 0:06.44elapsed 99%CPU (0avgtext+0avgdata 109020maxresident)k 0inputs+24outputs (0major+41639minor)pagefaults 0swaps I've attached a postscript of the heap profile, but that's from running it just now and it's dying on segfault before finishing execution now for...some reason. The above profiling report is from when I originally ran it. >From the segfault: 56109148054202365158913895741900770318778Command terminated by signal 11 6.79user 0.08system 0:06.96elapsed 98%CPU (0avgtext+0avgdata 125036maxresident)k 384inputs+120outputs (9major+44372minor)pagefaults 0swaps Makefile:2: recipe for target 'profile' failed make: *** [profile] Error 139 The numbers are from it trying to finish printing the result. On Mon, Jan 4, 2016 at 2:21 PM, Joachim Breitner wrote: > Hi, > > Am Montag, den 04.01.2016, 08:26 -0600 schrieb Christopher Allen: > > I can't get the two examples (sum/product) in: > > > > https://wiki.haskell.org/Memory_leak#Holding_a_reference_for_a_too_lo > > ng_time > > > > to behave differently under O2 or O0. > > > > Same profile report each time. > > do you observe the leaking or the non-leaking behavior? > > Gru?, > Joachim > > -- > Joachim ?nomeata? Breitner > mail at joachim-breitner.de ? http://www.joachim-breitner.de/ > Jabber: nomeata at joachim-breitner.de ? GPG-Key: 0xF0FBF51F > Debian Developer: nomeata at debian.org > > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > -- Chris Allen Currently working on http://haskellbook.com -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: evil-loop.ps Type: application/postscript Size: 7010 bytes Desc: not available URL: From conal at conal.net Tue Jan 5 22:19:44 2016 From: conal at conal.net (Conal Elliott) Date: Tue, 5 Jan 2016 14:19:44 -0800 Subject: [Haskell-cafe] hscolour color set for white/light background? Message-ID: Does anyone have handy an hscolour ColourPrefs for use with a white/light background? 
The defaultColourPrefs look okay with a dark background but not light. Thanks, - Conal -------------- next part -------------- An HTML attachment was scrubbed... URL: From zans.lancs at googlemail.com Wed Jan 6 03:23:27 2016 From: zans.lancs at googlemail.com (Zans Tangle) Date: Wed, 6 Jan 2016 03:23:27 +0000 Subject: [Haskell-cafe] FRP : Best way to save signal history if the program needs to temporarily terminate? Message-ID: Hi guys, so I'm getting my head around FRP, and the theory is starting to make sense (just seen Conal Elliott's explanation of the original formulation and it was very illuminative). Overall I'm finding signals to be a really nice level of abstraction to reason about my code state. However I'm a little bit stuck on something: lets say I want to switch off the program, but I want it to save the signal history so that I can resume where I left off when I switch it back on. Is there a ... simple.. way to do it? Because at the moment I'm thinking of something along the lines of "build an SQL schema that matches the semantics of each FRP signal, and then build functions to translate between signal and SQL when needed". And at this precise instant I don't really feel too excited about implementing Signal-to-SQL translation from scratch (and I'm not even completely sure what sort of SQL schema would be needed to support higher-order FRP either), so I was wondering if anything like that exists already? Or even something as simple as storing the signal history in a file and just parsing it to recover it? Zans -------------- next part -------------- An HTML attachment was scrubbed... URL: From tanuki at gmail.com Wed Jan 6 03:39:01 2016 From: tanuki at gmail.com (Theodore Lief Gannon) Date: Tue, 5 Jan 2016 19:39:01 -0800 Subject: [Haskell-cafe] FRP : Best way to save signal history if the program needs to temporarily terminate? In-Reply-To: References: Message-ID: At least one FRP-ish library, Auto, has built-in serialization. It uses discrete (integral) time so it's not appropriate for all FRP applications; but at the very least, looking at how it's done there may be helpful. -------------- next part -------------- An HTML attachment was scrubbed... URL: From petr.mvd at gmail.com Wed Jan 6 09:10:12 2016 From: petr.mvd at gmail.com (=?UTF-8?B?UGV0ciBQdWRsw6Fr?=) Date: Wed, 06 Jan 2016 09:10:12 +0000 Subject: [Haskell-cafe] the state of Yarr? Message-ID: Hi Dominic, what is the current state of Yarr? Is it being actively developed? Is there some tutorial or documentation available? I'm deciding between repa and yarr for some linear algebra computations. I found some references that yarr is more performant, but I couldn't find much documentation and the hackage page [1] hasn't indexed most modules for some reason, so there seems to be no good place to start from. And the last commit was 9 months ago. [1] https://hackage.haskell.org/package/yarr Thank you, Petr -------------- next part -------------- An HTML attachment was scrubbed... URL: From dominic at steinitz.org Wed Jan 6 09:48:59 2016 From: dominic at steinitz.org (Dominic Steinitz) Date: Wed, 6 Jan 2016 09:48:59 +0000 Subject: [Haskell-cafe] Fwd: Fwd: Re: the state of Yarr? In-Reply-To: <568CE283.1020704@blueyonder.co.uk> References: <568CE283.1020704@blueyonder.co.uk> Message-ID: <568CE30B.9010201@steinitz.org> A problem with my email prevented this making it on to the mailing list. Hi Petr, I am not actively developing Yarr but I would very much like to. I keep it from bit-rotting. 
The problem as always is finding time. On the other hand I don't think repa is very active e.g. upgrading to vector-0.11 took a while to happen although clearly more active than me on Yarr! What I'd like is something like Python's numpy but safer and faster. If you look at the static module in the hmatrix package (https://hackage.haskell.org/package/hmatrix-0.17.0.1/docs/Numeric-LinearAlgebra-Static.html) you can see how type level literals can be used to prevent e.g. multiplying two inconsistent matrices together at compile time. I am sure we could do something better with either Yarr or repa (repa will currently give out of bounds errors at runtime). For reasons I don't understand (I think a bug in Haddock) the documentation does not get generated. There are examples of its use here: https://github.com/leventov/yarr/tree/master/tests. I wrote a blog using repa and Yarr here: https://idontgetoutmuch.wordpress.com/2013/08/06/planetary-simulation-with-excursions-in-symplectic-manifolds-6/ and compare performance. You can safely ignore the theory and need only look at "Repa Implementation", "Yarr Implementation" and "Performance". I think performance will depend on your application. I believe (but haven't confirmed) that repa will outperform Yarr on e.g grid based problems such as numerical methods for diffusions and Poisson. In the case of planets (or stars or particles) where everything is influenced by everything else then repa is a bad fit and Yarr outperforms. If your application is linear algebra, I would think that hmatrix would have what you want or could be extended to give what you want since it is LAPACK under the covers. I am very excited that you are interested in this area; it often feels very lonely. Best wishes, Dominic. On 06/01/2016 09:10, Petr Pudl?k wrote: > Hi Dominic, > > what is the current state of Yarr? Is it being actively developed? Is > there some tutorial or documentation available? > > I'm deciding between repa and yarr for some linear algebra > computations. I found some references that yarr is more performant, > but I couldn't find much documentation and the hackage page [1] hasn't > indexed most modules for some reason, so there seems to be no good > place to start from. And the last commit was 9 months ago. > > [1]https://hackage.haskell.org/package/yarr > > Thank you, > Petr From tdammers at gmail.com Wed Jan 6 10:29:14 2016 From: tdammers at gmail.com (Tobias Dammers) Date: Wed, 6 Jan 2016 11:29:14 +0100 Subject: [Haskell-cafe] FRP : Best way to save signal history if the program needs to temporarily terminate? In-Reply-To: References: Message-ID: <20160106102914.GB29941@barbados> Haven't done it myself yet, but toying with the idea for a project that's been in my head for a long while. As far as I gathered, you don't need to log all your signals, only the ones that go into the network from outside - everything else is by definition derived from those, unless your FRP implementation is internally impure (which, I would argue, wouldn't be deserving of the "F" in "FRP"). SQL might not be the most suitable backend, because the shape of signal data isn't a good match for the relational model. Most likely, if I were to store event occurrences in a database, I'd use a schema similar to this (postgresql): CREATE TABLE signal_occurrences ( signal_id TEXT , occurrence_timestamp INT NOT NULL , occurrence_data JSONB NOT NULL ) ...and then derive or TH-generate ToJSON/FromJSON for the event payload type. 
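For example, with a made-up input type (just a sketch, using aeson's Generic-based deriving; none of these names come from any particular FRP library):

    {-# LANGUAGE DeriveGeneric #-}

    import Data.Aeson (FromJSON, ToJSON)
    import GHC.Generics (Generic)

    -- Hypothetical type for the inputs entering the network from outside;
    -- only occurrences of these need to be logged, everything else can be
    -- recomputed from them.
    data InputEvent
      = KeyPress Char
      | MouseClick Int Int
      | Tick Double
      deriving (Show, Generic)

    instance ToJSON InputEvent
    instance FromJSON InputEvent

Data.Aeson.encode then gives you a small JSON value to drop into the occurrence_data column, and replaying is just a matter of reading the rows back in timestamp order and feeding them into the network as if they had just arrived.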
Depending on the architecture, I might have only one signal going into the FRP network, in which case I could drop the signal_id field. Other serialization formats would also work, e.g. using cereal and plain binary BLOBs; JSON has the advantage of being completely language-agnostic and having some native support in postgres. The other problem you need to tackle is how to hook into your FRP framework, and how to keep things performant when there's a long history. In the latter case, you'll probably want a "snapshot" feature as well, i.e., store not only the incoming event occurrences, but also regular snapshots of your application state, such that you always have a limited number of inputs to restore from. This, however, is a lot harder than just logging inputs and replaying them, and I don't know if any existing FRP library supports this as of yet; OTOH, IIRC I read an article a while ago describing a high-frequency trading system that worked much like this. HTH, Tobias On Wed, Jan 06, 2016 at 03:23:27AM +0000, Zans Tangle wrote: > Hi guys, > > so I'm getting my head around FRP, and the theory is starting to make sense > (just seen Conal Elliott's explanation of the original formulation and it > was very illuminative). Overall I'm finding signals to be a really nice > level of abstraction to reason about my code state. > > However I'm a little bit stuck on something: lets say I want to switch off > the program, but I want it to save the signal history so that I can resume > where I left off when I switch it back on. > > Is there a ... simple.. way to do it? Because at the moment I'm thinking of > something along the lines of "build an SQL schema that matches the > semantics of each FRP signal, and then build functions to translate between > signal and SQL when needed". > > And at this precise instant I don't really feel too excited about > implementing Signal-to-SQL translation from scratch (and I'm not even > completely sure what sort of SQL schema would be needed to support > higher-order FRP either), so I was wondering if anything like that exists > already? Or even something as simple as storing the signal history in a > file and just parsing it to recover it? > > Zans > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe From abhisandhyasp.ap at gmail.com Wed Jan 6 11:53:02 2016 From: abhisandhyasp.ap at gmail.com (Abhijit Patel) Date: Wed, 6 Jan 2016 17:23:02 +0530 Subject: [Haskell-cafe] Regarding GSoC 2016 Message-ID: I am Abhijit Patel, B.Tech second year student from Dhirubhai Ambani Institute of Information and Communication Technology (DA-IICT) . I have learnt haskell from the learnyouahaskell.com and I was informed that the tasks for GSoC .I need some guidance and someone who can give me some tasks to raise my level to start contributing to the haskell community. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sumit.sahrawat.apm13 at iitbhu.ac.in Wed Jan 6 13:02:50 2016 From: sumit.sahrawat.apm13 at iitbhu.ac.in (Sumit Sahrawat, Maths & Computing, IIT (BHU)) Date: Wed, 6 Jan 2016 18:32:50 +0530 Subject: [Haskell-cafe] Regarding GSoC 2016 In-Reply-To: References: Message-ID: You'll have to hunt for worthwhile tasks yourself. Lists of ideas are available on reddit [1] and trac [2]. For learning haskell, I'll suggest you take a look at the learnhaskell guide [3]. 
Try finding a mentor early on and get feedback from him/her on your proposal drafts. All the best.

[1]: https://www.reddit.com/r/haskell_proposals/
[2]: https://ghc.haskell.org/trac/summer-of-code/
[3]: https://github.com/bitemyapp/learnhaskell

Regards, Sumit

On 6 January 2016 at 17:23, Abhijit Patel wrote: > I am Abhijit Patel, B.Tech second year student from Dhirubhai Ambani > Institute of Information and Communication Technology (DA-IICT) . I have > learnt haskell from the learnyouahaskell.com and I was informed that the > tasks for GSoC .I need some guidance and someone who can give me some tasks > to raise my level to start contributing to the haskell community. > > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > -------------- next part -------------- An HTML attachment was scrubbed... URL:

From johannes.waldmann at htwk-leipzig.de Wed Jan 6 14:03:26 2016 From: johannes.waldmann at htwk-leipzig.de (Johannes Waldmann) Date: Wed, 6 Jan 2016 15:03:26 +0100 Subject: [Haskell-cafe] long-range performance comparisons (GHC over the years) Message-ID: <568D1EAE.7080900@htwk-leipzig.de>

Dear Cafe,

I recently noticed a performance problem due to a fusion rule not firing for a straightforward piece of code. In fact it turned out this was already fixed in HEAD, see https://ghc.haskell.org/trac/ghc/ticket/11344 https://ghc.haskell.org/trac/ghc/ticket/9848

What worries me is that such a regression had been sitting there for over a year (and did not make it to 7.10.3) and I got to thinking: what long-range performance metering do we have? So I tried to make nofib runs for some ghc-{6,7} versions: http://www.imn.htwk-leipzig.de/~waldmann/etc/nofib/comparison-k.text

This is slightly broken (not all tests can be built for all compilers, and I don't know how to fix this) but there are some interesting numbers already. It seems nofib programs are self-contained (not using any libraries), so they are mainly using numbers, lists, tuples, and user-defined data. This is the heart of (traditional) Haskell, so this is supposed to work really well.

The table shows that there are a lot of benchmarks where performance has been increasing. That's good. But not for all! We should certainly ignore all runtimes that are absolutely small. I think it is most interesting to look at allocation numbers. A few examples from this list:

* exp3_8 (allocation goes up 50 % from 6.* to 7.*) this is addition of Peano numbers.
* gcd (allocation goes up 20 % from 7.8 to 7.10) using Integers, tuples (for extended Euclid), lists (for control)
* tak (runtime goes up 20 % from 7.6 to 7.8) the plain Takeuchi function, just Int and recursion (it should not allocate at all?) - see the sketch at the end of this message

(and I confirmed these by manually running them for more inputs, all measurements done on debian on x86_64 X5365, ghc-6.* installed from binary packages, ghc-7.* built from source)

So, can this be explained? Improved? I think we should resist the temptation to change these benchmarks (using seq and ! and Int# and whatnot). Assuming nofib contains typical code, it is the task of the compiler to handle it well.

In case you're wondering about my motivation - this was prompted by teaching. I wanted to show that ghc creates efficient code (by fusion) - but it's not just for show; I generally try to believe in what I teach, and I do rely on this for my real code. (Well, by definition, "real" for me might still be "academic" for others...)
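For reference, the tak benchmark mentioned above is roughly the following (a sketch from memory, not the exact nofib source; the arguments are chosen only for illustration):

    -- The Takeuchi function: plain Int arithmetic and recursion, no data
    -- structures, so ideally the compiled code should not allocate.
    tak :: Int -> Int -> Int -> Int
    tak x y z
      | y < x     = tak (tak (x - 1) y z)
                        (tak (y - 1) z x)
                        (tak (z - 1) x y)
      | otherwise = z

    main :: IO ()
    main = print (tak 24 16 8)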
- Johannes. From petr.mvd at gmail.com Wed Jan 6 20:12:34 2016 From: petr.mvd at gmail.com (=?UTF-8?B?UGV0ciBQdWRsw6Fr?=) Date: Wed, 06 Jan 2016 20:12:34 +0000 Subject: [Haskell-cafe] Fwd: Fwd: Re: the state of Yarr? In-Reply-To: <568CE30B.9010201@steinitz.org> References: <568CE283.1020704@blueyonder.co.uk> <568CE30B.9010201@steinitz.org> Message-ID: Hi Dominic, thank you for the detailed answer! I'm looking for linear algebra over finite fields (in particular GF(2)), and while hmatrix has some support, it doesn't support inverting matrices, which is something I need. So I'll need to look further. Just a moment ago I discovered the 'tensor' package, which is flexible enough to add a new representation. Currently it offers Vector, I'll try it out, and perhaps it'd be possible to include yarr too, if Vector won't perform well enough. All the best, Petr st 6. 1. 2016 v 10:50 odes?latel Dominic Steinitz napsal: > A problem with my email prevented this making it on to the mailing list. > > Hi Petr, > > I am not actively developing Yarr but I would very much like to. I keep > it from bit-rotting. The problem as always is finding time. On the other > hand I don't think repa is very active e.g. upgrading to vector-0.11 > took a while to happen although clearly more active than me on Yarr! > > What I'd like is something like Python's numpy but safer and faster. If > you look at the static module in the hmatrix package > ( > https://hackage.haskell.org/package/hmatrix-0.17.0.1/docs/Numeric-LinearAlgebra-Static.html > ) > you can see how type level literals can be used to prevent e.g. > multiplying two inconsistent matrices together at compile time. I am > sure we could do something better with either Yarr or repa (repa will > currently give out of bounds errors at runtime). > > For reasons I don't understand (I think a bug in Haddock) the > documentation does not get generated. > > There are examples of its use here: > https://github.com/leventov/yarr/tree/master/tests. I wrote a blog using > repa and Yarr here: > > https://idontgetoutmuch.wordpress.com/2013/08/06/planetary-simulation-with-excursions-in-symplectic-manifolds-6/ > and compare performance. You can safely ignore the theory and need only > look at "Repa Implementation", "Yarr Implementation" and "Performance". > > I think performance will depend on your application. I believe (but > haven't confirmed) that repa will outperform Yarr on e.g grid based > problems such as numerical methods for diffusions and Poisson. In the > case of planets (or stars or particles) where everything is influenced > by everything else then repa is a bad fit and Yarr outperforms. > > If your application is linear algebra, I would think that hmatrix would > have what you want or could be extended to give what you want since it > is LAPACK under the covers. > > I am very excited that you are interested in this area; it often feels > very lonely. > > Best wishes, Dominic. > > On 06/01/2016 09:10, Petr Pudl?k wrote: > > Hi Dominic, > > > > what is the current state of Yarr? Is it being actively developed? Is > > there some tutorial or documentation available? > > > > I'm deciding between repa and yarr for some linear algebra > > computations. I found some references that yarr is more performant, > > but I couldn't find much documentation and the hackage page [1] hasn't > > indexed most modules for some reason, so there seems to be no good > > place to start from. And the last commit was 9 months ago. 
> > > > [1]https://hackage.haskell.org/package/yarr > > > > Thank you, > > Petr > > > > > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > -------------- next part -------------- An HTML attachment was scrubbed... URL: From petr.mvd at gmail.com Wed Jan 6 20:21:24 2016 From: petr.mvd at gmail.com (=?UTF-8?B?UGV0ciBQdWRsw6Fr?=) Date: Wed, 06 Jan 2016 20:21:24 +0000 Subject: [Haskell-cafe] ideas for extending 'tensor' Message-ID: Hi Nicola, I was very excited to see your type-safe tensor package. Are the sources available somewhere for contributing? I'd be interested in trying to extend it to use unboxed vectors, or perhaps even repa or yarr. Are there any future plans like that? Also it'd be interesting to add the distinction between covariant and contravariant indices, although this would be most likely a major change. Or perhaps it'd be possible to add it just as some kind of a wrapper. All the best, Petr -------------- next part -------------- An HTML attachment was scrubbed... URL: From noonslists at gmail.com Wed Jan 6 20:41:55 2016 From: noonslists at gmail.com (Noon Silk) Date: Thu, 7 Jan 2016 07:41:55 +1100 Subject: [Haskell-cafe] Fwd: Fwd: Re: the state of Yarr? In-Reply-To: <568CE30B.9010201@steinitz.org> References: <568CE283.1020704@blueyonder.co.uk> <568CE30B.9010201@steinitz.org> Message-ID: > I am very excited that you are interested in this area; it often feels > very lonely. FWIW I would also be extremely excited/interested in a Haskell-themed numpy package; I didn't know about yarr. Nice to see it! On Wed, Jan 6, 2016 at 8:48 PM, Dominic Steinitz wrote: > A problem with my email prevented this making it on to the mailing list. > > Hi Petr, > > I am not actively developing Yarr but I would very much like to. I keep > it from bit-rotting. The problem as always is finding time. On the other > hand I don't think repa is very active e.g. upgrading to vector-0.11 > took a while to happen although clearly more active than me on Yarr! > > What I'd like is something like Python's numpy but safer and faster. If > you look at the static module in the hmatrix package > ( > https://hackage.haskell.org/package/hmatrix-0.17.0.1/docs/Numeric-LinearAlgebra-Static.html > ) > you can see how type level literals can be used to prevent e.g. > multiplying two inconsistent matrices together at compile time. I am > sure we could do something better with either Yarr or repa (repa will > currently give out of bounds errors at runtime). > > For reasons I don't understand (I think a bug in Haddock) the > documentation does not get generated. > > There are examples of its use here: > https://github.com/leventov/yarr/tree/master/tests. I wrote a blog using > repa and Yarr here: > > https://idontgetoutmuch.wordpress.com/2013/08/06/planetary-simulation-with-excursions-in-symplectic-manifolds-6/ > and compare performance. You can safely ignore the theory and need only > look at "Repa Implementation", "Yarr Implementation" and "Performance". > > I think performance will depend on your application. I believe (but > haven't confirmed) that repa will outperform Yarr on e.g grid based > problems such as numerical methods for diffusions and Poisson. In the > case of planets (or stars or particles) where everything is influenced > by everything else then repa is a bad fit and Yarr outperforms. 
> > If your application is linear algebra, I would think that hmatrix would > have what you want or could be extended to give what you want since it > is LAPACK under the covers. > > I am very excited that you are interested in this area; it often feels > very lonely. > > Best wishes, Dominic. > > > On 06/01/2016 09:10, Petr Pudl?k wrote: > >> Hi Dominic, >> >> what is the current state of Yarr? Is it being actively developed? Is >> there some tutorial or documentation available? >> >> I'm deciding between repa and yarr for some linear algebra >> computations. I found some references that yarr is more performant, >> but I couldn't find much documentation and the hackage page [1] hasn't >> indexed most modules for some reason, so there seems to be no good >> place to start from. And the last commit was 9 months ago. >> >> [1]https://hackage.haskell.org/package/yarr >> >> Thank you, >> Petr >> > > > > > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > -- Noon Silk, ? https://silky.github.io/ "Every morning when I wake up, I experience an exquisite joy ? the joy of being this signature." -------------- next part -------------- An HTML attachment was scrubbed... URL: From petr.mvd at gmail.com Wed Jan 6 20:50:39 2016 From: petr.mvd at gmail.com (=?UTF-8?B?UGV0ciBQdWRsw6Fr?=) Date: Wed, 06 Jan 2016 20:50:39 +0000 Subject: [Haskell-cafe] ideas for extending 'tensor' In-Reply-To: References: Message-ID: It looks like the package doesn't compile on the latest GHC due to changes in Prelude. I'd be happy to contribute a fix. st 6. 1. 2016 v 21:21 odes?latel Petr Pudl?k napsal: > Hi Nicola, > > I was very excited to see your type-safe tensor package. Are the sources > available somewhere for contributing? I'd be interested in trying to extend > it to use unboxed vectors, or perhaps even repa or yarr. Are there any > future plans like that? > > Also it'd be interesting to add the distinction between covariant and > contravariant indices, although this would be most likely a major change. > Or perhaps it'd be possible to add it just as some kind of a wrapper. > > All the best, > Petr > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomasmiedema at gmail.com Thu Jan 7 00:51:33 2016 From: thomasmiedema at gmail.com (Thomas Miedema) Date: Thu, 7 Jan 2016 01:51:33 +0100 Subject: [Haskell-cafe] long-range performance comparisons (GHC over the years) In-Reply-To: <568D1EAE.7080900@htwk-leipzig.de> References: <568D1EAE.7080900@htwk-leipzig.de> Message-ID: > > I recently noticed a performance problem > ... What worries me is that such a regression > had been sitting there for over a year > There are 215 open runtime performance tickets (out of a total of 1655 open tickets, that makes 13%). Compared to say new typesystem features, they don't get much attention. Only a few were fixed this year, most of them by Joachim Breitner (the new performance tsar ?). He also created https://perf.haskell.org/ghc. In case you want to help out, start here: https://ghc.haskell.org/trac/ghc/wiki/Newcomers. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dominic at steinitz.org Thu Jan 7 07:51:29 2016 From: dominic at steinitz.org (Dominic Steinitz) Date: Thu, 7 Jan 2016 07:51:29 +0000 Subject: [Haskell-cafe] Fwd: Fwd: Re: the state of Yarr? 
In-Reply-To: References: <568CE283.1020704@blueyonder.co.uk> <568CE30B.9010201@steinitz.org> Message-ID: <568E1901.9030908@steinitz.org> Hi Petr, It's not too hard to write an inverse for matrices > inv :: (KnownNat n, (1 <=? n) ~ 'True) => Sq n -> Sq n > inv m = fromJust $ linSolve m eye But since hmatrix is LAPACK this is not going to help you with GF(2). I wasn't aware of the 'tensor' package either but it does contain some interesting ideas and by the looks of it should be able to invert matrices even for finite fields. I'd be interested in its performance. Perhaps you could write a benchmark and I can try it with Yarr? Probably one could improve 'tensor' by using type level literals a la the static part of hmatrix and maybe base it over Yarr for performance. I note it is GPL which I am not a great fan of and has no publicy available repository (at least I wasn't able to find one). Good luck and please report back with anything interesting you discover on your journey. Dominic. On 06/01/2016 20:12, Petr Pudl?k wrote: > Hi Dominic, > > thank you for the detailed answer! I'm looking for linear algebra over > finite fields (in particular GF(2)), and while hmatrix has some > support, it doesn't support inverting matrices, which is something I > need. So I'll need to look further. Just a moment ago I discovered the > 'tensor' package, which is flexible enough to add a new > representation. Currently it offers Vector, I'll try it out, and > perhaps it'd be possible to include yarr too, if Vector won't perform > well enough. > > All the best, > Petr > > st 6. 1. 2016 v 10:50 odes?latel Dominic Steinitz > > napsal: > > A problem with my email prevented this making it on to the mailing > list. > > Hi Petr, > > I am not actively developing Yarr but I would very much like to. I > keep > it from bit-rotting. The problem as always is finding time. On the > other > hand I don't think repa is very active e.g. upgrading to vector-0.11 > took a while to happen although clearly more active than me on Yarr! > > What I'd like is something like Python's numpy but safer and > faster. If > you look at the static module in the hmatrix package > (https://hackage.haskell.org/package/hmatrix-0.17.0.1/docs/Numeric-LinearAlgebra-Static.html) > you can see how type level literals can be used to prevent e.g. > multiplying two inconsistent matrices together at compile time. I am > sure we could do something better with either Yarr or repa (repa will > currently give out of bounds errors at runtime). > > For reasons I don't understand (I think a bug in Haddock) the > documentation does not get generated. > > There are examples of its use here: > https://github.com/leventov/yarr/tree/master/tests. I wrote a blog > using > repa and Yarr here: > https://idontgetoutmuch.wordpress.com/2013/08/06/planetary-simulation-with-excursions-in-symplectic-manifolds-6/ > and compare performance. You can safely ignore the theory and need > only > look at "Repa Implementation", "Yarr Implementation" and > "Performance". > > I think performance will depend on your application. I believe (but > haven't confirmed) that repa will outperform Yarr on e.g grid based > problems such as numerical methods for diffusions and Poisson. In the > case of planets (or stars or particles) where everything is influenced > by everything else then repa is a bad fit and Yarr outperforms. 
> > If your application is linear algebra, I would think that hmatrix > would > have what you want or could be extended to give what you want since it > is LAPACK under the covers. > > I am very excited that you are interested in this area; it often feels > very lonely. > > Best wishes, Dominic. > > On 06/01/2016 09:10, Petr Pudl?k wrote: > > Hi Dominic, > > > > what is the current state of Yarr? Is it being actively > developed? Is > > there some tutorial or documentation available? > > > > I'm deciding between repa and yarr for some linear algebra > > computations. I found some references that yarr is more performant, > > but I couldn't find much documentation and the hackage page [1] > hasn't > > indexed most modules for some reason, so there seems to be no good > > place to start from. And the last commit was 9 months ago. > > > > [1]https://hackage.haskell.org/package/yarr > > > > Thank you, > > Petr > > > > > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Thu Jan 7 08:55:02 2016 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 7 Jan 2016 08:55:02 +0000 Subject: [Haskell-cafe] long-range performance comparisons (GHC over the years) In-Reply-To: References: <568D1EAE.7080900@htwk-leipzig.de> Message-ID: <299294d50b82417abf2383865f3a3498@DB4PR30MB030.064d.mgd.msft.net> Yes we?d love help with investigating, characterising, and fixing performance bugs. Simon From: Haskell-Cafe [mailto:haskell-cafe-bounces at haskell.org] On Behalf Of Thomas Miedema Sent: 07 January 2016 00:52 To: Johannes Waldmann Cc: Joachim Breitner ; Haskell cafe Subject: Re: [Haskell-cafe] long-range performance comparisons (GHC over the years) I recently noticed a performance problem ... What worries me is that such a regression had been sitting there for over a year There are 215 open runtime performance tickets (out of a total of 1655 open tickets, that makes 13%). Compared to say new typesystem features, they don't get much attention. Only a few were fixed this year, most of them by Joachim Breitner (the new performance tsar?). He also created https://perf.haskell.org/ghc. In case you want to help out, start here: https://ghc.haskell.org/trac/ghc/wiki/Newcomers. -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomasmiedema at gmail.com Thu Jan 7 11:17:02 2016 From: thomasmiedema at gmail.com (Thomas Miedema) Date: Thu, 7 Jan 2016 12:17:02 +0100 Subject: [Haskell-cafe] long-range performance comparisons (GHC over the years) In-Reply-To: <299294d50b82417abf2383865f3a3498@DB4PR30MB030.064d.mgd.msft.net> References: <568D1EAE.7080900@htwk-leipzig.de> <299294d50b82417abf2383865f3a3498@DB4PR30MB030.064d.mgd.msft.net> Message-ID: Hi Johannes, see also this wiki page: https://ghc.haskell.org/trac/ghc/wiki/Performance/Runtime. It mentions the `gcd` regression that you found (and has some analysis), but not the others (`tak` and `exp3_8`). Maybe you could start with updating that page. Thomas On Thu, Jan 7, 2016 at 9:55 AM, Simon Peyton Jones wrote: > Yes we?d love help with investigating, characterising, and fixing > performance bugs. 
> > > > Simon > > > > *From:* Haskell-Cafe [mailto:haskell-cafe-bounces at haskell.org] *On Behalf > Of *Thomas Miedema > *Sent:* 07 January 2016 00:52 > *To:* Johannes Waldmann > *Cc:* Joachim Breitner ; Haskell cafe < > haskell-cafe at haskell.org> > *Subject:* Re: [Haskell-cafe] long-range performance comparisons (GHC > over the years) > > > > I recently noticed a performance problem > > ... > > What worries me is that such a regression > had been sitting there for over a year > > > > There are 215 open *runtime performance > * tickets > (out of a total of 1655 open tickets, that makes 13%). > > > > Compared to say new typesystem features, they don't get much attention. > Only a few were fixed > this > year, most of them by Joachim Breitner (the new performance tsar > ?). He > also created https://perf.haskell.org/ghc > > . > > > > In case you want to help out, start here: > https://ghc.haskell.org/trac/ghc/wiki/Newcomers. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aruiz at um.es Thu Jan 7 12:12:13 2016 From: aruiz at um.es (Alberto Ruiz) Date: Thu, 7 Jan 2016 13:12:13 +0100 Subject: [Haskell-cafe] Fwd: Fwd: Re: the state of Yarr? In-Reply-To: <568E1901.9030908@steinitz.org> References: <568CE283.1020704@blueyonder.co.uk> <568CE30B.9010201@steinitz.org> <568E1901.9030908@steinitz.org> Message-ID: <568E561D.9030003@um.es> Hi Petr and Dominic, The last version of hmatrix supports integer elements and modular arithmetic, including inverting matrices: https://hackage.haskell.org/package/hmatrix-0.17.0.1/docs/Numeric-LinearAlgebra.html#v:luSolve-39- This is experimental and probably not very useful for serious applications, but in any case here you can find some toy examples: http://dis.um.es/~alberto/hmatrix/finite.html Alberto On 07/01/16 08:51, Dominic Steinitz wrote: > Hi Petr, > > It's not too hard to write an inverse for matrices > >> inv :: (KnownNat n, (1 <=? n) ~ 'True) => Sq n -> Sq n >> inv m = fromJust $ linSolve m eye > But since hmatrix is LAPACK this is not going to help you with GF(2). > > I wasn't aware of the 'tensor' package either but it does contain some > interesting ideas and by the looks of it should be able to invert > matrices even for finite fields. I'd be interested in its performance. > Perhaps you could write a benchmark and I can try it with Yarr? > > Probably one could improve 'tensor' by using type level literals a la > the static part of hmatrix and maybe base it over Yarr for performance. > I note it is GPL which I am not a great fan of and has no publicy > available repository (at least I wasn't able to find one). > > Good luck and please report back with anything interesting you discover > on your journey. > > Dominic. > > On 06/01/2016 20:12, Petr Pudl?k wrote: >> Hi Dominic, >> >> thank you for the detailed answer! I'm looking for linear algebra over >> finite fields (in particular GF(2)), and while hmatrix has some >> support, it doesn't support inverting matrices, which is something I >> need. So I'll need to look further. Just a moment ago I discovered the >> 'tensor' package, which is flexible enough to add a new >> representation. Currently it offers Vector, I'll try it out, and >> perhaps it'd be possible to include yarr too, if Vector won't perform >> well enough. >> >> All the best, >> Petr >> >> st 6. 1. 2016 v 10:50 odes?latel Dominic Steinitz >> > napsal: >> >> A problem with my email prevented this making it on to the mailing >> list. 
>> >> Hi Petr, >> >> I am not actively developing Yarr but I would very much like to. I >> keep >> it from bit-rotting. The problem as always is finding time. On the >> other >> hand I don't think repa is very active e.g. upgrading to vector-0.11 >> took a while to happen although clearly more active than me on Yarr! >> >> What I'd like is something like Python's numpy but safer and >> faster. If >> you look at the static module in the hmatrix package >> (https://hackage.haskell.org/package/hmatrix-0.17.0.1/docs/Numeric-LinearAlgebra-Static.html) >> you can see how type level literals can be used to prevent e.g. >> multiplying two inconsistent matrices together at compile time. I am >> sure we could do something better with either Yarr or repa (repa will >> currently give out of bounds errors at runtime). >> >> For reasons I don't understand (I think a bug in Haddock) the >> documentation does not get generated. >> >> There are examples of its use here: >> https://github.com/leventov/yarr/tree/master/tests. I wrote a blog >> using >> repa and Yarr here: >> https://idontgetoutmuch.wordpress.com/2013/08/06/planetary-simulation-with-excursions-in-symplectic-manifolds-6/ >> and compare performance. You can safely ignore the theory and need >> only >> look at "Repa Implementation", "Yarr Implementation" and >> "Performance". >> >> I think performance will depend on your application. I believe (but >> haven't confirmed) that repa will outperform Yarr on e.g grid based >> problems such as numerical methods for diffusions and Poisson. In the >> case of planets (or stars or particles) where everything is influenced >> by everything else then repa is a bad fit and Yarr outperforms. >> >> If your application is linear algebra, I would think that hmatrix >> would >> have what you want or could be extended to give what you want since it >> is LAPACK under the covers. >> >> I am very excited that you are interested in this area; it often feels >> very lonely. >> >> Best wishes, Dominic. >> >> On 06/01/2016 09:10, Petr Pudl?k wrote: >> > Hi Dominic, >> > >> > what is the current state of Yarr? Is it being actively >> developed? Is >> > there some tutorial or documentation available? >> > >> > I'm deciding between repa and yarr for some linear algebra >> > computations. I found some references that yarr is more performant, >> > but I couldn't find much documentation and the hackage page [1] >> hasn't >> > indexed most modules for some reason, so there seems to be no good >> > place to start from. And the last commit was 9 months ago. >> > >> > [1]https://hackage.haskell.org/package/yarr >> > >> > Thank you, >> > Petr >> >> >> >> >> >> _______________________________________________ >> Haskell-Cafe mailing list >> Haskell-Cafe at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> > > > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > From hyangfji at gmail.com Thu Jan 7 16:03:10 2016 From: hyangfji at gmail.com (Hong Yang) Date: Thu, 7 Jan 2016 10:03:10 -0600 Subject: [Haskell-cafe] wrong GHC source files Message-ID: https://www.haskell.org/ghc/download_ghc_7_10_3#sources reads Source Distribution ----------------------------------------------------------------------------- The source tarballs provide everything necessary to build the compiler, interactive system, and a minimal set of libraries. For more information on building, see the building guide. 
ghc-7.10.3b-windows-extra-src.tar.bz2 (92 MB) ghc-7.10.3b-windows-extra-src.tar.xz (91 MB) Shouldn't the two files be as follows? ghc-7.10.3b-src.tar.bz2 ghc-7.10.3b-src.tar.xz -------------- next part -------------- An HTML attachment was scrubbed... URL: From aditya.siram at gmail.com Thu Jan 7 18:31:51 2016 From: aditya.siram at gmail.com (aditya siram) Date: Thu, 7 Jan 2016 12:31:51 -0600 Subject: [Haskell-cafe] 7.10.3 source link missing Message-ID: Hi all, Just wanted to make haskell.org maintainers aware that 7.10.3 release does not provide a link to the source distribution on the Download page. https://www.haskell.org/ghc/download_ghc_7_10_3#sources. Thanks! -deech -------------- next part -------------- An HTML attachment was scrubbed... URL: From jykang22 at gmail.com Thu Jan 7 21:54:47 2016 From: jykang22 at gmail.com (Jeon-Young Kang) Date: Thu, 7 Jan 2016 16:54:47 -0500 Subject: [Haskell-cafe] Pattern Matching for record syntax Message-ID: Dear All. Hope your 2016 is off to a great start. I would like to get results from pattern matching or something. Here is the my code. data Person = Person {name :: String, age :: Int} names = ["tom", "sara"] -- list of names, String persons = [Person {name = "tom", age = 10}, Person {name="sara", age=9}, Person {name = "susan", age = 8}]. Is there any solution to get the age of "tom" and "sara"? I have no idea of pattern matching for this one. I've tried to use recursion, but I couldn't find any solution for list of records. Sincerely, Young -------------- next part -------------- An HTML attachment was scrubbed... URL: From john at degoes.net Thu Jan 7 22:00:40 2016 From: john at degoes.net (John A. De Goes) Date: Thu, 7 Jan 2016 15:00:40 -0700 Subject: [Haskell-cafe] LambdaConf 2016: Call for Proposals (May 26 - 29) Message-ID: Hi all, We have officially opened the call for proposals for LambdaConf 2016 LambdaConf 2016 ! LambdaConf is the one of the largest and most respected conferences on functional programming in the world. With more than a hundred hours of high-quality content from leading practitioners and researchers, LambdaConf is designed to take attendees' skills to the next level, whether they are just beginning or are highly experienced in the art of functional programming. Last year, Haskell content truly dominated the conference, with many workshops and well more than a full track of Haskell presentations. This year, we'd like to repeat that, as well as see strong submissions from related languages such as PureScript (PureScript Conf 2016 is co-located with and will immediately precede LambdaConf). We are looking for speakers who want to share their knowledge of the Haskell language (past, present, and future), Haskell libraries, Haskell tools, real-world Haskell use cases, and theory relevant to functional programming (such as type theory, programming language theory, dependent-types, program derivation, category theory). We welcome first-time speakers, experienced presenters, and presentations at all levels. For more information or to submit your proposal, please visit the conference home page (http://lambdaconf.us ), or the submission form . Please share this CFP with anyone you think might be interested! Regards, John -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From magnus at therning.org Thu Jan 7 22:24:18 2016 From: magnus at therning.org (Magnus Therning) Date: Thu, 07 Jan 2016 23:24:18 +0100 Subject: [Haskell-cafe] [Haskell-beginners] Pattern Matching for record syntax In-Reply-To: References: Message-ID: <871t9tkoz1.fsf@therning.org> Jeon-Young Kang writes: > Dear All. > > Hope your 2016 is off to a great start. > > I would like to get results from pattern matching or something. > > Here is the my code. > > data Person = Person {name :: String, age :: Int} > > names = ["tom", "sara"] -- list of names, String > > persons = [Person {name = "tom", age = 10}, Person {name="sara", age=9}, > Person {name = "susan", age = 8}]. > > Is there any solution to get the age of "tom" and "sara"? > > I have no idea of pattern matching for this one. > > I've tried to use recursion, but I couldn't find any solution for list of > records. How about using `filter`[1] over `persons` with a function checking if the name is in `names`? I'm sorry, but this sounds like home work so you won't get more than this from me. /M [1]: http://hackage.haskell.org/package/base-4.8.1.0/docs/Data-List.html#v:filter -- Magnus Therning OpenPGP: 0x927912051716CE39 email: magnus at therning.org jabber: magnus at therning.org twitter: magthe http://therning.org/magnus Would you go to war without a helmet? Would you drive without the seat belt? Then why do you develop software as if shit doesn?t happen? -- Alberto G ( http://makinggoodsoftware.com/2009/05/12/hdd/ ) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 800 bytes Desc: not available URL: From magnus at therning.org Thu Jan 7 22:42:08 2016 From: magnus at therning.org (Magnus Therning) Date: Thu, 07 Jan 2016 23:42:08 +0100 Subject: [Haskell-cafe] wrong GHC source files In-Reply-To: References: Message-ID: <87y4c1j9kv.fsf@therning.org> Hong Yang writes: > https://www.haskell.org/ghc/download_ghc_7_10_3#sources reads > > Source Distribution > ----------------------------------------------------------------------------- > The source tarballs provide everything necessary to build the compiler, > interactive system, and a minimal set of libraries. For more information on > building, see the building guide. > > ghc-7.10.3b-windows-extra-src.tar.bz2 (92 MB) > ghc-7.10.3b-windows-extra-src.tar.xz (91 MB) > > Shouldn't the two files be as follows? > ghc-7.10.3b-src.tar.bz2 > ghc-7.10.3b-src.tar.xz That's probably the case. Until that's been fixed one can find the sources at https://downloads.haskell.org/~ghc/7.10.3/ /M -- Magnus Therning OpenPGP: 0x927912051716CE39 email: magnus at therning.org jabber: magnus at therning.org twitter: magthe http://therning.org/magnus Finagle's First Law: To study a subject best, understand it thoroughly before you start. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 800 bytes Desc: not available URL: From hon.lianhung at gmail.com Fri Jan 8 09:55:45 2016 From: hon.lianhung at gmail.com (Lian Hung Hon) Date: Fri, 8 Jan 2016 17:55:45 +0800 Subject: [Haskell-cafe] Data declaration vs type classes Message-ID: Dear haskellers, What is the difference between writing data Task = GroceryTask String | LaundryTask Int doTask :: Task -> IO () doTask (GroceryTask s) = print "Going to " ++ s doTask (LaundryTask n) = print (show n ++ " pieces washed" and class Task a where work :: a -> IO () data GroceryTask = GroceryTask String data LaundryTask = LaundryTask Int instance Task GroceryTask where .. instance Task LaundryTask where .. doTask :: Task a => a -> IO () doTask = work They seem to be similar functionality wise, except that one is on the data level and another is on the class level. How should one go about deciding to use data or class? Is there a semantic difference? Which is more appropriate here? Happy new year, Hon -------------- next part -------------- An HTML attachment was scrubbed... URL: From miguelimo38 at yandex.ru Fri Jan 8 10:11:16 2016 From: miguelimo38 at yandex.ru (Miguel Mitrofanov) Date: Fri, 08 Jan 2016 13:11:16 +0300 Subject: [Haskell-cafe] Data declaration vs type classes In-Reply-To: References: Message-ID: <458891452247876@web20o.yandex.ru> First one is closed: there is a very clear list of all possibilities, kept in one place. Even if it's exported, it's impossible to add anything to the list of tasks without modifying that module. Second is open; if it's exported, users of your module can add their own tasks. On the other hand, adding new function that works on all tasks is, in the first case, simple: you can just write it in the same way as your `doTask`. Users can do that without modifying the module. In the second case you have to change your `Task` class if you want to add a function. 08.01.2016, 12:56, "Lian Hung Hon" : > Dear haskellers, > > What is the difference between writing > > data Task = GroceryTask String | LaundryTask Int > > doTask :: Task -> IO () > doTask (GroceryTask s) = print "Going to " ++ s > doTask (LaundryTask n) = print (show n ++ " pieces washed" > > and > > class Task a where > ? work :: a -> IO () > > data GroceryTask = GroceryTask String > data LaundryTask = LaundryTask Int > > instance Task GroceryTask where .. > > instance Task LaundryTask where .. > > doTask :: Task a => a -> IO () > doTask = work > > They seem to be similar functionality wise, except that one is on the data level and another is on the class level. How should one go about deciding to use data or class? Is there a semantic difference? Which is more appropriate here? > > Happy new year, > Hon > , > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe From imantc at gmail.com Fri Jan 8 10:38:47 2016 From: imantc at gmail.com (Imants Cekusins) Date: Fri, 8 Jan 2016 11:38:47 +0100 Subject: [Haskell-cafe] Data declaration vs type classes In-Reply-To: <458891452247876@web20o.yandex.ru> References: <458891452247876@web20o.yandex.ru> Message-ID: > How should one go about deciding to use data or class? class: class lets specify more than one method. when you define instance yet do not implement all methods, compiler warns. if you try to call class method without an instance for that type, compiler warns. 
pattern matching: compiler does not warn if methods do not match every constructor of the data type. one way to decide if not sure, is to pick one way which seems easier to refactor. when more code is written, it usually becomes obvious if this approach does not fit. then refactor. > Which is more appropriate here? depends on the rest of the code. if this is it, then there is no real difference. From anselm.scholl at tu-harburg.de Fri Jan 8 10:56:43 2016 From: anselm.scholl at tu-harburg.de (Jonas Scholl) Date: Fri, 8 Jan 2016 11:56:43 +0100 Subject: [Haskell-cafe] Data declaration vs type classes In-Reply-To: References: <458891452247876@web20o.yandex.ru> Message-ID: <568F95EB.2080605@tu-harburg.de> On 01/08/2016 11:38 AM, Imants Cekusins wrote: >> How should one go about deciding to use data or class? > > class: > class lets specify more than one method. when you define instance yet > do not implement all methods, compiler warns. > if you try to call class method without an instance for that type, > compiler warns. > > pattern matching: > compiler does not warn if methods do not match every constructor of > the data type. Well, there is -fwarn-incomplete-patterns, which should be included in -Wall, which does exactly this. > > one way to decide if not sure, is to pick one way which seems easier > to refactor. when more code is written, it usually becomes obvious if > this approach does not fit. then refactor. A few things to keep in mind: If you define a type class and instances for Int and String, and later want to add another case for String, you have to add a newtype, otherwise the compiler can not differentiate. Additionally, adding a constructor with multiple fields gets complicated if you choose the type class solution, here you have to add an instance for a tuple. Also you need FlexibleInstances as soon as you want an instance for String or a Tuple more specific than (a, b) (or introduce newtypes). You can also not process a Task if it is hidden in a class. For example, how do you implement doOnlyShoppingTask? In the end you restrict yourself with a type class about the things you can do with the data. So when is this useful? I would argue, if you are writing a library and want your users to be able to define their own tasks. Otherwise I think abstracting a data type with a type class is not worth the hassle. > > >> Which is more appropriate here? > depends on the rest of the code. if this is it, then there is no real > difference. > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 473 bytes Desc: OpenPGP digital signature URL: From imantc at gmail.com Fri Jan 8 11:06:14 2016 From: imantc at gmail.com (Imants Cekusins) Date: Fri, 8 Jan 2016 12:06:14 +0100 Subject: [Haskell-cafe] Data declaration vs type classes In-Reply-To: <568F95EB.2080605@tu-harburg.de> References: <458891452247876@web20o.yandex.ru> <568F95EB.2080605@tu-harburg.de> Message-ID: > Well, there is -fwarn-incomplete-patterns, which should be included in -Wall, which does exactly this. cheers Jonas. will try this. another thing: class lets reuse the same method name for several types. with pattern matching, different types require different function names. 
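to make the trade-off concrete, here is a small sketch of both styles side by side (names tweaked from the original post so both versions fit in one module):

{-# OPTIONS_GHC -fwarn-incomplete-patterns #-}

-- closed variant: one data type, functions pattern match on it;
-- adding a new kind of task means editing this type, but adding a
-- new function needs no changes elsewhere
data Task = GroceryTask String | LaundryTask Int

doTask :: Task -> IO ()
doTask (GroceryTask s) = putStrLn ("Going to " ++ s)
doTask (LaundryTask n) = putStrLn (show n ++ " pieces washed")

isUrgent :: Task -> Bool
isUrgent (GroceryTask _) = False
isUrgent (LaundryTask n) = n > 10

-- open variant: a class; other modules can add new task types,
-- but every new per-task operation means extending the class
class IsTask a where
  work :: a -> IO ()

newtype Grocery = Grocery String
newtype Laundry = Laundry Int

instance IsTask Grocery where
  work (Grocery s) = putStrLn ("Going to " ++ s)

instance IsTask Laundry where
  work (Laundry n) = putStrLn (show n ++ " pieces washed")

with the closed type, -fwarn-incomplete-patterns tells you when a new constructor is not yet handled; with the class, the missing-method warning plays the same role.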
it is possible to place methods in different modules and call them qualified, but class solution seems cleaner. basically, classes are very convenient for standardization and extending code. a bit like Java interfaces :-P From tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk Fri Jan 8 11:13:06 2016 From: tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk (Tom Ellis) Date: Fri, 8 Jan 2016 11:13:06 +0000 Subject: [Haskell-cafe] Data declaration vs type classes In-Reply-To: References: Message-ID: <20160108111306.GQ21171@weber> On Fri, Jan 08, 2016 at 05:55:45PM +0800, Lian Hung Hon wrote: > How should one go about deciding to use data or class? Is there a > semantic difference? Classes are not first class citizens in Haskell, and it's very hard to pass them around, manipulate them and compute with them without using non-standard and awkward techniques. > Which is more appropriate here? Almost certainly data. My rule of thumb is to only introduce a typeclass once it becomes incredibly repetitive passing around the data explicitly. Tom From imantc at gmail.com Fri Jan 8 11:26:45 2016 From: imantc at gmail.com (Imants Cekusins) Date: Fri, 8 Jan 2016 12:26:45 +0100 Subject: [Haskell-cafe] Data declaration vs type classes In-Reply-To: <20160108111306.GQ21171@weber> References: <20160108111306.GQ21171@weber> Message-ID: > it's very hard to pass them around, manipulate them and compute with them without using non-standard and awkward techniques. well here is one simple use case when class is very convenient: class ConvertByteString a where toByteString::a -> ByteString fromByteString::ByteString -> a no problems defining instances of this class, passing and calling them whatsoever. From hesselink at gmail.com Fri Jan 8 11:29:46 2016 From: hesselink at gmail.com (Erik Hesselink) Date: Fri, 8 Jan 2016 12:29:46 +0100 Subject: [Haskell-cafe] Data declaration vs type classes In-Reply-To: References: <20160108111306.GQ21171@weber> Message-ID: On 8 January 2016 at 12:26, Imants Cekusins wrote: >> it's very hard to pass them around, manipulate them and compute with them without using non-standard and awkward techniques. > > well here is one simple use case when class is very convenient: > > class ConvertByteString a where > toByteString::a -> ByteString > fromByteString::ByteString -> a > > no problems defining instances of this class, passing and calling them > whatsoever. One problem with this class would be if you convert String or Text: what encoding would you use? Probably UTF8, but there are others, and if you need those you need a newtype at least. Erik From ky3 at atamo.com Fri Jan 8 11:34:16 2016 From: ky3 at atamo.com (Kim-Ee Yeoh) Date: Fri, 8 Jan 2016 18:34:16 +0700 Subject: [Haskell-cafe] Haskell Weekly News Message-ID: Folks: Recall the quote from the May Day 2015 issue: The MLs and Haskell remind me of Brian Eno's line about how the first Velvet Underground album only sold 30,000 copies, but "everyone who bought one of those 30,000 copies started a band". This issue spotlights Elm and Idris, two languages implemented in Haskell. Enjoy! *Top Picks:* - Evan Czaplicki of the Elm web-front-end language leaves Prezi for NoRedInk . A startup dedicated to improving high-school English grammar, NoRedInk already employs 5 engineers writing Elm full-time. 
A HN comment hyperbolizes that Elm "is Clojure without parens, it's Haskell without academy, it's Redux without facebook, it's duck-typing without quacks, it's MVC without objects, and last but not least Evan Czaplicki (the creator) is the new Aaron Patterson (bright and fun!)." [Ed. Aaron is a Ruby and also Rails core dev.] - Janos Dobronszki, a self-described "Haskell addict, latent Idris fan", introduces Idris as "a language that will change the way you think about programming ." He motivates dependent types using the classic list vector example. The Hacker News community enthuses over the article with healthy signs of grassroots static-typing evangelism. Elsewhere, a haskell redditor obtains valuable answers about the tradeoffs that dependently typed programming incurs . - In "Monads to Machine Code (Part 1)" , Stephen Diehl walks his readers through an LLVM-like runtime machine code generation while introducing the x86 architecture all at the same time. No mean feat, what more x86 as opposed to a RISC architecture. Much-loved on HN . Also on haskell reddit . Compare to Lennart Augustsson's older series on code generation . Quality packages on hackage for runtime code generation include harpy and llvm-general . - A redditor wonders whether 3 nested loops written as a list comprehension compiles into the tight machine code version of 3 nested loops. Conspicuously absent in the discussion is mention of the Vector package and Don Stewart's 2010 achievements of tight loop optimization . *Quotes of the Week:* - ReinH: thanks puregreen for Lens over Tea series puregreen: is grinning all around ReinH: also thanks for not titling it "You could have written lens" johnw: ReinH, just skip to, "You could have been edwardk", it answer all other questions (Thanks to Gesh for the link.) - From HN: One thing I've learned from using immutable, functional languages (Elixir) is: "Don't tell your computer what to do, tell it how to transform data." While it may seem obvious, it's been a revelation for me and it has totally transformed how I write code, and especially how I test it. - From HN: FP people are nailing composability and reusability to never seen levels just in front of your eyes. You just have to keep them open to see. OOP did it at its time too, it just hit a ceiling; but there's one reason every imperative language is OOP nowadays. - From HN: As someone who learned Haskell and subsequently have been writing a lot of Python, I keep a mental tally of how many of my bugs (some of which took ages to track down) would have been caught immediately by a type system like Haskell's or Idris'. I'd say it's well over half. - From HN: Haskell syntax is the lingua franca when discussing anything related to data types and functional programming these days. *Videos of the Week:* - Watch LambdaConf 2015 , organized by John A. De Goes and professionally recorded by Confreaks . Richard Eisenberg presented on "A practical Introduction to GADTs" . The video recording gets love over at haskell reddit and even a talk summary. -- Kim-Ee -------------- next part -------------- An HTML attachment was scrubbed... URL: From imantc at gmail.com Fri Jan 8 11:46:57 2016 From: imantc at gmail.com (Imants Cekusins) Date: Fri, 8 Jan 2016 12:46:57 +0100 Subject: [Haskell-cafe] Data declaration vs type classes In-Reply-To: References: <20160108111306.GQ21171@weber> Message-ID: > you convert String or Text: what encoding would you use? let's say, this is very specific conversion where newtypes are used a lot. 
There are many different formats for Int (even the same type of int), String may be ascii, UTF8, ISO-..., you name it. using class does not make a difference re: type definition in this case. From abhisandhyasp.ap at gmail.com Fri Jan 8 13:42:47 2016 From: abhisandhyasp.ap at gmail.com (Abhijit Patel) Date: Fri, 8 Jan 2016 19:12:47 +0530 Subject: [Haskell-cafe] error while setting it up Message-ID: for the below command I am getting errror !! please suggest me a solution $make ===--- building phase 0 make --no-print-directory -f ghc.mk phase=0 phase_0_builds ghc.mk:159: *** dyn is not in $(GhcLibWays), but $(DYNAMIC_GHC_PROGRAMS) is YES. Stop. make: *** [all] Error 2 -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at cs.dartmouth.edu Fri Jan 8 20:18:38 2016 From: doug at cs.dartmouth.edu (Doug McIlroy) Date: Fri, 08 Jan 2016 15:18:38 -0500 Subject: [Haskell-cafe] Haskell-cafe] Preventing sharing Message-ID: <201601082018.u08KIcXu119591@tahoe.cs.Dartmouth.EDU> Oleg rewrote my power-series one-liners to illustrate the fact that lazy lists defined in a strict language need not be very clumsy to use. I had implemented those funstions in many strict languages, one of which was ML using a lazy-list implementation from Dave MacQueen. But after I finally did it in Haskell, I never looked back. Though ML was the best of the strict bunch. Haskell's overloading and cleaner notation made for more perspicuous code. It also enabled elegant code like this for the exponential series: exps = 1 + integral exps As Oleg pointed out, this won't work in a strict language, and must be replaced with something much less vivid: exps = fix (\s 1 + integral s) In slightly more involved cases, such as sins = integral coss coss = 1 - integral sins the strict equivalent becomes murky: sins = fix (\s -> integral (1 - integral s)) coss = derivative sins or (sins, cosx) = fix (\sc -> (integral $ snd sc, 1 - integral $ fst sc) Further, because lazy lists are a new type, they can't be manipulated directly with the standard list vocabulary (map, take, filter, etc.). All such functions must be lifted to the new type. Thus, while lazy lists can be *programmed* in a strict language, they *play* somewhat awkwardly in it. In fairness, I must admit that the beauty of representation by naked lists may not survive when power series are embedded in a more comprehensive algebraic system. For example, if both matrices and power series were defined as list instances of Num, matrices might be confusEd with nested power series. Combinations--matrices of power series and vice versa--could be similarly ambiguous. But it could also turn out to cause no more difficulty than the already extensive degree of polymorphism in arithmetic expressions. Doug From tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk Fri Jan 8 20:33:42 2016 From: tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk (Tom Ellis) Date: Fri, 8 Jan 2016 20:33:42 +0000 Subject: [Haskell-cafe] Haskell-cafe] Preventing sharing In-Reply-To: <201601082018.u08KIcXu119591@tahoe.cs.Dartmouth.EDU> References: <201601082018.u08KIcXu119591@tahoe.cs.Dartmouth.EDU> Message-ID: <20160108203342.GU21171@weber> On Fri, Jan 08, 2016 at 03:18:38PM -0500, Doug McIlroy wrote: > Though ML was the best of the strict bunch. Haskell's overloading > and cleaner notation made for more perspicuous code. 
It also > enabled elegant code like this for the exponential series: > > exps = 1 + integral exps > > As Oleg pointed out, this won't work in a strict language, and > must be replaced with something much less vivid: > > exps = fix (\s 1 + integral s) Can you explain why the first line won't work for lazy lists in a strict language? It seems perfectly fine to me. Tom From hjgtuyl at chello.nl Fri Jan 8 22:12:05 2016 From: hjgtuyl at chello.nl (Henk-Jan van Tuyl) Date: Fri, 08 Jan 2016 23:12:05 +0100 Subject: [Haskell-cafe] error while setting it up In-Reply-To: References: Message-ID: On Fri, 08 Jan 2016 14:42:47 +0100, Abhijit Patel wrote: > for the below command I am getting errror !! > please suggest me a solution > $make > ===--- building phase 0 > make --no-print-directory -f ghc.mk phase=0 phase_0_builds > ghc.mk:159: *** dyn is not in $(GhcLibWays), but $(DYNAMIC_GHC_PROGRAMS) > is > YES. Stop. > make: *** [all] Error 2 A search on Internet for the message "dyn is not in $(GhcLibWays)" lead me to https://mail.haskell.org/pipermail/ghc-devs/2013-May/001306.html Regards, Henk-Jan van Tuyl -- Folding at home What if you could share your unused computer power to help find a cure? In just 5 minutes you can join the world's biggest networked computer and get us closer sooner. Watch the video. http://folding.stanford.edu/ http://Van.Tuyl.eu/ http://members.chello.nl/hjgtuyl/tourdemonad.html Haskell programming -- From tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk Fri Jan 8 22:23:13 2016 From: tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk (Tom Ellis) Date: Fri, 8 Jan 2016 22:23:13 +0000 Subject: [Haskell-cafe] error while setting it up In-Reply-To: References: Message-ID: <20160108222313.GV21171@weber> On Fri, Jan 08, 2016 at 07:12:47PM +0530, Abhijit Patel wrote: > for the below command I am getting errror !! > please suggest me a solution > $make > ===--- building phase 0 > make --no-print-directory -f ghc.mk phase=0 phase_0_builds > ghc.mk:159: *** dyn is not in $(GhcLibWays), but $(DYNAMIC_GHC_PROGRAMS) is > YES. Stop. > make: *** [all] Error 2 Are you trying to build GHC from source? Are you a beginner? If yes to both then this is a very bad idea. Try something like Stack, which is designed to make it quick and easy to set up GHC and libraries: http://docs.haskellstack.org/en/stable/README.html From jerzy.karczmarczuk at unicaen.fr Fri Jan 8 23:24:23 2016 From: jerzy.karczmarczuk at unicaen.fr (Jerzy Karczmarczuk) Date: Sat, 9 Jan 2016 00:24:23 +0100 Subject: [Haskell-cafe] Lazy series [was : Preventing sharing] In-Reply-To: <20160108203342.GU21171@weber> References: <201601082018.u08KIcXu119591@tahoe.cs.Dartmouth.EDU> <20160108203342.GU21171@weber> Message-ID: <56904527.5060303@unicaen.fr> Le 08/01/2016 21:33, Tom Ellis a ?crit : > On Fri, Jan 08, 2016 at 03:18:38PM -0500, Doug McIlroy wrote: > > exps = 1 + integral exps > > Can you explain why the first line won't work for lazy lists in a strict > language? It seems perfectly fine to me. > > Tom If I understand well the issue, simply because at the RHS exps is not a thunk, it is evaluated, which breaks down the co-recursivity of the definition, even if the operators are delayed. In a strict language you would have to use a kind of macros to make it work. Good, old days... I believe I wrote a paper on lazy power series already in 1996 (but Douglas McI. hasn't read it). (Theoretical Computer Science 187, pp. 203?219, (1997).) 
I still have a copy: https://karczmarczuk.users.greyc.fr/Transport/power.pdf and I tried hard to do some Computer Algebra with that, trying to implement by force some lazy algorithms in a strict CA language MuPAD (similar to Maple, with strong OO structure). It was clumsy and almost nobody was convinced, when I presented this in Paderborn, the home of MuPAD. https://karczmarczuk.users.greyc.fr/Transport/paslid.pdf Then MuPad quit the free software world, got attached to MathWorks, and I simply forgot this experiment. I tried to convince some students to continue it using Mathematica (Hold, HoldRest, etc. permit to construct delayed lists; in general, a rewriting system seems better adapted to such kind of implementation, than purely procedural languages), but the results were ugly, and my dear students asked me to give them some other project... Jerzy Karczmarczuk From noonslists at gmail.com Sat Jan 9 06:16:53 2016 From: noonslists at gmail.com (Noon Silk) Date: Sat, 9 Jan 2016 17:16:53 +1100 Subject: [Haskell-cafe] async docs not on hackage for some reason? Message-ID: Hello, Does anyone have any thoughts why the async docs aren't available on hackage yet? - https://hackage.haskell.org/package/async-2.1.0 It says this version was uploaded a few days ago, but no docs are available. I did a `cabal get` locally and build the docs using stack; so it works fine. -- Noon Silk, ? https://silky.github.io/ "Every morning when I wake up, I experience an exquisite joy ? the joy of being this signature." -------------- next part -------------- An HTML attachment was scrubbed... URL: From tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk Sat Jan 9 08:15:52 2016 From: tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk (Tom Ellis) Date: Sat, 9 Jan 2016 08:15:52 +0000 Subject: [Haskell-cafe] Lazy series [was : Preventing sharing] In-Reply-To: <56904527.5060303@unicaen.fr> References: <201601082018.u08KIcXu119591@tahoe.cs.Dartmouth.EDU> <20160108203342.GU21171@weber> <56904527.5060303@unicaen.fr> Message-ID: <20160109081552.GW21171@weber> On Sat, Jan 09, 2016 at 12:24:23AM +0100, Jerzy Karczmarczuk wrote: > Le 08/01/2016 21:33, Tom Ellis a ?crit : > >On Fri, Jan 08, 2016 at 03:18:38PM -0500, Doug McIlroy wrote: > > > > exps = 1 + integral exps > > > >Can you explain why the first line won't work for lazy lists in a strict > >language? It seems perfectly fine to me. > > If I understand well the issue, simply because at the RHS exps is > not a thunk, it is evaluated, which breaks down the co-recursivity > of the definition, even if the operators are delayed. But if it evaluates to a thunk ... From svenpanne at gmail.com Sat Jan 9 12:28:49 2016 From: svenpanne at gmail.com (Sven Panne) Date: Sat, 9 Jan 2016 13:28:49 +0100 Subject: [Haskell-cafe] async docs not on hackage for some reason? In-Reply-To: References: Message-ID: 2016-01-09 7:16 GMT+01:00 Noon Silk : > Hello, > > Does anyone have any thoughts why the async docs aren't available on > hackage yet? > > - https://hackage.haskell.org/package/async-2.1.0 > > It says this version was uploaded a few days ago, but no docs are > available. I did a `cabal get` locally and build the docs using stack; so > it works fine. > Same here: http://hackage.haskell.org/package/OpenGLRaw http://hackage.haskell.org/package/OpenGL Hackage always had trouble building docs, while e.g. Stackage doesn't seem to have any problem. I don't understand at all why e.g. 
packages I uploaded basically at the same time got their documentation built very quickly, while the packages above have not even been tried yet... :-/ Note: I consider uploading documentation by hand a non-option, it takes way too much time when you maintain several packages and 99% of the time you get the links to other packages wrong. Furthermore, as a user I have no clue if the uploaded documentation *really* matches the uploaded source code. Cheers, S. -------------- next part -------------- An HTML attachment was scrubbed... URL: From tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk Sat Jan 9 15:11:34 2016 From: tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk (Tom Ellis) Date: Sat, 9 Jan 2016 15:11:34 +0000 Subject: [Haskell-cafe] Lazy series [was : Preventing sharing] In-Reply-To: <5690E4AC.8090209@unicaen.fr> References: <201601082018.u08KIcXu119591@tahoe.cs.Dartmouth.EDU> <20160108203342.GU21171@weber> <56904527.5060303@unicaen.fr> <20160109081552.GW21171@weber> <5690D078.1070904@unicaen.fr> <20160109100725.GX21171@weber> <5690E4AC.8090209@unicaen.fr> Message-ID: <20160109151134.GD21171@weber> Sharing a private conversation, with the permission of Jerzy. On Sat, Jan 09, 2016 at 11:45:00AM +0100, Jerzy Karczmarczuk wrote: > Again: *exps = 1 + integral exps* > > I said that if RHS exps is evaluated, it cannot work. > > >>>But if it evaluates to a thunk ... > >>Then, on which premises you call the language "strict"? > > >Why not? > > > >A thunk could either be a delayed function call, or it could be a memoised > >thunk, such as OCaml (a strict language) provides > > > > http://caml.inria.fr/pub/docs/manual-ocaml/libref/Lazy.html [...] > But in the original expression with exps, you don't delay anything. > You don't construct *exps()*, or equivalent. This variable is just > variable, lexical atom, and either you evaluate it or not. If not, > don't call the language strict. If there is a delay form introduced > by the compiler, the language is not strict. > > If you don't agree, I suggest that instead of asking "why this > doesn't work, it should!" , simply implement it. Oleg has already provided most of the solution http://okmij.org/ftp/ML/powser.ml The aim is exps = 1 + integral exps In Oleg's framework the answer is let exps = I.fix (fun e -> int 1 +% integ e) In a hypothetical OCaml with typeclasses, this becomes let exps = I.fix (fun e -> 1 + integ e) The remaining objection is the presence of the fix. The solution here is to allow the language to support recursive bindings of lazy values[1], not just of function values. The we could write let rec exps = 1 + integ exps If you still do not agree I would appreciate it if you could explain why such a language a) could not exist, or b) would not be called "strict" If you're still not convinced, consider a lazy language, Haskell--, which doesn't allow recursive bindings of non-function types. In Haskell-- you *cannot* write exps = 1 + integral exps but you have to write exps = I.fix (\e -> 1 + integral e) So we see that the nice syntax "exps = 1 + integral exps" is not due to laziness (since Haskell-- is lazy, but you cannot write that). Instead the nice syntax is due to lazy recursive bindings, and this sugar can exist as well in a strict language as it can in a lazy one. Tom [1] Note that recursive bindings are anyway just syntactic sugar for a fixed point. 
The definition sum = \x -> case x of [] -> 0 (x:xs) = x + sum xs is (essentially) sugar for sum = fix (\sum' x -> case x of [] -> 0 (x:xs) = x + sum' xs) so adding additional sugar for binding lazy values can hardly spoil anything. From jerzy.karczmarczuk at unicaen.fr Sat Jan 9 17:29:05 2016 From: jerzy.karczmarczuk at unicaen.fr (Jerzy Karczmarczuk) Date: Sat, 9 Jan 2016 18:29:05 +0100 Subject: [Haskell-cafe] Lazy series [was : Preventing sharing] In-Reply-To: <20160109151134.GD21171@weber> References: <201601082018.u08KIcXu119591@tahoe.cs.Dartmouth.EDU> <20160108203342.GU21171@weber> <56904527.5060303@unicaen.fr> <20160109081552.GW21171@weber> <5690D078.1070904@unicaen.fr> <20160109100725.GX21171@weber> <5690E4AC.8090209@unicaen.fr> <20160109151134.GD21171@weber> Message-ID: <56914361.90101@unicaen.fr> Tom Ellis wrote : > consider a lazy language, Haskell--,/which doesn't allow recursive bindings of non-function types./ In Haskell-- you > *cannot* write > > exps = 1 + integral exps > > but you have to write > > exps = I.fix (\e -> 1 + integral e) > > So we see that the nice syntax "exps = 1 + integral exps" is not due to > laziness (since Haskell-- is lazy, but you cannot write that). If you say so... You may always say: "Consider the syntax XXXX. Now, consider a lazy language which doesn't allow XXXX. So, your nice syntax has nothing to do with laziness. QED". Tom, construct such a language, and I might believe you. Also, I recall your former objection, that *exps = 1 + integral exps* should work "for lazy lists" in a strict language. Please, implement it. Since you would need *letrec* anyway, I suggest Scheme (say, Racket). You will see what that implies. Compare the behaviour of strict and lazy Racket. Best regards Jerzy -------------- next part -------------- An HTML attachment was scrubbed... URL: From tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk Sat Jan 9 17:36:54 2016 From: tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk (Tom Ellis) Date: Sat, 9 Jan 2016 17:36:54 +0000 Subject: [Haskell-cafe] Lazy series [was : Preventing sharing] In-Reply-To: <56914361.90101@unicaen.fr> References: <201601082018.u08KIcXu119591@tahoe.cs.Dartmouth.EDU> <20160108203342.GU21171@weber> <56904527.5060303@unicaen.fr> <20160109081552.GW21171@weber> <5690D078.1070904@unicaen.fr> <20160109100725.GX21171@weber> <5690E4AC.8090209@unicaen.fr> <20160109151134.GD21171@weber> <56914361.90101@unicaen.fr> Message-ID: <20160109173654.GA28201@weber> On Sat, Jan 09, 2016 at 06:29:05PM +0100, Jerzy Karczmarczuk wrote: > Tom Ellis wrote : > >consider a lazy language, Haskell--,/which doesn't allow recursive bindings of non-function types./ In Haskell-- you > >*cannot* write > > > > exps = 1 + integral exps > > > >but you have to write > > > > exps = I.fix (\e -> 1 + integral e) > > > >So we see that the nice syntax "exps = 1 + integral exps" is not due to > >laziness (since Haskell-- is lazy, but you cannot write that). > If you say so... > > You may always say: > > "Consider the syntax XXXX. Now, consider a lazy language which > doesn't allow XXXX. > So, your nice syntax has nothing to do with laziness. QED". Granted, but the more important point was the sketch of the strict language which *does* allow it. You have conveniently failed to challenge me on any of the aspects of the very simple design. > Tom, construct such a language, and I might believe you. I remind you that Doug's original claim was "this won't work in a strict language", which he offered without proof, even a sketch of a proof. 
I still hold the onus is on you (or him) to demonstrate it! > Also, I recall your former objection, that > *exps = 1 + integral exps* > > should work "for lazy lists" in a strict language. Please, implement > it. Since you would need *letrec* anyway, I suggest Scheme (say, > Racket). You will see what that implies. Compare the behaviour of > strict and lazy Racket. Maybe since Scheme and Racket are not typed things will go through there. I shall have to look into it. I don't know the languages. Tom From tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk Sat Jan 9 18:02:14 2016 From: tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk (Tom Ellis) Date: Sat, 9 Jan 2016 18:02:14 +0000 Subject: [Haskell-cafe] Lazy series [was : Preventing sharing] In-Reply-To: <56914361.90101@unicaen.fr> References: <201601082018.u08KIcXu119591@tahoe.cs.Dartmouth.EDU> <20160108203342.GU21171@weber> <56904527.5060303@unicaen.fr> <20160109081552.GW21171@weber> <5690D078.1070904@unicaen.fr> <20160109100725.GX21171@weber> <5690E4AC.8090209@unicaen.fr> <20160109151134.GD21171@weber> <56914361.90101@unicaen.fr> Message-ID: <20160109180214.GC28201@weber> On Sat, Jan 09, 2016 at 06:29:05PM +0100, Jerzy Karczmarczuk wrote: > Tom Ellis wrote : > >consider a lazy language, Haskell--,/which doesn't allow recursive bindings of non-function types./ In Haskell-- you > >*cannot* write > > > > exps = 1 + integral exps > > > >but you have to write > > > > exps = I.fix (\e -> 1 + integral e) > > > >So we see that the nice syntax "exps = 1 + integral exps" is not due to > >laziness (since Haskell-- is lazy, but you cannot write that). > > Tom, construct such a language, and I might believe you. By the way, for explicitness, here is my construction of such a language. Take any strict language and extend the rules for let rec such that let rec v = ... v ... means let v = fix (\v' -> ... v' ...) for any v that has lazy type (function types, explicit thunks etc.), where fix is its associated fixpoint operator. Then one can happily write let rec exps = 1 + integral exps because it means exactly Oleg's let exps = fix (\e -> 1 + integral e) Do you say (a) this language can't exist for some reason or (b) it is somehow "not strict"? Tom From arjenvanweelden at gmail.com Sat Jan 9 18:04:17 2016 From: arjenvanweelden at gmail.com (Arjen) Date: Sat, 9 Jan 2016 19:04:17 +0100 Subject: [Haskell-cafe] Lazy series [was : Preventing sharing] In-Reply-To: <20160109173654.GA28201@weber> References: <201601082018.u08KIcXu119591@tahoe.cs.Dartmouth.EDU> <20160108203342.GU21171@weber> <56904527.5060303@unicaen.fr> <20160109081552.GW21171@weber> <5690D078.1070904@unicaen.fr> <20160109100725.GX21171@weber> <5690E4AC.8090209@unicaen.fr> <20160109151134.GD21171@weber> <56914361.90101@unicaen.fr> <20160109173654.GA28201@weber> Message-ID: <56914BA1.9080306@gmail.com> Sorry, forgot to reply to the list. On 01/09/2016 06:36 PM, Tom Ellis wrote: > On Sat, Jan 09, 2016 at 06:29:05PM +0100, Jerzy Karczmarczuk wrote: >> Tom Ellis wrote : >>> consider a lazy language, Haskell--,/which doesn't allow recursive bindings of non-function types./ In Haskell-- you >>> *cannot* write >>> >>> exps = 1 + integral exps >>> >>> but you have to write >>> >>> exps = I.fix (\e -> 1 + integral e) >>> >>> So we see that the nice syntax "exps = 1 + integral exps" is not due to >>> laziness (since Haskell-- is lazy, but you cannot write that). >> If you say so... >> >> You may always say: >> >> "Consider the syntax XXXX. Now, consider a lazy language which >> doesn't allow XXXX. 
>> So, your nice syntax has nothing to do with laziness. QED". > > Granted, but the more important point was the sketch of the strict language > which *does* allow it. You have conveniently failed to challenge me on > any of the aspects of the very simple design. > >> Tom, construct such a language, and I might believe you. > > I remind you that Doug's original claim was "this won't work in a strict > language", which he offered without proof, even a sketch of a proof. I > still hold the onus is on you (or him) to demonstrate it! > If I'm not mistaken, a strict language implies that arguments are evaluated before function calls. To calculate exps, you need to add 1 to the result of the function call integral on argument exps. To evaluate that call, it first evaluates the arguments: exps. And so on... This causes a non-terminating calculation, I would expect. IMHO, unless you add explicit laziness to a strict language, which some do but it requires some extra syntax, this cannot be done. I do believe Scheme or ML does have laziness annotations, and they show in the data types and function call syntax. >> Also, I recall your former objection, that > >> *exps = 1 + integral exps* >> >> should work "for lazy lists" in a strict language. Please, implement >> it. Since you would need *letrec* anyway, I suggest Scheme (say, >> Racket). You will see what that implies. Compare the behaviour of >> strict and lazy Racket. > > Maybe since Scheme and Racket are not typed things will go through there. I > shall have to look into it. I don't know the languages. > > Tom If you propose that a function (or anything with a function type) is implicitly lazy, then I think that you are describing a lazy/non-strict language. kind regards, Arjen From tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk Sat Jan 9 18:11:48 2016 From: tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk (Tom Ellis) Date: Sat, 9 Jan 2016 18:11:48 +0000 Subject: [Haskell-cafe] Lazy series [was : Preventing sharing] In-Reply-To: <56914BA1.9080306@gmail.com> References: <20160108203342.GU21171@weber> <56904527.5060303@unicaen.fr> <20160109081552.GW21171@weber> <5690D078.1070904@unicaen.fr> <20160109100725.GX21171@weber> <5690E4AC.8090209@unicaen.fr> <20160109151134.GD21171@weber> <56914361.90101@unicaen.fr> <20160109173654.GA28201@weber> <56914BA1.9080306@gmail.com> Message-ID: <20160109181148.GD28201@weber> On Sat, Jan 09, 2016 at 07:04:17PM +0100, Arjen wrote: > If I'm not mistaken, a strict language implies that arguments are > evaluated before function calls. To calculate exps, you need to add 1 to > the result of the function call integral on argument exps. Sure, but why should evaluating exps actually do anything? If exps is of function type then evaluating it need not do anything at all! > To evaluate that call, it first evaluates the arguments: exps. And so > on... This causes a non-terminating calculation, I would expect. I disagree, for the reason above. > IMHO, unless you add explicit laziness to a strict language, which > some do but it requires some extra syntax, this cannot be done. You don't even need explicit laziness. Having exps be of function type will do. That means evaluating it will terminate early. > If you propose that a function (or anything with a function type) is > implicitly lazy, then I think that you are describing a > lazy/non-strict language. I'm not sure what you're getting at here. 
I'm not proposing that function application is evaluated lazily, I'm claiming that functions themselves are lazy datatypes since they contain computations that are only run when you applying them to arguments. Tom From arjenvanweelden at gmail.com Sat Jan 9 18:22:48 2016 From: arjenvanweelden at gmail.com (Arjen) Date: Sat, 9 Jan 2016 19:22:48 +0100 Subject: [Haskell-cafe] Lazy series [was : Preventing sharing] In-Reply-To: <20160109181148.GD28201@weber> References: <20160108203342.GU21171@weber> <56904527.5060303@unicaen.fr> <20160109081552.GW21171@weber> <5690D078.1070904@unicaen.fr> <20160109100725.GX21171@weber> <5690E4AC.8090209@unicaen.fr> <20160109151134.GD21171@weber> <56914361.90101@unicaen.fr> <20160109173654.GA28201@weber> <56914BA1.9080306@gmail.com> <20160109181148.GD28201@weber> Message-ID: <56914FF8.4000202@gmail.com> On 01/09/2016 07:11 PM, Tom Ellis wrote: > On Sat, Jan 09, 2016 at 07:04:17PM +0100, Arjen wrote: >> If I'm not mistaken, a strict language implies that arguments are >> evaluated before function calls. To calculate exps, you need to add 1 to >> the result of the function call integral on argument exps. > > Sure, but why should evaluating exps actually do anything? If exps is of > function type then evaluating it need not do anything at all! > >> To evaluate that call, it first evaluates the arguments: exps. And so >> on... This causes a non-terminating calculation, I would expect. > > I disagree, for the reason above. > >> IMHO, unless you add explicit laziness to a strict language, which >> some do but it requires some extra syntax, this cannot be done. > > You don't even need explicit laziness. Having exps be of function type will > do. That means evaluating it will terminate early. > >> If you propose that a function (or anything with a function type) is >> implicitly lazy, then I think that you are describing a >> lazy/non-strict language. > > I'm not sure what you're getting at here. I'm not proposing that function > application is evaluated lazily, I'm claiming that functions themselves are > lazy datatypes since they contain computations that are only run when you > applying them to arguments. > > Tom I was thinking of exps as a value (having a non-function type). Maybe I'm wrong or just not understanding the issue fully. How do you differentiate between expression that are values and those that are function applications on arguments? If I were to print the value of exps, like main = print exps. How would I express this? Or is it print's responsibility to evaluate the argument? Say, exps has type Integer. How does print differentiate between a actual value (print 42) and unevaluated expressions (print exps)? kind regards, Arjen From tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk Sat Jan 9 18:42:25 2016 From: tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk (Tom Ellis) Date: Sat, 9 Jan 2016 18:42:25 +0000 Subject: [Haskell-cafe] Lazy series [was : Preventing sharing] In-Reply-To: <56914FF8.4000202@gmail.com> References: <20160109081552.GW21171@weber> <5690D078.1070904@unicaen.fr> <20160109100725.GX21171@weber> <5690E4AC.8090209@unicaen.fr> <20160109151134.GD21171@weber> <56914361.90101@unicaen.fr> <20160109173654.GA28201@weber> <56914BA1.9080306@gmail.com> <20160109181148.GD28201@weber> <56914FF8.4000202@gmail.com> Message-ID: <20160109184225.GE28201@weber> On Sat, Jan 09, 2016 at 07:22:48PM +0100, Arjen wrote: > I was thinking of exps as a value (having a non-function type). > Maybe I'm wrong or just not understanding the issue fully. 
How do > you differentiate between expression that are values and those that > are function applications on arguments? I'm not sure you really "differentiate" between them. The evaluation of function arguments just has different consequences when those argument values are integers than it does when those argument values are functions. > If I were to print the value of exps, like main = print exps. How > would I express this? Or is it print's responsibility to evaluate > the argument? > Say, exps has type Integer. How does print differentiate between a > actual value (print 42) and unevaluated expressions (print exps)? I'm not quite sure what you're asking, but consider the difference between these two calls to the function id in OCaml. # let id x = x;; val id : 'a -> 'a = # id (Printf.printf "Hello");; Hello- : unit = () # id (fun () -> Printf.printf "Hello");; - : unit -> unit = The correspond to the following in an imaginary impure Haskell-like language: id (print "Hello") "Hello" id (\() -> print "Hello") Tom From arjenvanweelden at gmail.com Sat Jan 9 18:48:04 2016 From: arjenvanweelden at gmail.com (Arjen) Date: Sat, 9 Jan 2016 19:48:04 +0100 Subject: [Haskell-cafe] Lazy series [was : Preventing sharing] In-Reply-To: <20160109184225.GE28201@weber> References: <20160109081552.GW21171@weber> <5690D078.1070904@unicaen.fr> <20160109100725.GX21171@weber> <5690E4AC.8090209@unicaen.fr> <20160109151134.GD21171@weber> <56914361.90101@unicaen.fr> <20160109173654.GA28201@weber> <56914BA1.9080306@gmail.com> <20160109181148.GD28201@weber> <56914FF8.4000202@gmail.com> <20160109184225.GE28201@weber> Message-ID: <569155E4.2030805@gmail.com> On 01/09/2016 07:42 PM, Tom Ellis wrote: > On Sat, Jan 09, 2016 at 07:22:48PM +0100, Arjen wrote: >> I was thinking of exps as a value (having a non-function type). >> Maybe I'm wrong or just not understanding the issue fully. How do >> you differentiate between expression that are values and those that >> are function applications on arguments? > > I'm not sure you really "differentiate" between them. The evaluation of > function arguments just has different consequences when those argument > values are integers than it does when those argument values are functions. > >> If I were to print the value of exps, like main = print exps. How >> would I express this? Or is it print's responsibility to evaluate >> the argument? >> Say, exps has type Integer. How does print differentiate between a >> actual value (print 42) and unevaluated expressions (print exps)? > > I'm not quite sure what you're asking, but consider the difference between > these two calls to the function id in OCaml. > > # let id x = x;; > val id : 'a -> 'a = > > # id (Printf.printf "Hello");; > Hello- : unit = () > > # id (fun () -> Printf.printf "Hello");; > - : unit -> unit = > > > The correspond to the following in an imaginary impure Haskell-like > language: > > id (print "Hello") > "Hello" > > id (\() -> print "Hello") > > > I think I see what you mean. Then in OCaml exps would have type unit->Int? And to get the value, you would use exps ()? kind regards, Arjen From abhisandhyasp.ap at gmail.com Sat Jan 9 18:50:53 2016 From: abhisandhyasp.ap at gmail.com (Abhijit Patel) Date: Sun, 10 Jan 2016 00:20:53 +0530 Subject: [Haskell-cafe] Regarding Developing my skills with haskell! Message-ID: Hello, I want to learn some practical application using haskell like some projects or developing a game. Can anyone suggest me some link to develop my skills with haskell ? 
Cheers, Abhijit -------------- next part -------------- An HTML attachment was scrubbed... URL: From tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk Sat Jan 9 19:00:40 2016 From: tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk (Tom Ellis) Date: Sat, 9 Jan 2016 19:00:40 +0000 Subject: [Haskell-cafe] Lazy series [was : Preventing sharing] In-Reply-To: <569155E4.2030805@gmail.com> References: <20160109100725.GX21171@weber> <5690E4AC.8090209@unicaen.fr> <20160109151134.GD21171@weber> <56914361.90101@unicaen.fr> <20160109173654.GA28201@weber> <56914BA1.9080306@gmail.com> <20160109181148.GD28201@weber> <56914FF8.4000202@gmail.com> <20160109184225.GE28201@weber> <569155E4.2030805@gmail.com> Message-ID: <20160109190040.GF28201@weber> On Sat, Jan 09, 2016 at 07:48:04PM +0100, Arjen wrote: > >I'm not quite sure what you're asking, but consider the difference between > >these two calls to the function id in OCaml. > > > ># let id x = x;; > >val id : 'a -> 'a = > > > ># id (Printf.printf "Hello");; > >Hello- : unit = () > > > ># id (fun () -> Printf.printf "Hello");; > >- : unit -> unit = > > > > > >The correspond to the following in an imaginary impure Haskell-like > >language: > > > >id (print "Hello") > >"Hello" > > > >id (\() -> print "Hello") > > > > I think I see what you mean. Then in OCaml exps would have type > unit->Int? And to get the value, you would use exps ()? In OCaml exps could have type A, where A is isomorphic to () -> (Double, A) To get the first value you could (effectively) use 'exps ()'. You get a tail of type A along with it. I suggest you look through Oleg's code. It's quite illuminating. http://okmij.org/ftp/ML/powser.ml Tom From alien11689 at gmail.com Sat Jan 9 19:03:00 2016 From: alien11689 at gmail.com (Dominik Przybysz) Date: Sat, 9 Jan 2016 20:03:00 +0100 Subject: [Haskell-cafe] Regarding Developing my skills with haskell! In-Reply-To: References: Message-ID: Hello, maybe try this https://en.wikibooks.org/wiki/Write_Yourself_a_Scheme_in_48_Hours 2016-01-09 19:50 GMT+01:00 Abhijit Patel : > Hello, > I want to learn some practical application using haskell like some > projects or developing a game. Can anyone suggest me some link to develop > my skills with haskell ? > > Cheers, > Abhijit > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > -- Pozdrawiam, Dominik Przybysz -------------- next part -------------- An HTML attachment was scrubbed... URL: From nikita at karetnikov.org Sat Jan 9 19:54:27 2016 From: nikita at karetnikov.org (Nikita Karetnikov) Date: Sat, 9 Jan 2016 22:54:27 +0300 Subject: [Haskell-cafe] Regarding Developing my skills with haskell! In-Reply-To: References: Message-ID: <20160109195427.GA21779@tau> > > I want to learn some practical application using haskell like some > > projects or developing a game. Can anyone suggest me some link to develop > > my skills with haskell ? How about a tetris? There's a tutorial and a package on Hackage (so you can follow along): https://www.cs.ox.ac.uk/people/ian.lynagh/Hetris/ https://hackage.haskell.org/package/hetris https://www.cs.ox.ac.uk/people/ian.lynagh/Hetris/Hetris.ps It's quite dated, though. Not sure whether it works with recent GHCs. In any case, give it a try! 
From will.yager at gmail.com Sat Jan 9 23:23:26 2016 From: will.yager at gmail.com (William Yager) Date: Sat, 9 Jan 2016 17:23:26 -0600 Subject: [Haskell-cafe] Promotion of field accessors using DataKinds Message-ID: Hello all, Let's say I have some data data Config = Conf { len :: Nat, turboEncabulate :: Bool } Using DataKinds, we can promote Config to a kind and Conf to a type. However, it does not appear that GHC supports e.g. data Thing (conf :: Config) = Thing data Count (n :: Nat) = Count foo :: Thing conf -> Count (len conf) foo Thing = Count That is, it does not appear to properly replace "len conf" with the value of len from conf. Instead, the way I've found to do this is to define class Lengthed (a :: Config) where type Len a :: Nat instance Lengthed (Conf n t) where type Len (Conf n t) = n Now, foo :: Thing conf -> Count (Len conf) works fine. So manually defining a type function that intuitively does the exact same thing as "len" seems to work. Is there a particular reason behind this? Thanks, Will -------------- next part -------------- An HTML attachment was scrubbed... URL: From jeffbrown.the at gmail.com Sun Jan 10 08:09:13 2016 From: jeffbrown.the at gmail.com (Jeffrey Brown) Date: Sun, 10 Jan 2016 00:09:13 -0800 Subject: [Haskell-cafe] pattern matching v. type checking Message-ID: I am, thanks to an idea from Elliot Cameron, using the Functional Graph Library (FGL) to implement [1] something resembling a hypergraph, which I'm calling Mindmap, in which relationships can involve any number of things, including other relationships. (By contrast, in a graph, Edges cannot belong to other Edges; only Nodes can.) Here are the types: -- Exprs (expressions) play Roles in Rels (relationships). -- A k-ary (Arity k) Rel consists of a k-ary template and k members. -- Each k-ary Rel emits k+1 Edges toward the other Exprs: -- one connects it to its RelTplt (relationship template) -- k more connect it to each of its k RelMbrs (relationship members) -- The two paragraphs after it will clear up any questions about the next. type Mindmap = Gr Expr Role data Role = RelTplt | RelMbr RelPos deriving (Show,Read,Eq,Ord) data Expr = Str String | Tplt Arity [String] | Rel Arity -- TODO ? deduce the Arity of a Tplt from its [String] -- TODO ?
deduce from the graph the Arity of a Rel -- rather than carrying it redundantly in the Rel constructor deriving (Show,Read,Eq,Ord) type RelPos = Int -- the k members of a k-ary Rel take RelPos values [1..k] type Arity = Int The following is a Mindmap that represents the expression "dog needs water" using the subexpressions "dog" (a string), "water" (a string), and "_ wants _" (a relationship two things can have, that is a binary Rel): -- mkGraph :: Graph gr => [LNode a] -> [LEdge b] -> gr a b -- that is, mkGraph takes a list of nodes followed by a list of edges g1 :: Mindmap g1 = mkGraph [ (0, Str "dog" ) , (1, stringToTplt "_ wants _" ) -- produces a Tplt with Arity 2 , (3, Str "water" ) , (4, Rel 2 ) ] [ -- "dog wants water" (4,1, RelTplt) -- Node 1 is the Template for the Rel at Node 4 , (4,0, RelMbr 1) -- Node 0 is the 1st Rel Member of the Rel at Node 4 , (4,3, RelMbr 2) -- Node 3 is the 2nd Rel Member of the Rel at Node 4 ] The next Mindmap encodes the previous statement and a second statement stating that the first is dubious: g2 :: Mindmap g2 = mkGraph [ (0, Str "dog" ) , (1, stringToTplt "_ wants _" ) , (3, Str "water" ) , (4, Rel 2 ) , (5, stringToTplt "_ is _") , (6, Str "dubious" ) , (7, Rel 2 ) ] [ -- "dog wants water" is represented just like it was in g1 (4,1,RelTplt), (4,0, RelMbr 1), (4,3,RelMbr 2), -- "[dog wants water] is dubious" (7,5,RelTplt), (7,4,RelMbr 1), -- Node 4, the first Member of this Rel, is itself a Rel (7,6,RelMbr 2) ] I find myself doing a lot of pattern matching that maybe should be type checking instead, to distinguish between the three Expr constructors: For instance, here is a function that, given a Node at which there is a Rel, returns the Tplt for that Rel: tpltForRelAt :: (MonadError String m) => Mindmap -> Node -> m Expr tpltForRelAt g rn = do ir <-isRel g rn if not ir then throwError $ "tpltForRelAt: Label of LNode " ++ show rn ++ " is not a Rel." else return $ fromJust $ lab g $ head [n | (n,RelTplt) <- lsuc g rn] -- todo ? head is unsafe -- but is only unsafe if graph takes an invalid state -- because each Rel should have exactly one Tplt I had to manually check whether the Expr in question was a Rel. I feel like I'm doing the type system's job. Is there a better way? [1] https://github.com/JeffreyBenjaminBrown/digraphs-with-text -- Jeffrey Benjamin Brown -------------- next part -------------- An HTML attachment was scrubbed... URL: From svenpanne at gmail.com Sun Jan 10 19:42:56 2016 From: svenpanne at gmail.com (Sven Panne) Date: Sun, 10 Jan 2016 20:42:56 +0100 Subject: [Haskell-cafe] Merging the OpenGLRaw and gl packages In-Reply-To: References: Message-ID: After some discussions and looking at the diffs needed to make the `luminance` package and Oliver Charles' SSAO-example use OpenGLRaw instead of gl, I decided to change the types of GL_TRUE and GL_FALSE from GLenum to GLboolean. When these enums are used as parameters, their type is almost always GLboolean, with glClampColor being the only exception. Some general retrieval functions like glProgramiv return boolean values as GLint, but that seems to be the rarer use case. OpenGL is very loosely typed, so you will have to use some fromIntegral calls, even if the enum patterns were more polymorphic. After several decades of computer science and having seen tons of bugs caused by them, I have a strong aversion to implicit conversions, so I'm still convinced that the monomorphic enums are the right thing. 
:-) I made a new release of OpenGLRaw ( https://github.com/haskell-opengl/OpenGLRaw/releases/tag/v3.1.0.0), which in addition to this typing change contains some "mkFoo" synonyms for the "makeFoo" functions, too, a difference between OpenGLRaw and gl I didn't notice earlier. Cheers, S. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ollie at ocharles.org.uk Sun Jan 10 20:56:40 2016 From: ollie at ocharles.org.uk (Oliver Charles) Date: Sun, 10 Jan 2016 20:56:40 +0000 Subject: [Haskell-cafe] Merging the OpenGLRaw and gl packages In-Reply-To: References: Message-ID: I'm not really convinced by this. This change introduced an inconsistency and duplication, but doesn't really solve the problem. I already found another enum that has this problem (GL_LINEAR), and I hardly suggest introducing GL_LINEAR to work around that. While I agree that OpenGL is barely typed *statically*, there is a lot of runtime type checking. In practice o always develop with KHR debug as an extension or replay via apitrace, and this always checks ebum values for validity. I think OpenGLRaw would be more practical with gl-style polymorphic patterns On Sun, 10 Jan 2016 7:43 pm Sven Panne wrote: > After some discussions and looking at the diffs needed to make the > `luminance` package and Oliver Charles' SSAO-example use OpenGLRaw instead > of gl, I decided to change the types of GL_TRUE and GL_FALSE from GLenum to > GLboolean. When these enums are used as parameters, their type is almost > always GLboolean, with glClampColor being the only exception. Some general > retrieval functions like glProgramiv return boolean values as GLint, but > that seems to be the rarer use case. OpenGL is very loosely typed, so you > will have to use some fromIntegral calls, even if the enum patterns were > more polymorphic. After several decades of computer science and having seen > tons of bugs caused by them, I have a strong aversion to implicit > conversions, so I'm still convinced that the monomorphic enums are the > right thing. :-) > > I made a new release of OpenGLRaw ( > https://github.com/haskell-opengl/OpenGLRaw/releases/tag/v3.1.0.0), which > in addition to this typing change contains some "mkFoo" synonyms for the > "makeFoo" functions, too, a difference between OpenGLRaw and gl I didn't > notice earlier. > > Cheers, > S. > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > -------------- next part -------------- An HTML attachment was scrubbed... URL: From comp.lang.haskell at liyang.hu Mon Jan 11 04:44:20 2016 From: comp.lang.haskell at liyang.hu (Liyang HU) Date: Mon, 11 Jan 2016 04:44:20 +0000 (UTC) Subject: [Haskell-cafe] ANN: true-name 0.1.0.0 released Message-ID: > Also sent to comp.lang.haskell, & to convince Gmane I'm not top-posting. It is with some shame that I announce ?true-name?, a package to assist one in violating those pesky module abstraction boundaries via the magick of Template Haskell. http://hackage.haskell.org/package/true-name Take ?Control.Concurrent.Chan? for example; you can get your grubby mitts on the ?Chan? data constructor, despite it not being exported. Here we pattern match on it, and bind ?chanR? and ?chanW? to the ?MVar?s containing the read and write ends of the channel respectively: > chan@[truename| ''Chan Chan | chanR chanW |] <- newChan > writeChan chan (42 :: Int) Now, the type of ?chanR? references the unexported ?Stream? and ?ChItem? 
types. We need the ?ChItem? data constructor?which is hidden under a few indirections?but that's not a problem: > streamR <- readMVar chanR > [truename| ''Chan Chan Stream ChItem ChItem | x _ |] <- readMVar streamR > putStrLn $ "chan contains: " ++ show x This gives us a rather dodgy ?peekChan?. This sort of thing is usually a Bad Idea?, but may sometimes be more palatable than the alternatives. Full example: https://github.com/liyang/true-name/blob/master/sanity.hs Taking another example, suppose we want the ?Array? type constructor hidden deep in the bowels of the ?HashMap? implementation: ghci> :set -XQuasiQuotes ghci> import Data.HashMap.Strict ghci> :kind [truename| ''HashMap Full Array |] [truename| ''HashMap Full Array |] :: * -> * The ?Array? data constructor is one more reification away: ghci> :type [truename| ''HashMap Full Array Array |] [truename| ''HashMap Full Array Array |] :: ghc-prim-0.4.0.0:GHC.Prim.Array# a -> unordered-containers-0.2.5.1:Data.HashMap.Array.Array a Please don't flame me. /Liyang From cma at bitemyapp.com Mon Jan 11 07:45:52 2016 From: cma at bitemyapp.com (Christopher Allen) Date: Mon, 11 Jan 2016 01:45:52 -0600 Subject: [Haskell-cafe] New release of the book Haskell Programming from first principles Message-ID: I'd been reticent in the past to announce the book on the mailing list, but it's pretty comprehensive now and we have enough ecstatic readers learning Haskell with it that I thought I'd share what we've been up to. We're writing this Haskell book (http://haskellbook.com/) because many have found learning Haskell to be difficult and it doesn't have to be. We have a strong focus on writing it to be a book for learning and teaching - it's not just a reference or review of topics. Particularly, we strive to make the book suitable for self-learners. We think Haskell is a really nice language and learning Haskell should be as nice as using it is. The new release puts the book at 26 chapters and 1,156 pages. You can track our progress here: http://haskellbook.com/progress.html The latest release included parser combinators, composing types, and monad transformers. My coauthor Julie Moronuki has never programmed before learning Haskell to work with me on this book. She has written about using the book to teach her 10 year old son as well - https://superginbaby.wordpress.com/2015/04/08/teaching-haskell-to-a-10-year-old-day-1/ Julie has also written about learning Haskell more generally - https://superginbaby.wordpress.com/2015/05/30/learning-haskell-the-hard-way/ If you've been reading the book, please speak up and share your thoughts. We have some reader feedback on the site at http://haskellbook.com/feedback.html We'll be looking for a press to do a print run of the book soon as it's about 80% done. If anyone has any pointers or recommendations on whom to work with, particularly university presses, please email me. Cheers everyone, Chris Allen -------------- next part -------------- An HTML attachment was scrubbed... URL: From svenpanne at gmail.com Mon Jan 11 09:42:40 2016 From: svenpanne at gmail.com (Sven Panne) Date: Mon, 11 Jan 2016 10:42:40 +0100 Subject: [Haskell-cafe] Merging the OpenGLRaw and gl packages In-Reply-To: References: Message-ID: 2016-01-10 21:56 GMT+01:00 Oliver Charles : > I'm not really convinced by this. This change introduced an inconsistency > and duplication, but doesn't really solve the problem. I already found > another enum that has this problem (GL_LINEAR), and I hardly suggest > introducing GL_LINEAR to work around that. 
> GL_LINEAR as a parameter is sometimes used as a GLenum (see e.g. glBlitFramebuffer) and sometimes as a GLint (see e.g. glGetTextureParameteriv), and there is no clear winner. > While I agree that OpenGL is barely typed *statically*, there is a lot of > runtime type checking. In practice o always develop with KHR debug as an > extension or replay via apitrace, and this always checks ebum values for > validity. > Yes, using a debug context + glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS) + glDebugMessageCallback during development is always a good idea. Apart from the stateful nature of the API ("this and that is only allowed when we are in state foobar etc."), the whole notion of profiles and extensions makes it fundamentally impossible to have a 100% type-safe API. You can't even e.g. statically tell which set of enums is allowed as a parameter for a given function. > I think OpenGLRaw would be more practical with gl-style polymorphic > patterns > As I said in my previous email: Whenever you use the OpenGL API directly (be it via OpenGLRaw or gl), you *will* have lots of 'fromIntegral's, and the patterns don't make much of a difference. A quick grep showed that your SSAO-example project has 33 fromIntegral calls, and only 2 are caused by the patterns being monomorphic. The luminance package is even more extreme in this respect: It contains 188 fromIntegral calls, and only 2 are caused by the monomorphic patterns. (I may be off by small amount, but that doesn't really change the fact.) So in a nutshell: This is a non-issue in practice and mostly a bikeshedding discussion, and I won't change that aspect of the binding. Cheers, S. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgf.dma at gmail.com Mon Jan 11 10:28:39 2016 From: sgf.dma at gmail.com (Dmitriy Matrosov) Date: Mon, 11 Jan 2016 13:28:39 +0300 Subject: [Haskell-cafe] Why does shift's function return result in Cont and reset reinsert result into Cont again? Message-ID: Hi. I've read "Introduction to Programming with Shift and Reset" by Kenichi Asai and Oleg Kiselyov , where shift and reset defined as > import Control.Monad.Cont > > shift :: ((a -> w) -> Cont w w) -> Cont w a > shift f = cont (flip runCont id . f) > > reset :: Cont a a -> Cont w a > reset = return . flip runCont id > > shiftT :: Monad m => ((a -> m w) -> ContT w m w) -> ContT w m a > shiftT f = ContT (flip runContT return . f) > > resetT :: Monad m => ContT a m a -> ContT w m a > resetT = lift . flip runContT return But why should function f return result in Cont, which i'll unwrap immediately? And why should i reinsert delimited continuation's result into Cont again in reset? Wouldn't it be simpler (better?) 
to just define shift/reset like > shift' :: ((a -> w) -> w) -> Cont w a > shift' = cont > > reset' :: Cont w w -> w > reset' m = runCont m id > > shiftT' :: ((a -> m w) -> m w) -> ContT w m a > shiftT' = ContT > > resetT' :: Monad m => ContT w m w -> m w > resetT' m = runContT m return Moreover, Bubble-up semantics proof of Cont bubble elimination is (literally) correct only with reset' (below is my bubble implementation, may be i've written it wrong?): > data Bubble w a b = Bubble (a -> Bubble w a b) ((a -> w) -> Bubble w a w) > | Value b > > instance Monad (Bubble w a) where > return x = Value x > Value x >>= h = h x > Bubble k f >>= h = Bubble (\x -> k x >>= h) f > > convBub'2 :: Bubble w a b -> Cont w b > convBub'2 (Value x) = return x > convBub'2 (Bubble k f) = cont $ \t -> runCont (fC (\x -> runCont (kC x) t)) id > where > --fC :: (a -> w) -> Cont w w > fC = convBub'2 . f > --kC :: a -> Cont w b > kC = convBub'2 . k > > bubbleResetProp :: Eq w => Bubble w a w -> Bool > bubbleResetProp b@(Value x) = reset' (convBub'2 b) == x > bubbleResetProp b@(Bubble k f) = > reset' (convBub'2 b) == reset' (fC (\x -> reset' (kC x))) > where > --fC :: (a -> w) -> Cont w w > fC = convBub'2 . f > --kC :: a -> Cont w b > kC = convBub'2 . k > > infixl 0 === > (===) :: a -> a -> a > (===) = const > > bubbleResetProof :: Bubble w a w -> w > bubbleResetProof (Bubble k f) = > reset' (cont $ \t -> runCont (fC (\x -> runCont (kC x) t)) id) > === runCont (cont $ \t -> runCont (fC (\x -> runCont (kC x) t)) id) id > === (\t -> runCont (fC (\x -> runCont (kC x) t)) id) id > === runCont (fC (\x -> runCont (kC x) id)) id > === reset' (fC (\x -> reset' (kC x))) > where > --fC :: (a -> w) -> Cont w w > fC = convBub'2 . f > --kC :: a -> Cont w b > kC = convBub'2 . k -- Dmitriy Matrosov -------------- next part -------------- An HTML attachment was scrubbed... URL: From hon.lianhung at gmail.com Mon Jan 11 11:44:03 2016 From: hon.lianhung at gmail.com (Lian Hung Hon) Date: Mon, 11 Jan 2016 19:44:03 +0800 Subject: [Haskell-cafe] Data declaration vs type classes In-Reply-To: References: <20160108111306.GQ21171@weber> Message-ID: Dear all, Thanks for the opinions. I'll go with type classes for now, because as Miguel said, I want it to be open :) Regards, Hon On 8 January 2016 at 19:46, Imants Cekusins wrote: > > you convert String or Text: what encoding would you use? > > let's say, this is very specific conversion where newtypes are used a > lot. There are many different formats for Int (even the same type of > int), String may be ascii, UTF8, ISO-..., you name it. > > using class does not make a difference re: type definition in this case. > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > -------------- next part -------------- An HTML attachment was scrubbed... URL: From adam at well-typed.com Mon Jan 11 13:05:51 2016 From: adam at well-typed.com (Adam Gundry) Date: Mon, 11 Jan 2016 13:05:51 +0000 Subject: [Haskell-cafe] Promotion of field accessors using DataKinds In-Reply-To: References: Message-ID: <5693A8AF.6000603@well-typed.com> Hi Will, On 09/01/16 23:23, William Yager wrote: > data Config = Conf (len :: Nat) (turboEncabulate :: Bool) > However, it does not appear that GHC supports e.g. 
> > data Thing (conf :: Config) = Thing > data Count (n :: Nat) = Count > foo :: Thing conf -> Count (len conf) > foo Thing = Count > > That is, it does not appear to properly replace "len conf" with the > value of len from conf. Indeed, this is not yet supported. GHC doesn't currently have any form of promotion for functions (as opposed to datatypes), including record selectors. Thus when `len` is used in a type, it always refers to a type variable, not a function. You might be interested in the singletons package [1], which automatically generates promoted functions using Template Haskell. A full treatment of function promotion is an open research problem, because it requires reconciling the non-trivial differences between term-level functions and type families (which aren't really functions at all). In the meantime, your only options are separately defining type families corresponding to functions, either manually or via TH. > Instead, the way I've found to do this is to define > > class Lengthed (a :: Config) where > type Len a :: Nat > instance Lengthed (Conf n) where > type Len (Conf n t) = n > > Now, > > foo :: Thing conf -> Count (Len conf) > > works fine. So manually defining a type function that intuitively does > the exact same thing as "len" seems to work. Do note that the class isn't really necessary here: you could simply define type family Len (c :: Config) :: Nat where Len (Conf n t) = n Best regards, Adam [1] http://hackage.haskell.org/package/singletons -- Adam Gundry, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From guillaum.bouchard+haskell at gmail.com Mon Jan 11 14:15:34 2016 From: guillaum.bouchard+haskell at gmail.com (Guillaume Bouchard) Date: Mon, 11 Jan 2016 15:15:34 +0100 Subject: [Haskell-cafe] Data declaration vs type classes In-Reply-To: References: <20160108111306.GQ21171@weber> Message-ID: In case of the data approach, `GroceryTask` and `LaundryTask` are the same type: `Task`. Hence you can have some kind of "dynamic" polymorphism (or "dynamic" dispatch) by storing a list of homogeneous types (`Task`) with heterogeneous behaviors. For example, imagine you want to store a todo list and do all task of the todo list. data Task = GroceryTask | LaundryTask doTask GroceryTask = putStrLn "grocery" doTask LaundryTask = putStrLn "laundry" todoList :: [Task] todoList = [GroceryTask, LaundryTask, GroceryTask] doAllTasks :: [Task] -> IO () doAllTasks tasks = mapM_ doTask tasks However, In the case of the class approach data GroceryTask data LaundryTask class Task t where doTask :: t -> IO () instance Task GroceryTask where doTask t = putStrLn "grocery" instance Task LaundryTask where doTask t = putSTrLn "laundry" doAllTask :: [?????] -> IO () In this case, GroceryTask and LaundryTask are NOT the same type, hence the "????", you cannot create a list which stores different Tasks and returns apply However you can still wrap them inside a sum type : data DoableTask = DoableGrocery GroceryTask | DoableLaundry LaundryTask instance Task DoableTask where doTask (DoableGrocery t) = doTask t doTask (DoableLaundry t) = doTask t (Open question: is there a hack / tool / library / Template Haskell solution to generate this kind of stuff ?) There is other solutions, you can partially apply the doTask function, for examples: todoList :: [IO ()] todoList = [doTask GroceryTask, doTask LaundryTask, doTask GroceryTask] (Another open question, is there a simple solution to do a map over an literal heterogeneous list to get an homogeneous one?) 
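(Leaving the two open questions aside, here is a small self-contained sketch of the partial-application variant just described; the nullary constructors and the runTodoList helper are my additions, since the fragments above declare GroceryTask and LaundryTask without any values:)

data GroceryTask = GroceryTask
data LaundryTask = LaundryTask

class Task t where
  doTask :: t -> IO ()

instance Task GroceryTask where
  doTask _ = putStrLn "grocery"

instance Task LaundryTask where
  doTask _ = putStrLn "laundry"

-- Each element already has type IO (), so the list is homogeneous;
-- nothing runs until the actions are sequenced.
todoList :: [IO ()]
todoList = [doTask GroceryTask, doTask LaundryTask, doTask GroceryTask]

-- runTodoList todoList prints "grocery", "laundry", "grocery".
runTodoList :: [IO ()] -> IO ()
runTodoList = sequence_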
Thank to laziness, this works, but can be really boring to implement. There is other solution using existential types or heterogeneous lists. I'm still looking for a good discussion about which one to use when we focus on performance. So, finally, there is no simple solution. If your type is close and really represents a choice between a set of possibilities and that you know you want a kind of dynamic dispatch, definitely go for the data approach. Else, the class approach is easier to extend at the cost of a lot of boilerplate when you want dynamic dispatch... On Mon, Jan 11, 2016 at 12:44 PM, Lian Hung Hon wrote: > Dear all, > > Thanks for the opinions. I'll go with type classes for now, because as > Miguel said, I want it to be open :) > > Regards, > Hon > > On 8 January 2016 at 19:46, Imants Cekusins wrote: >> >> > you convert String or Text: what encoding would you use? >> >> let's say, this is very specific conversion where newtypes are used a >> lot. There are many different formats for Int (even the same type of >> int), String may be ascii, UTF8, ISO-..., you name it. >> >> using class does not make a difference re: type definition in this case. >> _______________________________________________ >> Haskell-Cafe mailing list >> Haskell-Cafe at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > From oleg.grenrus at iki.fi Mon Jan 11 16:54:20 2016 From: oleg.grenrus at iki.fi (Oleg Grenrus) Date: Mon, 11 Jan 2016 18:54:20 +0200 Subject: [Haskell-cafe] [RFC] github-0.14.0 release candidate Message-ID: Hi, There are lot of breaking changes In upcoming github package, which provides accoss to the Github API, v3 [1] I?d like to hear feedback before pushing the actual release. There are many breaking changes, and would be nice to avoid new breaking changes very soon. So if you spot something which can still be fixed, don?t hesitate to contact me or create a issue on Github [3]. You can find the release candidate at Hackage [2]. Cheers, Oleg Grenrus - [1]: https://developer.github.com/v3/ - [2]: http://hackage.haskell.org/package/github-0.14.0/candidate - [3]: https://github.com/phadej/github/issues -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 842 bytes Desc: Message signed with OpenPGP using GPGMail URL: From will.yager at gmail.com Mon Jan 11 20:20:54 2016 From: will.yager at gmail.com (Will Yager) Date: Mon, 11 Jan 2016 16:20:54 -0400 Subject: [Haskell-cafe] Data declaration vs type classes In-Reply-To: References: <20160108111306.GQ21171@weber> Message-ID: <47493CEA-CC82-4081-A516-394D6E2E3650@gmail.com> You can do this using ExistentialQuantification. > On Jan 11, 2016, at 10:15, Guillaume Bouchard wrote: > > > doAllTask :: [?????] -> IO () > > In this case, GroceryTask and LaundryTask are NOT the same type, hence > the "????", you cannot create a list which stores different Tasks and > returns apply > From dedgrant at gmail.com Mon Jan 11 23:20:18 2016 From: dedgrant at gmail.com (Darren Grant) Date: Mon, 11 Jan 2016 15:20:18 -0800 Subject: [Haskell-cafe] New release of the book Haskell Programming from first principles In-Reply-To: References: Message-ID: That is a healthy sum of work! Been keeping tabs and looking forward to the final product. 
Still have that spot on the shelf at home, and increasing rationale for the office. Cheers, Darren On Jan 10, 2016 23:45, "Christopher Allen" wrote: > > I'd been reticent in the past to announce the book on the mailing list, but it's pretty comprehensive now and we have enough ecstatic readers learning Haskell with it that I thought I'd share what we've been up to. > > We're writing this Haskell book (http://haskellbook.com/) because many have found learning Haskell to be difficult and it doesn't have to be. We have a strong focus on writing it to be a book for learning and teaching - it's not just a reference or review of topics. Particularly, we strive to make the book suitable for self-learners. We think Haskell is a really nice language and learning Haskell should be as nice as using it is. > > The new release puts the book at 26 chapters and 1,156 pages. You can track our progress here: http://haskellbook.com/progress.html > > The latest release included parser combinators, composing types, and monad transformers. > > My coauthor Julie Moronuki has never programmed before learning Haskell to work with me on this book. She has written about using the book to teach her 10 year old son as well - https://superginbaby.wordpress.com/2015/04/08/teaching-haskell-to-a-10-year-old-day-1/ > > Julie has also written about learning Haskell more generally - https://superginbaby.wordpress.com/2015/05/30/learning-haskell-the-hard-way/ > > If you've been reading the book, please speak up and share your thoughts. We have some reader feedback on the site at http://haskellbook.com/feedback.html > > We'll be looking for a press to do a print run of the book soon as it's about 80% done. If anyone has any pointers or recommendations on whom to work with, particularly university presses, please email me. > > Cheers everyone, > Chris Allen > > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > -------------- next part -------------- An HTML attachment was scrubbed... URL: From haskell at erebe.eu Tue Jan 12 08:35:52 2016 From: haskell at erebe.eu (=?UTF-8?Q?Romain_G=C3=A9rard?=) Date: Tue, 12 Jan 2016 09:35:52 +0100 Subject: [Haskell-cafe] New release of the book Haskell Programming from first principles In-Reply-To: References: Message-ID: <11eb0f3907152487201f5e0caa076ab4@erebe.eu> Thanks to the hard work ! Do you plan to sell hardcopy of it, or only pdf/epub ? Regards, Romain Le 2016-01-12 00:20, Darren Grant a ?crit : > That is a healthy sum of work! Been keeping tabs and looking forward to the final product. Still have that spot on the shelf at home, and increasing rationale for the office. > > Cheers, > Darren > > On Jan 10, 2016 23:45, "Christopher Allen" wrote: >> >> I'd been reticent in the past to announce the book on the mailing list, but it's pretty comprehensive now and we have enough ecstatic readers learning Haskell with it that I thought I'd share what we've been up to. >> >> We're writing this Haskell book (http://haskellbook.com/ [2]) because many have found learning Haskell to be difficult and it doesn't have to be. We have a strong focus on writing it to be a book for learning and teaching - it's not just a reference or review of topics. Particularly, we strive to make the book suitable for self-learners. We think Haskell is a really nice language and learning Haskell should be as nice as using it is. >> >> The new release puts the book at 26 chapters and 1,156 pages. 
You can track our progress here: http://haskellbook.com/progress.html [3] >> >> The latest release included parser combinators, composing types, and monad transformers. >> >> My coauthor Julie Moronuki has never programmed before learning Haskell to work with me on this book. She has written about using the book to teach her 10 year old son as well - https://superginbaby.wordpress.com/2015/04/08/teaching-haskell-to-a-10-year-old-day-1/ [4] >> >> Julie has also written about learning Haskell more generally - https://superginbaby.wordpress.com/2015/05/30/learning-haskell-the-hard-way/ [5] >> >> If you've been reading the book, please speak up and share your thoughts. We have some reader feedback on the site at http://haskellbook.com/feedback.html [6] >> >> We'll be looking for a press to do a print run of the book soon as it's about 80% done. If anyone has any pointers or recommendations on whom to work with, particularly university presses, please email me. >> >> Cheers everyone, >> Chris Allen >> >> >> _______________________________________________ >> Haskell-Cafe mailing list >> Haskell-Cafe at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe [1] >> > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe [1] Links: ------ [1] http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe [2] http://haskellbook.com/ [3] http://haskellbook.com/progress.html [4] https://superginbaby.wordpress.com/2015/04/08/teaching-haskell-to-a-10-year-old-day-1/ [5] https://superginbaby.wordpress.com/2015/05/30/learning-haskell-the-hard-way/ [6] http://haskellbook.com/feedback.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From d12frosted at icloud.com Tue Jan 12 08:37:20 2016 From: d12frosted at icloud.com (Boris) Date: Tue, 12 Jan 2016 10:37:20 +0200 Subject: [Haskell-cafe] New release of the book Haskell Programming from first principles In-Reply-To: <11eb0f3907152487201f5e0caa076ab4@erebe.eu> References: <11eb0f3907152487201f5e0caa076ab4@erebe.eu> Message-ID: <96419412-F27C-4636-98BB-5A28472FA4EA@icloud.com> Hey Romain, I think this what you are looking for: > We'll be looking for a press to do a print run of the book soon as it's about 80% done. If anyone has any pointers or recommendations on whom to work with, particularly university presses, please email me. ~ Boris > On Jan 12, 2016, at 10:35 AM, Romain G?rard wrote: > > Thanks to the hard work ! > > Do you plan to sell hardcopy of it, or only pdf/epub ? > > Regards, > Romain > > > Le 2016-01-12 00:20, Darren Grant a ?crit : > >> That is a healthy sum of work! Been keeping tabs and looking forward to the final product. Still have that spot on the shelf at home, and increasing rationale for the office. >> >> Cheers, >> Darren >> >> On Jan 10, 2016 23:45, "Christopher Allen" wrote: >> > >> > I'd been reticent in the past to announce the book on the mailing list, but it's pretty comprehensive now and we have enough ecstatic readers learning Haskell with it that I thought I'd share what we've been up to. >> > >> > We're writing this Haskell book (http://haskellbook.com/) because many have found learning Haskell to be difficult and it doesn't have to be. We have a strong focus on writing it to be a book for learning and teaching - it's not just a reference or review of topics. Particularly, we strive to make the book suitable for self-learners. 
We think Haskell is a really nice language and learning Haskell should be as nice as using it is. >> > >> > The new release puts the book at 26 chapters and 1,156 pages. You can track our progress here: http://haskellbook.com/progress.html >> > >> > The latest release included parser combinators, composing types, and monad transformers. >> > >> > My coauthor Julie Moronuki has never programmed before learning Haskell to work with me on this book. She has written about using the book to teach her 10 year old son as well - https://superginbaby.wordpress.com/2015/04/08/teaching-haskell-to-a-10-year-old-day-1/ >> > >> > Julie has also written about learning Haskell more generally - https://superginbaby.wordpress.com/2015/05/30/learning-haskell-the-hard-way/ >> > >> > If you've been reading the book, please speak up and share your thoughts. We have some reader feedback on the site at http://haskellbook.com/feedback.html >> > >> > We'll be looking for a press to do a print run of the book soon as it's about 80% done. If anyone has any pointers or recommendations on whom to work with, particularly university presses, please email me. >> > >> > Cheers everyone, >> > Chris Allen >> > >> > >> > _______________________________________________ >> > Haskell-Cafe mailing list >> > Haskell-Cafe at haskell.org >> > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> > >> >> >> _______________________________________________ >> Haskell-Cafe mailing list >> >> Haskell-Cafe at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe From f.occhipinti at gmail.com Tue Jan 12 10:28:02 2016 From: f.occhipinti at gmail.com (Francesco Occhipinti) Date: Tue, 12 Jan 2016 11:28:02 +0100 Subject: [Haskell-cafe] New release of the book Haskell Programming from first principles In-Reply-To: References: Message-ID: Hello Chris and thanks for your effort in making Haskell more understandable to everyone. I hope that you will be open to an opinion which differs from the many enthusiastic comments you usually receive. I do not want to sound grumpy, but i need to say that i am not ecstatic about the idea of this book, so i hope that it will not become a sort of mandatory reference for the Haskell community. I do not consider the book and its research effort a bad thing, but i value existing resources and processes used by the Haskell community to document the language and the related theory. I don't think that getting into the details is useful here, i just want to mention that someone might be not interested in this project, and i hope that the choice not to read the book will be respected in all Haskell's public fora. I sincerely hope not to start a flame. You do not have to convince me, i might buy the book tomorrow. I just want to mention the risk to consider this very extensive and comprehensive work as the *only* or the *best* way to learn Haskell. This would take some precious diversity away from us. I hope that most people will understand the spirit of this remark. Cheers, Francesco Occhipinti 2016-01-11 8:45 GMT+01:00 Christopher Allen : > I'd been reticent in the past to announce the book on the mailing list, > but it's pretty comprehensive now and we have enough ecstatic readers > learning Haskell with it that I thought I'd share what we've been up to. 
> > We're writing this Haskell book (http://haskellbook.com/) because many > have found learning Haskell to be difficult and it doesn't have to be. We > have a strong focus on writing it to be a book for learning and teaching - > it's not just a reference or review of topics. Particularly, we strive to > make the book suitable for self-learners. We think Haskell is a really nice > language and learning Haskell should be as nice as using it is. > > The new release puts the book at 26 chapters and 1,156 pages. You can > track our progress here: http://haskellbook.com/progress.html > > The latest release included parser combinators, composing types, and monad > transformers. > > My coauthor Julie Moronuki has never programmed before learning Haskell to > work with me on this book. She has written about using the book to teach > her 10 year old son as well - > https://superginbaby.wordpress.com/2015/04/08/teaching-haskell-to-a-10-year-old-day-1/ > > Julie has also written about learning Haskell more generally - > https://superginbaby.wordpress.com/2015/05/30/learning-haskell-the-hard-way/ > > If you've been reading the book, please speak up and share your thoughts. > We have some reader feedback on the site at > http://haskellbook.com/feedback.html > > We'll be looking for a press to do a print run of the book soon as it's > about 80% done. If anyone has any pointers or recommendations on whom to > work with, particularly university presses, please email me. > > Cheers everyone, > Chris Allen > > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From haskell at erebe.eu Tue Jan 12 12:23:36 2016 From: haskell at erebe.eu (=?UTF-8?Q?Romain_G=C3=A9rard?=) Date: Tue, 12 Jan 2016 13:23:36 +0100 Subject: [Haskell-cafe] New release of the book Haskell Programming from first principles In-Reply-To: References: Message-ID: <16d5c3a9f197c263aa3053e7615424e8@erebe.eu> Hello, Can you explain your point a bit more ? How can more learning material can be a bad thing ? I have bought nearly every books regarding haskell but for now every single one fall into those 3 categories. * OUTDATED -> Real World Haskell, Programming in Haskell * TOO SPECIFIC -> Parallel and Concurrent Programming in Haskell, Haskell Data Analysis Cookbook * ONLY FOR BEGINNERS -> Learn you a haskell for great good, Thinking Functionally with Haskell, Beginning Haskell: A Project-Based Approach I plan to buy "HASKELL DESIGN PATTERNS" and I have great hope for this one, but for now I think when learning haskell there is a missing step after being intermediate. My only good ressources to advance in haskell were the haskell wikibook (great stuff) [7], and blogposts where you can find more about traversable, foldable, generics, Free monads, GADTS, Template Haskells, comonads, lens, how to handle exceptions, ... Those topics are not uncomonn in daily haskell programming, but are not present in learning materials. If this book can cover all of this, I will gladly accept it as a classical to have in your bookshelf. As I am not very sure about why you are not entousiastics about this one, can you please explain how this book approch differs from the others and why it will impact negatively the actual ecosystem ? 
Regards, Romain Le 2016-01-12 11:28, Francesco Occhipinti a ?crit : > Hello Chris and thanks for your effort in making Haskell more understandable to everyone. I hope that you will be open to an opinion which differs from the many enthusiastic comments you usually receive. > > I do not want to sound grumpy, but i need to say that i am not ecstatic about the idea of this book, so i hope that it will not become a sort of mandatory reference for the Haskell community. > > I do not consider the book and its research effort a bad thing, but i value existing resources and processes used by the Haskell community to document the language and the related theory. I don't think that getting into the details is useful here, i just want to mention that someone might be not interested in this project, and i hope that the choice not to read the book will be respected in all Haskell's public fora. > > I sincerely hope not to start a flame. You do not have to convince me, i might buy the book tomorrow. I just want to mention the risk to consider this very extensive and comprehensive work as the *only* or the *best* way to learn Haskell. This would take some precious diversity away from us. > > I hope that most people will understand the spirit of this remark. > > Cheers, > Francesco Occhipinti > > 2016-01-11 8:45 GMT+01:00 Christopher Allen : > >> I'd been reticent in the past to announce the book on the mailing list, but it's pretty comprehensive now and we have enough ecstatic readers learning Haskell with it that I thought I'd share what we've been up to. >> >> We're writing this Haskell book (http://haskellbook.com/ [1]) because many have found learning Haskell to be difficult and it doesn't have to be. We have a strong focus on writing it to be a book for learning and teaching - it's not just a reference or review of topics. Particularly, we strive to make the book suitable for self-learners. We think Haskell is a really nice language and learning Haskell should be as nice as using it is. >> >> The new release puts the book at 26 chapters and 1,156 pages. You can track our progress here: http://haskellbook.com/progress.html [2] >> >> The latest release included parser combinators, composing types, and monad transformers. >> >> My coauthor Julie Moronuki has never programmed before learning Haskell to work with me on this book. She has written about using the book to teach her 10 year old son as well - https://superginbaby.wordpress.com/2015/04/08/teaching-haskell-to-a-10-year-old-day-1/ [3] >> >> Julie has also written about learning Haskell more generally - https://superginbaby.wordpress.com/2015/05/30/learning-haskell-the-hard-way/ [4] >> >> If you've been reading the book, please speak up and share your thoughts. We have some reader feedback on the site at http://haskellbook.com/feedback.html [5] >> >> We'll be looking for a press to do a print run of the book soon as it's about 80% done. If anyone has any pointers or recommendations on whom to work with, particularly university presses, please email me. 
>> >> Cheers everyone, >> Chris Allen >> >> _______________________________________________ >> Haskell-Cafe mailing list >> Haskell-Cafe at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe [6] > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe [6] Links: ------ [1] http://haskellbook.com/ [2] http://haskellbook.com/progress.html [3] https://superginbaby.wordpress.com/2015/04/08/teaching-haskell-to-a-10-year-old-day-1/ [4] https://superginbaby.wordpress.com/2015/05/30/learning-haskell-the-hard-way/ [5] http://haskellbook.com/feedback.html [6] http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe [7] https://en.wikibooks.org/wiki/Haskell -------------- next part -------------- An HTML attachment was scrubbed... URL: From johannes.waldmann at htwk-leipzig.de Tue Jan 12 14:20:23 2016 From: johannes.waldmann at htwk-leipzig.de (Johannes Waldmann) Date: Tue, 12 Jan 2016 15:20:23 +0100 Subject: [Haskell-cafe] New release of the book Haskell Programming from first principles Message-ID: <56950BA7.6050107@htwk-leipzig.de> Chris - Impressive amount of work! Since this is "cafe" ... allow me to put my standard rant here: Browsing your table of contents, I certainly agree that lambda calculus is a first principle. But when I teach (beginners or not) I put algebraic data types and pattern matching even before functions. Because I want to first describe data, then transformation of data. (Yes, I know, we can simulate data with functions. We can also simulate lambda calculus with term rewriting. No winner there..) And testing really should go as early as possible (lecture 1 or 2, ideally). I don't really know how to do this from first principles and early, because we need some type-classes here (deriving Eq and Show, and also Serial, for automated test case generation. Which should be a strong selling point.) And I do avoid numbers, lists and strings. Really. Functional programming tragically over-uses lists, and by extension, strings. (Of course, other languages also tragically over-use strings.) When I need a list, I have the students write down the data declaration. When I need numbers (mostly I don't), it's Peano numbers. Yes, lazy lists have a purpose in Haskell (expressing control) but this only works if they're being fused away. Later, when we have modules and type classes (for abstract data types), students can learn to use different representations for sequences. But if you start feeding them lists/strings, they'll think it's natural, and have a hard time switching to Data.Sequence, Data.Vector, Data.Text ... I know I did. Well. I'm sure you've given this a lot of thought. I see that you refer to Brent Yorgey's course https://github.com/bitemyapp/learnhaskell#yorgeys-cis194-course and there we have "data"/"case" in lecture 2. Oh, and I do have to read up on how he sells Applicative/Monad. For me, Monad feels more natural, but that's just because I learned it earlier, and now I have to unlearn it. And that illustrates the point I'm making about lists and strings. Best - Johannes. 
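As a minimal sketch of the data-first style described above (illustrative only, not taken from Johannes's course material), a first lesson can introduce nothing but declarations and the pattern matches that consume them:

    data Nat = Zero | Succ Nat
      deriving (Eq, Show)

    data List a = Nil | Cons a (List a)
      deriving (Eq, Show)

    -- Transformations come second, by pattern matching on the data.
    add :: Nat -> Nat -> Nat
    add Zero     n = n
    add (Succ m) n = Succ (add m n)

    len :: List a -> Nat
    len Nil         = Zero
    len (Cons _ xs) = Succ (len xs)

The derived Eq and Show instances are already enough for simple checks in GHCi, e.g. add (Succ Zero) (Succ Zero) == Succ (Succ Zero), before any mention of built-in numbers, lists or strings.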
From jo at durchholz.org Tue Jan 12 15:20:14 2016 From: jo at durchholz.org (Joachim Durchholz) Date: Tue, 12 Jan 2016 16:20:14 +0100 Subject: [Haskell-cafe] New release of the book Haskell Programming from first principles In-Reply-To: <56950BA7.6050107@htwk-leipzig.de> References: <56950BA7.6050107@htwk-leipzig.de> Message-ID: <569519AE.2040202@durchholz.org> Am 12.01.2016 um 15:20 schrieb Johannes Waldmann: > Functional programming tragically over-uses lists, > and by extension, strings. What would be the alternative for the syllabus? Regards, Jo From f.occhipinti at gmail.com Tue Jan 12 17:37:13 2016 From: f.occhipinti at gmail.com (Francesco Occhipinti) Date: Tue, 12 Jan 2016 18:37:13 +0100 Subject: [Haskell-cafe] New release of the book Haskell Programming from first principles In-Reply-To: <16d5c3a9f197c263aa3053e7615424e8@erebe.eu> References: <16d5c3a9f197c263aa3053e7615424e8@erebe.eu> Message-ID: Hello, I never wrote that i consider this book "a bad thing", that would not make any sense. I expressively wrote the contrary in order to avoid misunderstandings, but that was not enough. Let me state it clearly: this book seems to have all the features to become a milestone for the Haskell community, and a reference for future users. It is getting a lot of interest and enthusiasm, and this is great! While more and more people will approach Haskell starting from Chris' book's pages, i hope that beginners will still be pointed to existing Haskell resources maintained by the community, like i was in the past. I was delighted by finding the quality and variety of knowledge in Haskell wikis. I was pointed to papers, i have got used to be exposed to the bare code of core libraries. I thought about the types and the source. I enjoyed my journey through the often blamed existing Haskell documentation, and i hope that people will keep getting lost into it and be passionate about it. Here are some important values i see in the docs coming from the community: * they are communitary - for me this is invaluable. they are an effort we do together, the result of a complex process. we can all try to contribute if we are not satisfied, and beginner's input is precious * they are public, free, not affiliated with any specific company * they are succinct - this always motivated me. i can read and forget 100 explanations about monads, but the type class in itself will always challenge me for its simplicity. one could write the core formulae in a small notebook and keep studying on it for weeks * they are open ended and diverse - this leaves room for criticism and evolution. often, the Haskell guide for something is a paper about the theory of it I think that i received a lot studying Haskell through this community-contributed material, i just want that we keep considering this a viable path, even after the valuable work from Chris will become the common ground for most of us. I want to avoid that in the future, when somebody comes to #haskell with a question, the easy answer would be "go read the 1000 pages and then we can chat". Anyway, from the feedback i have got i realise that there are a lot of strong feelings about this, more than i expected. What moves around this book are all good news for the Haskell community, and i admire how Chris and Julie were able to convey so much content and experience through it. I just wanted to express a different path to learning. I am sorry if my few words, a bit too abstract and cold in their tone, were preceived as gratuitous criticism. 
Cheers, Francesco Occhipinti 2016-01-12 13:23 GMT+01:00 Romain G?rard : > Hello, > > Can you explain your point a bit more ? How can more learning material can > be a bad thing ? > > I have bought nearly every books regarding haskell but for now every > single one fall into those 3 categories. > > - *Outdated* -> Real World Haskell, Programming in Haskell > - *Too* *specific* -> Parallel and Concurrent Programming in Haskell, > Haskell Data Analysis Cookbook > - *Only* *for* *beginners* -> Learn you a haskell for great good, Thinking > Functionally with Haskell, Beginning Haskell: A Project-Based Approach > > I plan to buy "*Haskell Design Patterns*" and I have great hope for this > one, but for now I think when learning haskell there is a missing step > after being intermediate. > My only good ressources to advance in haskell were the haskell wikibook > (great stuff) , and blogposts > where you can find more about traversable, foldable, generics, Free monads, > GADTS, Template Haskells, comonads, lens, how to handle exceptions, ... > > Those topics are not uncomonn in daily haskell programming, but are not > present in learning materials. If this book can cover all of this, I will > gladly accept it as a classical to have in your bookshelf. > > As I am not very sure about why you are not entousiastics about this one, > can you please explain how this book approch differs from the others and > why it will impact negatively the actual ecosystem ? > > Regards, > Romain > > Le 2016-01-12 11:28, Francesco Occhipinti a ?crit : > > Hello Chris and thanks for your effort in making Haskell more > understandable to everyone. I hope that you will be open to an opinion > which differs from the many enthusiastic comments you usually receive. > > I do not want to sound grumpy, but i need to say that i am not ecstatic > about the idea of this book, so i hope that it will not become a sort of > mandatory reference for the Haskell community. > > I do not consider the book and its research effort a bad thing, but i > value existing resources and processes used by the Haskell community to > document the language and the related theory. I don't think that getting > into the details is useful here, i just want to mention that someone might > be not interested in this project, and i hope that the choice not to read > the book will be respected in all Haskell's public fora. > > I sincerely hope not to start a flame. You do not have to convince me, i > might buy the book tomorrow. I just want to mention the risk to consider > this very extensive and comprehensive work as the *only* or the *best* way > to learn Haskell. This would take some precious diversity away from us. > > I hope that most people will understand the spirit of this remark. > > > Cheers, > Francesco Occhipinti > > > 2016-01-11 8:45 GMT+01:00 Christopher Allen : > >> I'd been reticent in the past to announce the book on the mailing list, >> but it's pretty comprehensive now and we have enough ecstatic readers >> learning Haskell with it that I thought I'd share what we've been up to. >> >> We're writing this Haskell book (http://haskellbook.com/) because many >> have found learning Haskell to be difficult and it doesn't have to be. We >> have a strong focus on writing it to be a book for learning and teaching - >> it's not just a reference or review of topics. Particularly, we strive to >> make the book suitable for self-learners. We think Haskell is a really nice >> language and learning Haskell should be as nice as using it is. 
>> >> The new release puts the book at 26 chapters and 1,156 pages. You can >> track our progress here: http://haskellbook.com/progress.html >> >> The latest release included parser combinators, composing types, and >> monad transformers. >> >> My coauthor Julie Moronuki has never programmed before learning Haskell >> to work with me on this book. She has written about using the book to teach >> her 10 year old son as well - >> https://superginbaby.wordpress.com/2015/04/08/teaching-haskell-to-a-10-year-old-day-1/ >> >> Julie has also written about learning Haskell more generally - >> https://superginbaby.wordpress.com/2015/05/30/learning-haskell-the-hard-way/ >> >> If you've been reading the book, please speak up and share your thoughts. >> We have some reader feedback on the site at >> http://haskellbook.com/feedback.html >> >> We'll be looking for a press to do a print run of the book soon as it's >> about 80% done. If anyone has any pointers or recommendations on whom to >> work with, particularly university presses, please email me. >> >> Cheers everyone, >> Chris Allen >> >> >> _______________________________________________ >> Haskell-Cafe mailing list >> Haskell-Cafe at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> >> > _______________________________________________ > Haskell-Cafe mailing listHaskell-Cafe at haskell.orghttp://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > > > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From trebla at vex.net Tue Jan 12 20:38:00 2016 From: trebla at vex.net (Albert Y. C. Lai) Date: Tue, 12 Jan 2016 15:38:00 -0500 Subject: [Haskell-cafe] Why does shift's function return result in Cont and reset reinsert result into Cont again? In-Reply-To: References: Message-ID: <56956428.3080508@vex.net> On 2016-01-11 05:28 AM, Dmitriy Matrosov wrote: > Wouldn't it be simpler (better?) to just define shift/reset like > > > shift' :: ((a -> w) -> w) -> Cont w a > > shift' = cont > > > > reset' :: Cont w w -> w > > reset' m = runCont m id > > > > shiftT' :: ((a -> m w) -> m w) -> ContT w m a > > shiftT' = ContT > > > > resetT' :: Monad m => ContT w m w -> m w > > resetT' m = runContT m return Yes. See also https://www.schoolofhaskell.com/user/dolio/monad-transformers-and-static-effect-scoping#an-alternate-implementation From tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk Tue Jan 12 22:42:29 2016 From: tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk (Tom Ellis) Date: Tue, 12 Jan 2016 22:42:29 +0000 Subject: [Haskell-cafe] New release of the book Haskell Programming from first principles In-Reply-To: References: Message-ID: <20160112224228.GH2385@weber> On Tue, Jan 12, 2016 at 11:28:02AM +0100, Francesco Occhipinti wrote: > I do not consider the book and its research effort a bad thing, but i value > existing resources and processes used by the Haskell community to document > the language and the related theory. 
Fortunately, Haskell does not have a linear type system, thus this book does not destroy previously-existing Haskell books :) From dedgrant at gmail.com Wed Jan 13 04:36:27 2016 From: dedgrant at gmail.com (Darren Grant) Date: Tue, 12 Jan 2016 20:36:27 -0800 Subject: [Haskell-cafe] New release of the book Haskell Programming from first principles In-Reply-To: <20160112224228.GH2385@weber> References: <20160112224228.GH2385@weber> Message-ID: "Fortunately, Haskell does not have a linear type system, thus this book does not destroy previously-existing Haskell books :)" Made my evening. :D Cheers, Darren On Tue, Jan 12, 2016 at 2:42 PM, Tom Ellis < tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk> wrote: > On Tue, Jan 12, 2016 at 11:28:02AM +0100, Francesco Occhipinti wrote: > > I do not consider the book and its research effort a bad thing, but i > value > > existing resources and processes used by the Haskell community to > document > > the language and the related theory. > > Fortunately, Haskell does not have a linear type system, thus this book > does > not destroy previously-existing Haskell books :) > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aditya.siram at gmail.com Wed Jan 13 19:29:45 2016 From: aditya.siram at gmail.com (aditya siram) Date: Wed, 13 Jan 2016 13:29:45 -0600 Subject: [Haskell-cafe] 7.10.3 source link missing In-Reply-To: References: Message-ID: Bump. On Thu, Jan 7, 2016 at 12:31 PM, aditya siram wrote: > Hi all, > Just wanted to make haskell.org maintainers aware that the 7.10.3 release > does not provide a link to the source distribution on the Download page. > https://www.haskell.org/ghc/download_ghc_7_10_3#sources. > > Thanks! > -deech > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrew.gibiansky at gmail.com Wed Jan 13 22:37:54 2016 From: andrew.gibiansky at gmail.com (Andrew Gibiansky) Date: Wed, 13 Jan 2016 22:37:54 +0000 Subject: [Haskell-cafe] Hiring: Haskell at Biotech Startup! Message-ID: Hello haskell-cafe, I'm an engineer at Karius, a "stealth-mode" biotech startup in Menlo Park, CA, and we're looking to hire a few folks to write software (and we use Haskell!). Currently only hiring locally or with relocation (though that could change in the future, so feel free to get in touch regardless!). We are a team of crazy biologists, engineers, data scientists and clinicians on a mission to change forever the way infectious diseases are diagnosed and treated. We face incredibly interesting challenges in software engineering, machine learning and molecular biology, as we push the limits of diagnostics and genomic technologies. We're hiring computational biologists, software engineers and data scientists. If you're a software engineer, we're looking for experience in front-end, back-end, web development, infrastructure, devops, bioinformatics, and machine learning. 
We have a varied list of challenges; we build large data processing pipelines to analyze data from in-house DNA sequencers, separate the signal from the noise and extract what we need, and visualize this in ways that are helpful for scientists and doctors; we build web apps and tools for biologists and doctors to use to plan, conduct, and analyze experiments; we work closely with molecular biologists to analyze data generated by these experiments and develop novel computational biology methods. Our technology stack, as of right now: - Python (for bioinformatics) - Rails (for one backend codebase in maintenance mode) - React and ES6 (for front-end interfaces) - Haskell (for infrastructure and new development) - Backed by AWS and Docker We just put our first large Haskell application into production and are planning on continuing with Haskell; this is an opportunity to use Haskell at a cutting-edge biotechnology startup. If any of this sounds exciting to you, please don't hesitate to get in touch with us by emailing Greg Stock at gstock at kariusdx.com. Take a look at our job postings on AngelList for more detail, though they won't say much about Haskell. You may know me personally from my work with IHaskell and my hindent style; Greg Weber is also here at Karius, whom you may know from his contributions to Persistent, Yesod, and Shelly. -- Andrew Gibiansky -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail at stefanwehr.de Thu Jan 14 09:15:17 2016 From: mail at stefanwehr.de (Stefan Wehr) Date: Thu, 14 Jan 2016 09:15:17 +0000 Subject: [Haskell-cafe] Call for Participation: BOB 2016 (February 19, Berlin) Message-ID: Quick reminder: the early registration deadline for BOB 2016 is this Sunday! We have some interesting Haskell talks and tutorials at BOB 2016! ================================================================ BOB 2016 Conference "What happens if we simply use what's best?" February 19, 2016 Berlin http://bobkonf.de/2016/ Program: http://bobkonf.de/2016/program.html Registration: http://bobkonf.de/2016/registration.html ================================================================ BOB is the conference for developers, architects and decision-makers to explore technologies beyond the mainstream in software development, and to find the best tools available to software developers today. Our goal is for all participants of BOB to return home with new insights that enable them to improve their own software development experiences. The program features 14 talks and 8 tutorials on current topics: http://bobkonf.de/2016/program.html The subject range of talks includes functional programming, advanced front-end development, data management, and sophisticated uses of types. The tutorials feature introductions to Erlang, Haskell, Scala, Isabelle, Purescript, Idris, Akka HTTP, and Specification by Example. Elise Huard will give the keynote talk - about Languages We Love. Registration is open online: http://bobkonf.de/2016/registration.html NOTE: The early-bird rates expire on January 17, 2016! BOB cooperates with the :clojured conference on the following day. There is a registration discount available for participants of both events. http://www.clojured.de/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From thomasmiedema at gmail.com Thu Jan 14 19:19:01 2016 From: thomasmiedema at gmail.com (Thomas Miedema) Date: Thu, 14 Jan 2016 20:19:01 +0100 Subject: [Haskell-cafe] repa parallelization results In-Reply-To: References: Message-ID: Anatoly: I also ran your benchmark, and can not reproduce your findings. Note that GHC does not make effective use of hyperthreads ( https://ghc.haskell.org/trac/ghc/ticket/9221#comment:12). So don't use -N4 when you have only a dual core machine. Maybe that's why you were getting bad results? I also notice a `NaN` in one of your timing results. I don't know how that is possible, or if it affected your results. Could you try running your benchmark again, but this time with -N2? On Sat, Mar 14, 2015 at 5:21 PM, Carter Schonwald < carter.schonwald at gmail.com> wrote: > dense matrix product is not an algorithm that makes sense in repa's > execution model, > Matrix multiplication is the first example in the first repa paper: http://benl.ouroborus.net/papers/repa/repa-icfp2010.pdf. Look at figures 2 and 7. "we measured very good absolute speedup, ?7.2 for 8 cores, on multicore hardware" Doing a quick experiment with 2 threads (my laptop doesn't have more cores): $ cabal install repa-examples # I did not bother with `-fllvm` ... $ ~/.cabal/bin/repa-mmult -random 1024 1024 -random 1024 1204 elapsedTimeMS = 6491 $ ~/.cabal/bin/repa-mmult -random 1024 1024 -random 1024 1204 +RTS -N2 elapsedTimeMS = 3393 This is with GHC 7.10.3 and repa-3.4.0.1 (and dependencies from http://www.stackage.org/snapshot/lts-3.22) -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomasmiedema at gmail.com Thu Jan 14 19:22:19 2016 From: thomasmiedema at gmail.com (Thomas Miedema) Date: Thu, 14 Jan 2016 20:22:19 +0100 Subject: [Haskell-cafe] repa parallelization results In-Reply-To: References: Message-ID: To avoid any confusion, this was a reply to the following email: On Fri, Mar 13, 2015 at 6:23 PM, Anatoly Yakovenko wrote: > https://gist.github.com/aeyakovenko/bf558697a0b3f377f9e8 > > > so i am seeing basically results with N4 that are as good as using > sequential computation on my macbook for the matrix multiply > algorithm. any idea why? > > Thanks, > Anatoly > On Thu, Jan 14, 2016 at 8:19 PM, Thomas Miedema wrote: > Anatoly: I also ran your benchmark, and can not reproduce your findings. > > Note that GHC does not make effective use of hyperthreads ( > https://ghc.haskell.org/trac/ghc/ticket/9221#comment:12). So don't use > -N4 when you have only a dual core machine. Maybe that's why you were > getting bad results? I also notice a `NaN` in one of your timing results. I > don't know how that is possible, or if it affected your results. Could you > try running your benchmark again, but this time with -N2? > > On Sat, Mar 14, 2015 at 5:21 PM, Carter Schonwald < > carter.schonwald at gmail.com> wrote: > >> dense matrix product is not an algorithm that makes sense in repa's >> execution model, >> > > Matrix multiplication is the first example in the first repa paper: > http://benl.ouroborus.net/papers/repa/repa-icfp2010.pdf. Look at figures > 2 and 7. > > "we measured very good absolute speedup, ?7.2 for 8 cores, on > multicore hardware" > > Doing a quick experiment with 2 threads (my laptop doesn't have more > cores): > > $ cabal install repa-examples # I did not bother with `-fllvm` > ... 
> > $ ~/.cabal/bin/repa-mmult -random 1024 1024 -random 1024 1204 > elapsedTimeMS = 6491 > > $ ~/.cabal/bin/repa-mmult -random 1024 1024 -random 1024 1204 +RTS -N2 > elapsedTimeMS = 3393 > > This is with GHC 7.10.3 and repa-3.4.0.1 (and dependencies from > http://www.stackage.org/snapshot/lts-3.22) > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aeyakovenko at gmail.com Thu Jan 14 20:57:12 2016 From: aeyakovenko at gmail.com (Anatoly Yakovenko) Date: Thu, 14 Jan 2016 20:57:12 +0000 Subject: [Haskell-cafe] repa parallelization results In-Reply-To: References: Message-ID: Not sure what changed, but after rerunning it I get expected results: anatolys-MacBook:rbm anatolyy$ dist/build/proto/proto +RTS -N2 benchmarking P time 1.791 s (1.443 s .. 2.304 s) 0.991 R? (0.974 R? .. 1.000 R?) mean 1.803 s (1.750 s .. 1.855 s) std dev 90.06 ms (0.0 s .. 90.90 ms) variance introduced by outliers: 19% (moderately inflated) benchmarking S time 3.225 s (2.685 s .. 3.837 s) 0.996 R? (0.985 R? .. 1.000 R?) mean 3.033 s (2.857 s .. 3.142 s) std dev 165.0 ms (0.0 s .. 188.7 ms) variance introduced by outliers: 19% (moderately inflated) perf log written to dist/perf-mmult.html anatolys-MacBook:rbm anatolyy$ dist/build/proto/proto +RTS -N4 benchmarking P time 1.851 s (1.326 s .. 2.316 s) 0.990 R? (0.964 R? .. 1.000 R?) mean 1.784 s (1.693 s .. 1.901 s) std dev 106.3 ms (0.0 s .. 119.8 ms) variance introduced by outliers: 19% (moderately inflated) benchmarking S time 3.329 s (3.041 s .. 3.944 s) 0.996 R? (0.993 R? .. 1.000 R?) mean 3.173 s (3.100 s .. 3.244 s) std dev 119.6 ms (0.0 s .. 121.9 ms) variance introduced by outliers: 19% (moderately inflated) perf log written to dist/perf-mmult.html anatolys-MacBook:rbm anatolyy$ dist/build/proto/proto +RTS -N benchmarking P time 1.717 s (1.654 s .. 1.830 s) 0.999 R? (0.999 R? .. 1.000 R?) mean 1.717 s (1.701 s .. 1.728 s) std dev 16.64 ms (0.0 s .. 19.20 ms) variance introduced by outliers: 19% (moderately inflated) benchmarking S time 3.127 s (3.079 s .. 3.222 s) 1.000 R? (1.000 R? .. 1.000 R?) mean 3.105 s (3.094 s .. 3.116 s) std dev 18.12 ms (543.9 as .. 18.50 ms) variance introduced by outliers: 19% (moderately inflated) perf log written to dist/perf-mmult.html On Thu, Jan 14, 2016 at 11:22 AM Thomas Miedema wrote: > To avoid any confusion, this was a reply to the following email: > > > On Fri, Mar 13, 2015 at 6:23 PM, Anatoly Yakovenko > wrote: > >> https://gist.github.com/aeyakovenko/bf558697a0b3f377f9e8 >> >> >> so i am seeing basically results with N4 that are as good as using >> sequential computation on my macbook for the matrix multiply >> algorithm. any idea why? >> >> Thanks, >> Anatoly >> > > On Thu, Jan 14, 2016 at 8:19 PM, Thomas Miedema > wrote: > >> Anatoly: I also ran your benchmark, and can not reproduce your findings. >> >> Note that GHC does not make effective use of hyperthreads ( >> https://ghc.haskell.org/trac/ghc/ticket/9221#comment:12). So don't use >> -N4 when you have only a dual core machine. Maybe that's why you were >> getting bad results? I also notice a `NaN` in one of your timing results. I >> don't know how that is possible, or if it affected your results. Could you >> try running your benchmark again, but this time with -N2? 
>> >> On Sat, Mar 14, 2015 at 5:21 PM, Carter Schonwald < >> carter.schonwald at gmail.com> wrote: >> >>> dense matrix product is not an algorithm that makes sense in repa's >>> execution model, >>> >> >> Matrix multiplication is the first example in the first repa paper: >> http://benl.ouroborus.net/papers/repa/repa-icfp2010.pdf. Look at figures >> 2 and 7. >> >> "we measured very good absolute speedup, ?7.2 for 8 cores, on >> multicore hardware" >> >> Doing a quick experiment with 2 threads (my laptop doesn't have more >> cores): >> >> $ cabal install repa-examples # I did not bother with `-fllvm` >> ... >> >> $ ~/.cabal/bin/repa-mmult -random 1024 1024 -random 1024 1204 >> elapsedTimeMS = 6491 >> >> $ ~/.cabal/bin/repa-mmult -random 1024 1024 -random 1024 1204 +RTS -N2 >> elapsedTimeMS = 3393 >> >> This is with GHC 7.10.3 and repa-3.4.0.1 (and dependencies from >> http://www.stackage.org/snapshot/lts-3.22) >> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ky3 at atamo.com Fri Jan 15 11:01:55 2016 From: ky3 at atamo.com (Kim-Ee Yeoh) Date: Fri, 15 Jan 2016 18:01:55 +0700 Subject: [Haskell-cafe] Haskell Weekly News Message-ID: *Top Picks:* - Team GHC announces the first release candidate of version 8.0 . New features include: - Strict and StrictData extensions - TypeFamilyDependencies extension for injective type families - TypeInType extension for more dependent typing hackery - explicit type application in plain Haskell, not Core - Applicative do-notation - a spanking new pattern-match checker - modularization of the ghci interpreter: it can now run as an independent process Note that the announcement includes a list of bugs linked to the new features. - The engineers at an Australian real estate listings website explain how they "used Category Theory to solve a problem in Java." They face the problem of their search API having grown gnarly and inextensible. First they offer a monoid tutorial culminating in SearchResults -> SearchResults endomorphisms. Then they regularize their database lookups as Kleisli-composable instances of a monomorphic state monad of type (DataSource, SearchResults) -> SearchResults. Finally, they profunctorize the state monad for mereological development of the DataSource. So how was the blog post received? A vocal section of the HN community express skepticism . One haskell subredditor found it "an excellent article." - Verity Stob , the doyen of information technology satire, skewers the cargo culting of Functional Programming and by the by writes a monad tutorial (omg!). Haskell redditors chuckle and cluck at the hatchet job . - Team Wander Nauta creates Viskell , "an experimental visual programming environment for a typed (Haskell-like) functional programming language." Programming with touch tablets in mind, he implements Viskell in Java 8 because "Haskell lacks suitable GUI libraries, and we need good multi-touch support." A slides PDF contains more screenshots. Well-received on both Hacker News and /r/haskell . *Quotes of the Week:* - Tom Ellis: In Haskell you don't fight the type system. It fights your bugs. - Jeremy Bowers: The reason I find Haskell interesting is precisely that it's the only place I know where the theoretically-minded and the practically-minded get together and interpollinate. Everywhere else the one group pretty much just sneers at the other. - Redditor lukewarm: Yes, you can write industry quality software in Haskell. 
Do all your computations in the IO monad, keep intermediate results in MVars. Use only Int and String types. Use exceptions to handle errors. Write yourself custom constructs to emulate for and while loops, preferably using Template Haskell. -- Kim-Ee -------------- next part -------------- An HTML attachment was scrubbed... URL: From zilinc.dev at gmail.com Mon Jan 18 00:48:17 2016 From: zilinc.dev at gmail.com (Zilin Chen) Date: Mon, 18 Jan 2016 11:48:17 +1100 Subject: [Haskell-cafe] project compile with multi-versions of alex Message-ID: <569C3651.7030506@gmail.com> Hi Cafe, I have a Haskell project which has ~90 dependencies (transitively) and is managed with plain cabal (w/o stackage, etc.). I try to keep it compile with a few latest releases of GHC (so right now 7.8.4, 7.10.1, 7.10.2 and 7.10.3). It was tractable with conditionals in the cabal config file with minor annoyance. But recently I found this issue:language-c-quote-0.11.3 does not compile with alex 3.1.5 [1]. I was wondering if I should also allow a few versions of executables, like alex, happy, etc (I can see though it will be a huge pain); or should I force all users to use the latest versions of them (I don't know if they are compatible with old ghc versions)? What's the common practice for other projects? Any hints? What should I write in my cabal config? Thanks, Zilin [1]https://github.com/mainland/language-c-quote/issues/57 -------------- next part -------------- An HTML attachment was scrubbed... URL: From manny at fpcomplete.com Mon Jan 18 04:39:35 2016 From: manny at fpcomplete.com (Emanuel Borsboom) Date: Mon, 18 Jan 2016 04:39:35 +0000 Subject: [Haskell-cafe] ANN: stack-1.0.2 Message-ID: New version released of Stack, a build tool. See haskellstack.org for installation and upgrade instructions. Release notes: - Arch Linux: Stack has been adopted into the official community repository , so we will no longer be updating the AUR with new versions. See the install/upgrade guide for current download instructions. Major changes: - stack init and solver overhaul #1583 Other enhancements: - Disable locale/codepage hacks when GHC >=7.10.3 #1552 - Specify multiple images to build for stack image container docs - Specify which executables to include in images for stack image container docs - Docker: pass supplemantary groups and umask into container - If git fetch fails wipe the directory and try again from scratch #1418 - Warn if newly installed executables won?t be available on the PATH #1362 - stack.yaml: for stack image container, specify multiple images to generate, and which executables should be added to those images - GHCI: add interactive Main selection #1068 - Care less about the particular name of a GHCJS sdist folder #1622 - Unified Enable/disable help messaging #1613 Bug fixes: - Don?t share precompiled packages between GHC/platform variants and Docker #1551 - Properly redownload corrupted downloads with the correct file size. Mailing list discussion - Gracefully handle invalid paths in error/warning messages #1561 - Nix: select the correct GHC version corresponding to the snapshot even when an abstract resolver is passed via --resolver on the command-line. #1641 - Fix: Stack does not allow using an external package from ghci #1557 - Disable ambiguous global ??resolver? option for ?stack init? #1531 - Obey --no-nix flag - Fix: GHCJS Execute.hs: Non-exhaustive patterns in lambda #1591 - Send file-watch and sticky logger messages to stderr #1302 #1635 - Use globaldb path for querying Cabal version #1647 ? 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From mainland at apeiron.net Mon Jan 18 05:59:20 2016 From: mainland at apeiron.net (Geoffrey Mainland) Date: Mon, 18 Jan 2016 00:59:20 -0500 Subject: [Haskell-cafe] project compile with multi-versions of alex In-Reply-To: <569C3651.7030506@gmail.com> References: <569C3651.7030506@gmail.com> Message-ID: <569C7F38.70703@apeiron.net> How about using language-c-quote 0.11.4? Cheers, Geoff On 01/17/2016 07:48 PM, Zilin Chen wrote: > Hi Cafe, > > I have a Haskell project which has ~90 dependencies (transitively) and > is managed with plain cabal (w/o stackage, etc.). I try to keep it > compile with a few latest releases of GHC (so right now 7.8.4, 7.10.1, > 7.10.2 and 7.10.3). > It was tractable with conditionals in the cabal config file with minor > annoyance. But recently I found this issue:language-c-quote-0.11.3 > does not compile with alex 3.1.5 [1]. I was wondering if I should also > allow a few versions of executables, like alex, happy, etc (I can see > though it will be a huge pain); or should I force all users to use the > latest versions of them (I don't know if they are compatible with old > ghc versions)? What's the common practice for other projects? Any > hints? What should I write in my cabal config? > > Thanks, > Zilin > > > [1]https://github.com/mainland/language-c-quote/issues/57 From dons00 at gmail.com Mon Jan 18 10:21:56 2016 From: dons00 at gmail.com (Don Stewart) Date: Mon, 18 Jan 2016 10:21:56 +0000 Subject: [Haskell-cafe] Haskell dev roles with Strats at Standard Chartered Message-ID: Hi folks, I'm hiring 3 more devs to write Haskell for Standard Chartered in London and Singapore. Details of the roles below, but broadly in FX algo pricing and pricing automation. Ability to write "tight" total Haskell that can run 24/7 and do the right thing is needed. https://donsbot.wordpress.com/2016/01/18/haskell-developer-roles-at-standard-chartered-london-singapore/ CVs to me at Standard Chartered -- Don -------------- next part -------------- An HTML attachment was scrubbed... URL: From danburton.email at gmail.com Tue Jan 19 04:07:11 2016 From: danburton.email at gmail.com (Dan Burton) Date: Mon, 18 Jan 2016 20:07:11 -0800 Subject: [Haskell-cafe] Stackage is reverting to aeson-0.9 Message-ID: This means that LTS 4 is being discontinued, and LTS 5 is imminent. See the announcement blog post: https://unknownparallel.wordpress.com/2016/01/18/stackage-is-reverting-to-aeson-0-9/ See also related discussion (yesterday): https://www.reddit.com/r/haskell/comments/41gpdk/lts4_with_aeson010_is_being_discontinued_lts5/ If there are any questions or concerns, don't hesitate to ping me. -- Dan Burton -------------- next part -------------- An HTML attachment was scrubbed... URL: From agentm at themactionfaction.com Tue Jan 19 04:30:27 2016 From: agentm at themactionfaction.com (A.M.) Date: Mon, 18 Jan 2016 23:30:27 -0500 Subject: [Haskell-cafe] [ANNOUNCE] Haskell DBMS: Project:M36 In-Reply-To: <470c8ab1-b154-468d-aa13-a0a1ba130229@googlegroups.com> References: <565F2FAF.1090807@themactionfaction.com> <565F4867.2020104@gmail.com> <565F4CFC.9020200@themactionfaction.com> <470c8ab1-b154-468d-aa13-a0a1ba130229@googlegroups.com> Message-ID: <569DBBE3.1020909@themactionfaction.com> On 12/21/2015 11:30 AM, R?mi Vion wrote: > Hello, Project:M36 seems amazing, thanks ! > Thank your also for the link to the "/Out of the Tar Pit" paper/ > (http://shaffner.us/cs/papers/tarpit.pdf > ). Thank you Remi. 
We are working on completing the final components described in precisely this paper. Then, we will publish an essay about how Project:M36 meets the paper's requirements for functional-relational programming. > > Are you aware of people using Project:M36 in production ? The project is very young and unlikely to be used in production. Currently, I would recommend it to users interested in learning about the mathematics behind the relational algebra. In comparison, SQL is a terrible platform for learning about the relational algebra because SQL made some poor historical decisions. Here is an essay on one facet of this argument: https://github.com/agentm/project-m36/blob/master/docs/on_null.markdown > Is there some user feedback available somewhere ? Please use the github issue system if you encounter any problems. Or are you asking for a "frequently-asked questions" section? We should certainly add that. > For small scale applications, do you think M36 is ready enough to be a > viable alternative to Postgres ? Project:M36 would be a viable replacement for PostgreSQL for experimental use only. There are many more optimizations at every level which need to be implemented before Project:M36 is on-par with PostgreSQL and we are working on many new features. Project:M36 has also not yet achieved feature parity with PostgreSQL, though Project:M36 already includes some features impossible in PostgreSQL (such as nested relations). Thanks for trying Project:M36! Any feedback you have would be very valuable. Cheers, M -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: OpenPGP digital signature URL: From fa-ml at ariis.it Tue Jan 19 08:08:59 2016 From: fa-ml at ariis.it (Francesco Ariis) Date: Tue, 19 Jan 2016 09:08:59 +0100 Subject: [Haskell-cafe] Doc generation? In-Reply-To: References: Message-ID: <20160119080859.GA5300@casa.casa> On Tue, Jan 19, 2016 at 04:24:24PM +1100, Noon Silk wrote: > Does anyone know what is happening here? > > Not a single one of the packages on > http://hackage.haskell.org/packages/recent has docs generated at the moment. > > Some older ones, upload this year, also do not - > http://hackage.haskell.org/package/pipes-concurrency Docs not being built can be quite frustrating; for those dark times I build them locally: http://ariis.it/static/articles/no-docs-hackage/page.html Living with a flaky WiFi, saves me from screaming at the monitor quite some times. From plredmond at gmail.com Tue Jan 19 16:39:20 2016 From: plredmond at gmail.com (Patrick Redmond) Date: Tue, 19 Jan 2016 08:39:20 -0800 Subject: [Haskell-cafe] Doc generation? In-Reply-To: <20160119080859.GA5300@casa.casa> References: <20160119080859.GA5300@casa.casa> Message-ID: I don't know what's happening with hackage, but if you're using stack in your workflow a simple workaround is to build docs locally and search them with a shell script. For example: $ stack haddock async And then muck around in .stack-work or ~/.stack. I've written a bash/fish script to do the search for you here: plredmond.github.io/posts/search-haddocks-offline.html On Tuesday, January 19, 2016, Francesco Ariis wrote: > On Tue, Jan 19, 2016 at 04:24:24PM +1100, Noon Silk wrote: > > Does anyone know what is happening here? > > > > Not a single one of the packages on > > http://hackage.haskell.org/packages/recent has docs generated at the > moment. 
> > > > Some older ones, upload this year, also do not - > > http://hackage.haskell.org/package/pipes-concurrency > > Docs not being built can be quite frustrating; for those dark times I > build them locally: > > http://ariis.it/static/articles/no-docs-hackage/page.html > > Living with a flaky WiFi, saves me from screaming at the monitor quite > some times. > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark.fine at gmail.com Tue Jan 19 17:51:40 2016 From: mark.fine at gmail.com (Mark Fine) Date: Tue, 19 Jan 2016 09:51:40 -0800 Subject: [Haskell-cafe] Doc generation? In-Reply-To: References: <20160119080859.GA5300@casa.casa> Message-ID: Also, if the packages are on stackage, you can look at the documentation there: https://www.stackage.org/package/pipes-concurrency Mark On Tue, Jan 19, 2016 at 8:39 AM, Patrick Redmond wrote: > I don't know what's happening with hackage, but if you're using stack in > your workflow a simple workaround is to build docs locally and search them > with a shell script. For example: > > $ stack haddock async > > And then muck around in .stack-work or ~/.stack. I've written a > bash/fish script to do the search for you here: > plredmond.github.io/posts/search-haddocks-offline.html > > On Tuesday, January 19, 2016, Francesco Ariis wrote: > >> On Tue, Jan 19, 2016 at 04:24:24PM +1100, Noon Silk wrote: >> > Does anyone know what is happening here? >> > >> > Not a single one of the packages on >> > http://hackage.haskell.org/packages/recent has docs generated at the >> moment. >> > >> > Some older ones, upload this year, also do not - >> > http://hackage.haskell.org/package/pipes-concurrency >> >> Docs not being built can be quite frustrating; for those dark times I >> build them locally: >> >> http://ariis.it/static/articles/no-docs-hackage/page.html >> >> Living with a flaky WiFi, saves me from screaming at the monitor quite >> some times. >> _______________________________________________ >> Haskell-Cafe mailing list >> Haskell-Cafe at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From john_ericson at brown.edu Tue Jan 19 21:17:02 2016 From: john_ericson at brown.edu (Ericson, John) Date: Tue, 19 Jan 2016 13:17:02 -0800 Subject: [Haskell-cafe] Host-Oriented Template Haskell Message-ID: As is well known, TH and cross-compiling do not get along. There are various proposals on how to make this interaction less annoying, and I am not against them. But as I see it, the problem is largely inherent to the design of TH itself: since values can (usually) be lifted from compile-time to run-time, and normal definitions from upstream modules to downstream modules' TH, TH and normal code must "live in the same world". Now this restriction in turn bequeaths TH with much expressive power, and I wouldn't advocate getting rid of it. But many tasks do not need it, and in some cases, say in bootstrapping compilers[1] themselves, it is impossible to use TH because of it, even were all the current proposals implemented. 
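As a concrete illustration of that coupling, here is a minimal sketch (not part of the proposal; the Upstream module and its defaults value are made up for the example):

    {-# LANGUAGE TemplateHaskell #-}
    module Downstream where

    import Language.Haskell.TH.Syntax (lift)
    import Upstream (defaults)  -- hypothetical upstream module, defaults :: [Int]

    -- The splice runs at compile time, yet it calls 'defaults', a plain
    -- run-time definition from an upstream module, and lifts the result
    -- back into the compiled program.
    total :: Int
    total = $(lift (sum defaults))

Nothing in the splice marks 'defaults' as host-only or target-only; both phases share one namespace and one compiled representation, which is the "same world" restriction described above.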
For these reasons, I propose a new TH variant which has much harsher phase separation. Normal definitions from upstream modules cannot be used, lifting values is either not permitted or is allowed to fail (because of missing/incompatible definitions), and IO is defined to match the behavior of the host, not target, platform (in the cross compiling case). The only interaction between the two phases is that quoted syntax is resolved against the run-time phase's definitions (just like today). Some of you may find this a shoddy substitute for defining a subset of Haskell which behaves identically on all platforms, and optionally constraining TH to it. But the big feature that my proposal offers and that one doesn't is to be able to independently specify compile-time dependencies for the host-oriented TH---this is analogous to the newish `Setup.hs` dependencies. That in turn leads to what I think is the "killer app" for Host-Oriented TH: exposing the various preprocessors we use (alex, happy, hsc2hs, even CPP) as libraries, and side-stepping any need for "executable dependencies" in Cabal. Note also that at least hsc2hs additionally requires host-IO---there may not even exist a C compiler on the target platform at all. Finally, forgive me if this has been brought up before. I've been thinking about this a while, and did a final pass over the GHC wiki to make sure it wasn't already proposed, but I could have missed something (this is also my first post to the list). John [1]: https://github.com/ghcjs/ghcjs/blob/master/lib/ghcjs-prim/GHCJS/Prim/Internal/Build.hs From dxld at darkboxed.org Tue Jan 19 21:19:48 2016 From: dxld at darkboxed.org (Daniel =?iso-8859-1?Q?Gr=F6ber?=) Date: Tue, 19 Jan 2016 22:19:48 +0100 Subject: [Haskell-cafe] [ANN] ghc-mod-5.5.0.0: Happy Haskell Hacking Message-ID: <20160119211948.GA16744@grml> I'm pleased to announce the release of ghc-mod 5.5.0.0! This is primarily a maintenance and bug fix release. We are releasing this as a major version bump as we are following a policy of not trying to keep API compatibility until v6.0 to enable us to clean up ghc-mod's internals and API. What's new? =========== * Cabal flags are now preserved across automatic reconfigurations When ghc-mod detects something influencing the cabal configuration has changed since the last invocation, it will automatically reconfigure the project. Previously this would call 'cabal configure' without any additional options, thus possibly reverting flags the user might have added to the configure command previously. Now we extract the current set of flags from the existing configuration and pass the appropriate options to the configure command. * Rewritten command-line parser (again) The home-grown sub-command parser based on getopt has been a user experience disaster, so we've replaced it with a new optparse-applicative based parser. This does have the unfortunate side effect that we had to remove support for some optional arguments we had supported previously, thus breaking compatibility with very old frontends. * Remove CWD requirement from command-line tools In v5.4.0.0 we had to add a workaround for a nasty race condition in 'ghc-mod legacy-interactive' (ghc-modi), which added a requirement that all ghc-mod command line tools are run in the root of each project's directory. This limitation has now been removed. Frontends which have implemented this workaround should be compatible going forward, but for performance reasons it is advisable to disable the workaround for versions after v5.5.0.0. 
* Various bug fixes and smaller improvements From the change log: * Fix cabal-helper errors when no global GHC is installed (Stack) * Support for spaces in file names when using legacy-interactive * Fix "No instance nor default method for class operation put" * Fix a variety of caching related issues * Emacs: Fix slowdown and bugs caused by excessive use of `map-file` * Emacs: Add ghc-report-errors to inhibit *GHC Error* logging What is ghc-mod? ================ ghc-mod is both a back-end program for enhancing editors and other kinds of development environments with support for Haskell and a library for abstracting the black magic incantations required to use the GHC API in various environments, especially Cabal and Stack projects. The library is used by ambitious projects like HaRe[1], mote[2] and haskell-ide-engine[3] Getting ghc-mod =============== GitHub: https://github.com/DanielG/ghc-mod Hackage: http://hackage.haskell.org/package/ghc-mod Editor frontends: - Emacs (native): https://github.com/DanielG/ghc-mod https://github.com/iquiw/company-ghc - Vim: https://github.com/eagletmt/ghcmod-vim https://github.com/eagletmt/neco-ghc - Atom: https://github.com/atom-haskell/ide-haskell Known issues ============ For issues other than the ones mentioned below visit our issue tracker: https://github.com/DanielG/ghc-mod/issues Frequently reported issues -------------------------- ghc-mod once compiled is bound to one version of GHC since we link against the GHC API library. This used to not be a very big problem but since Stack made it exceedingly easy for users to use more than one version of GHC without even knowing the number of problems in this area has exploded. We are tracing the issue in the following issue: https://github.com/DanielG/ghc-mod/issues/615 (Support switching GHC versions without recompiling ghc-mod) ghc-mod's `case`, `sig` and `refine` commands still do not work properly with GHC>7.10 (See https://github.com/DanielG/ghc-mod/issues/438). Unless someone volunteers to fix this issue I will work towards replacing the features using mote[2] instead as the current code is, from my point of view, unmaintainable. If you do notice any other problems please report them: https://github.com/DanielG/ghc-mod/issues/new ---- [1]: https://github.com/alanz/HaRe [2]: https://github.com/imeckler/mote [3]: https://github.com/haskell/haskell-ide-engine -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From lemming at henning-thielemann.de Tue Jan 19 22:12:02 2016 From: lemming at henning-thielemann.de (Henning Thielemann) Date: Tue, 19 Jan 2016 23:12:02 +0100 (CET) Subject: [Haskell-cafe] CPU with Haskell support Message-ID: Hi all, every now and then I think it would be cool to have a microprocessor that supports Haskell in a way. A processor where lazy evaluation is not overhead but an optimization opportunity, a processor that can make use of the explicit data dependencies in Haskell programs in order to utilize many computation units in parallel. I know of the Reduceron project, which evolves only slowly and if it somewhen is ready for use it is uncertain whether it can compete with stock CPUs since FPGA's need much more chip space for the same logic. I got to know that in todays x86 processors you can alter the instruction set, which is mainly used for bugfixes. Wouldn't it be interesting to add some instructions for Haskell support? 
However, I suspect that such a patch might be rendered invalid by new processor generations with changed internal details. Fortunately, there are processors that are designed for custom instruction set extensions: https://en.wikipedia.org/wiki/Xtensa Would it be sensible to create a processor based on such a design? I have no idea what it might cost, and you would still need some peripheral circuitry to run it. What could processor instructions for Haskell support look like? Has anyone already thought in this direction? From noonslists at gmail.com Tue Jan 19 22:37:26 2016 From: noonslists at gmail.com (Noon Silk) Date: Wed, 20 Jan 2016 09:37:26 +1100 Subject: [Haskell-cafe] Doc generation? In-Reply-To: References: <20160119080859.GA5300@casa.casa> Message-ID: Thanks Mark and All, Indeed, I've been generating the docs myself offline; I didn't know about looking for the docs on stackage; thanks! -- Noon On Wed, Jan 20, 2016 at 4:51 AM, Mark Fine wrote: > Also, if the packages are on stackage, you can look at the documentation > there: > > https://www.stackage.org/package/pipes-concurrency > > Mark > > On Tue, Jan 19, 2016 at 8:39 AM, Patrick Redmond > wrote: > >> I don't know what's happening with hackage, but if you're using stack in >> your workflow a simple workaround is to build docs locally and search them >> with a shell script. For example: >> >> $ stack haddock async >> >> And then muck around in .stack-work or ~/.stack. I've written a >> bash/fish script to do the search for you here: >> plredmond.github.io/posts/search-haddocks-offline.html >> >> On Tuesday, January 19, 2016, Francesco Ariis wrote: >> >>> On Tue, Jan 19, 2016 at 04:24:24PM +1100, Noon Silk wrote: >>> > Does anyone know what is happening here? >>> > >>> > Not a single one of the packages on >>> > http://hackage.haskell.org/packages/recent has docs generated at the >>> moment. >>> > >>> > Some older ones, upload this year, also do not - >>> > http://hackage.haskell.org/package/pipes-concurrency >>> >>> Docs not being built can be quite frustrating; for those dark times I >>> build them locally: >>> >>> http://ariis.it/static/articles/no-docs-hackage/page.html >>> >>> Living with a flaky WiFi, saves me from screaming at the monitor quite >>> some times. >>> _______________________________________________ >>> Haskell-Cafe mailing list >>> Haskell-Cafe at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >>> >> >> _______________________________________________ >> Haskell-Cafe mailing list >> Haskell-Cafe at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> >> > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > -- Noon Silk, ? https://silky.github.io/ "Every morning when I wake up, I experience an exquisite joy ? the joy of being this signature." -------------- next part -------------- An HTML attachment was scrubbed... URL: From mikhail.glushenkov at gmail.com Tue Jan 19 22:40:31 2016 From: mikhail.glushenkov at gmail.com (Mikhail Glushenkov) Date: Tue, 19 Jan 2016 23:40:31 +0100 Subject: [Haskell-cafe] Doc generation? In-Reply-To: References: <20160119080859.GA5300@casa.casa> Message-ID: Hi *, On 19 January 2016 at 23:37, Noon Silk wrote: > Thanks Mark and All, > > Indeed, I've been generating the docs myself offline; I didn't know about > looking for the docs on stackage; thanks! 
Note that with the Git version of cabal-install you can run 'cabal upload --doc' to upload the docs to Hackage manually. From auke at tulcod.com Tue Jan 19 22:44:26 2016 From: auke at tulcod.com (Auke Booij) Date: Tue, 19 Jan 2016 22:44:26 +0000 Subject: [Haskell-cafe] CPU with Haskell support In-Reply-To: References: Message-ID: This question is much more involved than you seem to be suggesting. It's not just about adding "some instructions for Haskell support". You have to think about how you want to express /every/ haskell term as a series of bits (preferably predictably many bits), and find a (finite) combination of logical gates to do arbitrary computations with them. If you want to go anywhere in this directions, perhaps a good start would be implementing a processor with instructions for (untyped) lambda calculus. One approach for this could be to take a (mathematical) model of lambda calculus and see how its elements can be represented as natural numbers. This implementation, I suspect, would be terribly inefficient. Think about what the lambda application gate would look like in terms of NAND gates. Yes, it can probably be done in theory. No, it won't be pretty. And forget about practical. Finally, a major advantage of having such "raw" language as an instruction set is that it allows many many optimizations (e.g. pipelining (which, I would say, is the single most important reason that processors are able to run at GHzs instead of MHzs (Pentium 4 processors, famed for their high clock speed, had 31 pipeline stages))) that I cannot imagine being possible in anything close to a "lambda calculus processor". What is the added value you hope to achieve? On 19 January 2016 at 22:12, Henning Thielemann wrote: > > Hi all, > > every now and then I think it would be cool to have a microprocessor that > supports Haskell in a way. A processor where lazy evaluation is not overhead > but an optimization opportunity, a processor that can make use of the > explicit data dependencies in Haskell programs in order to utilize many > computation units in parallel. I know of the Reduceron project, which > evolves only slowly and if it somewhen is ready for use it is uncertain > whether it can compete with stock CPUs since FPGA's need much more chip > space for the same logic. > > I got to know that in todays x86 processors you can alter the instruction > set, which is mainly used for bugfixes. Wouldn't it be interesting to add > some instructions for Haskell support? However, I suspect that such a patch > might be rendered invalid by new processor generations with changed internal > details. Fortunately, there are processors that are designed for custom > instruction set extensions: > https://en.wikipedia.org/wiki/Xtensa > > Would it be sensible to create a processor based on such a design? I have no > idea what it might cost, and you would still need some peripheral circuitry > to run it. What could processor instructions for Haskell support look like? > Has anyone already thought in this direction? 
> _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe From dedgrant at gmail.com Tue Jan 19 23:00:12 2016 From: dedgrant at gmail.com (Darren Grant) Date: Tue, 19 Jan 2016 15:00:12 -0800 Subject: [Haskell-cafe] CPU with Haskell support In-Reply-To: References: Message-ID: Limiting the scope for my own sanity here, there may yet be some application in various hardware level emulations of continuation passing calculi, perhaps building on static single assignment. It might be possoble to derive an interesting instruction set from the sorts of intermediate representations we see in compiler infrastructures like LLVM, but it is hard to guess how these hardware designs would benefit haskell, rather than the other way around. Cheers, Darren On Jan 19, 2016 14:44, "Auke Booij" wrote: > This question is much more involved than you seem to be suggesting. > It's not just about adding "some instructions for Haskell support". > You have to think about how you want to express /every/ haskell term > as a series of bits (preferably predictably many bits), and find a > (finite) combination of logical gates to do arbitrary computations > with them. > > If you want to go anywhere in this directions, perhaps a good start > would be implementing a processor with instructions for (untyped) > lambda calculus. One approach for this could be to take a > (mathematical) model of lambda calculus and see how its elements can > be represented as natural numbers. > > This implementation, I suspect, would be terribly inefficient. Think > about what the lambda application gate would look like in terms of > NAND gates. Yes, it can probably be done in theory. No, it won't be > pretty. And forget about practical. > > Finally, a major advantage of having such "raw" language as an > instruction set is that it allows many many optimizations (e.g. > pipelining (which, I would say, is the single most important reason > that processors are able to run at GHzs instead of MHzs (Pentium 4 > processors, famed for their high clock speed, had 31 pipeline > stages))) that I cannot imagine being possible in anything close to a > "lambda calculus processor". > > What is the added value you hope to achieve? > > On 19 January 2016 at 22:12, Henning Thielemann > wrote: > > > > Hi all, > > > > every now and then I think it would be cool to have a microprocessor that > > supports Haskell in a way. A processor where lazy evaluation is not > overhead > > but an optimization opportunity, a processor that can make use of the > > explicit data dependencies in Haskell programs in order to utilize many > > computation units in parallel. I know of the Reduceron project, which > > evolves only slowly and if it somewhen is ready for use it is uncertain > > whether it can compete with stock CPUs since FPGA's need much more chip > > space for the same logic. > > > > I got to know that in todays x86 processors you can alter the instruction > > set, which is mainly used for bugfixes. Wouldn't it be interesting to add > > some instructions for Haskell support? However, I suspect that such a > patch > > might be rendered invalid by new processor generations with changed > internal > > details. Fortunately, there are processors that are designed for custom > > instruction set extensions: > > https://en.wikipedia.org/wiki/Xtensa > > > > Would it be sensible to create a processor based on such a design? 
I > have no > > idea what it might cost, and you would still need some peripheral > circuitry > > to run it. What could processor instructions for Haskell support look > like? > > Has anyone already thought in this direction? > > _______________________________________________ > > Haskell-Cafe mailing list > > Haskell-Cafe at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lemming at henning-thielemann.de Tue Jan 19 23:03:06 2016 From: lemming at henning-thielemann.de (Henning Thielemann) Date: Wed, 20 Jan 2016 00:03:06 +0100 (CET) Subject: [Haskell-cafe] CPU with Haskell support In-Reply-To: References: Message-ID: On Tue, 19 Jan 2016, Auke Booij wrote: > This question is much more involved than you seem to be suggesting. > It's not just about adding "some instructions for Haskell support". > You have to think about how you want to express /every/ haskell term > as a series of bits (preferably predictably many bits), and find a > (finite) combination of logical gates to do arbitrary computations > with them. I am not thinking about a radically different machine language, just a common imperative machine language with some added instructions for tasks often found in machine code generated from Haskell. E.g. mainstream processors support C function calls with special jump instruction and stack handling. Maybe there could be instructions that assist handling thunks or Haskell function calls. From benl at ouroborus.net Wed Jan 20 00:52:21 2016 From: benl at ouroborus.net (Ben Lippmeier) Date: Wed, 20 Jan 2016 11:52:21 +1100 Subject: [Haskell-cafe] CPU with Haskell support In-Reply-To: References: Message-ID: <3EAF442C-90D4-40BE-91CE-EB7AE3E6EAFC@ouroborus.net> > On 20 Jan 2016, at 9:12 am, Henning Thielemann wrote: > I got to know that in todays x86 processors you can alter the instruction set, which is mainly used for bugfixes. Wouldn't it be interesting to add some instructions for Haskell support? However, I suspect that such a patch might be rendered invalid by new processor generations with changed internal details. Fortunately, there are processors that are designed for custom instruction set extensions: > https://en.wikipedia.org/wiki/Xtensa Your post assumes that the time to fetch/decode the instruction stream is a bottleneck, and reducing the number of instructions will in some way make the program faster. Your typically lazy GHC compiled program spends much of its time building thunks and otherwise copying data between the stack and the heap. If it's blocked waiting for data memory / data cache miss then reducing the number of instructions won't help anything - at least if the fancy new instructions just tell the processor to do something that would lead to cache miss anyway. See: Cache Performance of Lazy Functional Programs on Current Hardware (from 2009) Arbob Ahmad and Henry DeYoung http://www.cs.cmu.edu/~hdeyoung/15740/report.pdf Indirect branches are also a problem (load an address from data memory, then jump to it), as branch predictors usually cannot deal with them. Slowdowns due to mispredicted branches could perhaps be mitigated by improving the branch predictor in a Haskell specific way, but you might not need new instructions to do so.
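To make the point about building thunks concrete, here is a toy example of the allocation pattern in question (my own sketch, not something from the paper above):

    import Data.List (foldl')

    -- Compiled naively, the lazy left fold builds a growing chain of (+) thunks
    -- in the heap, so the run time is dominated by allocating those closures
    -- and later forcing them, i.e. by memory traffic rather than arithmetic.
    lazySum :: Int -> Int
    lazySum n = foldl (+) 0 [1 .. n]

    -- The strict variant forces the accumulator at every step; the arithmetic
    -- is the same, but the memory behaviour is completely different.
    strictSum :: Int -> Int
    strictSum n = foldl' (+) 0 [1 .. n]

The gap between the two versions is almost entirely heap allocation and cache traffic, which is exactly the kind of cost that a cleverer instruction encoding does not address.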
Or another way of putting it: "If you tell a long story with less words, then it's still a long story." Ben. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ok at cs.otago.ac.nz Wed Jan 20 05:16:49 2016 From: ok at cs.otago.ac.nz (Richard A. O'Keefe) Date: Wed, 20 Jan 2016 18:16:49 +1300 Subject: [Haskell-cafe] CPU with Haskell support In-Reply-To: References: Message-ID: On 20/01/2016, at 12:03 pm, Henning Thielemann wrote: > I am not thinking about a radically different machine language, just a common imperative machine language with some added instructions for tasks often found in machine code generated from Haskell. E.g. mainstream processors support C function calls with special jump instruction and stack handling. Maybe there could be instructions that assist handling thunks or Haskell function calls. I was at a presentation once where the speaker showed how (thanks to the fact that Prolog doesn't evaluate arguments in calls) calling a procedure and executing the procedure could be overlapped, getting a factor of 2 speedup for an important part of the code. At another presentation a speaker showed how using a special outboard coprocessor could dramatically speed up memory management. I suspect that neither technique would be much help on today's machines and for Haskell. However, there is a hint here that doing something quite different might pay off. For example, if branch predictors don't do well with thunk handling, maybe there is a way of processing thunks that a quite different kind of branch predictor *might* cope with. Or maybe something that's expecting to process thousands of microthreads might not care about branch prediction. (Although that idea has been tried as a way of handling memory latency, I don't think it's been tried for Haskell.) Perhaps you might look for something different; instead of 'faster on similar hardware' you might look at 'cheaper'. Could a specialised Haskell processor use less energy than a standard CPU? Don't quit your day job, but don't be too sure there's nothing left to think of either. From 6yearold at gmail.com Wed Jan 20 07:05:48 2016 From: 6yearold at gmail.com (Gleb Popov) Date: Wed, 20 Jan 2016 10:05:48 +0300 Subject: [Haskell-cafe] CPU with Haskell support In-Reply-To: References: Message-ID: On Wed, Jan 20, 2016 at 1:12 AM, Henning Thielemann < lemming at henning-thielemann.de> wrote: > > Hi all, > > every now and then I think it would be cool to have a microprocessor that > supports Haskell in a way. A processor where lazy evaluation is not > overhead but an optimization opportunity, a processor that can make use of > the explicit data dependencies in Haskell programs in order to utilize many > computation units in parallel. I know of the Reduceron project, which > evolves only slowly and if it somewhen is ready for use it is uncertain > whether it can compete with stock CPUs since FPGA's need much more chip > space for the same logic. > > I got to know that in todays x86 processors you can alter the instruction > set, which is mainly used for bugfixes. Wouldn't it be interesting to add > some instructions for Haskell support? However, I suspect that such a patch > might be rendered invalid by new processor generations with changed > internal details. Fortunately, there are processors that are designed for > custom instruction set extensions: > https://en.wikipedia.org/wiki/Xtensa > > Would it be sensible to create a processor based on such a design?
I have > no idea what it might cost, and you would still need some peripheral > circuitry to run it. What could processor instructions for Haskell support > look like? Has anyone already thought in this direction? > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > I remember reading a relevant paper: The Reduceron reconfigured and re-evaluated. Authors are MATTHEW NAYLOR and COLIN RUNCIMAN. -------------- next part -------------- An HTML attachment was scrubbed... URL: From svenpanne at gmail.com Wed Jan 20 07:09:38 2016 From: svenpanne at gmail.com (Sven Panne) Date: Wed, 20 Jan 2016 08:09:38 +0100 Subject: [Haskell-cafe] Doc generation? In-Reply-To: References: <20160119080859.GA5300@casa.casa> Message-ID: 2016-01-19 23:40 GMT+01:00 Mikhail Glushenkov : > Note that with the Git version of cabal-install you can run 'cabal > upload --doc' to upload the docs to Hackage manually. > Are there any safeguards on the Hackage server side to guarantee consistency between the uploaded package and the uploaded docs? (i.e. make sure it's the right version etc.) Are there checks on the server side that the cross-package links are correct? Does the server make sure that the docs contain source links? If the answer to any of these questions is "no", I consider even the possibility of uploading docs by hand a bug. Wrong/partial documentation is worse than no documentation at all... Cheers, S. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mikhail.glushenkov at gmail.com Wed Jan 20 07:35:02 2016 From: mikhail.glushenkov at gmail.com (Mikhail Glushenkov) Date: Wed, 20 Jan 2016 08:35:02 +0100 Subject: [Haskell-cafe] Doc generation? In-Reply-To: References: <20160119080859.GA5300@casa.casa> Message-ID: Hi, On 20 January 2016 at 08:09, Sven Panne wrote: > Are there any safeguards on the Hackage server side to guarantee consistency > between the uploaded package and the uploaded docs? (i.e. make sure it's the > right version etc.) Are there checks on the server side that the > cross-package links are correct? Does the server make sure that the docs > contain source links? If I'm reading [1] correctly, no such checks are performed. > If the answer to any of these questions is "no", I consider even the > possibility of uploading docs by hand a bug. Wrong/partial documentation is > worse than no documentation at all... You're welcome to open a ticket on the hackage-server bug tracker. [1] https://github.com/haskell/hackage-server/blob/master/Distribution/Server/Features/Documentation.hs#L223 From jo at durchholz.org Wed Jan 20 07:51:23 2016 From: jo at durchholz.org (Joachim Durchholz) Date: Wed, 20 Jan 2016 08:51:23 +0100 Subject: [Haskell-cafe] CPU with Haskell support In-Reply-To: References: Message-ID: <569F3C7B.80508@durchholz.org> On 19.01.2016 at 23:12, Henning Thielemann wrote: > > Fortunately, there are > processors that are designed for custom instruction set extensions: > https://en.wikipedia.org/wiki/Xtensa Unfortunately, the WP article does not say anything that couldn't be said about, say, an ARM core. Other than that Xtensa core being some VLIW design. > Would it be sensible to create a processor based on such a design? Very, very unlikely, for multiple reasons. Special-purpose CPUs have been built, most notably for LISP, less notably for Java, and probably for other purposes that I haven't heard of.
Invariably, their architectural advantages were obsoleted by economy of scale: Mainstream CPUs are being produced in such huge numbers that Intel etc. could afford more engineers to optimize every nook and cranny, more engineers to optimize the structure downscaling, and larger fabs that could do more chips on more one-time expensive but per-piece cheap equipment, and in the end, the special-purpose chips were slower and more expensive. You are facing extremely strong competition if you try this. Also, it is very easy to misidentify the actual bottlenecks and make instructions for the wrong ones. If caching is the main bottleneck (which it usually is), no amount of CPU improvement will help you and you'll simply need a larger cache. Or, probably, a compiler that knows enough about the program and its data flow to arrange the data in a cache-line-friendly fashion. I do not think this is going to be a long-term problem though. Pure languages have huge advantages for fine-grained parallel processing, and CPU technology is pushing towards multiple cores, so that's a natural match. As pure languages come into more widespread use, the engineers at Intel, AMD etc. will look at what the pure languages need, and add optimizations for these. Just my 2 cents. Jo From svenpanne at gmail.com Wed Jan 20 08:49:31 2016 From: svenpanne at gmail.com (Sven Panne) Date: Wed, 20 Jan 2016 09:49:31 +0100 Subject: [Haskell-cafe] Doc generation? In-Reply-To: References: <20160119080859.GA5300@casa.casa> Message-ID: 2016-01-20 8:35 GMT+01:00 Mikhail Glushenkov : > You're welcome to open a ticket on the hackage-server bug tracker. > I've reported this several times through several channels, not sure if yet another report will have an impact. Here is a quick summary of doc-generation-related issues: https://github.com/haskell/hackage-server/issues/464 https://github.com/haskell/hackage-server/issues/463 https://github.com/haskell/hackage-server/issues/421 https://github.com/haskell/hackage-server/issues/420 https://github.com/haskell/hackage-server/issues/368 https://github.com/haskell/hackage-server/issues/244 https://github.com/haskell/hackage-server/issues/183 https://github.com/haskell/hackage-server/issues/145 https://github.com/haskell/hackage-server/issues/55 From a package maintainer POV, it's totally unpredictable if/when documentation gets built, see e.g. http://hackage.haskell.org/package/GLURaw-2.0.0.1: Currently there are no docs, while the previous version had docs, and the only change was relaxing the upper bound of OpenGLRaw ( https://github.com/haskell-opengl/GLURaw/compare/v2.0.0.0...v2.0.0.1). Basically I gave up any hope and rely on stackage and/or local docs... :-( Cheers, S. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mikhail.glushenkov at gmail.com Wed Jan 20 09:11:54 2016 From: mikhail.glushenkov at gmail.com (Mikhail Glushenkov) Date: Wed, 20 Jan 2016 10:11:54 +0100 Subject: [Haskell-cafe] Doc generation? In-Reply-To: References: <20160119080859.GA5300@casa.casa> Message-ID: Hi, On 20 January 2016 at 09:49, Sven Panne wrote: > 2016-01-20 8:35 GMT+01:00 Mikhail Glushenkov : >> >> You're welcome to open a ticket on the hackage-server bug tracker. > > > I've reported this several times through several channels Those tickets are about missing docs, not missing checks of manually uploaded docs.
From svenpanne at gmail.com Wed Jan 20 10:45:55 2016 From: svenpanne at gmail.com (Sven Panne) Date: Wed, 20 Jan 2016 11:45:55 +0100 Subject: [Haskell-cafe] Doc generation? In-Reply-To: References: <20160119080859.GA5300@casa.casa> Message-ID: 2016-01-20 10:11 GMT+01:00 Mikhail Glushenkov : > Those tickets are about missing docs, not missing checks of manually > uploaded docs. > Ah, OK, then we misunderstood each other. My point is: if doc generation actually worked on Hackage, the manual upload could be disabled immediately; I see it only as a fragile workaround, so there is no point in opening a ticket for improving that when we already have tons of tickets for the *real* problem (which somehow seems to be ignored for ages). Cheers, S. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sergueyz at gmail.com Wed Jan 20 12:16:41 2016 From: sergueyz at gmail.com (Serguey Zefirov) Date: Wed, 20 Jan 2016 15:16:41 +0300 Subject: [Haskell-cafe] CPU with Haskell support In-Reply-To: <569F3C7B.80508@durchholz.org> References: <569F3C7B.80508@durchholz.org> Message-ID: You are being unnecessarily pessimistic; let me show you some things you probably have not thought of or heard about. 2016-01-20 10:51 GMT+03:00 Joachim Durchholz : > Am 19.01.2016 um 23:12 schrieb Henning Thielemann: > >> >> Fortunately, there are >> processors that are designed for custom instruction set extensions: >> https://en.wikipedia.org/wiki/Xtensa >> > > Unfortunately, the WP article does not say anything that couldn't be said > about, say, an ARM core. Other than that Xtensa core being some VLIW design. > > Would it be sensible to create a processor based on such a design? >> > > Very, very unlikely, for multiple reasons. > > Special-purpose CPUs have been built, most notably for LISP, less notably > for Java, and probably for other purposes that I haven't heard of. > Invariably, their architectural advantages were obsoleted by economy of > scale: Mainstream CPUs are being produced in such huge numbers that Intel > etc. could affort more engineers to optimize every nook and cranny, more > engineers to optimize the structure downscaling, and larger fabs that could > do more chips on more one-time expensive but per-piece cheap equipment, and > in the end, the special-purpose chips were slower and more expensive. It's > an extremely strong competition you are facing if you try this. > > Also, it is very easy to misidentify the actual bottlenecks and make > instructions for the wrong ones. > If caching is the main bottleneck (which it usually is), no amount of CPU > improvement will help you and you'll simply need a larger cache. Or, > probably, a compiler that knows enough about the program and its data flow > to arrange the data in a cache-line-friendly fashion. > A demonstration from the industry, albeit not quite from the hardware industry: http://www.disneyanimation.com/technology/innovations/hyperion - "Hyperion handles several million light rays at a time by sorting and bundling them together according to their directions. When the rays are grouped in this way, many of the rays in a bundle hit the same object in the same region of space. This similarity of ray hits then allows us - and the computer - to optimize the calculations for the objects hit." Then, let me bring up an old idea of mine: https://mail.haskell.org/pipermail/haskell-cafe/2009-August/065327.html Basically, we can group identical closures into vectors, ready for SIMD instructions to operate over them.
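A rough sketch of the layout I mean, written as ordinary Haskell purely for illustration (the record names and the example function are made up):

    import qualified Data.Vector.Unboxed as U

    -- All pending applications of one known function, say (\x y -> x * 2 + y),
    -- kept as one flat array per argument instead of one heap object per thunk.
    data PendingGroup = PendingGroup
      { firstArgs  :: U.Vector Double
      , secondArgs :: U.Vector Double
      }

    -- Forcing the whole group is then a single tight loop over unboxed data,
    -- which is the kind of code a SIMD unit handles well.
    forceGroup :: PendingGroup -> U.Vector Double
    forceGroup (PendingGroup xs ys) = U.zipWith (\x y -> x * 2 + y) xs ys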
The "vectors" should work just like Data.Vector.Unboxed - instead of a vector of tuples of arguments there should be a tuple of vectors with the individual arguments (and results to update for lazy evaluation). Combine this with sorting of addresses in case of references and you can get a lot of speedup by doing... not much. > > I do not think this is going to be a long-term problem though. Pure > languages have huge advantages for fine-grained parallel processing, and > CPU technology is pushing towards multiple cores, so that's a natural > match. As pure languages come into more widespread use, the engineers at > Intel, AMD etc. will look at what the pure languages need, and add > optimizations for these. > > Just my 2 cents. > Jo > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tomberek at gmail.com Wed Jan 20 12:20:11 2016 From: tomberek at gmail.com (Thomas Bereknyei) Date: Wed, 20 Jan 2016 07:20:11 -0500 Subject: [Haskell-cafe] CPU with Haskell support In-Reply-To: References: Message-ID: There is a CPU being designed with an interesting architecture. Take a look at the Mill CPU at millcomputing.com. A fascinating feature is the use of a belt of values vs. registers. The values on the belt are immutable and fall off (~auto gc'd?) unless expressly saved or returned. It is also designed to make function calls very cheap. A toy sketch of the belt idea follows below. tomberek
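For intuition, here is a toy functional model of the belt (my own sketch; the real Mill is a hardware structure and works quite differently in detail):

    -- A belt of fixed length: pushing a new result drops the oldest value, and
    -- operands are addressed by their position relative to the newest result.
    newtype Belt a = Belt [a]
      deriving Show

    beltLength :: Int
    beltLength = 8

    push :: a -> Belt a -> Belt a
    push x (Belt xs) = Belt (take beltLength (x : xs))

    -- peek 0 is the newest value, peek 3 the value produced three results ago;
    -- anything that has fallen off the belt is simply gone.
    peek :: Int -> Belt a -> Maybe a
    peek i (Belt xs)
      | i >= 0 && i < length xs = Just (xs !! i)
      | otherwise               = Nothing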
Here a quick summary of doc-generation-related issues: https://github.com/haskell/hackage-server/issues/464 https://github.com/haskell/hackage-server/issues/463 https://github.com/haskell/hackage-server/issues/421 https://github.com/haskell/hackage-server/issues/420 https://github.com/haskell/hackage-server/issues/368 https://github.com/haskell/hackage-server/issues/244 https://github.com/haskell/hackage-server/issues/183 https://github.com/haskell/hackage-server/issues/145 https://github.com/haskell/hackage-server/issues/55 >From a packager maintainer POV, it's totally unpredictable if/when documentation gets built, see e.g. http://hackage.haskell.org/package/GLURaw-2.0.0.1: Currently there are no docs, while the previous version had docs, and the only change was relaxing the upper bound of OpenGLRaw ( https://github.com/haskell-opengl/GLURaw/compare/v2.0.0.0...v2.0.0.1). Basically I gave up any hope and rely on stackage and/or local docs... :-( Cheers, S. -------------- next part -------------- An HTML attachment was scrubbed... URL: < http://mail.haskell.org/pipermail/haskell-cafe/attachments/20160120/505370f1/attachment.html > ------------------------------ Subject: Digest Footer _______________________________________________ Haskell-Cafe mailing list Haskell-Cafe at haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe ------------------------------ End of Haskell-Cafe Digest, Vol 149, Issue 18 ********************************************* -------------- next part -------------- An HTML attachment was scrubbed... URL: From ollie at ocharles.org.uk Wed Jan 20 12:41:44 2016 From: ollie at ocharles.org.uk (Oliver Charles) Date: Wed, 20 Jan 2016 12:41:44 +0000 Subject: [Haskell-cafe] Doc generation? In-Reply-To: References: <20160119080859.GA5300@casa.casa> Message-ID: This would require the ability to re-upload a package that only has documentation changes. I regularly re-upload documentation when people report a documentation bug (such as a typo). I wouldn't want to lose that ability. On Wed, Jan 20, 2016 at 10:46 AM Sven Panne wrote: > 2016-01-20 10:11 GMT+01:00 Mikhail Glushenkov < > mikhail.glushenkov at gmail.com>: > >> Those tickets are about missing docs, not missing checks of manually >> uploaded docs. >> > > Ah, OK, then we misunderstood each other. My point is: If doc generation > actually worked on Hackage, the manual upload could be disabled > immediately, I see it only as a fragile workaround, so there is no point in > opening a ticket for improving that when we already have tons of tickets > for the *real* problem (which somehow seems to be ignored for ages). > > Cheers, > S. > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jo at durchholz.org Wed Jan 20 12:43:32 2016 From: jo at durchholz.org (Joachim Durchholz) Date: Wed, 20 Jan 2016 13:43:32 +0100 Subject: [Haskell-cafe] CPU with Haskell support In-Reply-To: References: <569F3C7B.80508@durchholz.org> Message-ID: <569F80F4.60707@durchholz.org> Am 20.01.2016 um 13:16 schrieb Serguey Zefirov: > You are unneccessary overly pessimistic, let me show you somethings you, > probably, have not thought or heard about. Okaaaay... 
> A demonstration from the industry, albeit not quite hardware industry: > > http://www.disneyanimation.com/technology/innovations/hyperion - "Hyperion > handles several million light rays at a time by sorting and bundling them > together according to their directions. When the rays are grouped in this > way, many of the rays in a bundle hit the same object in the same region of > space. This similarity of ray hits then allows us ? and the computer ? to > optimize the calculations for the objects hit." Sure. If you have a few gazillion of identical algorithms, you can parallelize on that. That's the reason why 3D cards even took off, the graphics pipeline grew processing capabilities and evolved into a (rather restricted) GPU core model. So it's not necessarily impossible to build something useful, merely very unlikely. > Then, let me bring up an old idea of mine: > https://mail.haskell.org/pipermail/haskell-cafe/2009-August/065327.html > > Basically, we can group identical closures into vectors, ready for SIMD > instructions to operate over them. The "vectors" should work just like > Data.Vector.Unboxed - instead of vector of tuple of arguments there should > be a tuple of vectors with individual arguments (and results to update for > lazy evaluation). > > Combine this with sorting of addresses in case of references and you can > get a lot of speedup by doing... not much. Heh. Such stuff could work - *provided* that you can really make a case of having enough similar work. Still, I'd work on making a model of that on GPGPU hardware first. Two advantages: 1) No hardware investment. 2) You can see what the low-hanging fruit are and get a rough first idea how much parallelization really gives you. The other approach: See what you can get out of a Xeon with really many cores (14, or even more). Compare the single-GPGPU vs. multi-GPGPU speedup with the single-CPUcore vs. multi-CPUcore speedup. That might provide insight into how well the interconnects and cache coherence protocols interfere with the multicore speedup. Why I'm so central on multicore? Because that's where hardware is going to go, because hardware isn't going to clock much higher but people will still want to improve performance. Actually I think that single-core improvements aren't going to be very important. First on my list would be exploiting multicore, second cache locality. There's more to be gotten from there than from specialized hardware, IMVHO. Regards, Jo From sergueyz at gmail.com Wed Jan 20 12:43:45 2016 From: sergueyz at gmail.com (Serguey Zefirov) Date: Wed, 20 Jan 2016 15:43:45 +0300 Subject: [Haskell-cafe] CPU with Haskell support In-Reply-To: References: Message-ID: The "belt", as I can guess, either has same addressing problems as stack - it shifts, so addresses of operands will be hard to compute knowing only their state of stack. Or, if belt does not shift, it is equivalent to the N (depends on arch) copies of regular two port register file. That was attempted in Alpha AXP 21264, I believe - there were two register files to reduce wiring overhead in them and they were masked by register renaming, keeping architecture implementation minutae hidden. I also think you should multiply that N above by 2, for at least two register files for two operand operations and by M (number of ALUs) or risk developing your own register file with M*2 ports (the wiring overhead of register file is O(ports^2)). 
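Coming back to the "tuple of vectors with individual arguments" layout Serguey describes above: at the library level this is the structure-of-arrays representation that Data.Vector.Unboxed already uses, which is what makes it friendly to SIMD-style processing. A minimal sketch, assuming the vector package; it only shows the data layout, not the hardware-level grouping of closures he proposes:

  import qualified Data.Vector.Unboxed as U

  -- An unboxed vector of argument tuples. Internally this is stored as
  -- one flat Double array plus one flat Int array (a tuple of vectors),
  -- not as an array of pointers to (Double, Int) pairs.
  args :: U.Vector (Double, Int)
  args = U.fromList [(1.5, 2), (2.5, 3), (4.0, 5)]

  main :: IO ()
  main = do
    -- O(1): unzip just hands back the two component arrays.
    let (xs, ns) = U.unzip args
    print (U.sum xs, U.sum ns)

Each component can then be swept as a contiguous array, which is the access pattern that SIMD units (and the sorted ray bundles in the Hyperion example) want.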
The infamous Elbrus 2K VLIW CPU suffered from slow register file for years (as, I believe, Itanium do - compare its clock frequency with Xeons). I believe Elbrus team more or less solved that in about 2010. I can be wrong, of course, but I generally try to stay away from VLIW or very high ILP single cores in my CPU designs (which I still do in my spare time). 2016-01-20 15:20 GMT+03:00 Thomas Bereknyei : > There is a CPU being design with an interesting architecture. Take a look > at the Mill CPU at millcomputing.com. > > A facsinating feature is the use of a belt of values vs registers. The > values on the belt are immutable and fall off (~auto gc'd?) unless > expressly saved or returned. > > It is also desined to make function calls very cheap. > > tomberek > On Jan 20, 2016 3:49 AM, wrote: > > Send Haskell-Cafe mailing list submissions to > haskell-cafe at haskell.org > > To subscribe or unsubscribe via the World Wide Web, visit > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > or, via email, send a message with subject or body 'help' to > haskell-cafe-request at haskell.org > > You can reach the person managing the list at > haskell-cafe-owner at haskell.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of Haskell-Cafe digest..." > > > Today's Topics: > > 1. Re: Doc generation? (Patrick Redmond) > 2. Re: Doc generation? (Mark Fine) > 3. Host-Oriented Template Haskell (Ericson, John) > 4. [ANN] ghc-mod-5.5.0.0: Happy Haskell Hacking (Daniel Gr?ber) > 5. CPU with Haskell support (Henning Thielemann) > 6. Re: Doc generation? (Noon Silk) > 7. Re: Doc generation? (Mikhail Glushenkov) > 8. Re: CPU with Haskell support (Auke Booij) > 9. Re: CPU with Haskell support (Darren Grant) > 10. Re: CPU with Haskell support (Henning Thielemann) > 11. Re: CPU with Haskell support (Ben Lippmeier) > 12. Re: CPU with Haskell support (Richard A. O'Keefe) > 13. Re: CPU with Haskell support (Gleb Popov) > 14. Re: Doc generation? (Sven Panne) > 15. Re: Doc generation? (Mikhail Glushenkov) > 16. Re: CPU with Haskell support (Joachim Durchholz) > 17. Re: Doc generation? (Sven Panne) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Tue, 19 Jan 2016 08:39:20 -0800 > From: Patrick Redmond > To: "cabal-devel at haskell.org" , > "haskell-cafe at haskell.org" > Subject: Re: [Haskell-cafe] Doc generation? > Message-ID: > 8drrZDuOQtcB38rPPkJ2w at mail.gmail.com> > Content-Type: text/plain; charset="utf-8" > > I don't know what's happening with hackage, but if you're using stack in > your workflow a simple workaround is to build docs locally and search them > with a shell script. For example: > > $ stack haddock async > > And then muck around in .stack-work or ~/.stack. I've written a > bash/fish script to do the search for you here: > plredmond.github.io/posts/search-haddocks-offline.html > > On Tuesday, January 19, 2016, Francesco Ariis wrote: > > > On Tue, Jan 19, 2016 at 04:24:24PM +1100, Noon Silk wrote: > > > Does anyone know what is happening here? > > > > > > Not a single one of the packages on > > > http://hackage.haskell.org/packages/recent has docs generated at the > > moment. 
> URL: < > http://mail.haskell.org/pipermail/haskell-cafe/attachments/20160120/505370f1/attachment.html > > > > ------------------------------ > > Subject: Digest Footer > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > > ------------------------------ > > End of Haskell-Cafe Digest, Vol 149, Issue 18 > ********************************************* > > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hesselink at gmail.com Wed Jan 20 12:59:58 2016 From: hesselink at gmail.com (Erik Hesselink) Date: Wed, 20 Jan 2016 13:59:58 +0100 Subject: [Haskell-cafe] Doc generation? In-Reply-To: References: <20160119080859.GA5300@casa.casa> Message-ID: Note that if you do this, people who install your package will locally still have the documentation bug, so it's probably better to upload a new version anyway. Erik On 20 January 2016 at 13:41, Oliver Charles wrote: > This would require the ability to re-upload a package that only has > documentation changes. I regularly re-upload documentation when people > report a documentation bug (such as a typo). I wouldn't want to lose that > ability. > > On Wed, Jan 20, 2016 at 10:46 AM Sven Panne wrote: >> >> 2016-01-20 10:11 GMT+01:00 Mikhail Glushenkov >> : >>> >>> Those tickets are about missing docs, not missing checks of manually >>> uploaded docs. >> >> >> Ah, OK, then we misunderstood each other. My point is: If doc generation >> actually worked on Hackage, the manual upload could be disabled immediately, >> I see it only as a fragile workaround, so there is no point in opening a >> ticket for improving that when we already have tons of tickets for the >> *real* problem (which somehow seems to be ignored for ages). >> >> Cheers, >> S. >> _______________________________________________ >> Haskell-Cafe mailing list >> Haskell-Cafe at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > From johannes.waldmann at htwk-leipzig.de Wed Jan 20 14:53:39 2016 From: johannes.waldmann at htwk-leipzig.de (Johannes Waldmann) Date: Wed, 20 Jan 2016 15:53:39 +0100 Subject: [Haskell-cafe] localized memory management? Message-ID: <569F9F73.9050705@htwk-leipzig.de> Dear cafe, how would you approach this task: find (enumerate) those x from a list xs for which some computation is not successful within some resource bound (it weeds out uninteresting data, leaving just the hard cases, which will be treated later by other means) Input and result could be lazy lists, computations are pure. If the bounded resource is time, then I can use System.Timeout. This puts me into IO, but OK. Perhaps I want to do some logging output anyways. The problem is with bounding space. Assume that "computation on x" (sometimes) allocates a lot. Then the whole program will just die with "heap exhausted", while in fact I want to terminate just the computation on this x, garbage-collect, and continue. I could make the space usage explicit: each step of the computation could additionally compute a number that approximates memory usage. 
(Assume that this usage varies wildly with each step.) Then I can stop iterating when this reaches some bound. Or, I just compile the computation into a separate executable, and I call it (for each x) via the operating system, because there I can bound space (with ulimit) Is there some way to achieve this in Haskell land? - J.W. From mrz.vtl at gmail.com Wed Jan 20 17:09:53 2016 From: mrz.vtl at gmail.com (Maurizio Vitale) Date: Wed, 20 Jan 2016 12:09:53 -0500 Subject: [Haskell-cafe] CPU with Haskell support In-Reply-To: References: Message-ID: Many attempts were done in this area in the late 80s/early 90s (and before if you don't focus on lazy evaluation; search lisp machine). Search hardware graph reduction. The major problem has always been that producing a dedicated CPU is expensive and there's no way to keep up with the progresses in general purpose processor that can justify investments with a hugely larger user base. At some point there were even attempts of placing computation in the memory itself (typically using very fine grain combinators, SKI reduction and the such) People have given up even on hardware support for small subproblems, such as garbage collection. As for modifying the instruction set of an Intel processor, I don't know how feasible it is. But even if it is, consider that the entire architecture, pipelining, caching, predictions, speculative everythinbg etc. is hugely optimized for the typical workflow. You change that and all bets are off w.r.t performance and you may or not be ahead of the same CPU executing normal code out of a haskell compiler. On Tue, Jan 19, 2016 at 5:12 PM, Henning Thielemann < lemming at henning-thielemann.de> wrote: > > Hi all, > > every now and then I think it would be cool to have a microprocessor that > supports Haskell in a way. A processor where lazy evaluation is not > overhead but an optimization opportunity, a processor that can make use of > the explicit data dependencies in Haskell programs in order to utilize many > computation units in parallel. I know of the Reduceron project, which > evolves only slowly and if it somewhen is ready for use it is uncertain > whether it can compete with stock CPUs since FPGA's need much more chip > space for the same logic. > > I got to know that in todays x86 processors you can alter the instruction > set, which is mainly used for bugfixes. Wouldn't it be interesting to add > some instructions for Haskell support? However, I suspect that such a patch > might be rendered invalid by new processor generations with changed > internal details. Fortunately, there are processors that are designed for > custom instruction set extensions: > https://en.wikipedia.org/wiki/Xtensa > > Would it be sensible to create a processor based on such a design? I have > no idea what it might cost, and you would still need some peripheral > circuitry to run it. What could processor instructions for Haskell support > look like? Has anyone already thought in this direction? > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > -------------- next part -------------- An HTML attachment was scrubbed... URL: From will.yager at gmail.com Wed Jan 20 17:11:01 2016 From: will.yager at gmail.com (Will Yager) Date: Wed, 20 Jan 2016 11:11:01 -0600 Subject: [Haskell-cafe] localized memory management? 
In-Reply-To: <569F9F73.9050705@htwk-leipzig.de> References: <569F9F73.9050705@htwk-leipzig.de> Message-ID: <49DB4AC2-DB8E-4D8B-8EB1-ECCB8BABB559@gmail.com> The problem is that the semantics of pure languages approximate the behavior of ideal machines (turing machines, lambda machines, etc.). Resource constraints are not first-class features of the language because analyzing resource (time and memory) usage of pure functions is inherently impure. The ideas you've outlined are pretty much what I would do as well. You can use IO to keep track of time and either IO or a custom pure monad that keeps an approximate memory usage count to keep track of memory. I'm not sure if you can place memory constraints on forkIO threads, but that would be better than using a separate process. Will > On Jan 20, 2016, at 08:53, Johannes Waldmann wrote: > > Dear cafe, how would you approach this task: > > find (enumerate) those x from a list xs > for which some computation is not successful > within some resource bound > > (it weeds out uninteresting data, leaving just the hard cases, > which will be treated later by other means) > Input and result could be lazy lists, computations are pure. > > If the bounded resource is time, then I can use System.Timeout. > This puts me into IO, but OK. > Perhaps I want to do some logging output anyways. > > The problem is with bounding space. > Assume that "computation on x" (sometimes) allocates a lot. > Then the whole program will just die with "heap exhausted", > while in fact I want to terminate just the computation on > this x, garbage-collect, and continue. > > I could make the space usage explicit: > each step of the computation could additionally > compute a number that approximates memory usage. > (Assume that this usage varies wildly with each step.) > Then I can stop iterating when this reaches some bound. > > Or, I just compile the computation into a separate executable, > and I call it (for each x) via the operating system, > because there I can bound space (with ulimit) > > Is there some way to achieve this in Haskell land? > > - J.W. > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe From paraseba at gmail.com Wed Jan 20 17:30:14 2016 From: paraseba at gmail.com (=?UTF-8?Q?Sebasti=C3=A1n_Galkin?=) Date: Wed, 20 Jan 2016 09:30:14 -0800 Subject: [Haskell-cafe] localized memory management? In-Reply-To: <49DB4AC2-DB8E-4D8B-8EB1-ECCB8BABB559@gmail.com> References: <569F9F73.9050705@htwk-leipzig.de> <49DB4AC2-DB8E-4D8B-8EB1-ECCB8BABB559@gmail.com> Message-ID: This could help with memory https://git.haskell.org/ghc.git/commitdiff/b0534f78a73f972e279eed4447a5687bd6a8308e I think that's the result of a Facebook contribution, they needed it to avoid a single unlikely request taking too much memory and slowing down everything else. -------------- next part -------------- An HTML attachment was scrubbed... URL: From monkleyon at googlemail.com Wed Jan 20 18:17:30 2016 From: monkleyon at googlemail.com (insanemole .) Date: Wed, 20 Jan 2016 19:17:30 +0100 Subject: [Haskell-cafe] Re: localized memory management? In-Reply-To: <569F9F73.9050705@htwk-leipzig.de> References: <569F9F73.9050705@htwk-leipzig.de> Message-ID: One thing you can try is access current garbage collector stats by using the GHC.Stats library. I don't know how that mingles with multiple threads and I don't think it's very precise, but it sounds simple enough. 
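As a rough illustration of that idea, the sketch below compares the bytesAllocated figure from GHC.Stats before and after forcing each work item and keeps only the items that stay under an allocation budget. The names underBudget, budget and f are illustrative and the budget is arbitrary; this is only a sketch of the suggestion, and it is no more precise than the RTS counters themselves.

import Control.Exception (evaluate)
import Control.Monad (filterM, unless)
import Data.Int (Int64)
import GHC.Stats (GCStats (..), getGCStats, getGCStatsEnabled)

-- Keep the inputs whose (forced) computation stays under an allocation
-- budget, as reported by the RTS.  Illustrative names; the program must be
-- started with RTS statistics switched on (e.g. +RTS -T) or getGCStats has
-- nothing useful to report.
underBudget :: Int64 -> (a -> b) -> [a] -> IO [a]
underBudget budget f = filterM $ \x -> do
  enabled <- getGCStatsEnabled
  unless enabled $ fail "GC stats are off; start the program with +RTS -T"
  before <- fmap bytesAllocated getGCStats
  _ <- evaluate (f x)      -- evaluate forces only WHNF; use deepseq if the
                           -- interesting allocation happens deeper
  after <- fmap bytesAllocated getGCStats
  return (after - before <= budget)

main :: IO ()
main = underBudget 1000000 (\n -> sum [1 .. n :: Integer]) [1000, 10000000]
         >>= print         -- most likely prints [1000] when stats are enabled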
You just have to start GHC with stats enabled. Afterwards, you can check if they are enabled with getGCStatsEnabled and get a record of stats with getGCStats. Then experiment away with the entries of that record. Hope that helps. Am 20.01.2016 15:53 schrieb "Johannes Waldmann" < johannes.waldmann at htwk-leipzig.de>: > Dear cafe, how would you approach this task: > > find (enumerate) those x from a list xs > for which some computation is not successful > within some resource bound > > (it weeds out uninteresting data, leaving just the hard cases, > which will be treated later by other means) > Input and result could be lazy lists, computations are pure. > > If the bounded resource is time, then I can use System.Timeout. > This puts me into IO, but OK. > Perhaps I want to do some logging output anyways. > > The problem is with bounding space. > Assume that "computation on x" (sometimes) allocates a lot. > Then the whole program will just die with "heap exhausted", > while in fact I want to terminate just the computation on > this x, garbage-collect, and continue. > > I could make the space usage explicit: > each step of the computation could additionally > compute a number that approximates memory usage. > (Assume that this usage varies wildly with each step.) > Then I can stop iterating when this reaches some bound. > > Or, I just compile the computation into a separate executable, > and I call it (for each x) via the operating system, > because there I can bound space (with ulimit) > > Is there some way to achieve this in Haskell land? > > - J.W. > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jo at durchholz.org Wed Jan 20 18:23:29 2016 From: jo at durchholz.org (Joachim Durchholz) Date: Wed, 20 Jan 2016 19:23:29 +0100 Subject: [Haskell-cafe] CPU with Haskell support In-Reply-To: References: Message-ID: <569FD0A1.9010500@durchholz.org> Am 20.01.2016 um 18:09 schrieb Maurizio Vitale: > As for modifying the instruction set of an Intel processor, I don't know > how feasible it is. Some CPUs allowed this. There were even designs that had PL/1 instructions implemented as assembly by way of microcode. In theory it's possible for newer Intel CPUs. One point against doing so is that the microcode updates are 2-8 kBytes in size. You'd need to switch microcode with every context switch in the operating system. The other point is that microcode updates are encrypted and signed. Only Intel has the private keys needed to provide data so that an Intel CPU will accept a microcode update (this is supposed to prevent tampering with the microcode update data by malicious third parties). Sources: http://www.intel.com/content/dam/www/public/us/en/documents/manuals/64-ia-32-architectures-software-developer-vol-3a-part-1-manual.pdf#zoom=100 pp. 338 ff. has infos about the general format of an update. http://www.delidded.com/how-to-update-cpu-microcode-in-award-or-phoenix-bios/ lists typical microcode update sizes near the end (not sure whether these are in bytes or in DWORDs). 
Regards, Jo From jo at durchholz.org Wed Jan 20 18:56:44 2016 From: jo at durchholz.org (Joachim Durchholz) Date: Wed, 20 Jan 2016 19:56:44 +0100 Subject: [Haskell-cafe] CPU with Haskell support In-Reply-To: <569FD0A1.9010500@durchholz.org> References: <569FD0A1.9010500@durchholz.org> Message-ID: <569FD86C.8020209@durchholz.org> More info on microcode updates can be found in https://www.dcddcc.com/pubs/paper_microcode.pdf Salient points: - Microcode is entirely undocumented. - Microcode is indeed encrypted, though there seem to be loopholes. - Microcode update times can take up to 2 million CPU cyles. - Microcode updates could be an unexpected attack vector. Maybe. Regards, Jo From dominikbollmann at gmail.com Wed Jan 20 19:01:34 2016 From: dominikbollmann at gmail.com (Dominik Bollmann) Date: Wed, 20 Jan 2016 20:01:34 +0100 Subject: [Haskell-cafe] [Template Haskell Question] On defining recursive templates. Message-ID: <8760yojcs1.fsf@t450s.i-did-not-set--mail-host-address--so-tickle-me> Hello Haskellers, I'm currently diving into Template Haskell and I just read the original TH paper [1]. There they give the following example of a generic zip function: -- | A generic zip function. Use (e.g.,) as $(zipN 3) xs ys zs. zipN :: Int -> ExpQ zipN n = [| let zp = $(mkZip n [| zp |]) in zp |] -- | Helper function for zipN. mkZip :: Int -> ExpQ -> ExpQ mkZip n contZip = lamE pYs (caseE (tupE eYs) [m1, m2]) where (pXs, eXs) = genPEs "x" n (pXSs, eXSs) = genPEs "xs" n (pYs, eYs) = genPEs "y" n allCons = tupP $ zipWith (\x xs -> [p| $x : $xs |]) pXs pXSs m1 = match allCons continue [] m2 = match wildP stop [] continue = normalB [| $(tupE eXs) : $(appsE (contZip:eXSs))|] stop = normalB (conE '[]) -- | Generates n pattern and expression variables. genPEs :: String -> Int -> ([PatQ], [ExpQ]) genPEs x n = (pats, exps) where names = map (\k -> mkName $ x ++ show k) [1..n] (pats, exps) = (map varP names, map varE names) This works as expected, e.g., `$(zipN 3) [1..3] [4..6] [7..9]' gives [(1,4,7),(2,5,8), (3,6,9)]. However, I found this definition of passing `[| zp |]' as a helper function slightly confusing, so I tried to make it more succinct and to call zipN directly in the recursion: zipN' :: Int -> ExpQ zipN' n = lamE pYs (caseE (tupE eYs) [m1, m2]) where (pXs, eXs) = genPEs "x" n (pXSs, eXSs) = genPEs "xs" n (pYs, eYs) = genPEs "y" n allCons = tupP $ zipWith (\x xs -> [p| $x : $xs |]) pXs pXSs m1 = match allCons continue [] m2 = match wildP stop [] continue = normalB [| $(tupE eXs) : $(appsE (zipN n:eXSs)) |] stop = normalB (conE '[]) This subtle change, however, causes the compiler to diverge and to get stuck at compiling splice `$(zipN' 3) [1..3] [4..6] [7..9]'... Could anyone explain to me why the first approach works, but the 2nd small deviation does not? Is it because the compiler keeps trying to inline the recursive call to (zip N) into the template indefinitely? Are there easier (more straightforward) alternative implementations to the (imo) slightly convoluted example of zipN from the paper? Any hints are very much appreciated. Thanks, Dominik. [1] http://research.microsoft.com/~simonpj/papers/meta-haskell/ From anselm.scholl at tu-harburg.de Wed Jan 20 19:13:44 2016 From: anselm.scholl at tu-harburg.de (Jonas Scholl) Date: Wed, 20 Jan 2016 20:13:44 +0100 Subject: [Haskell-cafe] [Template Haskell Question] On defining recursive templates. 
In-Reply-To: <8760yojcs1.fsf@t450s.i-did-not-set--mail-host-address--so-tickle-me> References: <8760yojcs1.fsf@t450s.i-did-not-set--mail-host-address--so-tickle-me> Message-ID: <569FDC68.6030601@tu-harburg.de> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 Its not the compiler diverging, its your own code generating an infinite expression. [| zp |] is just a variable capturing the zp from the let expression. The generated code looks like this: let zp = \ y1 y2 y3 -> case (y1, y2, y3) of ((x1:xs1), (x2:xs2), (x3:xs3)) -> (x1, x2, x3) : zp xs1 xs2 xs3 _ -> [] in zp You on the other hand call zipN from zipN', so the above expression is inserted instead of zp. And in this expression there is again a zp, which you replaced... and so on... so it looks like this: \ y1 y2 y3 -> case (y1, y2, y3) of ((x1:xs1), (x2:xs2), (x3:xs3)) -> (x1, x2, x3) : (\ y1 y2 y3 -> case (y1, y2, y3) of ((x1:xs1), (x2:xs2), (x3:xs3)) -> (x1, x2, x3) : (\ y1 y2 y3 -> case (y1, y2, y3) of ((x1:xs1), (x2:xs2), (x3:xs3)) -> (x1, x2, x3) : (...) xs1 xs2 xs3 _ -> []) xs1 xs2 xs3 _ -> []) xs1 xs2 xs3 _ -> [] Hopefully it is now clear what is happening ; ) Jonas On 01/20/2016 08:01 PM, Dominik Bollmann wrote: > > Hello Haskellers, > > I'm currently diving into Template Haskell and I just read the > original TH paper [1]. There they give the following example of a > generic zip function: > > -- | A generic zip function. Use (e.g.,) as $(zipN 3) xs ys zs. > zipN :: Int -> ExpQ > zipN n = [| let zp = $(mkZip n [| zp |]) in zp |] > > -- | Helper function for zipN. > mkZip :: Int -> ExpQ -> ExpQ > mkZip n contZip = lamE pYs (caseE (tupE eYs) [m1, m2]) > where > (pXs, eXs) = genPEs "x" n > (pXSs, eXSs) = genPEs "xs" n > (pYs, eYs) = genPEs "y" n > allCons = tupP $ zipWith (\x xs -> [p| $x : $xs |]) pXs pXSs > m1 = match allCons continue [] > m2 = match wildP stop [] > continue = normalB [| $(tupE eXs) : $(appsE (contZip:eXSs))|] > stop = normalB (conE '[]) > > -- | Generates n pattern and expression variables. > genPEs :: String -> Int -> ([PatQ], [ExpQ]) > genPEs x n = (pats, exps) > where names = map (\k -> mkName $ x ++ show k) [1..n] > (pats, exps) = (map varP names, map varE names) > > This works as expected, e.g., `$(zipN 3) [1..3] [4..6] [7..9]' gives > [(1,4,7),(2,5,8), (3,6,9)]. > > However, I found this definition of passing `[| zp |]' as a helper > function slightly confusing, so I tried to make it more succinct and to > call zipN directly in the recursion: > > zipN' :: Int -> ExpQ > zipN' n = lamE pYs (caseE (tupE eYs) [m1, m2]) > where > (pXs, eXs) = genPEs "x" n > (pXSs, eXSs) = genPEs "xs" n > (pYs, eYs) = genPEs "y" n > allCons = tupP $ zipWith (\x xs -> [p| $x : $xs |]) pXs pXSs > m1 = match allCons continue [] > m2 = match wildP stop [] > continue = normalB [| $(tupE eXs) : $(appsE (zipN n:eXSs)) |] > stop = normalB (conE '[]) > > This subtle change, however, causes the compiler to diverge and to get > stuck at compiling splice `$(zipN' 3) [1..3] [4..6] [7..9]'... > > Could anyone explain to me why the first approach works, but the 2nd > small deviation does not? Is it because the compiler keeps trying to > inline the recursive call to (zip N) into the template indefinitely? > > Are there easier (more straightforward) alternative implementations to > the (imo) slightly convoluted example of zipN from the paper? > > Any hints are very much appreciated. > > Thanks, > Dominik. 
> > [1] http://research.microsoft.com/~simonpj/papers/meta-haskell/ > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe -----BEGIN PGP SIGNATURE----- Version: GnuPG v2 iQEcBAEBCAAGBQJWn9xYAAoJEM0PYZBmfhoBPT0IAJGgYDj2ISQheiA2OoTCB+k2 ELcbPWAlCFjpWqN4v2DUtTS1XSecJflvmusYyadGtW2s5OzBi1jOopwBFmB1KAz9 P8Lu4tM7OBbvlD5zQaIOD8rktOVtNrjT1r4mouu/dPPDgEF4ekVPkI4tphE3UD7Z grYG6eRsRix6dnFX/Ee+0/EYQeANPsAXUZiEcBbkGVUR2jY44MoycilrH5gjwzTS QSY2FZcAgeMqf1eBeYi3N3sMhCXH8zT2a2z+qzAVO6wMWTeXubXR6Hvk8naHf3Zv ttIw1N1RXE2ncpOQLjnw3NmYY3wmlq4t/tltr1ubUKb0a/1PU/nx3Q+H4YIsz3g= =v7H8 -----END PGP SIGNATURE----- From devnull1999 at yahoo.com Wed Jan 20 19:15:23 2016 From: devnull1999 at yahoo.com (Eric) Date: Wed, 20 Jan 2016 19:15:23 +0000 (UTC) Subject: [Haskell-cafe] 'stack test --trace' fails References: <906884964.8914590.1453317323174.JavaMail.yahoo.ref@mail.yahoo.com> Message-ID: <906884964.8914590.1453317323174.JavaMail.yahoo@mail.yahoo.com> I just upgraded Stack to v1.0.2 on Mac OS X. ?I naively ran 'stack test --trace' hoping to get better messages out of any errorWithStackTrace calls in my tests. ? After a long, successful build, the tests failed with the message 'Could not understand these extra arguments:? +RTS ? -xc'.? Am I doing something obviously wrong? ?Or is this a Stack bug? --Eric -------------- next part -------------- An HTML attachment was scrubbed... URL: From ok at cs.otago.ac.nz Thu Jan 21 04:01:34 2016 From: ok at cs.otago.ac.nz (Richard A. O'Keefe) Date: Thu, 21 Jan 2016 17:01:34 +1300 Subject: [Haskell-cafe] localized memory management? In-Reply-To: <569F9F73.9050705@htwk-leipzig.de> References: <569F9F73.9050705@htwk-leipzig.de> Message-ID: <268CC05D-E911-4831-9739-35B62CFF0565@cs.otago.ac.nz> On 21/01/2016, at 3:53 am, Johannes Waldmann wrote: > Or, I just compile the computation into a separate executable, > and I call it (for each x) via the operating system, > because there I can bound space (with ulimit) Just how big are the time and space limits you have in mind? You'd clearly rather *not* create separate processes, but if they are taking time enough and space enough, the overheads might not be worth worrying about, and there would be fewer problems to worry about. A typical problem: what if your estimate of allocation is wrong, and some task is cheerfully grinding away safely within its estimate but really far outside it, and takes your whole program down? With a separate process, that won't happen. Also, there are two different things. There is the amount of space the task has ever ALLOCATED (which is what's not *too* horrible to estimate) and the amount of space the task NEEDS right now (which is what ulimit will constrain). Estimating allocation may (will!) be pessimistic. Estimating need requires you to predict what the garbage collector is going to do and I don't see that as easy. With a separate process you can also stop worrying about estimating the space needs of code you didn't write. From ezyang at mit.edu Thu Jan 21 04:23:38 2016 From: ezyang at mit.edu (Edward Z. Yang) Date: Wed, 20 Jan 2016 20:23:38 -0800 Subject: [Haskell-cafe] localized memory management? In-Reply-To: <569F9F73.9050705@htwk-leipzig.de> References: <569F9F73.9050705@htwk-leipzig.de> Message-ID: <1453350042-sup-3462@sabre> Hello Johannes, We wrote a PLDI paper on precisely this topic! 
http://ezyang.com/rlimits.html The feature never got merged to GHC proper, however, because it required some changes to GHC's generated code which were a slight pessimization for people who were not planning on using resource limits (the loss was characterized in the paper), and it didn't seem right to add an entirely new way (ala profiling/dynamic/etc) to GHC just to support resource limits. Edward Excerpts from Johannes Waldmann's message of 2016-01-20 06:53:39 -0800: > Dear cafe, how would you approach this task: > > find (enumerate) those x from a list xs > for which some computation is not successful > within some resource bound > > (it weeds out uninteresting data, leaving just the hard cases, > which will be treated later by other means) > Input and result could be lazy lists, computations are pure. > > If the bounded resource is time, then I can use System.Timeout. > This puts me into IO, but OK. > Perhaps I want to do some logging output anyways. > > The problem is with bounding space. > Assume that "computation on x" (sometimes) allocates a lot. > Then the whole program will just die with "heap exhausted", > while in fact I want to terminate just the computation on > this x, garbage-collect, and continue. > > I could make the space usage explicit: > each step of the computation could additionally > compute a number that approximates memory usage. > (Assume that this usage varies wildly with each step.) > Then I can stop iterating when this reaches some bound. > > Or, I just compile the computation into a separate executable, > and I call it (for each x) via the operating system, > because there I can bound space (with ulimit) > > Is there some way to achieve this in Haskell land? > > - J.W. From jeffbrown.the at gmail.com Thu Jan 21 08:08:42 2016 From: jeffbrown.the at gmail.com (Jeffrey Brown) Date: Thu, 21 Jan 2016 00:08:42 -0800 Subject: [Haskell-cafe] an automatic refactoring idea Message-ID: I had a data structure with a redundant field [1]. I refactored to make that field go away. Here is the code [2]. The following is a simplification of it. I was using this type: data X = X1 | X2 Int To make the Int go away, I made a duplicate type: data X' = X1' | X2' Then, every function for which any argument was of type X, I similarly duplicated, replacing every X with X', X1 for X1', and X2 for X2'. Next I had to similarly duplicate every function that used one of those. And so on, and so on ... until eventually I was duplicating the tests. At that point I was able to determine from the duplicate tests that everything was working. There was a little more work involved than that, but the vast, vast majority of the edits that got me there were the simple duplication I just described. I looked once at automatic refactoring in Haskell and was either unimpressed or scared. I may not have understood what I was looking at enough to appreciate its power. Is there something that can do the refactoring described above? [1] The Int in a Tplt could be inferred from context, so keeping a duplicate of that information in the Tplt was dangerous, because it makes invalid state possible (e.g. if one is updated and not the other). So I decided to stop using it. [2] https://github.com/JeffreyBenjaminBrown/digraphs-with-text -- Jeffrey Benjamin Brown -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk Thu Jan 21 09:12:53 2016 From: tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk (Tom Ellis) Date: Thu, 21 Jan 2016 09:12:53 +0000 Subject: [Haskell-cafe] an automatic refactoring idea In-Reply-To: References: Message-ID: <20160121091253.GD22858@weber> On Thu, Jan 21, 2016 at 12:08:42AM -0800, Jeffrey Brown wrote: > I had a data structure with a redundant field [1]. I refactored to make > that field go away. Here is the code [2]. The following is a simplification > of it. > > I was using this type: > data X = X1 | X2 Int > > To make the Int go away, I made a duplicate type: > data X' = X1' | X2' You might be interested in this, specifically the section "The limitations of refactoring via modifying text files in place, and what to do instead" https://pchiusano.github.io/2015-04-23/unison-update7.html Tom From chneukirchen at gmail.com Thu Jan 21 13:15:45 2016 From: chneukirchen at gmail.com (Christian Neukirchen) Date: Thu, 21 Jan 2016 14:15:45 +0100 Subject: [Haskell-cafe] Munich Haskell Meeting, 2016-01-25 @ 19:30 Message-ID: <87lh7jqdj2.fsf@gmail.com> Dear all, Next week, our monthly Munich Haskell Meeting will take place again on Monday, January 25 at Cafe Puck at 19h30. For details see here: http://chneukirchen.github.io/haskell-munich.de/dates If you plan to join, please add yourself to this dudle so we can reserve enough seats! It is OK to add yourself to the dudle anonymously or pseudonymously. https://dudle.inf.tu-dresden.de/haskell-munich-jan-2016/ Everybody is welcome! cu, -- Christian Neukirchen http://chneukirchen.org From ky3 at atamo.com Fri Jan 22 17:00:51 2016 From: ky3 at atamo.com (Kim-Ee Yeoh) Date: Sat, 23 Jan 2016 00:00:51 +0700 Subject: [Haskell-cafe] Haskell Weekly News Message-ID: Dear Gentle Reader, Many, many beautiful gems in the Haskell Weekly News archives are worth a second look. To give you a taste, I reproduce below excerpts from the quotes section of the Jan 31, 2007 issue -- yes, that's 9 years ago -- under the editorship of Don Stewart. Enjoy. Best, Kim-Ee Yeoh *Top Picks* - Edsko de Vries designs O(1)-amortized and O(1)-worst-case queues using a technique different from the standard literature by Chris Okasaki. In particular, the O(1)-worst-case queue employs a Progress datatype that could be reused to also optimize data structures other than queues. On the other hand, Lennart Augustsson on /r/haskell was pleased as a plum until he saw the unsafeInterleaveST required to pull off the Progress technique. Elsewhere, Hacker News rates the article highly enough for it to stay on the front page for five hours . However, the comments there belie that the advanced Haskell goes swoosh over the head of the average HN reader. - Philipp Schuster sketches a FRP implementation based on temporal logic . Neel Krishnaswami explains on /r/haskell why it suffers from space leaks like most other FRP implementations and ways of fixing it. - Dan Burton reports that the latest version 0.10 of the json parsing package aeson suffers from deal-breaking bugs . Aeson author Bryan O'Sullivan, of an older email-centered generation, explains that he has "a life outside of checking github issues" in the /r/haskell discussion . In any case, the next stepping 5 of Stackage LTS rolls it back to version 0.9 . - A redditor asks, "What's the TypeInType extension planned for the upcoming version 8 of GHC?" The short answer is that it's used for dependent type programming. Detailed answers can be found in the actual /r/haskell Q&A . 
- GHC on ARM used to suffer over 100 failures on the testsuite . Ben Gamari girds his loins and over the last 6 weeks battled against "the villains that plague this poor architecture." Result? Nightly builds now compile clean. Go Ben! *A Blast from the Past (Quotes from #Haskell IRC):* - *huschi:* Programing in haskell seems a bit frustrating. i'm missing searching for errors :( - *bakert:* I know all my programs can be reduced to only one tenth the size if only I can learn all these crazy functions *Quote of the Week* - Will Jones: The more I write Haskell, the more it feels like Forth. Where I'm basically just inventing a language for my problem, then writing the program in that instead. (Ed. Dear Will: Remember how Dijkstra once said "Always design your programs as a member of a whole family of programs, including those that are likely to succeed it"? He would have warmly congratulated you on your discovery.) -- Kim-Ee Yeoh -------------- next part -------------- An HTML attachment was scrubbed... URL: From mrz.vtl at gmail.com Fri Jan 22 19:38:38 2016 From: mrz.vtl at gmail.com (Maurizio Vitale) Date: Fri, 22 Jan 2016 14:38:38 -0500 Subject: [Haskell-cafe] question on the design of protohaskell (in Write You a Haskell) Message-ID: I know this part has been left incomplete when Stephen got a real job, but I have a question. In http://dev.stephendiehl.com/fun/007_path.html, compiler steps are piped together using the Kleisli (>=>) operator, presumably the one from Control.Monad. they also return some flavour of AST (typically Syn.Module or Core.Module). Assuming that the >=> is the one from Control.Monad, what is the purpose of this returned value (all the steps are in a compilerMonad and presumably would have to update the state w/ the new AST anyhow)? Or is the intention to have a special >=> that also updates the state (but I wouldn't know how to deal w/ Syn.Core vs Core.Module) I would understand some form of phantom type to ensure that compilation steps are in the right order, but as it is I'm puzzled. Anybody has some insight? Thanks, Maurizio -------------- next part -------------- An HTML attachment was scrubbed... URL: From tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk Sat Jan 23 11:00:47 2016 From: tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk (Tom Ellis) Date: Sat, 23 Jan 2016 11:00:47 +0000 Subject: [Haskell-cafe] question on the design of protohaskell (in Write You a Haskell) In-Reply-To: References: Message-ID: <20160123110047.GF26360@weber> On Fri, Jan 22, 2016 at 02:38:38PM -0500, Maurizio Vitale wrote: > In http://dev.stephendiehl.com/fun/007_path.html compiler steps are piped > together using the Kleisli (>=>) operator, presumably the one from > Control.Monad. Yes, it uses the Monad instance of CompilerMonad, which is type CompilerMonad = ExceptT Msg (StateT CompilerState IO) so the monad can do IO, read and modify the CompilerState, and throw Msg exceptions. > they also return some flavour of AST (typically Syn.Module or Core.Module). > Assuming that the >=> is the one from Control.Monad, what is the purpose of > this returned value (all the steps are in a compilerMonad and presumably > would have to update the state w/ the new AST anyhow)? In CompilerState there is a field _ast :: Maybe Syn.Module -- ^ Frontend AST >From the comment I presume this is the AST taken directly from the source file, to be used for error-reporting and such. 
I would guess after the parseP stage it doesn't change, and the result of each stage is the Syn.Module returned from that stage not something set in the monad. Tom From amindfv at gmail.com Sat Jan 23 21:39:52 2016 From: amindfv at gmail.com (amindfv at gmail.com) Date: Sat, 23 Jan 2016 16:39:52 -0500 Subject: [Haskell-cafe] [ANN] nano-erl Message-ID: <206EE012-E793-4F35-ABBD-28C853AAF0C9@gmail.com> I've written a tiny library for Erlang-style actor semantics in Haskell: hackage.haskell.org/package/nano-erl It's fast, and meant to be a lightweight abstraction that's easy to integrate with existing code. Enjoy! Tom From tkoster at gmail.com Sun Jan 24 06:46:22 2016 From: tkoster at gmail.com (Thomas Koster) Date: Sun, 24 Jan 2016 17:46:22 +1100 Subject: [Haskell-cafe] When are MVars better than STM? Message-ID: Hi friends, Using Criterion, I have been running benchmarks to measure the relative performance of STM and MVars for some simple transactions that I expect will be typical in my application. I am using GHC 7.10.2 and libraries as at Stackage LTS 3.2. I have found that STM is faster than MVars in all my benchmarks, without exception. This seems to go against accepted wisdom [1][2][3]. I have not included my source code here to save space, but if you suspect that I am using MVars incorrectly, just say so and I will post my source code separately. I have two questions: 1. When are MVars faster than STM? If the answer is "never", then when are MVars "better" than STM? (Choose your own definition of "better".) 2. When given two capabilities (+RTS -N2), MVars are suddenly an order of magnitude slower than with just one capability. Why? For those who want details: My benchmark forks four Haskell threads. Each thread repeats a transaction that increments a shared counter many, many times. These transactions must be serialized. The counter is therefore highly contended. One version uses an MVar to store the counter in the obvious way. The other version uses a TVar instead. By the way, simply using "atomic-primops" to increment the counter won't do because the increment operation is actually a mock substitute for a more complex operation. I use the counter for my benchmarks because the real operation needs much more memory and I don't want the additional, unpredictable GC cost to affect my measurements. Typical measurements are: 1 capability, using MVar: 37.30 ms 1 capability, using TVar: 24.88 ms 2 capabilities, using MVar: 1.564 s 2 capabilities, using TVar: 80.09 ms 4 capabilities, using MVar: 2.890 s 4 capabilities, using TVar: 207.8 ms Notice that the MVar version suddenly slows by an order of magnitude when run with more than one capability. Why is this so? (This is question 2.) Despite the absolute time elapsed, I realize that the CPU usage characteristics of the two versions are also quite different. I realize that the MVar version interlocks the four threads so that only one capability is ever busy at a time, irrespective of the number of capabilities available, whereas the STM version allows up to four capabilities to be busy at once. However, I believe that the additional parallel transactions in the STM version would be mostly wasted, destined to be retried. Unless I am mistaken, this assumption appears to be consistent with the observation that the STM version with -N1 is the fastest of all. Despite all this wasted work by the thundering herd, the total CPU time (i.e. 
my power bill) for the STM version is still less than for the MVar version, because the MVar version so dramatically slow. Paradoxically, MVars seem to be the wrong tool for this job. So when are MVars faster than STM? (This is question 1.) [1] https://stackoverflow.com/questions/15439966/when-why-use-an-mvar-over-a-tvar [2] https://www.reddit.com/r/haskell/comments/39ef3y/ioref_vs_mvar_vs_tvar_vs_tmvar/ [3] https://mail.haskell.org/pipermail/haskell-cafe/2014-January/112158.html Thanks, Thomas Koster From cma at bitemyapp.com Sun Jan 24 06:55:18 2016 From: cma at bitemyapp.com (Christopher Allen) Date: Sun, 24 Jan 2016 00:55:18 -0600 Subject: [Haskell-cafe] When are MVars better than STM? In-Reply-To: References: Message-ID: Could you post the code please? On Sun, Jan 24, 2016 at 12:46 AM, Thomas Koster wrote: > Hi friends, > > Using Criterion, I have been running benchmarks to measure the > relative performance of STM and MVars for some simple transactions > that I expect will be typical in my application. I am using GHC 7.10.2 > and libraries as at Stackage LTS 3.2. > > I have found that STM is faster than MVars in all my benchmarks, > without exception. This seems to go against accepted wisdom [1][2][3]. > I have not included my source code here to save space, but if you > suspect that I am using MVars incorrectly, just say so and I will post > my source code separately. > > I have two questions: > > 1. When are MVars faster than STM? If the answer is "never", then when > are MVars "better" than STM? (Choose your own definition of "better".) > > 2. When given two capabilities (+RTS -N2), MVars are suddenly an order > of magnitude slower than with just one capability. Why? > > > For those who want details: > > My benchmark forks four Haskell threads. Each thread repeats a > transaction that increments a shared counter many, many times. These > transactions must be serialized. The counter is therefore highly > contended. One version uses an MVar to store the counter in the > obvious way. The other version uses a TVar instead. > > By the way, simply using "atomic-primops" to increment the counter > won't do because the increment operation is actually a mock substitute > for a more complex operation. I use the counter for my benchmarks > because the real operation needs much more memory and I don't want the > additional, unpredictable GC cost to affect my measurements. > > Typical measurements are: > > 1 capability, using MVar: 37.30 ms > 1 capability, using TVar: 24.88 ms > 2 capabilities, using MVar: 1.564 s > 2 capabilities, using TVar: 80.09 ms > 4 capabilities, using MVar: 2.890 s > 4 capabilities, using TVar: 207.8 ms > > Notice that the MVar version suddenly slows by an order of magnitude > when run with more than one capability. Why is this so? (This is > question 2.) > > Despite the absolute time elapsed, I realize that the CPU usage > characteristics of the two versions are also quite different. I > realize that the MVar version interlocks the four threads so that only > one capability is ever busy at a time, irrespective of the number of > capabilities available, whereas the STM version allows up to four > capabilities to be busy at once. However, I believe that the > additional parallel transactions in the STM version would be mostly > wasted, destined to be retried. Unless I am mistaken, this assumption > appears to be consistent with the observation that the STM version > with -N1 is the fastest of all. 
Despite all this wasted work by the > thundering herd, the total CPU time (i.e. my power bill) for the STM > version is still less than for the MVar version, because the MVar > version so dramatically slow. > > Paradoxically, MVars seem to be the wrong tool for this job. So when > are MVars faster than STM? (This is question 1.) > > [1] > https://stackoverflow.com/questions/15439966/when-why-use-an-mvar-over-a-tvar > [2] > https://www.reddit.com/r/haskell/comments/39ef3y/ioref_vs_mvar_vs_tvar_vs_tmvar/ > [3] > https://mail.haskell.org/pipermail/haskell-cafe/2014-January/112158.html > > Thanks, > Thomas Koster > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > -- Chris Allen Currently working on http://haskellbook.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From tkoster at gmail.com Sun Jan 24 07:13:27 2016 From: tkoster at gmail.com (Thomas Koster) Date: Sun, 24 Jan 2016 18:13:27 +1100 Subject: [Haskell-cafe] When are MVars better than STM? In-Reply-To: References: Message-ID: On Sun, Jan 24, 2016 at 12:46 AM, Thomas Koster wrote: > Using Criterion, I have been running benchmarks to measure the > relative performance of STM and MVars for some simple transactions > that I expect will be typical in my application. I am using GHC 7.10.2 > and libraries as at Stackage LTS 3.2. > > I have found that STM is faster than MVars in all my benchmarks, > without exception. This seems to go against accepted wisdom [1][2][3]. > I have not included my source code here to save space, but if you > suspect that I am using MVars incorrectly, just say so and I will post > my source code separately. > > I have two questions: > > 1. When are MVars faster than STM? If the answer is "never", then when > are MVars "better" than STM? (Choose your own definition of "better".) > > 2. When given two capabilities (+RTS -N2), MVars are suddenly an order > of magnitude slower than with just one capability. Why? On 24 January 2016 at 17:55, Christopher Allen wrote: > Could you post the code please? module Main (main) where import Control.Concurrent.Async import Control.Concurrent.MVar import Control.Concurrent.STM import Control.Monad import Criterion.Main main = defaultMain [ bgroup "thrash" [ bench "MVar" $ whnfIO (thrashTest mvarNew mvarInc mvarGet), bench "TVar" $ whnfIO (thrashTest tvarNew tvarInc tvarGet) ] ] thrashTest :: IO a -> (a -> IO ()) -> (a -> IO b) -> IO b thrashTest new inc get = do var <- new threads <- replicateM 4 (async (replicateM_ 100000 $ inc var)) forM_ threads wait get var mvarNew :: IO (MVar Int) mvarNew = newMVar 0 mvarInc :: MVar Int -> IO () mvarInc var = modifyMVar_ var $ \ i -> return $! succ i mvarGet :: MVar Int -> IO Int mvarGet = readMVar tvarNew :: IO (TVar Int) tvarNew = newTVarIO 0 tvarInc :: TVar Int -> IO () tvarInc var = atomically $ do i <- readTVar var writeTVar var $! succ i tvarGet :: TVar Int -> IO Int tvarGet = readTVarIO -- Thomas Koster From fryguybob at gmail.com Sun Jan 24 14:04:17 2016 From: fryguybob at gmail.com (Ryan Yates) Date: Sun, 24 Jan 2016 09:04:17 -0500 Subject: [Haskell-cafe] When are MVars better than STM? In-Reply-To: References: Message-ID: Hi Thomas, I'm sorry I don't have time right now for a proper response (buried under paper deadlines). There are certainly times when one will be faster then the other and the reasons are quite complicated. 
To complicate matters further it is very difficult to get benchmarks that don't lie about performance in this space. There are also alternative implementations that change the balance drastically. The only broad advice I can give is to benchmark the target application with both implementations to see how all the implications fall out. A broad description of the differences in implementation would be that MVars have a fairness guarantee (that does not come for free) for waking waiting threads. STM does not have this fairness which can lead to problems for programs that have quick transactions that always win over occasional long transactions (there are ways to avoid with a different implementation or with the cost of shifted to the programmer). My guess is in your particular benchmark the unfairness of STM works to your advantage and all the work is happening sequentially while the MVar version's fairness incurs frequent cache misses. Ryan On Sun, Jan 24, 2016 at 2:13 AM, Thomas Koster wrote: > On Sun, Jan 24, 2016 at 12:46 AM, Thomas Koster wrote: > > Using Criterion, I have been running benchmarks to measure the > > relative performance of STM and MVars for some simple transactions > > that I expect will be typical in my application. I am using GHC 7.10.2 > > and libraries as at Stackage LTS 3.2. > > > > I have found that STM is faster than MVars in all my benchmarks, > > without exception. This seems to go against accepted wisdom [1][2][3]. > > I have not included my source code here to save space, but if you > > suspect that I am using MVars incorrectly, just say so and I will post > > my source code separately. > > > > I have two questions: > > > > 1. When are MVars faster than STM? If the answer is "never", then when > > are MVars "better" than STM? (Choose your own definition of "better".) > > > > 2. When given two capabilities (+RTS -N2), MVars are suddenly an order > > of magnitude slower than with just one capability. Why? > > > On 24 January 2016 at 17:55, Christopher Allen wrote: > > Could you post the code please? > > > module Main (main) where > > import Control.Concurrent.Async > import Control.Concurrent.MVar > import Control.Concurrent.STM > import Control.Monad > import Criterion.Main > > main = > defaultMain > [ > bgroup "thrash" > [ > bench "MVar" $ whnfIO (thrashTest mvarNew mvarInc mvarGet), > bench "TVar" $ whnfIO (thrashTest tvarNew tvarInc tvarGet) > ] > ] > > thrashTest :: IO a > -> (a -> IO ()) > -> (a -> IO b) > -> IO b > thrashTest new inc get = do > var <- new > threads <- replicateM 4 (async (replicateM_ 100000 $ inc var)) > forM_ threads wait > get var > > mvarNew :: IO (MVar Int) > mvarNew = newMVar 0 > > mvarInc :: MVar Int -> IO () > mvarInc var = > modifyMVar_ var $ \ i -> > return $! succ i > > mvarGet :: MVar Int -> IO Int > mvarGet = readMVar > > tvarNew :: IO (TVar Int) > tvarNew = newTVarIO 0 > > tvarInc :: TVar Int -> IO () > tvarInc var = > atomically $ do > i <- readTVar var > writeTVar var $! succ i > > tvarGet :: TVar Int -> IO Int > tvarGet = readTVarIO > > -- > Thomas Koster > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > -------------- next part -------------- An HTML attachment was scrubbed... URL: From shumovichy at gmail.com Sun Jan 24 15:09:32 2016 From: shumovichy at gmail.com (Yuras Shumovich) Date: Sun, 24 Jan 2016 18:09:32 +0300 Subject: [Haskell-cafe] When are MVars better than STM? 
In-Reply-To: References: Message-ID: <1453648172.2267.5.camel@gmail.com> On Sun, 2016-01-24 at 17:46 +1100, Thomas Koster wrote: >? > 2. When given two capabilities (+RTS -N2), MVars are suddenly an > order > of magnitude slower than with just one capability. Why? One possible explanation is closure locking which is not performed when there is only one capability. In my quick measurements it gives 40% speedup:?https://ghc.haskell.org/trac/ghc/ticket/693#comment:9 From ollie at ocharles.org.uk Sun Jan 24 18:21:38 2016 From: ollie at ocharles.org.uk (Oliver Charles) Date: Sun, 24 Jan 2016 18:21:38 +0000 Subject: [Haskell-cafe] transformers appears to benefit from more inline Message-ID: Hi everyone, I've recently been playing with a little mtl-like approach to streaming data, whereby I have a single type class - MonadYield - which can yield data, and then I use various implementations of this type class to implement operations. My usual approach to this is to implement this by newtyping around appropriate "off-the-shelf" monad transformers from the transformers library, but I found that this incurs a significant performance penalty. I've tried to put some fairly extensive benchmarks in place, which you can find at https://github.com/ocharles/monad-yield. In that repository is a README.md file that describes how I have been performing these benchmarks. The benchmarks are defined over a common interface that each implementation of MonadYield exports. The benchmarks are defined in "Benchmarks.hs", and the three implementations are "Transformers.hs" (using transformers from GHC), "TransformersInline.hs" (using transformers-ocharles from that repository, which has many more INLINE pragmas) and "Inline.hs" (which doesn't depend on anything other than base). There are three main benchmarks that are ran - one is benchmarking essentially the cost of ReaderT, the next the cost of StateT, and the last a composition of ReaderT over StateT over ReaderT. The results of the benchmark can be found here: https://ocharles.github.io/monad-yield/. It seems that the current darcs release of transformers loses every time, but if I sprinkle {-# INLINE #-} across the definition of lazy state, I get identical performance to just writing out the lazy state monad by hand. I was very surprised to see that I have to pay when I use transformers, and it seems like this cost can be removed at the cost of slightly larger interface files. Before I submit a patch, I'd love to hear others thoughts. Should {-# INLINE #-} be necessary? Is there any reason not to add it to every symbol in transformers? -- ocharles -------------- next part -------------- An HTML attachment was scrubbed... URL: From tkoster at gmail.com Sun Jan 24 22:54:53 2016 From: tkoster at gmail.com (Thomas Koster) Date: Mon, 25 Jan 2016 09:54:53 +1100 Subject: [Haskell-cafe] When are MVars better than STM? In-Reply-To: References: Message-ID: Ryan, On Sun, Jan 24, 2016 at 12:46 AM, Thomas Koster wrote: > Using Criterion, I have been running benchmarks to measure the > relative performance of STM and MVars for some simple transactions > that I expect will be typical in my application. I am using GHC 7.10.2 > and libraries as at Stackage LTS 3.2. > > I have found that STM is faster than MVars in all my benchmarks, > without exception. This seems to go against accepted wisdom [1][2][3]. > I have not included my source code here to save space, but if you > suspect that I am using MVars incorrectly, just say so and I will post > my source code separately. 
> > I have two questions: > > 1. When are MVars faster than STM? If the answer is "never", then when > are MVars "better" than STM? (Choose your own definition of "better".) > > 2. When given two capabilities (+RTS -N2), MVars are suddenly an order > of magnitude slower than with just one capability. Why? On 25 January 2016 at 01:04, Ryan Yates wrote: > I'm sorry I don't have time right now for a proper response (buried under > paper deadlines). There are certainly times when one will be faster then > the other and the reasons are quite complicated. To complicate matters > further it is very difficult to get benchmarks that don't lie about > performance in this space. There are also alternative implementations that > change the balance drastically. The only broad advice I can give is to > benchmark the target application with both implementations to see how all > the implications fall out. That is fair. From what I can tell, the time spent in the runtime dominates my user time, so I am basically benchmarking the GHC runtime, which I am not qualified to do :) I had only hoped to be able to decide on MVar vs STM before getting into the nitty gritty. > A broad description of the differences in > implementation would be that MVars have a fairness guarantee (that does not > come for free) for waking waiting threads. STM does not have this fairness > which can lead to problems for programs that have quick transactions that > always win over occasional long transactions (there are ways to avoid with a > different implementation or with the cost of shifted to the programmer). My > guess is in your particular benchmark the unfairness of STM works to your > advantage and all the work is happening sequentially while the MVar > version's fairness incurs frequent cache misses. Fairness may actually be very important to my application. Unlike my benchmark, the complexity of real transactions can vary enormously. Let me think about this. Thanks for your response. -- Thomas Koster From tkoster at gmail.com Sun Jan 24 23:04:27 2016 From: tkoster at gmail.com (Thomas Koster) Date: Mon, 25 Jan 2016 10:04:27 +1100 Subject: [Haskell-cafe] When are MVars better than STM? In-Reply-To: <1453648172.2267.5.camel@gmail.com> References: <1453648172.2267.5.camel@gmail.com> Message-ID: Yuras, On Sun, 2016-01-24 at 17:46 +1100, Thomas Koster wrote: > 2. When given two capabilities (+RTS -N2), MVars are suddenly an > order > of magnitude slower than with just one capability. Why? On 25 January 2016 at 02:09, Yuras Shumovich wrote: > One possible explanation is closure locking which is not performed when > there is only one capability. In my quick measurements it gives 40% > speedup: https://ghc.haskell.org/trac/ghc/ticket/693#comment:9 This makes sense. After all, why bother with locks and barriers when the process is single-threaded anyway? Thanks for your response. -- Thomas Koster From cma at bitemyapp.com Sun Jan 24 23:17:17 2016 From: cma at bitemyapp.com (Christopher Allen) Date: Sun, 24 Jan 2016 17:17:17 -0600 Subject: [Haskell-cafe] When are MVars better than STM? In-Reply-To: References: <1453648172.2267.5.camel@gmail.com> Message-ID: Well, there are cases where even with single-threading you want a memory barrier to prevent the CPU reordering instructions, but the shift to a single-threaded runtime should elide _some_ locks expressly designed to cope with multithreading. On Sun, Jan 24, 2016 at 5:04 PM, Thomas Koster wrote: > Yuras, > > On Sun, 2016-01-24 at 17:46 +1100, Thomas Koster wrote: > > 2. 
When given two capabilities (+RTS -N2), MVars are suddenly an > > order > > of magnitude slower than with just one capability. Why? > > On 25 January 2016 at 02:09, Yuras Shumovich wrote: > > One possible explanation is closure locking which is not performed when > > there is only one capability. In my quick measurements it gives 40% > > speedup: https://ghc.haskell.org/trac/ghc/ticket/693#comment:9 > > This makes sense. After all, why bother with locks and barriers when > the process is single-threaded anyway? > > Thanks for your response. > > -- > Thomas Koster > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > -- Chris Allen Currently working on http://haskellbook.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From vigalchin at gmail.com Mon Jan 25 07:22:27 2016 From: vigalchin at gmail.com (Vasili I. Galchin) Date: Mon, 25 Jan 2016 01:22:27 -0600 Subject: [Haskell-cafe] trying to build Hackage hoq-0.3 Message-ID: 1) ghc version 7.6.3 2) cabal version 1.16.0.2 3) when i run "cabal install", I receive on stdout : Resolving dependencies... Configuring readline-1.0.3.0... checking for gcc... gcc checking for C compiler default output file name... a.out checking whether the C compiler works... yes checking whether we are cross compiling... no checking for suffix of executables... checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether gcc accepts -g... yes checking for gcc option to accept ISO C89... none needed checking for GNUreadline.framework... checking for readline... no checking for tputs in -lncurses... no checking for tputs in -ltermcap... no checking for tputs in -lcurses... no checking for rl_readline_version... no configure: error: readline not found, so this package cannot be built See `config.log' for more details. Failed to install readline-1.0.3.0 cabal: Error: some packages failed to install: hoq-0.3 depends on readline-1.0.3.0 which failed to install. readline-1.0.3.0 failed during the configure step. The exception was: ExitFailure 1 4) I googled Haskell and readline and I found the same error with other Haskell packages like Twitter. How do I resolve my build readline problem? Vasya From utdemir at gmail.com Mon Jan 25 07:36:47 2016 From: utdemir at gmail.com (Utku Demir) Date: Mon, 25 Jan 2016 07:36:47 +0000 Subject: [Haskell-cafe] trying to build Hackage hoq-0.3 In-Reply-To: References: Message-ID: It looks like the "readline" library(the C one, not the Haskell one) is missing on your system. Can you install it via your distributions package manager (assuming you're using Linux)? Like; $ sudo apt-get install libreadline-dev # could be libreadline6-dev On Mon, 25 Jan 2016 at 09:22 Vasili I. Galchin wrote: > 1) ghc version 7.6.3 > > 2) cabal version 1.16.0.2 > > 3) when i run "cabal install", > > I receive on stdout : > > Resolving dependencies... > Configuring readline-1.0.3.0... > checking for gcc... gcc > checking for C compiler default output file name... a.out > checking whether the C compiler works... yes > checking whether we are cross compiling... no > checking for suffix of executables... > checking for suffix of object files... o > checking whether we are using the GNU C compiler... yes > checking whether gcc accepts -g... yes > checking for gcc option to accept ISO C89... none needed > checking for GNUreadline.framework... checking for readline... 
no > checking for tputs in -lncurses... no > checking for tputs in -ltermcap... no > checking for tputs in -lcurses... no > checking for rl_readline_version... no > configure: error: readline not found, so this package cannot be built > See `config.log' for more details. > Failed to install readline-1.0.3.0 > cabal: Error: some packages failed to install: > hoq-0.3 depends on readline-1.0.3.0 which failed to install. > readline-1.0.3.0 failed during the configure step. The exception was: > ExitFailure 1 > > 4) I googled Haskell and readline and I found the same error with > other Haskell packages like Twitter. > > > How do I resolve my build readline problem? > > Vasya > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vigalchin at gmail.com Mon Jan 25 07:46:13 2016 From: vigalchin at gmail.com (Vasili I. Galchin) Date: Mon, 25 Jan 2016 01:46:13 -0600 Subject: [Haskell-cafe] trying to build Hackage hoq-0.3 In-Reply-To: References: Message-ID: Let me try Utku. BTW I am running Ubuntu Linux. Vasya On Mon, Jan 25, 2016 at 1:36 AM, Utku Demir wrote: > It looks like the "readline" library(the C one, not the Haskell one) is > missing on your system. Can you install it via your distributions package > manager (assuming you're using Linux)? > > Like; > > $ sudo apt-get install libreadline-dev # could be libreadline6-dev > > On Mon, 25 Jan 2016 at 09:22 Vasili I. Galchin wrote: >> >> 1) ghc version 7.6.3 >> >> 2) cabal version 1.16.0.2 >> >> 3) when i run "cabal install", >> >> I receive on stdout : >> >> Resolving dependencies... >> Configuring readline-1.0.3.0... >> checking for gcc... gcc >> checking for C compiler default output file name... a.out >> checking whether the C compiler works... yes >> checking whether we are cross compiling... no >> checking for suffix of executables... >> checking for suffix of object files... o >> checking whether we are using the GNU C compiler... yes >> checking whether gcc accepts -g... yes >> checking for gcc option to accept ISO C89... none needed >> checking for GNUreadline.framework... checking for readline... no >> checking for tputs in -lncurses... no >> checking for tputs in -ltermcap... no >> checking for tputs in -lcurses... no >> checking for rl_readline_version... no >> configure: error: readline not found, so this package cannot be built >> See `config.log' for more details. >> Failed to install readline-1.0.3.0 >> cabal: Error: some packages failed to install: >> hoq-0.3 depends on readline-1.0.3.0 which failed to install. >> readline-1.0.3.0 failed during the configure step. The exception was: >> ExitFailure 1 >> >> 4) I googled Haskell and readline and I found the same error with >> other Haskell packages like Twitter. >> >> >> How do I resolve my build readline problem? >> >> Vasya >> _______________________________________________ >> Haskell-Cafe mailing list >> Haskell-Cafe at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe From vigalchin at gmail.com Mon Jan 25 07:48:01 2016 From: vigalchin at gmail.com (Vasili I. Galchin) Date: Mon, 25 Jan 2016 01:48:01 -0600 Subject: [Haskell-cafe] trying to build Hackage hoq-0.3 In-Reply-To: References: Message-ID: thx . hoq built and installed! Kind thanks. vasya On Mon, Jan 25, 2016 at 1:46 AM, Vasili I. Galchin wrote: > Let me try Utku. BTW I am running Ubuntu Linux. 
> Vasya > > On Mon, Jan 25, 2016 at 1:36 AM, Utku Demir wrote: >> It looks like the "readline" library(the C one, not the Haskell one) is >> missing on your system. Can you install it via your distributions package >> manager (assuming you're using Linux)? >> >> Like; >> >> $ sudo apt-get install libreadline-dev # could be libreadline6-dev >> >> On Mon, 25 Jan 2016 at 09:22 Vasili I. Galchin wrote: >>> >>> 1) ghc version 7.6.3 >>> >>> 2) cabal version 1.16.0.2 >>> >>> 3) when i run "cabal install", >>> >>> I receive on stdout : >>> >>> Resolving dependencies... >>> Configuring readline-1.0.3.0... >>> checking for gcc... gcc >>> checking for C compiler default output file name... a.out >>> checking whether the C compiler works... yes >>> checking whether we are cross compiling... no >>> checking for suffix of executables... >>> checking for suffix of object files... o >>> checking whether we are using the GNU C compiler... yes >>> checking whether gcc accepts -g... yes >>> checking for gcc option to accept ISO C89... none needed >>> checking for GNUreadline.framework... checking for readline... no >>> checking for tputs in -lncurses... no >>> checking for tputs in -ltermcap... no >>> checking for tputs in -lcurses... no >>> checking for rl_readline_version... no >>> configure: error: readline not found, so this package cannot be built >>> See `config.log' for more details. >>> Failed to install readline-1.0.3.0 >>> cabal: Error: some packages failed to install: >>> hoq-0.3 depends on readline-1.0.3.0 which failed to install. >>> readline-1.0.3.0 failed during the configure step. The exception was: >>> ExitFailure 1 >>> >>> 4) I googled Haskell and readline and I found the same error with >>> other Haskell packages like Twitter. >>> >>> >>> How do I resolve my build readline problem? >>> >>> Vasya >>> _______________________________________________ >>> Haskell-Cafe mailing list >>> Haskell-Cafe at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe From m.farkasdyck at gmail.com Mon Jan 25 08:11:07 2016 From: m.farkasdyck at gmail.com (M Farkas-Dyck) Date: Mon, 25 Jan 2016 00:11:07 -0800 Subject: [Haskell-cafe] Local types Message-ID: <20160125081107.GA22709@mintha.lan> I often wish to be able to define local types and instances, so, for example: import Data.Set as Set ordNubBy :: (a -> a -> Ordering) -> [a] -> [a] ordNubBy cmp = go Set.empty where newtype T = T a instance Ord T where T x `compare` T y = cmp x y go _ [] = [] go ys (x:xs) = bool (x:) id (T x ? ys) $ go (Set.insert (T x) ys) xs The notion is that type and instance declarations would become legal in `let` or `where` bonds; they would effectively declare new types and instances in that scope for each use of that term; and such local types and instances could never leave the scope they were defined in, by enforcement, so consistency of instances would be preserved. I know some related work which proposes local instances [0] but none i know proposes also local types. I'm seeking some feedback here before i formalize further and potentially start making implements. Thoughts? Questions? Better ideas? Critical flaws? 
[0] http://okmij.org/ftp/Haskell/TypeClass.html#local From imantc at gmail.com Mon Jan 25 08:38:54 2016 From: imantc at gmail.com (Imants Cekusins) Date: Mon, 25 Jan 2016 09:38:54 +0100 Subject: [Haskell-cafe] Local types In-Reply-To: <20160125081107.GA22709@mintha.lan> References: <20160125081107.GA22709@mintha.lan> Message-ID: > local types and instances is this not an attempt to move module functionality inside function? would this not encourage larger functions? are larger functions a good thing? could this not be better addressed by allowing explicit export / hiding of instances in a module? does not type declaration inside function come at a performance cost? From skosyrev at ptsecurity.com Mon Jan 25 09:59:49 2016 From: skosyrev at ptsecurity.com (Kosyrev Serge) Date: Mon, 25 Jan 2016 12:59:49 +0300 Subject: [Haskell-cafe] Local types In-Reply-To: (sfid-20160125_120318_296350_3C2EADE7) (Imants Cekusins's message of "Mon, 25 Jan 2016 09:38:54 +0100") References: <20160125081107.GA22709@mintha.lan> Message-ID: <874me2ou7e.fsf@ptsecurity.com> Languages should express our intent, and giving type and instance declarations the ability to be localised will enable them to serve as better vehicles of expression. That said, Imants Cekusins writes: >> local types and instances > > is this not an attempt to move module functionality inside function? Same counter-argument can be applied to local functions. > would this not encourage larger functions? are larger functions a good thing? Same as above. > could this not be better addressed by allowing explicit export / > hiding of instances in a module? If I interpret the intent correctly, this is specifically about finer granularity than module level. > does not type declaration inside function come at a performance cost? Generally speaking, and whether this suggested cost is real, Haskell programmers are more or less used to pay the cost of abstraction. -- ? ???????e? / respectfully, ??????? ?????? From imantc at gmail.com Mon Jan 25 10:48:55 2016 From: imantc at gmail.com (Imants Cekusins) Date: Mon, 25 Jan 2016 11:48:55 +0100 Subject: [Haskell-cafe] Local types In-Reply-To: <874me2ou7e.fsf@ptsecurity.com> References: <20160125081107.GA22709@mintha.lan> <874me2ou7e.fsf@ptsecurity.com> Message-ID: >> is this not an attempt to move module functionality inside function? > Same counter-argument can be applied to local functions. not quite. Let's look at the current state: Module inside module: ok (module reexport) Type inside type: ok (data declaration etc) Function inside function: ok (local functions) Type / Instance / module inside function is quite a bit different > finer granularity than module level. by turning function into a mini module? Currently module does not give control over instance export. Should this not be addressed first? This would be a step towards more complexity in syntax. While there may be benefits, complexity always comes at a price. > Haskell programmers are more or less used to pay the cost of abstraction. well should we not start thinking about keeping those costs down if possible - performance and otherwise? Ideally, minimizing them? 
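For what it's worth, the effect of the local instance in the ordNubBy example can be approximated with today's GHC by letting a top-level wrapper carry the comparison function, so the instance stays global. A minimal sketch (the names Arg and ordNubBy here are only illustrative):

import qualified Data.Set as Set

-- A wrapper that pairs each value with the comparison function,
-- so an ordinary top-level Ord instance can use it.
data Arg a = Arg (a -> a -> Ordering) a

instance Eq (Arg a) where
  Arg cmp x == Arg _ y = cmp x y == EQ

instance Ord (Arg a) where
  compare (Arg cmp x) (Arg _ y) = cmp x y

ordNubBy :: (a -> a -> Ordering) -> [a] -> [a]
ordNubBy cmp = go Set.empty
  where
    go _    []     = []
    go seen (x:xs)
      | Set.member (Arg cmp x) seen = go seen xs
      | otherwise                   = x : go (Set.insert (Arg cmp x) seen) xs

This keeps the Set-based behaviour of the original example without any new language feature, at the cost of storing the function alongside every element.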
From dominikbollmann at gmail.com Mon Jan 25 10:58:49 2016 From: dominikbollmann at gmail.com (Dominik Bollmann) Date: Mon, 25 Jan 2016 11:58:49 +0100 Subject: [Haskell-cafe] [Template Haskell] Message-ID: <87oacaapsm.fsf@t450s.i-did-not-set--mail-host-address--so-tickle-me> Hi all, I'm just getting my feet wet with template haskell, and I tried to write a tmap function which maps a function over the ith component of an n-tuple (which uses a slightly different approach than the given version on the TH wiki): -- | Selects the ith component of an n-tuple tsel :: Int -> Int -> ExpQ -- n-tuple a -> a tsel i n = [| \t -> $(caseE [| t |] [alt]) |] where alt = match (tupP pats) body [] pats = map varP xs xs = [ mkName ("x" ++ show k) | k <- [1..n] ] body = normalB . varE $ xs !! (i-1) -- | Maps a function over the ith component of an n-tuple tmap :: Int -> Int -> ExpQ -- :: (a -> b) -> n-tuple -> n-tuple tmap i n = do f <- newName "f" t <- newName "t" lamE [varP f, varP t] $ [| let prefix = map extract [1..(i-1)] new = $f ($(tsel i n) $t) suffix = map extract [(i+1)..n] extract k = $(tsel k n) t in tupE $ prefix ++ [new] ++ suffix |] However, this code results in the following error: Sandbox.hs:26:29: Stage error: ?k? is bound at stage 2 but used at stage 1 ? In the splice: $(tsel k n) In the Template Haskell quotation [| let prefix = map extract [1 .. (i - 1)] new = $f ($(tsel i n) ($t)) suffix = map extract [(i + 1) .. n] extract k = $(tsel k n) $t in tupE $ prefix ++ [new] ++ suffix |] Compilation failed. Could anyone explain to me what stage 2 and stage 1 refer to, and further, what the logical flaw in the above snippet is? What exactly is wrong with line `extract k = $(tsel k n) $t' ? Thanks! Dominik. From imantc at gmail.com Mon Jan 25 11:11:09 2016 From: imantc at gmail.com (Imants Cekusins) Date: Mon, 25 Jan 2016 12:11:09 +0100 Subject: [Haskell-cafe] Local types In-Reply-To: <874me2ou7e.fsf@ptsecurity.com> References: <20160125081107.GA22709@mintha.lan> <874me2ou7e.fsf@ptsecurity.com> Message-ID: > Languages should express our intent Vocabulary of a certain size is necessary to write good books, or read them. Quality or popularity of those good books is however not measured by the size of vocabulary. Language richness is only a part of the story. Language use is just as important. From shumovichy at gmail.com Mon Jan 25 13:37:19 2016 From: shumovichy at gmail.com (Yuras Shumovich) Date: Mon, 25 Jan 2016 16:37:19 +0300 Subject: [Haskell-cafe] Local types In-Reply-To: <20160125081107.GA22709@mintha.lan> References: <20160125081107.GA22709@mintha.lan> Message-ID: <1453729039.2267.19.camel@gmail.com> On Mon, 2016-01-25 at 00:11 -0800, M Farkas-Dyck wrote: > I often wish to be able to define local types and instances, so, for > example: > > import Data.Set as Set > > ordNubBy :: (a -> a -> Ordering) -> [a] -> [a] > ordNubBy cmp = go Set.empty > ? where newtype T = T a > > ????????instance Ord T where T x `compare` T y = cmp x y > > ????????go _ [] = [] > ????????go ys (x:xs) = bool (x:) id (T x ? ys) $ go (Set.insert (T x) > ys) xs I like local types a lot. I'm not sure how your example works w.r.t. type variable scopes. Are you assuming ScopedTypeVariables? It will probably require explicit quantification to work: ordNubBy :: forall a. (a -> a -> Ordering) -> [a] -> [a] Without ScopedTypeVariables `a` in `newtype T = T a` is different from the one in top level type signature, is it? At least that is how type variables work in local function signatures. 
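A small illustration of that scoping point, using a stand-in function so it compiles with today's GHC (nubWith is just an example name):

{-# LANGUAGE ScopedTypeVariables #-}

-- With the explicit forall, the `a` in the local signature below is the
-- outer `a`; without it the local `a` would be quantified afresh and the
-- use of `eq` at the outer element type would be rejected.
nubWith :: forall a. (a -> a -> Bool) -> [a] -> [a]
nubWith eq = go
  where
    go :: [a] -> [a]
    go []     = []
    go (x:xs) = x : go (filter (not . eq x) xs)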
In general, I see increasing demand for change scoping in haskell. For example see https://ghc.haskell.org/trac/ghc/wiki/Proposal/OpenImportExtension? https://ghc.haskell.org/trac/ghc/wiki/Records/NestedModules https://ghc.haskell.org/trac/ghc/ticket/10478 http://blog.haskell-exists.com/yuras/posts/namespaces-modules-qualified-imports-and-a-constant-pain.html Probably all of that should be addressed in a complex. > > The notion is that type and instance declarations would become legal in `let` or `where` bonds; they would effectively declare new types and instances in that scope for each use of that term; and such local types and instances could never leave the scope they were defined in, by enforcement, so consistency of instances would be preserved. > > I know some related work which proposes local instances [0] but none i know proposes also local types. > > I'm seeking some feedback here before i formalize further and potentially start making implements. Thoughts? Questions? Better ideas? Critical flaws? > > [0] http://okmij.org/ftp/Haskell/TypeClass.html#local > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe From masahiro.sakai at gmail.com Mon Jan 25 14:53:59 2016 From: masahiro.sakai at gmail.com (Masahiro Sakai) Date: Mon, 25 Jan 2016 23:53:59 +0900 Subject: [Haskell-cafe] ANN: toysolver 0.4.0 released Message-ID: I'm announcing the release of the toysolver package, version 0.4.0. The toysolver provides solver implementations of various problems including SAT, SMT, Max-SAT, PBS (Pseudo Boolean Satisfaction), PBO (Pseudo Boolean Optimization), MILP (Mixed Integer Linear Programming) and non-linear real arithmetic. The highlight of this release is the introduction of SMT (Satisfiablity Modulo Theories) solver 'toysmt'. At the moment, toysmt is very experimental and only supports the theory of uninterpreted functions and the theory of linear real arithmetic. http://hackage.haskell.org/package/toysolver https://github.com/msakai/toysolver/releases/tag/v0.4.0 Thanks, -- Masahiro Sakai From kc1956 at gmail.com Mon Jan 25 14:56:48 2016 From: kc1956 at gmail.com (KC) Date: Mon, 25 Jan 2016 06:56:48 -0800 Subject: [Haskell-cafe] ANN: toysolver 0.4.0 released In-Reply-To: References: Message-ID: Does MILP call another package? -- -- Sent from an expensive device which will be obsolete in a few months! :D Casey On Jan 25, 2016 6:54 AM, "Masahiro Sakai" wrote: > I'm announcing the release of the toysolver package, > version 0.4.0. > > The toysolver provides solver implementations of various > problems including SAT, SMT, Max-SAT, PBS (Pseudo Boolean > Satisfaction), PBO (Pseudo Boolean Optimization), MILP > (Mixed Integer Linear Programming) and non-linear real > arithmetic. > > The highlight of this release is the introduction of > SMT (Satisfiablity Modulo Theories) solver 'toysmt'. > > At the moment, toysmt is very experimental and only > supports the theory of uninterpreted functions and > the theory of linear real arithmetic. > > http://hackage.haskell.org/package/toysolver > https://github.com/msakai/toysolver/releases/tag/v0.4.0 > > Thanks, > > -- Masahiro Sakai > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From masahiro.sakai at gmail.com Mon Jan 25 15:51:24 2016 From: masahiro.sakai at gmail.com (Masahiro Sakai) Date: Tue, 26 Jan 2016 00:51:24 +0900 Subject: [Haskell-cafe] ANN: toysolver 0.4.0 released In-Reply-To: References: Message-ID: Hi, 2016-01-25 23:56 GMT+09:00 KC : > Does MILP call another package? No, toysolver has its own branch-and-bound solver written in Haskell, but the implementation is naive and the performance is not so good. Masahiro From johannes.waldmann at htwk-leipzig.de Mon Jan 25 17:26:29 2016 From: johannes.waldmann at htwk-leipzig.de (Johannes Waldmann) Date: Mon, 25 Jan 2016 18:26:29 +0100 Subject: [Haskell-cafe] Local types Message-ID: <56A65AC5.5060206@htwk-leipzig.de> > I often wish to be able to define local types and instances ... https://mail.haskell.org/pipermail/haskell-cafe/2014-October/116291.html From tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk Mon Jan 25 17:29:37 2016 From: tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk (Tom Ellis) Date: Mon, 25 Jan 2016 17:29:37 +0000 Subject: [Haskell-cafe] Local types In-Reply-To: <56A65AC5.5060206@htwk-leipzig.de> References: <56A65AC5.5060206@htwk-leipzig.de> Message-ID: <20160125172937.GJ30661@weber> On Mon, Jan 25, 2016 at 06:26:29PM +0100, Johannes Waldmann wrote: > > I often wish to be able to define local types and instances ... > > https://mail.haskell.org/pipermail/haskell-cafe/2014-October/116291.html It seems to me that this is a different issue. M Farkas-Dyck wants to be able to define local types *and instances for them*. Oleg is talking about local instances for global types. Tom From imantc at gmail.com Mon Jan 25 18:25:43 2016 From: imantc at gmail.com (Imants Cekusins) Date: Mon, 25 Jan 2016 19:25:43 +0100 Subject: [Haskell-cafe] Local types In-Reply-To: <20160125172937.GJ30661@weber> References: <56A65AC5.5060206@htwk-leipzig.de> <20160125172937.GJ30661@weber> Message-ID: is this possible to implement hiding (suppress export) of all instances defined in current module? something like: module A ( (-) ) where like this, or with a Language pragma (top of the module or just above the instance)? From mgsloan at gmail.com Mon Jan 25 20:14:01 2016 From: mgsloan at gmail.com (Michael Sloan) Date: Mon, 25 Jan 2016 12:14:01 -0800 Subject: [Haskell-cafe] [Template Haskell] In-Reply-To: <87oacaapsm.fsf@t450s.i-did-not-set--mail-host-address--so-tickle-me> References: <87oacaapsm.fsf@t450s.i-did-not-set--mail-host-address--so-tickle-me> Message-ID: Hi! The issue is that "extract k = ..." is a binding of k which will be present in the generated code (and so will be available at runtime). The anti-quote $(tsel k n) cannot depend on k, because it gets run at compiletime. Seems to me like that error message could use some improvement. Why not something more like "Stage error: `k' is bound in generated code but used in compiletime code"? AFAIK there is no such thing as stage 3 or stage 0, so the numbering seems a bit arbitrary. -Michael On Mon, Jan 25, 2016 at 2:58 AM, Dominik Bollmann wrote: > > Hi all, > > I'm just getting my feet wet with template haskell, and I tried to write > a tmap function which maps a function over the ith component of an > n-tuple (which uses a slightly different approach than the given > version on the TH wiki): > > -- | Selects the ith component of an n-tuple > tsel :: Int -> Int -> ExpQ -- n-tuple a -> a > tsel i n = [| \t -> $(caseE [| t |] [alt]) |] > where alt = match (tupP pats) body [] > pats = map varP xs > xs = [ mkName ("x" ++ show k) | k <- [1..n] ] > body = normalB . 
varE $ xs !! (i-1) > > -- | Maps a function over the ith component of an n-tuple > tmap :: Int -> Int -> ExpQ -- :: (a -> b) -> n-tuple -> n-tuple > tmap i n = do > f <- newName "f" > t <- newName "t" > lamE [varP f, varP t] $ [| > let prefix = map extract [1..(i-1)] > new = $f ($(tsel i n) $t) > suffix = map extract [(i+1)..n] > extract k = $(tsel k n) t > in tupE $ prefix ++ [new] ++ suffix |] > > However, this code results in the following error: > > Sandbox.hs:26:29: Stage error: ?k? is bound at stage 2 but used at stage 1 > ? > In the splice: $(tsel k n) > In the Template Haskell quotation > [| let > prefix = map extract [1 .. (i - 1)] > new = $f ($(tsel i n) ($t)) > suffix = map extract [(i + 1) .. n] > extract k = $(tsel k n) $t > in tupE $ prefix ++ [new] ++ suffix |] > Compilation failed. > > Could anyone explain to me what stage 2 and stage 1 refer to, and > further, what the logical flaw in the above snippet is? What exactly is > wrong with line `extract k = $(tsel k n) $t' ? > > Thanks! > > Dominik. > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > -------------- next part -------------- An HTML attachment was scrubbed... URL: From m.farkasdyck at gmail.com Tue Jan 26 06:03:12 2016 From: m.farkasdyck at gmail.com (M Farkas-Dyck) Date: Mon, 25 Jan 2016 22:03:12 -0800 Subject: [Haskell-cafe] Local types In-Reply-To: <20160125081107.GA22709@mintha.lan> References: <20160125081107.GA22709@mintha.lan> Message-ID: On 25/01/2016, Imants Cekusins wrote: > could this not be better addressed by allowing explicit export / > hiding of instances in a module? That wouldn't enable such definitions as mine earlier, where the instance is defined in terms of other local terms. > does not type declaration inside function come at a performance cost? Newtype declaration doesn't as they are elided at build time, and (at least with GHC) the program would merely pass a class dictionary with the locally-defined methods. On 25/01/2016, Imants Cekusins wrote: > Currently module does not give control over instance export. Should > this not be addressed first? To my knowledge this would be a breach of consistency of instances: one could so define terms with instances in conflict [0]. [0] http://stackoverflow.com/questions/8728596/explicitly-import-instances On 25/01/2016, Yuras Shumovich wrote: > I'm not sure how your example works w.r.t. type variable scopes. Are > you assuming ScopedTypeVariables? It will probably require explicit > quantification to work: > > ordNubBy :: forall a. (a -> a -> Ordering) -> [a] -> [a] Yes, indeed, thanks for catching that ? > In general, I see increasing demand for change scoping in haskell. For > example see > > ... > > Probably all of that should be addressed in a complex. I think my proposal is orthogonal to all those as none of them seems to allow local instances defined in terms of other local terms. From imantc at gmail.com Tue Jan 26 09:11:37 2016 From: imantc at gmail.com (Imants Cekusins) Date: Tue, 26 Jan 2016 10:11:37 +0100 Subject: [Haskell-cafe] Local types In-Reply-To: References: <20160125081107.GA22709@mintha.lan> Message-ID: > That wouldn't enable such definitions as mine earlier, where the instance is defined in terms of other local terms. agree - that wouldn't. However tweaks over instance export would allow you to move a complex function to its own module where module-local instances could be defined. 
> this would be a breach of consistency of instances: one could so define terms with instances in conflict [0]. could GHC (with appropriate changes) figure conflict out and throw error? e.g. GHC could enforce that instances can only be hidden if they are not referred to in any of exported types or functions in the instance defining module? .. it could cause error to import the same instance-defining module (including imports by imports) with and without instances hidden. .. mark instance (with a pragma?) as module-local .. only most basic instances (which compile without pragmas) could be hidden .. hide / export all or no instances (for a given class?) within a module .. another safety net? personally, I am happy with the current GHC as it is. Just wondering: is function - the appropriate place to define types and instances. From imantc at gmail.com Tue Jan 26 09:52:38 2016 From: imantc at gmail.com (Imants Cekusins) Date: Tue, 26 Jan 2016 10:52:38 +0100 Subject: [Haskell-cafe] Local types In-Reply-To: References: <20160125081107.GA22709@mintha.lan> Message-ID: .. another safety net: module-local instance could be defined only for module-local (non-exported) types. Would this not offer what you are looking for, without breaking instance consistency? From imantc at gmail.com Tue Jan 26 13:46:48 2016 From: imantc at gmail.com (Imants Cekusins) Date: Tue, 26 Jan 2016 14:46:48 +0100 Subject: [Haskell-cafe] .cabal: API-compatible library versions Message-ID: Stack may already do something like this, I don't know. Anyway, here is an idea. Currently .cabal lists version or version range. What if package info included lists of API-compatible versions of this package? Let's say one library requires v10 of packageA. Another library requires v15 of the same packageA. This is a hypothetical scenario :-P However v10, v15 are API-compatible as far as package maintainer knows. Package info specifies v10 and v15 as API-compatible too. What if it were possible to issue: cabal install packageA .. and then if v10, v15 are compatible, v15 is returned without warnings. If v10 and v15 are not compatible, cabal would warn. Basically instead of library clients needing to test multiple library versions or requiring 1 exact version, deps could be specified as 1 version (with which development took place) and cabal (with hints from package maintainers) would figure this out. It might even be possible to fine-tune it to check modules & symbols actually used by library consumer app. ? From danburton.email at gmail.com Tue Jan 26 16:36:43 2016 From: danburton.email at gmail.com (Dan Burton) Date: Tue, 26 Jan 2016 08:36:43 -0800 Subject: [Haskell-cafe] .cabal: API-compatible library versions In-Reply-To: References: Message-ID: This is exactly what the Package Versioning Policy is for. By using semantic versioning, your .cabal file can specify that you use version 1.0.* of a given package. Any API-compatible versions of that package are supposed to be in the 1.0 range. Versions 1.1 and above are considered API incompatible. On Tuesday, January 26, 2016, Imants Cekusins wrote: > Stack may already do something like this, I don't know. Anyway, here is an > idea. > > > Currently .cabal lists version or version range. > > What if package info included lists of API-compatible versions of this > package? > > > Let's say one library requires v10 of packageA. Another library > requires v15 of the same packageA. This is a hypothetical scenario :-P > > However v10, v15 are API-compatible as far as package maintainer > knows. 
Package info specifies v10 and v15 as API-compatible too. > > What if it were possible to issue: > cabal install packageA > .. and then if v10, v15 are compatible, v15 is returned without warnings. > If v10 and v15 are not compatible, cabal would warn. > > > Basically instead of library clients needing to test multiple library > versions or requiring 1 exact version, deps could be specified as 1 > version (with which development took place) and cabal (with hints from > package maintainers) would figure this out. > > It might even be possible to fine-tune it to check modules & symbols > actually used by library consumer app. > > > ? > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > -- -- Dan Burton -------------- next part -------------- An HTML attachment was scrubbed... URL: From imantc at gmail.com Tue Jan 26 17:06:41 2016 From: imantc at gmail.com (Imants Cekusins) Date: Tue, 26 Jan 2016 18:06:41 +0100 Subject: [Haskell-cafe] .cabal: API-compatible library versions In-Reply-To: References: Message-ID: > semantic versioning ... Any API-compatible versions of that package are supposed to be in the 1.0 range. Versions 1.1 and above are considered API incompatible. Thank you Dan. so changes in 1/10th indicate API changes? however if package info included symbols / modules with breaking changes, a wider range of versions might become compatible. e.g. half package modules might behave just as before yet a few API methods in the other half might have changed leading to a different version and (possibly unnecessary) version clash. by making cabal smarter and better informed, compatible version range would increase. it seems From imantc at gmail.com Tue Jan 26 17:10:59 2016 From: imantc at gmail.com (Imants Cekusins) Date: Tue, 26 Jan 2016 18:10:59 +0100 Subject: [Haskell-cafe] .cabal: API-compatible library versions In-Reply-To: References: Message-ID: .. also (with just semantic versioning as a guide) what about new API features which do not break previous API? new version -> clash? From adam at bergmark.nl Tue Jan 26 17:32:07 2016 From: adam at bergmark.nl (Adam Bergmark) Date: Tue, 26 Jan 2016 18:32:07 +0100 Subject: [Haskell-cafe] .cabal: API-compatible library versions In-Reply-To: References: Message-ID: > Thank you Dan. so changes in 1/10th indicate API changes? Yes the first two components signify the major version. The mysterious Backpack might be able to help with some of these things but in general it's not enough just to look at the visible API. I might require V10 of a library because it fixes a bug in a subtle way (such as changing the encoding of certain characters in a URI), or changed behavior that isn't visible in haddocks (a ToJSON instance that now produces property names in lowercase). It can be argued that both of these should be changed in a major update, but the PVP does not require this and neither can it specify breaking changes to an arbitrary level of granularity. As a maintainer I might still need to both whitelist versions I know are good and blacklist those that are not to make sure users don't get a package combination that doesn't work as expected. - Adam On Tue, Jan 26, 2016 at 6:10 PM, Imants Cekusins wrote: > .. also (with just semantic versioning as a guide) > what about new API features which do not break previous API? > > new version -> clash? 
> _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > -------------- next part -------------- An HTML attachment was scrubbed... URL: From imantc at gmail.com Tue Jan 26 17:51:24 2016 From: imantc at gmail.com (Imants Cekusins) Date: Tue, 26 Jan 2016 18:51:24 +0100 Subject: [Haskell-cafe] .cabal: API-compatible library versions In-Reply-To: References: Message-ID: > As a maintainer I might still need to both whitelist versions I know are good and blacklist those that are not to make sure users don't get a package combination that doesn't work as expected. of course, maintainer input would be essential. For simple utility libraries we could get away with file compare. However even in different versions there may be fully compatible code. If a package only uses still compatible code, why not save users a reinstall? semantic versioning as it is, is very restrictive (or misleading if versions are not upped for 'small changes'). From imantc at gmail.com Tue Jan 26 17:56:42 2016 From: imantc at gmail.com (Imants Cekusins) Date: Tue, 26 Jan 2016 18:56:42 +0100 Subject: [Haskell-cafe] .cabal: API-compatible library versions In-Reply-To: References: Message-ID: .. as to the additional info, would it not be enough to list modules and symbols which produce different results compared to previous version? could cabal be tweaked to compare a list of (installed, required) versions (with sets of blacklisted symbols) with symbols actually used in a library, to suggest ok / reinstall? From julian at getcontented.com.au Wed Jan 27 05:29:55 2016 From: julian at getcontented.com.au (Julian Leviston) Date: Wed, 27 Jan 2016 16:29:55 +1100 Subject: [Haskell-cafe] [Ann] A new, fun, easy book/tute on Haskell Message-ID: <329AEC6A-42D5-4836-832A-451F9FC627FD@getcontented.com.au> [Ann] A new, fun, easy book/tute on Haskell http://www.happylearnhaskelltutorial.com Just released this two days ago, roughly. We've released 7 out of 20 sections on the site so far, and the book is up on leanpub ($8 or $4 - $20 set your own price), at about 95% done. We don't want to put it to 100% until we've had at least 20 or more technical people read it and give us feedback, and until we've done a thorough revision for corrections. We felt like there's a need for an updated education resource that's both fun and for super beginners who have never programmed before. Essentially a slightly less drawing-focussed and slightly more technically advanced version of the 1980's Usborne Computer Guides http://www.usborne.com/catalogue/feature-page/computer-and-coding-books.aspx something that appeals to people who are more visually focussed as we are. Our plan is to keep expanding it into more volumes if there's enough interest and paid support. We're also very interested in hearing critical criticism and or praise! :) Here is a rough breakdown of our take on approaching education: 1. Tackling reading first, then slowly introducing writing when enough examples have been seen to increase confidence, because reading & writing are separate skills 2. Many small, fun examples for each thing to keep interest high: fun examples helps with motivation 3. Gradual, partial introduction of topics, in context at first: graded, less to take in at once, using the writing phase to soldify understanding, will possibly add a revision phase later 4. 
Not so much theory introduced before the practical has been introduced (examples-first), which gives a concrete context for the theory 5. Pictures. Some of these are visual aides, which are useful as explanations: a picture tells 1000 words, so they say 6. No assumption of previous programming experience. Almost every other guide available assume some programming 7. Smaller sections because completing small things gives a real sense of achievement, which increases motivation 8. Will be tackling how to deconstruct problems using both top down and bottom up approaches: most guides don't tackle this in a simple or basics-first way 9. It's free to read and online, so able to be discussed in public - some guides are, others aren't 10. Not so math focussed including "mathy jargon". We keep away from terms like catamorphism, lambda calculus, monad, until we need them, and these are/will be only introduced when appropriate amounts of concrete practical knowledge are present through repeated exposure to examples, so that it's obvious what is meant Thanks all! Julian - (of GetContented) From vigalchin at gmail.com Wed Jan 27 17:00:39 2016 From: vigalchin at gmail.com (Vasili I. Galchin) Date: Wed, 27 Jan 2016 11:00:39 -0600 Subject: [Haskell-cafe] Hackage hoq? Message-ID: Hello, I successfully built hoq. When I run th hoq executable, I get ">" prompts but am not sure what to do. I read some of the code and I tried putting .hoq on the command line, That didn't work. Has anyone tried hoq?? Vasily From hyangfji at gmail.com Thu Jan 28 03:05:56 2016 From: hyangfji at gmail.com (Hong Yang) Date: Wed, 27 Jan 2016 21:05:56 -0600 Subject: [Haskell-cafe] How does -N allot cores? Message-ID: Hi Cafe, On an 1P system, there are 8 cores. Does -N8 send one thread to each core no matter how loaded each core is? Assuming 6 cores are busy, will -N8 send threads only to the two idle cores? If -N8 sends threads to eight cores no matter what, is there any function that detects number of idle (or usage less than certain percentage) cores? Suppose a 4P system has 32 cores (4 processors with each processor having 8 cores). Assuming the whole system is idle, how will -N32 work? Thanks, Hong -------------- next part -------------- An HTML attachment was scrubbed... URL: From jwlato at gmail.com Thu Jan 28 05:26:17 2016 From: jwlato at gmail.com (John Lato) Date: Thu, 28 Jan 2016 05:26:17 +0000 Subject: [Haskell-cafe] When are MVars better than STM? In-Reply-To: References: <1453648172.2267.5.camel@gmail.com> Message-ID: This has nothing to do with your questions, but are you sure that mvarInc is sufficiently strict? On 15:17, Sun, Jan 24, 2016 Christopher Allen wrote: > Well, there are cases where even with single-threading you want a memory > barrier to prevent the CPU reordering instructions, but the shift to a > single-threaded runtime should elide _some_ locks expressly designed to > cope with multithreading. > > On Sun, Jan 24, 2016 at 5:04 PM, Thomas Koster wrote: > >> Yuras, >> >> On Sun, 2016-01-24 at 17:46 +1100, Thomas Koster wrote: >> > 2. When given two capabilities (+RTS -N2), MVars are suddenly an >> > order >> > of magnitude slower than with just one capability. Why? >> >> On 25 January 2016 at 02:09, Yuras Shumovich >> wrote: >> > One possible explanation is closure locking which is not performed when >> > there is only one capability. In my quick measurements it gives 40% >> > speedup: https://ghc.haskell.org/trac/ghc/ticket/693#comment:9 >> >> This makes sense. 
After all, why bother with locks and barriers when >> the process is single-threaded anyway? >> >> Thanks for your response. >> >> -- >> Thomas Koster >> _______________________________________________ >> Haskell-Cafe mailing list >> Haskell-Cafe at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> > > > > -- > Chris Allen > Currently working on http://haskellbook.com > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tkoster at gmail.com Thu Jan 28 05:54:58 2016 From: tkoster at gmail.com (Thomas Koster) Date: Thu, 28 Jan 2016 16:54:58 +1100 Subject: [Haskell-cafe] When are MVars better than STM? In-Reply-To: References: <1453648172.2267.5.camel@gmail.com> Message-ID: On 24 January 2016 at 17:46, Thomas Koster wrote: > I have found that STM is faster than MVars in all my benchmarks, > without exception. This seems to go against accepted wisdom. On 24 January 2016 at 18:13, Thomas Koster wrote: > module Main (main) where > > import Control.Concurrent.Async > import Control.Concurrent.MVar > import Control.Concurrent.STM > import Control.Monad > import Criterion.Main > > main = > defaultMain > [ > bgroup "thrash" > [ > bench "MVar" $ whnfIO (thrashTest mvarNew mvarInc mvarGet), > bench "TVar" $ whnfIO (thrashTest tvarNew tvarInc tvarGet) > ] > ] > > thrashTest :: IO a > -> (a -> IO ()) > -> (a -> IO b) > -> IO b > thrashTest new inc get = do > var <- new > threads <- replicateM 4 (async (replicateM_ 100000 $ inc var)) > forM_ threads wait > get var > > mvarNew :: IO (MVar Int) > mvarNew = newMVar 0 > > mvarInc :: MVar Int -> IO () > mvarInc var = > modifyMVar_ var $ \ i -> > return $! succ i > > mvarGet :: MVar Int -> IO Int > mvarGet = readMVar > > tvarNew :: IO (TVar Int) > tvarNew = newTVarIO 0 > > tvarInc :: TVar Int -> IO () > tvarInc var = > atomically $ do > i <- readTVar var > writeTVar var $! succ i > > tvarGet :: TVar Int -> IO Int > tvarGet = readTVarIO On 28 January 2016 at 16:26, John Lato wrote: > This has nothing to do with your questions, but are you sure that mvarInc is > sufficiently strict? I think so. If you think it isn't, I would love to know why, since strictness and correct use of seq are still a bit of a black art for me. The strictness characteristics of the MVar version and the STM version as written ought to be identical. If not, I would love to know why as well. -- Thomas Koster From t_gass at gmx.de Thu Jan 28 16:37:33 2016 From: t_gass at gmx.de (Tilmann) Date: Thu, 28 Jan 2016 17:37:33 +0100 Subject: [Haskell-cafe] FICS client in Haskell Message-ID: <56AA43CD.9020305@gmx.de> Hi, I've been working on a FICS (Free Internet Chess Server) Client for a bit over a year now and it is finally in a presentable state. It would be great if you would like to have a look at the source code and let me know what you think! When I started with this project I was just starting to learn Haskell and I would like to know if the code is accessible/readable/idiomatic, if the overall organization seems reasonable as well or if you might have any other suggestion about what I could improve. (ie: I haven't decided on what logging framework to use.) The code is on github: https://github.com/tgass/macbeth I thank you all very much in advance! 
Tilmann https://github.com/tgass/macbeth/blob/master/src/Macbeth/Fics/FicsConnection.hs Opens a telnet connection to freechess.org. Using conduit and attoparsec messages from the server are parsed to FicsMessages and put into a Chan. https://github.com/tgass/macbeth/blob/master/src/Macbeth/Fics/FicsMessage.hs The domain model The UI is using wx widgets (wxHaskell). Each wx-frame gets a copy of Chan FicsMessage and updates the UI when new Messages are available, ie here: https://github.com/tgass/macbeth/blob/master/src/Macbeth/Wx/ToolBox.hs https://github.com/tgass/macbeth/blob/master/src/Macbeth/Wx/Game.hs From david.sorokin at gmail.com Thu Jan 28 18:01:35 2016 From: david.sorokin at gmail.com (David Sorokin) Date: Thu, 28 Jan 2016 21:01:35 +0300 Subject: [Haskell-cafe] Hide the data constructor for type family instance Message-ID: <33BF52DD-EDD2-4EF0-80D4-788DAFC5419C@gmail.com> Hi Cafe, When instantiating a type class with type family, I see that haddock shows the data constructor for the type family as visible. This is what I would like to avoid. I would like to hide the details of the data constructor in the documentation. An example is stated below: -- | An implementation of the 'FCFS' queue strategy. instance QueueStrategy BrIO FCFS where -- | A queue used by the 'FCFS' strategy. newtype StrategyQueue BrIO FCFS a = FCFSQueue (LL.DoubleLinkedList BrIO a) newStrategyQueue s = fmap FCFSQueue LL.newList strategyQueueNull (FCFSQueue q) = LL.listNull q Here the FCFSQueue data constructor is visible together with all its contents in the documentation! I would like it would be hidden completely. At least, I would like to hide the contents. Is it possible to do? Thanks, David -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomasmiedema at gmail.com Thu Jan 28 19:43:28 2016 From: thomasmiedema at gmail.com (Thomas Miedema) Date: Thu, 28 Jan 2016 20:43:28 +0100 Subject: [Haskell-cafe] How does -N allot cores? In-Reply-To: References: Message-ID: You might be interested in this ticket: https://ghc.haskell.org/trac/ghc/ticket/10229 ("setThreadAffinity assumes a certain CPU virtual core layout") On Thu, Jan 28, 2016 at 4:05 AM, Hong Yang wrote: > Hi Cafe, > > On an 1P system, there are 8 cores. Does -N8 send one thread to each core > no matter how loaded each core is? Assuming 6 cores are busy, will -N8 send > threads only to the two idle cores? If -N8 sends threads to eight cores no > matter what, is there any function that detects number of idle (or usage > less than certain percentage) cores? > > Suppose a 4P system has 32 cores (4 processors with each processor having > 8 cores). Assuming the whole system is idle, how will -N32 work? > > Thanks, > > Hong > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark.fine at gmail.com Thu Jan 28 20:30:56 2016 From: mark.fine at gmail.com (Mark Fine) Date: Thu, 28 Jan 2016 12:30:56 -0800 Subject: [Haskell-cafe] A Sliding TChan? Message-ID: We're currently using a TMChan to broadcast from a single producer thread to many consumer threads. This works well! However, we're seeing issues with a fast producer and/or a slow consumer, with the channel growing unbounded. 
Fortunately, our producer-consumer communication is time-sensitive and tolerant of loss: we're ok with the producer always writing at the expense of dropping communication to a slow consumer. A TMBChan provides a bounded channel (but no means to dupe/broadcast) where a writer will block once the channel fills up. In our use case, we'd like to continue writing to the channel but dropping off the end of the channel. Clojure's core-async module has some related concepts, in particular the notion of a sliding buffer that drops the oldest elements once full. Has anyone encountered something similar in working with channels and/or have any solutions? Thanks! Mark -------------- next part -------------- An HTML attachment was scrubbed... URL: From eric at oco.nnor.org Thu Jan 28 21:57:23 2016 From: eric at oco.nnor.org (Eric O'Connor) Date: Thu, 28 Jan 2016 14:57:23 -0700 Subject: [Haskell-cafe] A Sliding TChan? In-Reply-To: References: Message-ID: <56AA8EC3.9040701@oco.nnor.org> Perhaps a circular buffer interface to TArray would be nice: data CircularTChan a = CircularTChan { tchanHead :: TVar Int , tchanLength :: TVar Int , tchanArray :: TArray Int (Maybe a) } On 2016-01-28 13:30, Mark Fine wrote: > We're currently using a TMChan to broadcast from a single producer > thread to many consumer threads. This works well! However, we're > seeing issues with a fast producer and/or a slow consumer, with the > channel growing unbounded. Fortunately, our producer-consumer > communication is time-sensitive and tolerant of loss: we're ok with > the producer always writing at the expense of dropping communication > to a slow consumer. > > A TMBChan provides a bounded channel (but no means to > dupe/broadcast) where a writer will block once the channel fills up. > In our use case, we'd like to continue writing to the channel but > dropping off the end of the channel. Clojure's core-async module has > some related concepts, in particular the notion of a sliding buffer > > > that drops the oldest elements once full. Has anyone encountered > something similar in working with channels and/or have any > solutions? Thanks! > > Mark > > > _______________________________________________ Haskell-Cafe mailing > list Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > From noonslists at gmail.com Thu Jan 28 22:17:46 2016 From: noonslists at gmail.com (Noon Silk) Date: Fri, 29 Jan 2016 09:17:46 +1100 Subject: [Haskell-cafe] A Sliding TChan? In-Reply-To: References: Message-ID: I think you should be able to do this with the `pipes` and `pipes-concurrency` libraries, in particular have a look at: http://haddock.stackage.org/lts-5.0/pipes-concurrency-2.0.5/Pipes-Concurrent.html#v:newest -- Noon On Fri, Jan 29, 2016 at 7:30 AM, Mark Fine wrote: > We're currently using a TMChan to broadcast from a single producer thread > to many consumer threads. This works well! However, we're seeing issues > with a fast producer and/or a slow consumer, with the channel growing > unbounded. Fortunately, our producer-consumer communication is > time-sensitive and tolerant of loss: we're ok with the producer always > writing at the expense of dropping communication to a slow consumer. > > A TMBChan provides a bounded channel (but no means to dupe/broadcast) > where a writer will block once the channel fills up. In our use case, we'd > like to continue writing to the channel but dropping off the end of the > channel. 
Clojure's core-async module has some related concepts, in > particular the notion of a sliding buffer > > that drops the oldest elements once full. Has anyone encountered something > similar in working with channels and/or have any solutions? Thanks! > > Mark > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > -- Noon Silk, ? https://silky.github.io/ "Every morning when I wake up, I experience an exquisite joy ? the joy of being this signature." -------------- next part -------------- An HTML attachment was scrubbed... URL: From imantc at gmail.com Thu Jan 28 22:22:33 2016 From: imantc at gmail.com (Imants Cekusins) Date: Thu, 28 Jan 2016 23:22:33 +0100 Subject: [Haskell-cafe] Hide the data constructor for type family instance In-Reply-To: <33BF52DD-EDD2-4EF0-80D4-788DAFC5419C@gmail.com> References: <33BF52DD-EDD2-4EF0-80D4-788DAFC5419C@gmail.com> Message-ID: Hello David, > FCFSQueue data constructor is visible together with all its contents > I would like it would be hidden completely Did you try to export all public symbols but not the constructor? From hjgtuyl at chello.nl Thu Jan 28 23:45:19 2016 From: hjgtuyl at chello.nl (Henk-Jan van Tuyl) Date: Fri, 29 Jan 2016 00:45:19 +0100 Subject: [Haskell-cafe] setup-Simple-Cabal-1.22.5.0-x86_64-windows-ghc-7.10.3.exe: : does not exist Message-ID: L.S., Why do I get the message: setup-Simple-Cabal-1.22.5.0-x86_64-windows-ghc-7.10.3.exe: : does not exist after command "cabal install", and what can I do about it ? Regards, Henk-Jan van Tuyl -- Folding at home What if you could share your unused computer power to help find a cure? In just 5 minutes you can join the world's biggest networked computer and get us closer sooner. Watch the video. http://folding.stanford.edu/ http://Van.Tuyl.eu/ http://members.chello.nl/hjgtuyl/tourdemonad.html Haskell programming -- From ezyang at mit.edu Fri Jan 29 00:00:18 2016 From: ezyang at mit.edu (Edward Z. Yang) Date: Thu, 28 Jan 2016 16:00:18 -0800 Subject: [Haskell-cafe] setup-Simple-Cabal-1.22.5.0-x86_64-windows-ghc-7.10.3.exe: : does not exist In-Reply-To: References: Message-ID: <1454025527-sup-954@sabre> Hello Henk, I do not know if this would work, but it is worth trying to upgrade Cabal and cabal-install (1.22.5.0 is not the latest version) and seeing if this resolves your problem: cabal update cabal install Cabal cabal install cabal-install Edward Excerpts from Henk-Jan van Tuyl's message of 2016-01-28 15:45:19 -0800: > > L.S., > > Why do I get the message: > setup-Simple-Cabal-1.22.5.0-x86_64-windows-ghc-7.10.3.exe: : does not > exist > after command "cabal install", and what can I do about it ? > > Regards, > Henk-Jan van Tuyl > > > -- > Folding at home > What if you could share your unused computer power to help find a cure? In > just 5 minutes you can join the world's biggest networked computer and get > us closer sooner. Watch the video. 
> http://folding.stanford.edu/ > > > http://Van.Tuyl.eu/ > http://members.chello.nl/hjgtuyl/tourdemonad.html > Haskell programming From david.sorokin at gmail.com Fri Jan 29 03:53:51 2016 From: david.sorokin at gmail.com (David Sorokin) Date: Fri, 29 Jan 2016 06:53:51 +0300 Subject: [Haskell-cafe] Hide the data constructor for type family instance In-Reply-To: References: <33BF52DD-EDD2-4EF0-80D4-788DAFC5419C@gmail.com> Message-ID: Hi Imants, I fogot to add that the mentioned data constructor becomes visible in the haddoc section Instances for the exported BarIO monad. I would like to see that there is such an instance but without details about data constructor. Probably, the issue is more related to that how the BarIO type is shown in the documenation. On level of the module, where the QueueStrategy instance is defined, I can regulate the export list somehow. Thanks David 29.01.2016 1:22 ???????????? "Imants Cekusins" ???????: Hello David, > FCFSQueue data constructor is visible together with all its contents > I would like it would be hidden completely Did you try to export all public symbols but not the constructor? _______________________________________________ Haskell-Cafe mailing list Haskell-Cafe at haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe -------------- next part -------------- An HTML attachment was scrubbed... URL: From hjgtuyl at chello.nl Fri Jan 29 15:46:45 2016 From: hjgtuyl at chello.nl (Henk-Jan van Tuyl) Date: Fri, 29 Jan 2016 16:46:45 +0100 Subject: [Haskell-cafe] setup-Simple-Cabal-1.22.5.0-x86_64-windows-ghc-7.10.3.exe: : does not exist In-Reply-To: <1454025527-sup-954@sabre> References: <1454025527-sup-954@sabre> Message-ID: Thanks for the suggestion, but it didn't help. Regards, Henk-Jan van Tuyl On Fri, 29 Jan 2016 01:00:18 +0100, Edward Z. Yang wrote: > Hello Henk, > > I do not know if this would work, but it is worth trying to upgrade > Cabal and cabal-install (1.22.5.0 is not the latest version) and seeing > if this resolves your problem: > > cabal update > cabal install Cabal > cabal install cabal-install > > Edward > > Excerpts from Henk-Jan van Tuyl's message of 2016-01-28 15:45:19 -0800: >> >> L.S., >> >> Why do I get the message: >> setup-Simple-Cabal-1.22.5.0-x86_64-windows-ghc-7.10.3.exe: : does not >> exist >> after command "cabal install", and what can I do about it ? >> >> Regards, >> Henk-Jan van Tuyl -- Folding at home What if you could share your unused computer power to help find a cure? In just 5 minutes you can join the world's biggest networked computer and get us closer sooner. Watch the video. http://folding.stanford.edu/ http://Van.Tuyl.eu/ http://members.chello.nl/hjgtuyl/tourdemonad.html Haskell programming -- From t_gass at gmx.de Fri Jan 29 16:17:34 2016 From: t_gass at gmx.de (Tilmann) Date: Fri, 29 Jan 2016 17:17:34 +0100 Subject: [Haskell-cafe] setup-Simple-Cabal-1.22.5.0-x86_64-windows-ghc-7.10.3.exe: : does not exist In-Reply-To: References: Message-ID: <56AB909E.7040407@gmx.de> Hi Henk, I got that error too, earlier today when I tried to build macbeth with stack (upgrading to GHC 7.10). There is a cabal bug open that might fit. The bug is related to projects containing both library and executable. 
https://github.com/haskell/cabal/issues/2780 https://github.com/commercialhaskell/stack/issues/976 Am 29.01.16 um 00:45 schrieb Henk-Jan van Tuyl: > > L.S., > > Why do I get the message: > setup-Simple-Cabal-1.22.5.0-x86_64-windows-ghc-7.10.3.exe: : does > not exist > after command "cabal install", and what can I do about it ? > > Regards, > Henk-Jan van Tuyl > > From t_gass at gmx.de Fri Jan 29 16:18:13 2016 From: t_gass at gmx.de (Tilmann) Date: Fri, 29 Jan 2016 17:18:13 +0100 Subject: [Haskell-cafe] FICS client in Haskell In-Reply-To: <56AACFEC.5090803@gmail.com> References: <56AA43CD.9020305@gmx.de> <56AACFEC.5090803@gmail.com> Message-ID: <56AB90C5.2000107@gmx.de> Hi Tony, thank you for having a look! I haven't given putting it on hackage much thought yet. But I would certainly do it if it's helping someone. Macbeth probably runs on other platforms as well. It's just that I developed it with OSX in mind (I was using babaschess on windows before) and provide an OSX binary only. Am 29.01.16 um 03:35 schrieb Tony Morris: > Hi Tilmann, > I'll give it a crack, though I haven't used FICS in a while. > > Couple of questions: > * have you put it on hackage? > * is it specific to Mac OSX? (if so, why?) > > > On 29/01/16 02:37, Tilmann wrote: >> Hi, >> I've been working on a FICS (Free Internet Chess Server) Client for a >> bit over a year now and it is finally in a presentable state. It would >> be great if you would like to have a look at the source code and let me >> know what you think! >> >> When I started with this project I was just starting to learn Haskell >> and I would like to know if the code is accessible/readable/idiomatic, >> if the overall organization seems reasonable as well or if you might >> have any other suggestion about what I could improve. (ie: I haven't >> decided on what logging framework to use.) >> >> The code is on github: https://github.com/tgass/macbeth >> >> I thank you all very much in advance! >> Tilmann >> >> >> >> https://github.com/tgass/macbeth/blob/master/src/Macbeth/Fics/FicsConnection.hs >> >> Opens a telnet connection to freechess.org. Using conduit and attoparsec >> messages from the server are parsed to FicsMessages and put into a Chan. >> >> https://github.com/tgass/macbeth/blob/master/src/Macbeth/Fics/FicsMessage.hs >> >> The domain model >> >> The UI is using wx widgets (wxHaskell). Each wx-frame gets a copy of >> Chan FicsMessage and updates the UI when new Messages are available, ie >> here: >> https://github.com/tgass/macbeth/blob/master/src/Macbeth/Wx/ToolBox.hs >> https://github.com/tgass/macbeth/blob/master/src/Macbeth/Wx/Game.hs >> >> _______________________________________________ >> Haskell-Cafe mailing list >> Haskell-Cafe at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe From ky3 at atamo.com Fri Jan 29 17:03:06 2016 From: ky3 at atamo.com (Kim-Ee Yeoh) Date: Sat, 30 Jan 2016 00:03:06 +0700 Subject: [Haskell-cafe] Haskell Weekly News Message-ID: *Top Picks:* - Oskar Wickstr?m rewrites the Oden-to-Go transpiler from Racket to Haskell . Oden is an FP language comprising an ML type system and LISP syntax. Oskar explains that he made the migration because of several advantages that Haskell offered over Racket: exhaustive pattern-match checking, type-guided refactoring, monad transformers, and faster execution times. Apropos, the convo over at lobste.rs links to this claim by Gabriel Gonzalez: "Haskell is an amazing language for writing your own compiler. 
If you are writing a compiler in another language you should genuinely consider switching." - Reviewing 2015 work month-by-month, Gracjan Polak tells the story of how he decided to lead the development of Haskell Mode , "a bunch of Emacs major and minor modes put together in a single package." Discussion over at /r/haskell . - Jared Tobin presents monadic versions of five recursion-schemas , namely: cata-, ana-, para-, apo-, and hylomorphisms. *Quotes of the Week:* - Tim Kellogg: I?ve known a few old programmers nearing retirement that have a long list of very impressive accomplishments. The older and more accomplished they get, the more they prefer redundancy over dependency. The oldest and most accomplished will write their own load balancers, TCP stacks, loggers, everything if need be. Are they on to something? - From HN: If you have the time, I'd advise you to learn Haskell, in order to stretch your mind and become an excellent OCaml developer, the way learning Latin makes you a better French or Italian writer. - HN markov chain parody site headline: $690 for an hour minimum wage for state management in haskell *Recorded Talk of the Week:* - On Dec 17 last year , Andrew Gibiansky demoed IHaskell, a Mathematica-like "capable graphical alternative" to the ghci REPL at the NorCal Hacker Dojo. Thanks go to Joe "begriffs" Nelson who recorded the talk and summarized it into bullet points . Joe's page was well-received at both Hacker News and Haskell Reddit . p.s. There will be no News next week. HWN will resume the week after. -- Kim-Ee -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.sorokin at gmail.com Fri Jan 29 17:58:00 2016 From: david.sorokin at gmail.com (David Sorokin) Date: Fri, 29 Jan 2016 20:58:00 +0300 Subject: [Haskell-cafe] [ANN] Aivika: Branching Discrete Event Simulation in Haskell Message-ID: <9F700A60-00B2-4CFF-9531-478E06233E00@gmail.com> Hi Cafe, I?m glad to announce the release of three my simulation libraries [1, 2, 3]. I would like to tell more about the third library especially. The aivika-brances package [3] is new. It allows creating branches to run nested simulations within simulation so that the source simulation remains intact. It allows us to forecast the behavior looking into the future of the model. There is a similar method in the financial modeling, when estimating the option instrument. We build a tree of future possible changes to estimate the present value. The present depends on the future. My library allows building the same tree of nested simulations. At the same time, this is a general purpose discrete event simulation library with such things as random streams of orders (transacts), limited resources, queues, discontinuous processes, the global event queue, event-based activities and so on. I think that we can create very sophisticated simulation models with elements of prediction and forecasting. Probably, the library can be useful for financial modeling. Initially, a few years ago I created the aivika simulation package [1] trying to repeat what other simulation libraries and software vendors provided in the field. Then an year or two ago I generalized my simulation library and created package aivika-transformers [2] with wide use of monad transformers and type families. I actually had plans to use that second package for the distributed and parallel discrete event simulation, but the package was so general that I could use it for what I would call a ?branching discrete event simulation?. 
Today I finished my third package aivika-branches, which is a very small additional package that introduces a new computation and a couple of new functions, where the main function is as follows: futureEvent :: Double -> Event BrIO a -> Event BrIO a It creates a new independent branch of the current simulation and then returns the result of the specified event-based computation in the desired time point, leaving the current computation intact. All pending events will be processed in the derived branch as if it were the current simulation. The very important thing is that the futureEvent function is relatively cheap. It was possible thanks to wide using the functional programming approach in my libraries. We can clone the simulation world as many times as we need for running nested simulations within simulation and it works. Frankly speaking, I don?t know of other general purpose simulation libraries that would offer the same functionality. I?m looking forward to hearing of your comments. Especially, I would be interested to participate in the projects related to simulation and modeling. I think that Haskell allows doing fantastic things here! Best regards, David Sorokin [1] http://hackage.haskell.org/package/aivika [2] http://hackage.haskell.org/package/aivika-transformers [3] http://hackage.haskell.org/package/aivika-branches From ezyang at mit.edu Fri Jan 29 23:25:07 2016 From: ezyang at mit.edu (Edward Z. Yang) Date: Fri, 29 Jan 2016 15:25:07 -0800 Subject: [Haskell-cafe] setup-Simple-Cabal-1.22.5.0-x86_64-windows-ghc-7.10.3.exe: : does not exist In-Reply-To: References: Message-ID: <1454109804-sup-9295@sabre> Hello Henk, Try passing -j1 to cabal. It sounds like there is some problem with setup executable caching, so I imagine that if you can disable it that should solve the problem. I believe running Cabal HEAD should also fix this situation. Edward Excerpts from Henk-Jan van Tuyl's message of 2016-01-28 15:45:19 -0800: > > L.S., > > Why do I get the message: > setup-Simple-Cabal-1.22.5.0-x86_64-windows-ghc-7.10.3.exe: : does not > exist > after command "cabal install", and what can I do about it ? > > Regards, > Henk-Jan van Tuyl > > > -- > Folding at home > What if you could share your unused computer power to help find a cure? In > just 5 minutes you can join the world's biggest networked computer and get > us closer sooner. Watch the video. > http://folding.stanford.edu/ > > > http://Van.Tuyl.eu/ > http://members.chello.nl/hjgtuyl/tourdemonad.html > Haskell programming From takenobu.hs at gmail.com Sat Jan 30 02:34:07 2016 From: takenobu.hs at gmail.com (Takenobu Tani) Date: Sat, 30 Jan 2016 11:34:07 +0900 Subject: [Haskell-cafe] Type introduction illustrated for casual haskellers Message-ID: Dear Haskellers, I'm enjoying Haskell. After FTP(Foldable/Traversable in Prelude proposal), newcomers encounter Foldable signatures in their early stage. So I drew a few simple illustrations about a type introduction for newcomers/casual haskellers. Type introduction illustrated for casual haskellers http://takenobu-hs.github.io/downloads/type_introduction_illustrated.pdf https://github.com/takenobu-hs/type-introduction-illustrated If I have misunderstood, please teach me. I'll correct them. Thank you =), Takenobu -------------- next part -------------- An HTML attachment was scrubbed... 
From hjgtuyl at chello.nl Sat Jan 30 18:00:50 2016
From: hjgtuyl at chello.nl (Henk-Jan van Tuyl)
Date: Sat, 30 Jan 2016 19:00:50 +0100
Subject: [Haskell-cafe] setup-Simple-Cabal-1.22.5.0-x86_64-windows-ghc-7.10.3.exe: : does not exist
In-Reply-To: References: Message-ID:

On Fri, 29 Jan 2016 00:45:19 +0100, Henk-Jan van Tuyl wrote:

> Why do I get the message:
>   setup-Simple-Cabal-1.22.5.0-x86_64-windows-ghc-7.10.3.exe: : does not exist
> after command "cabal install", and what can I do about it?

It was the .cabal file that caused the problem; after changing the line:
  license-file: ""
to
  license-file: "LICENSE"
the package compiled properly.

Regards,
Henk-Jan van Tuyl

--
Folding at home
What if you could share your unused computer power to help find a cure? In just 5 minutes you can join the world's biggest networked computer and get us closer sooner. Watch the video.
http://folding.stanford.edu/

http://Van.Tuyl.eu/
http://members.chello.nl/hjgtuyl/tourdemonad.html
Haskell programming
--

From adam at bergmark.nl Sat Jan 30 18:13:50 2016
From: adam at bergmark.nl (Adam Bergmark)
Date: Sat, 30 Jan 2016 19:13:50 +0100
Subject: [Haskell-cafe] setup-Simple-Cabal-1.22.5.0-x86_64-windows-ghc-7.10.3.exe: : does not exist
In-Reply-To: References: Message-ID:

Is this package on Hackage? If so, can you file a ticket for hackage-server to disallow this in uploads, please? Either way, a cabal ticket for being more helpful here would be good!

On Sat, Jan 30, 2016 at 7:00 PM, Henk-Jan van Tuyl wrote:
> On Fri, 29 Jan 2016 00:45:19 +0100, Henk-Jan van Tuyl wrote:
>
>> Why do I get the message:
>>   setup-Simple-Cabal-1.22.5.0-x86_64-windows-ghc-7.10.3.exe: : does not exist
>> after command "cabal install", and what can I do about it?
>
> It was the .cabal file that caused the problem; after changing the line:
>   license-file: ""
> to
>   license-file: "LICENSE"
> the package compiled properly.
>
> Regards,
> Henk-Jan van Tuyl
>
> --
> Folding at home
> What if you could share your unused computer power to help find a cure? In
> just 5 minutes you can join the world's biggest networked computer and get
> us closer sooner. Watch the video.
> http://folding.stanford.edu/
>
> http://Van.Tuyl.eu/
> http://members.chello.nl/hjgtuyl/tourdemonad.html
> Haskell programming
> --
> _______________________________________________
> Haskell-Cafe mailing list
> Haskell-Cafe at haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk Sat Jan 30 18:18:31 2016
From: tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk (Tom Ellis)
Date: Sat, 30 Jan 2016 18:18:31 +0000
Subject: [Haskell-cafe] setup-Simple-Cabal-1.22.5.0-x86_64-windows-ghc-7.10.3.exe: : does not exist
In-Reply-To: References: Message-ID: <20160130181831.GM9565@weber>

On Sat, Jan 30, 2016 at 07:00:50PM +0100, Henk-Jan van Tuyl wrote:
> On Fri, 29 Jan 2016 00:45:19 +0100, Henk-Jan van Tuyl wrote:
>> Why do I get the message:
>>   setup-Simple-Cabal-1.22.5.0-x86_64-windows-ghc-7.10.3.exe: : does not exist
>> after command "cabal install", and what can I do about it?
>
> It was the .cabal file that caused the problem; after changing the line:
>   license-file: ""
> to
>   license-file: "LICENSE"
> the package compiled properly.

Ha, what an incredibly correct yet unhelpful error message! Nice debugging.
From hjgtuyl at chello.nl Sun Jan 31 00:42:14 2016
From: hjgtuyl at chello.nl (Henk-Jan van Tuyl)
Date: Sun, 31 Jan 2016 01:42:14 +0100
Subject: [Haskell-cafe] setup-Simple-Cabal-1.22.5.0-x86_64-windows-ghc-7.10.3.exe: : does not exist
In-Reply-To: References: Message-ID:

It is not on Hackage; I tried 'cabal check' and it warns about this mistake:

> cabal check
The following warnings are likely affect your build negatively:
* The 'license-file' field refers to the file '' which does not exist.
[...]

On Sat, 30 Jan 2016 19:13:50 +0100, Adam Bergmark wrote:
> Is this package on hackage? If so can you file a ticket for hackage-server
> to disallow this in uploads please? Either way, a cabal ticket for being
> more helpful here would be good!
>
> On Sat, Jan 30, 2016 at 7:00 PM, Henk-Jan van Tuyl wrote:
>
>> On Fri, 29 Jan 2016 00:45:19 +0100, Henk-Jan van Tuyl wrote:
>>
>>> Why do I get the message:
>>>   setup-Simple-Cabal-1.22.5.0-x86_64-windows-ghc-7.10.3.exe: : does not exist
>>> after command "cabal install", and what can I do about it?
>>
>> It was the .cabal file that caused the problem; after changing the line:
>>   license-file: ""
>> to
>>   license-file: "LICENSE"
>> the package compiled properly.
>>
>> Regards,
>> Henk-Jan van Tuyl

--
Folding at home
What if you could share your unused computer power to help find a cure? In just 5 minutes you can join the world's biggest networked computer and get us closer sooner. Watch the video.
http://folding.stanford.edu/

http://Van.Tuyl.eu/
http://members.chello.nl/hjgtuyl/tourdemonad.html
Haskell programming
--

From mail at nh2.me Sun Jan 31 03:30:27 2016
From: mail at nh2.me (Niklas Hambüchen)
Date: Sun, 31 Jan 2016 04:30:27 +0100
Subject: [Haskell-cafe] ANNOUNCE: call-haskell-from-anything 1.0
Message-ID: <56AD7FD3.4070309@nh2.me>

Heya,

I'm happy to announce a new release of call-haskell-from-anything [1], my library for FFI-via-serialisation that makes it easy to call Haskell functions from any other language that can open shared object files (`.so`, via `dlopen()`) and has a MessagePack library available. This is almost all programming languages; for examples in Python and Ruby see [2].

The FFI-via-serialisation approach makes it possible to export most functions to other languages "for free": no FFI type-unpacking boilerplate; everything that has a MessagePack instance (much easier to write than `Storable` instances) will do. For example, if you have a function

  chooseMax :: [Int] -> Int

all you have to do to make it callable is

  foreign export ccall chooseMax_export :: CString -> IO CString
  chooseMax_export = export chooseMax

Version 1.0 uses closed type families to remove the restriction that, so far, pure functions had to be wrapped in the Identity monad to be exported:

  a -> b -> ... -> Identity r

With 1.0, this is no longer necessary. You can now export any function of type

  a -> b -> ... -> r

to be called from your favourite Haskell contender languages (of course those have no chance ...).

Cheers,
Niklas

[1]: https://hackage.haskell.org/package/call-haskell-from-anything-1.0.0.0
[2]: https://github.com/nh2/call-haskell-from-anything
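For readers who want to try the above, here is a minimal module sketch built around the snippet from the announcement. Only `export`, the types and the `foreign export` line are taken from the announcement itself; the import below is an assumption, so check the package documentation for the actual module name. The module must be compiled into a shared object before it can be loaded via dlopen().

  {-# LANGUAGE ForeignFunctionInterface #-}
  module ChooseMax where

  import Foreign.C.String (CString)
  -- Assumed module name; consult the call-haskell-from-anything docs.
  import FFI.Anything.TypeUncurry.Msgpack (export)

  -- The plain Haskell function we want to expose
  -- (partial on the empty list, which is fine for a demo).
  chooseMax :: [Int] -> Int
  chooseMax = maximum

  -- MessagePack-serialised wrapper produced by 'export'.
  chooseMax_export :: CString -> IO CString
  chooseMax_export = export chooseMax

  foreign export ccall chooseMax_export :: CString -> IO CString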
From aeyakovenko at gmail.com Sun Jan 31 23:13:14 2016
From: aeyakovenko at gmail.com (Anatoly Yakovenko)
Date: Sun, 31 Jan 2016 23:13:14 +0000
Subject: [Haskell-cafe] How stable is the typerep fingerprint value?
Message-ID:

How stable is the TypeRep fingerprint value? Is it going to change between builds based on the same sources?

Thanks,
Anatoly

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
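A small sketch of how the fingerprint in question can be observed, assuming a base version where typeRepFingerprint is exported from Data.Typeable (on older GHCs it may only be reachable via Data.Typeable.Internal). Checking the stability asked about above would mean compiling the same sources twice and comparing the values produced by the two binaries.

  import Data.Typeable (Typeable, typeRep, typeRepFingerprint)
  import Data.Proxy (Proxy (..))

  -- Fingerprints are what GHC uses for fast TypeRep equality checks.
  sameFingerprint :: (Typeable a, Typeable b) => Proxy a -> Proxy b -> Bool
  sameFingerprint a b =
    typeRepFingerprint (typeRep a) == typeRepFingerprint (typeRep b)

  main :: IO ()
  main = do
    print (sameFingerprint (Proxy :: Proxy Int) (Proxy :: Proxy Int))   -- True
    print (sameFingerprint (Proxy :: Proxy Int) (Proxy :: Proxy Char))  -- False
    -- On reasonably recent base, Fingerprint has a Show instance, so the raw
    -- value can be dumped and diffed between two separate builds:
    print (typeRepFingerprint (typeRep (Proxy :: Proxy Int)))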