From gershomb at gmail.com  Wed Apr  1 04:22:24 2015
From: gershomb at gmail.com (Gershom B)
Date: Wed, 1 Apr 2015 00:22:24 -0400
Subject: [Haskell-cafe] An Easy Solution to PVP Bounds and Cabal Hell
Message-ID: 

Recently there has been some discussion about how we can fix the problem of "Cabal Hell". Some people advocate restrictive upper bounds, to prevent packages from being broken by new updates. Some other people point out that too-restrictive bounds can lead to bad install plans, since some packages might want newer versions of some dependencies, and others older versions. Still other people say that we can _retroactively_ fix upper bounds (in either direction) by modifying cabal files using the new features in Hackage. Some people think this is a terrible idea because it looks like we are mutating things, and this confuses hashes. In turn, these people support either nix or a nix-like approach by which packages have hashes that encompass the full versions of all their transitive dependencies. With that in hand, we can cache builds and mix-and-match to build the precise environment we want for each package while reducing redundant computation. However, the cache of all the various binary combinations may still grow large! And none of this fully begins to address the dreaded "diamond dependency" problem.

Here is a chart of some such solutions: http://www.well-typed.com/blog/aux/images/cabal-hell/cabal-hell-solutions.png

One way to look at a particular build is in an n-dimensional state space (of the Hilbert sort) determined by all its dependencies, not least the compiler itself. The solver acts as a particle traversing this space. But this description is too simple. Our state of dependencies and constraints itself varies and grows over time. So another approach is to think of the dependencies as a transitive graph, where each node may vary along a time axis, and as they slide along the axis, this in turn affects their children. We now have not just one Hilbert space, but a collection of them related by branching trees as authors locally modify the dependencies of their packages.

If we keep this simple model in mind, it is easy to see why everyone is always having these debates. Some people want to fix the graph, and others want to simplify it. Some people want an immutable store, and some people want to rebase. But we can't "really" rebase in our current model. Bearing in mind our model of a space-time continuum of Hackage dependencies, the solution emerges: enforce immutability, but allow retroactive mutation.

For instance, suppose Fred tries to install package Foo on Friday evening, but discovers that it depends on version 1.0 of Bar (released the previous Friday) that in turn depends on version 0.5 of Baz, while Foo also depends on version 0.8 of Baz. So Fred branches Bar and changes the dependency, which in turn informs Betty that there is also a 1.0 of Bar with different dependencies, and we have forked our package timeline. On getting this message on Monday, Betty can merge by pushing with --force-rewrites, and this goes back in the timeline and makes it so that Baz retroactively had the right dependencies, and now Fred, as of the previous Friday, no longer has this problem. (That way he still has the weekend.) Now the failed build is cut off temporarily into a cycle in the package timeline that is disconnected from the rewrite. We stash it with "hackage stash" until Monday, at which time the dependency graph is 100 percent equalized and primed for new patches.
At this point we unstash Foo as of Friday and it is replaced by the Foo from the new timeline. Friday Fred needs to remain stashed lest he run into himself. The longer he can be avoided by his Monday self the better. Future work could include bots which automate pruning of artifacts from redundant branches.

If this description was too abrupt, here is a diagram with a fuller description of the workflow: http://bit.ly/15IIGac

I know there are some new ideas to take in here, and there is a little technical work necessary to make it feasible, but in my opinion if you can understand the current cabal situation, and you can understand how git and darcs work, then you should be able to understand this too.

Hopefully by this time next year, we'll be able to say that our problems with cabal have been truly wiped from our collective memory.

HTH, HAND.
Gershom

From mike at izbicki.me  Wed Apr  1 07:49:52 2015
From: mike at izbicki.me (Mike Izbicki)
Date: Wed, 1 Apr 2015 00:49:52 -0700
Subject: [Haskell-cafe] ANNOUNCE: Parsed 0.0.1
Message-ID: 

I'm pleased to announce the first release of the Parsed library (pronounced par-sèd). Parsed is a reimplementation of Haskell's excellent Parsec library in the Unix shell. In particular, the Unix pipe operator | corresponds exactly to Haskell's Applicative bind *>.

Here's a quick example to get a feel for the syntax. The following bash one-liner creates a parser for matching balanced parentheses:

```
parens() { choice "$1" "match '(' | parens \"$1\" | match ')'"; }
```

For more detailed examples and implementation notes, please see the Parsed paper accepted at SIGBOVIK2015: https://github.com/mikeizbicki/parsed/raw/master/sigbovik2015/paper.pdf

You can download Parsed from its github page at: https://github.com/mikeizbicki/parsed

From Graham.Hutton at nottingham.ac.uk  Wed Apr  1 08:23:10 2015
From: Graham.Hutton at nottingham.ac.uk (Graham Hutton)
Date: Wed, 1 Apr 2015 09:23:10 +0100
Subject: [Haskell-cafe] Journal of Functional Programming - Call for PhD Abstracts
Message-ID: <47970F09-AE8B-4354-983C-4D803836B001@exmail.nottingham.ac.uk>

============================================================

 CALL FOR PHD ABSTRACTS

 Journal of Functional Programming
 Deadline: 30th April 2015

 http://tinyurl.com/jfp-phd-abstracts

============================================================

PREAMBLE:

Many students complete PhDs in functional programming each year, but there is currently no common location in which to promote and advertise the resulting work. The Journal of Functional Programming would like to change that!

As a service to the community, JFP recently launched a new feature, in the form of a regular publication of abstracts from PhD dissertations that were completed during the previous year. The abstracts are made freely available on the JFP website, i.e. not behind any paywall, and do not require any transfer of copyright, merely a license from the author.

Please submit dissertation abstracts according to the instructions below. A dissertation is eligible if parts of it have or could have appeared in JFP, that is, if it is in the general area of functional programming. JFP will not have these abstracts reviewed. We welcome submissions from both the PhD student and PhD advisor/supervisor, although we encourage them to coordinate.

============================================================

SUBMISSION:

Please submit the following information to Graham Hutton by 30th April 2015.
o Dissertation title: (including any subtitle)

o Student: (full name)

o Awarding institution: (full name and country)

o Date of PhD award: (month and year; depending on the institution, this may be the date of the viva, corrections being approved, graduation ceremony, or otherwise)

o Advisor/supervisor: (full names)

o Dissertation URL: (please provide a permanently accessible link to the dissertation if you have one, such as to an institutional repository or other public archive; links to personal web pages should be considered a last resort)

o Dissertation abstract: (plain text, maximum 1000 words; you may use \emph{...} for emphasis, but we prefer no other markup or formatting in the abstract, but do get in touch if this causes significant problems)

Please do not submit a copy of the dissertation itself, as this is not required. JFP reserves the right to decline to publish abstracts that are not deemed appropriate.

============================================================

PHD ABSTRACT EDITOR:

 Graham Hutton
 School of Computer Science
 University of Nottingham
 Nottingham NG8 1BB
 United Kingdom

============================================================

From 0slemi0 at gmail.com  Wed Apr  1 08:58:01 2015
From: 0slemi0 at gmail.com (Andras Slemmer)
Date: Wed, 1 Apr 2015 10:58:01 +0200
Subject: [Haskell-cafe] An Easy Solution to PVP Bounds and Cabal Hell
In-Reply-To: 
References: 
Message-ID: 

10/10 "enforce immutability, but allow retroactive mutation". Word. Finally, a simple and coherent solution.

On 1 April 2015 at 06:22, Gershom B wrote:
> [full quote of Gershom's original message snipped]
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From sergey.bushnyak at sigrlami.eu  Wed Apr  1 09:40:58 2015
From: sergey.bushnyak at sigrlami.eu (Sergey Bushnyak)
Date: Wed, 01 Apr 2015 12:40:58 +0300
Subject: [Haskell-cafe] Specify OS version
Message-ID: <551BBD2A.8060902@sigrlami.eu>

Hi Cafe!

I have a program written in Haskell and Swift that runs on Mac. This program uses some features that were introduced in the latest version of OS X, "Yosemite" 10.10, but some of my users use the previous version, "Mavericks" 10.9. Is there some technique that allows me to extend my cabal file and specify which *version* of the OS is currently in use, and not load some of the modules?

AFAIK, Cabal's `conditional` doesn't provide such a mechanism, but something like

```
if os(osx) && ver(=10.10)
if os(osx) && ver(<=10.9)
```

would be nice.

Maybe there is another approach to this problem? I don't want to use the `CPP` extension to handle this within code.

Thanks

-- 
Best regards,
Sergey Bushnyak
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From rendel at informatik.uni-tuebingen.de  Wed Apr  1 10:52:53 2015
From: rendel at informatik.uni-tuebingen.de (Tillmann Rendel)
Date: Wed, 01 Apr 2015 12:52:53 +0200
Subject: [Haskell-cafe] ANNOUNCE: Parsed 0.0.1
In-Reply-To: 
References: 
Message-ID: <551BCE05.8010908@informatik.uni-tuebingen.de>

Hi,

Mike Izbicki wrote:
> https://github.com/mikeizbicki/parsed

I don't want to bash your library, but I fear you made a wrong choice. In parsec,

  (many1 (string "a") <|> many1 (string "b")) >> string "c"

accepts "bc", but I don't see how the corresponding grammar can be par-sèd. Maybe you should replace a failure by a list of successes?

  Tillmann

From rendel at informatik.uni-tuebingen.de  Wed Apr  1 11:48:15 2015
From: rendel at informatik.uni-tuebingen.de (Tillmann Rendel)
Date: Wed, 01 Apr 2015 13:48:15 +0200
Subject: [Haskell-cafe] ANNOUNCE: Parsed 0.0.1
In-Reply-To: <551BCE05.8010908@informatik.uni-tuebingen.de>
References: <551BCE05.8010908@informatik.uni-tuebingen.de>
Message-ID: <551BDAFF.2090908@informatik.uni-tuebingen.de>

Hi again,

I wrote:
> In parsec,
>
>   (many1 (string "a") <|> many1 (string "b")) >> string "c"
>
> accepts "bc", but I don't see how the corresponding grammar can be par-sèd.

Sorry, I think I was confused because your implementation looked like an attempt at unlimited backtracking to me. To compare with Parsec, it is better to treat your implementation as an attempt to implement Parsec's semantics. In that case, we should note that in Parsec,

  (string "ab" <|> string "a")

rejects "a", but maybe your implementation would accept it?

  Tillmann

From K.Bleijenberg at lijbrandt.nl  Wed Apr  1 13:30:35 2015
From: K.Bleijenberg at lijbrandt.nl (Kees Bleijenberg)
Date: Wed, 1 Apr 2015 15:30:35 +0200
Subject: [Haskell-cafe] Speed up ghc -O2
Message-ID: <002101d06c80$08f03cc0$1ad0b640$@lijbrandt.nl>

My program reads an xls file and creates a Haskell program that does exactly the same as what the xls does. Every cell in the xls becomes a function of 2 lines: the type declaration and the implementation. The type of a cell is XlsCell.

data XlsCell = Reader ExternalCells XlsCellRes
data XlsCellRes = XlsCellString String | XlsCellBool Bool | .....
type ExternalCells = Map String XlsCellRes  -- search (or set) a value of a cell by its name

The externalCells are for the cells you can change at runtime. The generated program was initially 30000 lines. When I compiled it with ghc 7.8.3 -O0 the program worked, but was terribly slow. With -O2 I got: out of memory.
With a few tricks the generated code is now split up into 2 files of 5000 and 10000 lines. I've upgraded to ghc 7.8.4. If I do ghc -O2 the program takes more than 4 hours to compile. The -O2 compilation is essential. The resulting program is much (more than 10x) faster than without -O2.

What can I do (changes in the generated code, changes to speed up ghc?) to make ghc faster with -O2? I tried ghc-make, but it doesn't make a big difference.

Kees
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ttuegel at gmail.com  Wed Apr  1 14:53:41 2015
From: ttuegel at gmail.com (Thomas Tuegel)
Date: Wed, 1 Apr 2015 09:53:41 -0500
Subject: [Haskell-cafe] An Easy Solution to PVP Bounds and Cabal Hell
In-Reply-To: 
References: 
Message-ID: 

Hi Gershom,

On Tue, Mar 31, 2015 at 11:22 PM, Gershom B wrote:
> One way to look at a particular build is in an n-dimensional state
> space (of the Hilbert sort) determined by all its dependencies, not
> least the compiler itself. The solver acts as a particle traversing
> this space. But this description is too simple. Our state of
> dependencies and constraints itself varies and grows over time. So
> another approach is to think of the dependencies as a transitive
> graph, where each node may vary along a time axis, and as they slide
> along the axis, this in turn affects their children. We now have not
> just one Hilbert space, but a collection of them related by
> branching trees as authors locally modify the dependencies of their
> packages.

I don't know if you intended this to be a satirical remark about the package upper bounds, but I think you really hit the nail on the head. When we ask package authors to put upper bounds on their dependencies, what we are really doing is asking them to propagate information about _future_ incompatibilities back into the present. As none of us has access to future information [1], the upper bounds on our dependencies amount to a collection of bad guesses [2].

I think we should consider taking a more empirical approach [3]. One way (but certainly not the only way) to approach this would be to require packages to be uploaded to Hackage with a kind of "build-certificate," certifying that a successful build plan was possible given the state of Hackage at a particular timestamp. This allows us to infer minimal upper bounds for all of the package's transitive dependencies. Once the build-certificate is authenticated, Hackage only needs to store the latest timestamp of a successful build, so the overhead is very low. In this scheme, author-specified upper bounds would be relegated to ruling out known incompatibilities with already-released versions of dependencies.

Of course, build failures will still occur. By using anonymous build-reporting to track the timestamp of the earliest failed build of a package, we can automatically infer the _true_ upper bounds on its dependencies. If contradictory reports occur, they can be resolved by the trustees or the package's maintainers.

Just some food for thought. I hope the timing of my e-mail will not discourage anyone from taking my suggestions seriously.

[1]. If I am mistaken and you think you do have access to future information, please respond privately; I have some questions for you about the stock market.

[2]. If you've never had an upper bounds problem on a package you maintain, I'm happy for you, but there is mounting evidence that as a community, we are very bad guessers, on average.

[3]. The PVP is orthogonal to this.
It is a convenient set of assumptions and a reasonably good set of norms; nothing more. -- Thomas Tuegel From aldodavide at gmx.com Wed Apr 1 14:56:53 2015 From: aldodavide at gmx.com (Aldo Davide) Date: Wed, 1 Apr 2015 16:56:53 +0200 Subject: [Haskell-cafe] Specify OS version In-Reply-To: <551BBD2A.8060902@sigrlami.eu> References: <551BBD2A.8060902@sigrlami.eu> Message-ID: I don't see how you can avoid CPP. Even if cabal supported checking the OS version, you would still probably need CPP. ? ? Sent:?Wednesday, April 01, 2015 at 10:40 AM From:?"Sergey Bushnyak" To:?haskell-cafe at haskell.org Subject:?[Haskell-cafe] Specify OS version Hi Cafe! I have program written in Haskell and Swift that runs on Mac. This program uses some features that was introduced in last version of OS X ?Yosemite? 10.10, but some of my users use previous version "Maverics " 10.9. Is there some technique that allows me to extend my cabal file and specify which *version* of os is currently in use and don't load some of modules? AFAIK, Cabal's `conditional` doesn't provide such mechanism, but something like ``` if os(osx) && ver(=10.10) if os(osx) && ver(<=10.9) ``` will be nice. Maybe there is other approach to this problem?I don't want to use `CPP` extension to handle this within code. Thanks -- Best regards, Sergey Bushnyak _______________________________________________ Haskell-Cafe mailing list Haskell-Cafe at haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe From mike at izbicki.me Wed Apr 1 15:01:37 2015 From: mike at izbicki.me (Mike Izbicki) Date: Wed, 1 Apr 2015 08:01:37 -0700 Subject: [Haskell-cafe] ANNOUNCE: Parsed 0.0.1 In-Reply-To: <551BDAFF.2090908@informatik.uni-tuebingen.de> References: <551BCE05.8010908@informatik.uni-tuebingen.de> <551BDAFF.2090908@informatik.uni-tuebingen.de> Message-ID: For the Parsec parser (many1 (string "a") <|> many1 (string "b")) >> string "c" we have an equivalent Parsed parser given by: (choice "many 'match a'" "many 'match b'" | match c) Verifying it accepts your test case: $ (choice "many 'match a'" "many 'match b'" | match c) <<< 'bc' bc $ echo $? Indeed, you are correct about the slightly different semantics of choice. It essentially automatically wraps its arguments in "try". For those wanting the more traditional Alternative interface, I've just uploaded a combinator called (<|>). We can use this combinator exactly as you would in haskell (except that bash requires lots of escaping): $ (\(\<\|\>\) 'match ab' 'match a') <<< "a" $ echo $? 2 $ (\(\<\|\>\) 'match ab' 'match a') <<< "ab" ab $ echo $? 0 Thanks for the bug report! On Wed, Apr 1, 2015 at 4:48 AM, Tillmann Rendel wrote: > Hi again, > > I wrote: >> >> In parsec, >> >> (many1 (string "a") <|> many1 (string "b")) >> string "c" >> >> accepts "bc", but I don't see how the corresponding grammar can be >> par-s?d. > > > Sorry, I think I was confused because your implementation looked like an > attempt at unlimited backtracking to me. To compare with Parsec, it is > better to treat your implementation as an attempt to implement Parsec's > semantics. In that case, we should note that in Parsec, > > (string "ab" <|> string "a") > > rejects "a", but maybe your implementation would accept it? 
> > > Tillmann > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe From allbery.b at gmail.com Wed Apr 1 15:05:37 2015 From: allbery.b at gmail.com (Brandon Allbery) Date: Wed, 1 Apr 2015 11:05:37 -0400 Subject: [Haskell-cafe] ANNOUNCE: Parsed 0.0.1 In-Reply-To: References: <551BCE05.8010908@informatik.uni-tuebingen.de> <551BDAFF.2090908@informatik.uni-tuebingen.de> Message-ID: On Wed, Apr 1, 2015 at 11:01 AM, Mike Izbicki wrote: > $ (\(\<\|\>\) 'match ab' 'match a') <<< "ab" > '(<|>)' should work as well and be a little more readable. -- brandon s allbery kf8nh sine nomine associates allbery.b at gmail.com ballbery at sinenomine.net unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From sergey.bushnyak at sigrlami.eu Wed Apr 1 15:02:03 2015 From: sergey.bushnyak at sigrlami.eu (Sergey Bushnyak) Date: Wed, 01 Apr 2015 18:02:03 +0300 Subject: [Haskell-cafe] Specify OS version In-Reply-To: References: <551BBD2A.8060902@sigrlami.eu> Message-ID: <551C086B.4040404@sigrlami.eu> Usually, I'm using technique described here http://blog.haskell-exists.com/yuras/posts/stop-abusing-cpp-in-haskell.html by using `extra-source-dir` key in cabal file and splitting implementation. On 04/01/2015 05:56 PM, Aldo Davide wrote: > I don't see how you can avoid CPP. Even if cabal supported checking the OS version, you would still probably need CPP. > > > > Sent: Wednesday, April 01, 2015 at 10:40 AM > From: "Sergey Bushnyak" > To: haskell-cafe at haskell.org > Subject: [Haskell-cafe] Specify OS version > > Hi Cafe! > I have program written in Haskell and Swift that runs on Mac. This program uses some features that was introduced in last version of OS X ?Yosemite? 10.10, but some of my users use previous version "Maverics " 10.9. Is there some technique that allows me to extend my cabal file and specify which *version* of os is currently in use and don't load some of modules? > AFAIK, Cabal's `conditional` doesn't provide such mechanism, but something like > ``` > if os(osx) && ver(=10.10) > if os(osx) && ver(<=10.9) > ``` > will be nice. > > Maybe there is other approach to this problem?I don't want to use `CPP` extension to handle this within code. > Thanks > -- > Best regards, > Sergey Bushnyak _______________________________________________ Haskell-Cafe mailing list Haskell-Cafe at haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe -------------- next part -------------- An HTML attachment was scrubbed... URL: From allbery.b at gmail.com Wed Apr 1 15:08:52 2015 From: allbery.b at gmail.com (Brandon Allbery) Date: Wed, 1 Apr 2015 11:08:52 -0400 Subject: [Haskell-cafe] An Easy Solution to PVP Bounds and Cabal Hell In-Reply-To: References: Message-ID: On Wed, Apr 1, 2015 at 10:53 AM, Thomas Tuegel wrote: > [2]. If you've never had an upper bounds problem on a package you > maintain, I'm happy for you, but there is mounting evidence that as a > community, we are very bad guessers, on average. > I think upper bound with easy way to "slip" it (--allow-newer, as already implemented) is the best we can do. There's already a lot of evidence that guessing wrong in the other direction can and does break large chunks of the ecosystem --- as much as certain developers would prefer to ignore the fact and/or arrange that everyone other than them has to deal with the breakage. 
-- 
brandon s allbery kf8nh                               sine nomine associates
allbery.b at gmail.com                                  ballbery at sinenomine.net
unix, openafs, kerberos, infrastructure, xmonad        http://sinenomine.net
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ky3 at atamo.com  Wed Apr  1 16:18:25 2015
From: ky3 at atamo.com (Kim-Ee Yeoh)
Date: Wed, 1 Apr 2015 23:18:25 +0700
Subject: [Haskell-cafe] Haskell Weekly News
Message-ID: 

*Top picks:*

- Bot attack on Trac pummels GHC HQ productivity! Do you know a thing or two about hardening web apps? Can you help?

- A month ago you read about the absence of a correct operational spec for Core. Christiaan Baaij proffers rewriting rules for something "very much like Core" from his 2014 thesis on Digital Circuits in CλaSH, a tool designed for Computer Architecture for Embedded Systems (CAES). The consensus is that they probably also work for GHC Core.

- Neil Mitchell reports Unable to load package Win32-2.3.1.0. The problem? SetWindowLongPtrW exists only on 64-bit. The haskell win32 shim wasn't switching to SetWindowLongW on 32-bit. Darren Grant steps up to offer a fix, which Austin Seipp promptly checks in.

- Ki Yung Ahn asks for a "wrapper that lifts actions of (State s1 a) to (State (s1,s2) a)." The answer? A function called "zoom" in lens libraries.

- Chris Done has started the ball rolling on GPG-based package signing. So far, Michael Snoyman and Neil Mitchell have had their keys signed by Chris. He invites others to join the party.

- Levent Erkok joins Lennart Augustsson in hitting a bug with signed zeros. The function isNegativeZero breaks under optimizations.

- James Stevenson over at Safari Books Online reveals how they use Haskell to parse web logs more efficiently than Python. The top comment at Hacker News observes the absence of a proper benchmark pitting Python vs Haskell. James responds that they did an informal comparison that showed "the number of lines parsed/second [with Python] was far smaller than the attoparsec-based parser." Elsewhere, Luke Randall submits the link on reddit and thinks it's a "very gentle intro to parsing using attoparsec".

- Ian Ross announces a new C2HS release christened "Snowmelt". Originally authored by Manuel Chakravarty, C2HS eases the pain of manually creating FFI shims for C libraries. The latest release, thanks to work contributed by Philipp Balzarek, achieves better cross-language alignment of C enum and Haskell Enum types, among other improvements. Reddit discussion here.

- Michael Snoyman announces FPComplete's open sourcing of their IDE backend, comprising a wrapper around the GHC API.

- Jon Sterling at PivotCloud hits an STM TQueue bug initially reported by John Lato seven months ago. A sufficiently fast writer can cause the reader to never get scheduled, which leads to live-lock in Jon's production code. The fix looks to be as simple as lazifying a case into a let in readTQueue. Curiously, the code uses let in Simon Marlow's book on Haskell concurrency but not in the STM package you have on your machine.

*Tweets of the week:*

- Michael Neale: Haskell Quickcheck enters a bar, asks for 1 beer, 42 beers, -Inifinity beers, shaves bartenders beard, sets off a tactical nuke.

- Dierk König: #Haskell is the gold standard for programming languages and #Frege makes it available on the #JVM

-- Kim-Ee
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From rlutz at hedmen.org  Wed Apr  1 19:31:29 2015
From: rlutz at hedmen.org (Roland Lutz)
Date: Wed, 1 Apr 2015 21:31:29 +0200 (CEST)
Subject: [Haskell-cafe] How to implement a source-sink pattern
Message-ID: 

Hi!

I'm trying to implement a source-sink type of pattern where there are a number of sinks, each connected to exactly one source, and a number of sources, each connected to zero or more sinks. The program consists of some modules, each defining their own sources and sinks. To illustrate this, here's what this would look like in C:

/* global.h */
struct source {
    char *state;
    /* some more fields */
};

struct sink {
    struct source *source;
    char *state;
    /* some more fields */
};

struct sink **get_sinks_for_source(struct source *source);

/* module_a.c */
struct source a_source, another_source;
struct sink foo, bar, baz;
...
foo.source = &a_source;
...

Since getting the list of sinks for a source is a common operation, I'd probably define some kind of reverse map which is updated when a sink is remapped to a new source.

I tried to rebuild this in Haskell, but the result is ridiculously complicated. Since I can't use pointers, I had to define an ordinal type class to enumerate the sources and sinks and use this to look up the actual data from the world state. But then I couldn't define the Sink type properly, as I can't do:

data SinkInfo = Source a => SinkInfo { sinkSource :: a, sinkState :: String }

I have to add the source type as a type parameter, leading to a world state with as many parameters as there are sinks. Also, I couldn't figure out how to implement a function like

sinksForSource :: Source a => WorldState p q r -> a -> [Sink b => b]

since the source types won't match the type of the lookup key, and the sinks can be from different modules.
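One way to keep differently typed sinks in a single list is an existential wrapper. Below is a minimal, self-contained sketch of that idea; the class, types and names are made up purely for illustration and are not the types used in the code further down:

```
{-# LANGUAGE ExistentialQuantification #-}

-- Illustrative only: a tiny Sink class and two unrelated sink types,
-- standing in for sinks defined in different modules.
class Sink s where
  sinkLabel :: s -> String

data SinkA = SinkA
data SinkB = SinkB

instance Sink SinkA where sinkLabel _ = "sink from module A"
instance Sink SinkB where sinkLabel _ = "sink from module B"

-- The wrapper hides the concrete sink type behind the Sink constraint,
-- so sinks of different types can share one list.
data SomeSink = forall s. Sink s => SomeSink s

allSinks :: [SomeSink]
allSinks = [SomeSink SinkA, SomeSink SinkB]

main :: IO ()
main = mapM_ (\(SomeSink s) -> putStrLn (sinkLabel s)) allSinks
```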
Now here is the actual code (comments indicate where I intend to split it into individual files): {- Global.hs -} data SourceInfo = SourceInfo { sourceState :: String } class Eq a => Source a where getSourceInfo :: WorldState p q r -> a -> SourceInfo data SinkInfo a = SinkInfo { sinkSource :: a, sinkState :: String } class Sink a where getSinkInfo :: Source x => WorldState x x x -> a -> SinkInfo x -- should allow different arguments to WorldState {- ModuleA.hs -} data ModuleASource = ASource | AnotherSource deriving (Eq) instance Source ModuleASource where getSourceInfo world ASource = aSource $ moduleAState world getSourceInfo world AnotherSource = anotherSource $ moduleAState world data ModuleASink = Foo | Bar | Baz instance Sink ModuleASink where getSinkInfo world Foo = foo $ moduleAState world getSinkInfo world Bar = bar $ moduleAState world getSinkInfo world Baz = baz $ moduleAState world data ModuleAState p q r = ModuleAState { aSource :: SourceInfo, anotherSource :: SourceInfo, foo :: SinkInfo p, bar :: SinkInfo q, baz :: SinkInfo r } sinksForSourceInModuleA :: Source x => ModuleAState x x x -> x -> [ModuleASink] -- should allow different arguments to ModuleAState and return Sink b => [b] sinksForSourceInModuleA (ModuleAState _ _ foo bar baz) source = (if sinkSource foo == source then [Foo] else []) ++ (if sinkSource bar == source then [Bar] else []) ++ (if sinkSource baz == source then [Baz] else []) {- Main.hs -} data WorldState p q r = WorldState { moduleAState :: ModuleAState p q r } initState :: WorldState ModuleASource ModuleASource ModuleASource initState = WorldState $ ModuleAState (SourceInfo "a source init state") (SourceInfo "another source init state") (SinkInfo ASource "foo init state") (SinkInfo ASource "bar init state") (SinkInfo AnotherSource "baz init state") remapBar :: WorldState p q r -> WorldState p ModuleASource r remapBar (WorldState a) = WorldState $ a { bar = SinkInfo AnotherSource (sinkState $ bar a) } sinksForSource :: Source x => WorldState x x x -> x -> [ModuleASink] -- should allow different arguments to ModuleAState and return Sink b => [b] sinksForSource (WorldState a) source = sinksForSourceInModuleA a source main :: IO () main = let before = initState after = remapBar before in do putStrLn $ "Number of sinks for another source before: " ++ (show $ length $ sinksForSource before AnotherSource) putStrLn $ "Number of sinks for another source after: " ++ (show $ length $ sinksForSource after AnotherSource) There are some problems with this code: * I couldn't figure out how to resolve the circular references which are created by splitting this into individual files. * Source and sink ordinals can't be mixed between modules, so I wouldn't be able to add a module B anyway. * Looking up the source/sink states by ordinal keys is kind of cumbersome but works. * To get the list of sinks connected to a given source, the whole world state has to be polled. I expect this to be grossly inefficient (unless Haskell does some magic here) but I'm not sure how to add a cache in a consistent way. I can't believe this should be so complicated in Haskell, so I guess I'm trying to do this in an un-Haskell-ish way, or maybe there's something obvious I haven't seen. I'd be happy about any suggestions. 
Roland

From frank at fstaals.net  Wed Apr  1 21:32:50 2015
From: frank at fstaals.net (Frank Staals)
Date: Wed, 01 Apr 2015 23:32:50 +0200
Subject: [Haskell-cafe] How to implement a source-sink pattern
In-Reply-To: (Roland Lutz's message of "Wed, 1 Apr 2015 21:31:29 +0200 (CEST)")
References: 
Message-ID: 

Roland Lutz writes:

> Hi!
>
> I'm trying to implement a source-sink type of pattern where there are a number
> of sinks, each connected to exactly one source, and a number of sources, each
> connected to zero or more sinks. The program consists of some modules, each
> defining their own sources and sinks. To illustrate this, here's what this
> would look like in C:

Hey Roland,

So essentially you want a data structure for some kind of bipartite graph. The most haskelly way to do that would probably be to simply define the bipartite graph to be a pair of Maps, and define functions to add/delete nodes and edges to the graph that make sure that the two maps keep in sync. This would give you something like:

import qualified Data.Map as M

data MySource = MySource
  { sourceState :: String
    -- and any other data specific to sources
  }

data MySink = MySink
  { sinkState :: String
    -- and whatever else sinks have
  }

data BiGraph src snk = BiGraph
  { sourceToSinkMap :: M.Map src [snk]
  , sinkToSourceMap :: M.Map snk src
  }

addEdge :: (Ord src, Ord snk) => (src,snk) -> BiGraph src snk -> BiGraph src snk
addEdge (src,snk) (BiGraph m1 m2) = BiGraph (M.insertWith (++) src [snk] m1)
                                            (M.insert snk src m2)
  -- make sure to check that snk does not already occur in m2 etc.

etc. you essentially get your 'sinksForSource' functions for free:

sinksForSource :: Ord src => src -> BiGraph src snk -> [snk]
sinksForSource src = M.findWithDefault [] src . sourceToSinkMap

In this model you cannot directly mix the sources and sinks from different modules. I.e. a 'BiGraph MySource MySink' cannot be used to also store (MySecondSource, MySecondSink) pairs. If you do want that, you would need some wrapper type that distinguishes between the various 'MySink', 'MySecondSink', versions.

Note that instead of building such a graph type yourself you might also just want to use some existing graph library out there (e.g. fgl or so).

Hope this helps a bit.

-- 

- Frank

From mrz.vtl at gmail.com  Thu Apr  2 01:17:55 2015
From: mrz.vtl at gmail.com (Maurizio Vitale)
Date: Wed, 1 Apr 2015 21:17:55 -0400
Subject: [Haskell-cafe] cabal haddock --executables
Message-ID: 

I'm getting:

cabal: internal error when calculating transitive package dependencies.
Debug info: []

and it seems this has been a problem for quite a while (and it has to do with the executable depending on a local library).

Is there any known workaround?

Thanks,

Maurizio
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From dominic at steinitz.org  Thu Apr  2 06:35:01 2015
From: dominic at steinitz.org (Dominic Steinitz)
Date: Thu, 2 Apr 2015 07:35:01 +0100
Subject: [Haskell-cafe] Parallel Profiling
In-Reply-To: 
References: <0CD0B231-DF22-4D83-8334-E44540D1993E@steinitz.org>
Message-ID: <32889012-65E8-4AA1-BD96-1A9F3AC9FB53@steinitz.org>

Hi Amos,

Thanks very much - I am taking a look.

Dominic Steinitz
dominic at steinitz.org
http://idontgetoutmuch.wordpress.com

On 30 Mar 2015, at 22:05, Amos Robinson wrote:

> Hi Dominic,
>
> A few years ago we wrote a program for analysing DPH runs, dph-event-seer. It provides a few general analyses like percent of time with N threads running, time between wake-ups etc.
You might find it interesting, but I haven't actually looked at ghc-events-analyse, so I don't know what it provides. > > I'm sorry, but to compile it without DPH you'd have to modify it to remove DphOps*. > > https://github.com/ghc/packages-dph/blob/master/dph-event-seer/src/Main.hs > > Amos > > On Tue, 31 Mar 2015 at 04:38 Dominic Steinitz wrote: > Does anyone know of any tools for analysing parallel program performance? > > I am trying to use threadscope but it keeps crashing with my 100M log file and ghc-events-analyze is not going to help as I have many hundreds of threads all carrying out the same computation. I think I?d like a library that would allow me to construct my own analyses rather than display them via GTK. There is ghc-events but that seems to be just for parsing the logs and I couldn?t find anything that used it in the way I would like to (apart from threadscope and ghc-events-analyze of course). > > Thanks > > Dominic Steinitz > dominic at steinitz.org > http://idontgetoutmuch.wordpress.com > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe -------------- next part -------------- An HTML attachment was scrubbed... URL: From michael at snoyman.com Thu Apr 2 10:16:44 2015 From: michael at snoyman.com (Michael Snoyman) Date: Thu, 02 Apr 2015 10:16:44 +0000 Subject: [Haskell-cafe] Announcing: LTS (Long Term Support) Haskell 2 Message-ID: The LTS Haskell 2.0 release is officially out the door. More details are available at: https://www.fpcomplete.com/blog/2015/04/announcing-lts-2 For the tl;dr crowd, run the following inside your project to get a cabal.config with appropriate constraints: wget https://www.stackage.org/lts/cabal.config There are some questions about policy at the end of the post that I'd appreciate feedback on. -------------- next part -------------- An HTML attachment was scrubbed... URL: From omari at smileystation.com Thu Apr 2 13:06:35 2015 From: omari at smileystation.com (Omari Norman) Date: Thu, 2 Apr 2015 09:06:35 -0400 Subject: [Haskell-cafe] cabal haddock --executables In-Reply-To: References: Message-ID: Are you talking about this bug? https://github.com/haskell/cabal/issues/1919 I'm not sure what you mean by "local" library. My workaround is not to have the executables depend on the library that's in the same package. That leads to two problems: 1) longer compile times, and 2) duplication in the Cabal file. To deal with problem 2, I use Cartel: https://hackage.haskell.org/package/cartel which brings its own problems but also solves other problems and so it's the best solution I've found. But Cartel does not help with problem 1. One solution to that problem is to split the executables into different packages. That's no fun either. On Wed, Apr 1, 2015 at 9:17 PM, Maurizio Vitale wrote: > I'm getting: > > cabal: internal error when calculating transitive package dependencies. > Debug info: [] > > and it seems this has been a problem for quite some while (and it has to > do with the executable depending on a local library) > > is there any known workaround? > > Thanks, > > Maurizio > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mrz.vtl at gmail.com Thu Apr 2 13:36:22 2015 From: mrz.vtl at gmail.com (Maurizio Vitale) Date: Thu, 2 Apr 2015 09:36:22 -0400 Subject: [Haskell-cafe] cabal haddock --executables In-Reply-To: References: Message-ID: That is the bug I'm referring to. Long term the right solution for me would be to have libraries and executables in different cabal packages, but during initial development it is nice to have everything in the same tree. The single tree solution helps also with the build of a related website. Although you could split that into its own package as well and inject the results of haddock, coverage, tests etc. from multiple packages into the generated html page, everything is much easier if it is all in one tree. I'll try with having separate packages under a toplevel directory with a Makefile that stitch everything together. On Thu, Apr 2, 2015 at 9:06 AM, Omari Norman wrote: > Are you talking about this bug? > > https://github.com/haskell/cabal/issues/1919 > > I'm not sure what you mean by "local" library. My workaround is not to > have the executables depend on the library that's in the same package. > That leads to two problems: 1) longer compile times, and 2) duplication in > the Cabal file. > > To deal with problem 2, I use Cartel: > > https://hackage.haskell.org/package/cartel > > which brings its own problems but also solves other problems and so it's > the best solution I've found. > > But Cartel does not help with problem 1. One solution to that problem is > to split the executables into different packages. That's no fun either. > > > On Wed, Apr 1, 2015 at 9:17 PM, Maurizio Vitale wrote: > >> I'm getting: >> >> cabal: internal error when calculating transitive package dependencies. >> Debug info: [] >> >> and it seems this has been a problem for quite some while (and it has to >> do with the executable depending on a local library) >> >> is there any known workaround? >> >> Thanks, >> >> Maurizio >> >> _______________________________________________ >> Haskell-Cafe mailing list >> Haskell-Cafe at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rlutz at hedmen.org Thu Apr 2 17:44:58 2015 From: rlutz at hedmen.org (Roland Lutz) Date: Thu, 2 Apr 2015 19:44:58 +0200 (CEST) Subject: [Haskell-cafe] How to implement a source-sink pattern In-Reply-To: References: Message-ID: On Wed, 1 Apr 2015, Frank Staals wrote: > So essentially you want a data structure for some kind of bipartite > graph. Yes, with the additional constraint that the vertices in one partite set (the "sinks") each connect to exactly one edge. > The most haskelly way to do that would probably to define the graph to > be simply define the Bipartite graph to be a pair of Maps, and define > functions to add/delete nodes and edges to the graph that make sure that > the two maps keep in sync. This was actually my first approach, but I couldn't find appropriate key and value types to be stored in the map. Since the vertices are well-known global objects, it doesn't make much sense to store more than a handle here. But how do I connect the handle back to the data structure? > In this model you cannot direclty mix the sources and sinks from > different modules. I.e. a 'BiGraph MySource MySink' cannot be used to > also store a (MySecondSource,MySecondSink) pairs. 
If you do want that, > you would need some wrapper type that distinguishes between the various > 'MySink', 'MySecondSink', versions. That's one of the points that trouble me. How would such a wrapper look like? I experimented a bit with your code (see below). I noticed that I have to specify "Ord src =>" and "Ord snk =>" in multiple places. Is there a way to state that type arguments for BiGraph always have to be instances of Ord? Roland import qualified Data.List as L import qualified Data.Map as M data BiGraph src snk = BiGraph { sourceToSinkMap :: M.Map src [snk], sinkToSourceMap :: M.Map snk src } deriving Show collectKeys :: Eq a => a -> M.Map k a -> [k] collectKeys a = M.keys . M.filter (== a) applyToPair :: (k -> a) -> k -> (k, a) applyToPair f a = (a, f a) initializeGraph :: Ord src => [src] -> M.Map snk src -> BiGraph src snk initializeGraph srcs m2 = BiGraph (M.fromList $ map (applyToPair $ (flip collectKeys) m2) srcs) m2 updateEdge :: Ord src => Ord snk => (src, snk) -> BiGraph src snk -> BiGraph src snk updateEdge (src, snk) (BiGraph m1 m2) = if M.notMember src m1 then error "updateEdge: invalid source" else if M.notMember snk m2 then error "updateEdge: invalid sink" else let oldsrc = m2 M.! snk in BiGraph (M.adjust (snk :) src $ M.adjust (L.delete snk) oldsrc m1) (M.insert snk src m2) sinksForSource :: Ord src => src -> BiGraph src snk -> [snk] sinksForSource src = (M.! src) . sourceToSinkMap From clint at debian.org Thu Apr 2 19:38:16 2015 From: clint at debian.org (Clint Adams) Date: Thu, 2 Apr 2015 19:38:16 +0000 Subject: [Haskell-cafe] Haskell-programming sysadmin opportunity at SFLC Message-ID: <20150402193816.GA23589@scru.org> The Software Freedom Law Center is seeking a motivated systems administrator for our small office, where we use only free and open source software. The Systems Administrator will be responsible for three main areas of work: systems maintenance, user support, and systems development. Our existing systems are a collection of Debian servers that provide central services to the office. In addition to Debian, we use: - Apache - Asterisk - Ceph - deets - dirvish and BoxBackup - Ganeti with Xen and KVM - Gitolite - Hakyll - hledger and hledger-web - Ikiwiki - Monkeysphere - Nginx - OpenLDAP - OpenVPN - Postfix/SpamAssassin/Dovecot - Radicale - Yesod as well as a number of other systems. Ideal candidates will have experience with most of these components and the ability to quickly master the ones they know less about. As our new Systems Administrator, you will be responsible for maintaining and rebuilding our servers as necessary so some knowledge of server components and hardware configuration will be needed. You should have experience with firewall management and secure network design. Experience with low-latency bandwidth shaping preferred. Programming experience with Perl, Python, Haskell, and shell scripts will also be necessary for maintaining legacy systems. In addition to the servers, you will also be responsible for supporting 5?10 desktop users with their daily use of free software. Additional help with this activity is available around the office and many users are self-supporting for local system issues. All office user machines are running Debian. You will be frequently tasked with other functions, which may include code analysis, explaining technological concepts and licensing to lawyers, community outreach, and anything else which may require technical expertise. 
We intend to continue to improve our free software technology for running our law practice. There are many challenging projects we'd like to do and about which we hope you will be excited. Experience working with software development in addition to systems administration is therefore a big plus.

Position includes full benefits. Please submit a cover letter and resume in a free document format to officemanager at softwarefreedom.org

Applications should be submitted no later than April 15, 2015

From olf at aatal-apotheke.de  Thu Apr  2 20:08:10 2015
From: olf at aatal-apotheke.de (Olaf Klinke)
Date: Thu, 2 Apr 2015 22:08:10 +0200
Subject: [Haskell-cafe] An Easy Solution to PVP Bounds and Cabal Hell
In-Reply-To: 
References: 
Message-ID: <69598BEB-3F57-484E-BFFA-170A76333271@aatal-apotheke.de>

I am mostly a consumer of Haskell and have not worked much with the interior of cabal or GHC, but Thomas Tuegel correctly pointed out that with version numbers a package can never safely depend on another package version that was released after itself. The empirical approach of trying builds is probably the most inclusive one.

But let us think about why dependencies can break a package. It is because a function was removed from the namespace, conflicting things have been added to the namespace, a data type definition has changed, or the modules have been re-structured. In the worst case, none of the above has happened but the semantics of a function has changed without affecting its type. But all other cases can be dealt with: When importing a module, either a smart compiler figures out or the programmer states explicitly which imported functions are expected to be provided by the imported module and what type they have. Thus, the version number of a package is replaced by the collection of classes, types and type signatures its modules export. After all, the version number can be seen as a crude "hash" of this information.

For example, instead of stating

import Data.List (mapAccumL)

one would say

import Data.List (mapAccumL :: (acc -> x -> (acc, y)) -> acc -> [x] -> (acc, [y]))

or rather the compiler would fill in this information and save it together with the package as a dependency. Now when a new version of Data.List comes out, it can be automatically checked whether all functions _used_ by the importing module are still exported and have the same type signature. Modulo semantics, this should be a pretty good estimate of whether a build will fail or succeed. One could possibly even add QuickCheck properties to the signatures stating that the imported functions are expected to behave in a certain way in the importing module. When installing the new package on a system, all these assertions need to be checked. Isn't that what configure scripts do prior to executing a makefile?

Olaf

From austin at well-typed.com  Fri Apr  3 11:36:43 2015
From: austin at well-typed.com (Austin Seipp)
Date: Fri, 3 Apr 2015 06:36:43 -0500
Subject: [Haskell-cafe] Help wanted: working on the GHC webpage
Message-ID: 

Hello *,

For a while now, I've been wanting to do a facelift on the GHC homepage, but among many other things it's been a low priority. I'd like for people to help, so I've tried to get the ball rolling.

The webpages existed in a Darcs repository previously (which wasn't available online), but earlier today I converted them to a git repository which you can find here:

https://github.com/haskell-infra/ghc-homepage

The site is currently composed of a set of "server side include" files that have a crude form of HTML templating.
So it's mostly just pretty verbose to add or refactor anything, and the site's templating and styling are quite old (they date back at least 10 years!)

So, I'm making an official call for some help. At the very least, I'd like to end up converting the site to something like Hakyll, which is doable without me causing a lot of damage to the stylesheets, but for the actual page itself I'd really appreciate it if anyone could help out! Please send pull requests or file issues, it's much appreciated.

-- 
Regards,

Austin Seipp, Haskell Consultant
Well-Typed LLP, http://www.well-typed.com/

From sergey.bushnyak at sigrlami.eu  Fri Apr  3 13:16:24 2015
From: sergey.bushnyak at sigrlami.eu (Sergey Bushnyak)
Date: Fri, 03 Apr 2015 16:16:24 +0300
Subject: [Haskell-cafe] Help wanted: working on the GHC webpage
In-Reply-To: 
References: 
Message-ID: <551E92A8.7020901@sigrlami.eu>

Hi, Austin. Here is my fork of your `ghc-homepage' repo: https://github.com/sigrlami/ghc-homepage. It builds the main page with Hakyll, and the overall work could be done in a couple of days or a week. But I want to raise a couple of issues related to design.

1) The site is built on outdated HTML markup and is not consistent in its structure; a lot of stuff links to other pages which have a different design.

2) Could we discuss the possibility of making it look more like the new haskell.org page and updating the design in general, switching to HTML5/CSS3?

3) Also, it might be easier to publish blog posts on this site, like the weekly news, without linking to Trac. I can reuse some code we are using for building haskell.od.ua (OdHug, Odessa Haskell User Group).

I can work on this transition.

-- 
Best regards,
Sergey Bushnyak

On 04/03/2015 02:36 PM, Austin Seipp wrote:
> [full quote of the original message snipped]

From sergey.bushnyak at sigrlami.eu  Fri Apr  3 14:08:26 2015
From: sergey.bushnyak at sigrlami.eu (Sergey Bushnyak)
Date: Fri, 03 Apr 2015 17:08:26 +0300
Subject: [Haskell-cafe] Help wanted: working on the GHC webpage
In-Reply-To: <87y4m9conm.fsf@gmail.com>
References: <551E92A8.7020901@sigrlami.eu> <87y4m9conm.fsf@gmail.com>
Message-ID: <551E9EDA.40301@sigrlami.eu>

> Why would it be easier? What's difficult about publishing on
> https://ghc.haskell.org/trac/ghc/blog?

I actually don't know how it's published on Trac. From my standpoint as a newcomer, it's better to see what's happening in one place, with one design, and have some shared git repo where people contribute in markdown.
> Moreover, the GHC weekly news are intimately linked to Trac, as they > reference Trac-tickets and Git commits, which Trac is able to annotate > with meta-data (ticket-type, -status, and -title for Ticket references, > as well as part of the Git commit msg for Git-commit refs). Ok, it was just a suggestion. Maybe it's a bad idea; I didn't know about the annotations. Anyway, I can still help with updating the GHC home page. From trebla at vex.net Sat Apr 4 17:40:03 2015 From: trebla at vex.net (Albert Y. C. Lai) Date: Sat, 04 Apr 2015 13:40:03 -0400 Subject: [Haskell-cafe] Is it possible for cabal sandboxes to use $HOME/.cabal/lib? In-Reply-To: References: , Message-ID: <552021F3.6050405@vex.net> On 2015-03-30 04:57 PM, Aldo Davide wrote: > So basically, I think it would be useful for cabal to be able to support an arbitrary list of directories, e.g.: > > /home/user/projects/foo/.cabal-sandbox/x86_64-linux-ghc-7.8.4-packages.conf.d > /home/user/.ghc/x86_64-linux-7.8.4/package.conf.d > /usr/lib64/ghc-7.8.4/package.conf.d > > Whenever a package is needed, then it would look it up in all of those directories. If a new package is to be installed, it would be installed in the topmost directory. In other words, it would overlay these directories one on top of the other, with the topmost one being used for read-write and the other ones being read-only. This has always been available. See my http://www.vex.net/~trebla/haskell/sicp.xhtml#sandbox In fact, read the whole thing. From aldodavide at gmx.com Sun Apr 5 00:36:27 2015 From: aldodavide at gmx.com (Aldo Davide) Date: Sun, 5 Apr 2015 02:36:27 +0200 Subject: [Haskell-cafe] Is it possible for cabal sandboxes to use $HOME/.cabal/lib? In-Reply-To: <552021F3.6050405@vex.net> References: , , <552021F3.6050405@vex.net> Message-ID: On April 04, 2015 at 6:40 PM, Albert Y. C. Lai wrote: > On 2015-03-30 04:57 PM, Aldo Davide wrote: > > So basically, I think it would be useful for cabal to be able to support an arbitrary list of directories, e.g.: > > > > /home/user/projects/foo/.cabal-sandbox/x86_64-linux-ghc-7.8.4-packages.conf.d > > /home/user/.ghc/x86_64-linux-7.8.4/package.conf.d > > /usr/lib64/ghc-7.8.4/package.conf.d > > > > Whenever a package is needed, then it would look it up in all of those directories. If a new package is to be installed, it would be installed in the topmost directory. In other words, it would overlay these directories one on top of the other, with the topmost one being used for read-write and the other ones being read-only. > > This has always been available. See my > > http://www.vex.net/~trebla/haskell/sicp.xhtml#sandbox > > In fact, read the whole thing. Thanks for the link to the article, interesting stuff. It's good to see that ghc as well as the install sub-command of cabal accept an arbitrary stack of package dbs. Unfortunately, I still haven't found a way to make sandboxes use these capabilities. I tried to create a cabal.config file inside a sandbox (like the comment at the top of cabal.sandbox.config suggests), but it does not make any difference if I include a package-db line in there. From mike at proclivis.com Sun Apr 5 07:11:11 2015 From: mike at proclivis.com (Michael Jones) Date: Sun, 5 Apr 2015 01:11:11 -0600 Subject: [Haskell-cafe] FFI Ptr Question Message-ID: I am having trouble figuring out how to pass callbacks to C. Given the definition below it is not clear how to define a function and pass it. I tried: getNumberOfRows :: CInt getNumberOfRows = 1 table <- wxcGridTableCreate getNumberOfRows ...
but the compiler barks at me: src/MainGui.hs:577:32: Not in scope: data constructor ?Ptr? even through I do the imports: import Foreign import Foreign.C There is also a _obj as the first value of wxcGridTableCreate and the callbacks, and I am not sure how to make a this* for it. I would have throughout that wxcGridTableCreate would return a Ptr. The goal is to pass haskell functions into wxcGridTableCreate so the grid can call back to haskell. Can someone show an example of just this one function, how to wrap a Ptr around it, and what to do with _obj? DEFINITION wxcGridTableCreate :: Ptr a -> Ptr b -> Ptr c -> Ptr d -> Ptr e -> Ptr f -> Ptr g -> Ptr h -> Ptr i -> Ptr j -> Ptr k -> Ptr l -> Ptr m -> Ptr n -> Ptr o -> Ptr p -> Ptr q -> IO (WXCGridTable ()) wxcGridTableCreate _obj _EifGetNumberRows _EifGetNumberCols _EifGetValue _EifSetValue _EifIsEmptyCell _EifClear _EifInsertRows _EifAppendRows _EifDeleteRows _EifInsertCols _EifAppendCols _EifDeleteCols _EifSetRowLabelValue _EifSetColLabelValue _EifGetRowLabelValue _EifGetColLabelValue = withObjectResult $ wx_ELJGridTable_Create _obj _EifGetNumberRows _EifGetNumberCols _EifGetValue _EifSetValue _EifIsEmptyCell _EifClear _EifInsertRows _EifAppendRows _EifDeleteRows _EifInsertCols _EifAppendCols _EifDeleteCols _EifSetRowLabelValue _EifSetColLabelValue _EifGetRowLabelValue _EifGetColLabelValue foreign import ccall "ELJGridTable_Create" wx_ELJGridTable_Create :: Ptr a -> Ptr b -> Ptr c -> Ptr d -> Ptr e -> Ptr f -> Ptr g -> Ptr h -> Ptr i -> Ptr j -> Ptr k -> Ptr l -> Ptr m -> Ptr n -> Ptr o -> Ptr p -> Ptr q -> IO (Ptr (TWXCGridTable ())) typedef int _cdecl (*TGridGetInt)(void* _obj); typedef int _cdecl (*TGridIsEmpty)(void* _obj, int row, int col); typedef void* _cdecl (*TGridGetValue)(void* _obj, int row, int col); typedef void _cdecl (*TGridSetValue)(void* _obj, int row, int col, void* val); typedef void _cdecl (*TGridClear)(void* _obj); typedef int _cdecl (*TGridModify)(void* _obj, int pos, int num); typedef int _cdecl (*TGridMultiModify)(void* _obj, int num); typedef void _cdecl (*TGridSetLabel)(void* _obj, int idx, void* val); typedef void* _cdecl (*TGridGetLabel)(void* _obj, int idx); EWXWEXPORT(void*,ELJGridTable_Create)(void* self,void* _EifGetNumberRows,void* _EifGetNumberCols,void* _EifGetValue,void* _EifSetValue,void* _EifIsEmptyCell,void* _EifClear,void* _EifInsertRows,void* _EifAppendRows,void* _EifDeleteRows,void* _EifInsertCols,void* _EifAppendCols,void* _EifDeleteCols,void* _EifSetRowLabelValue,void* _EifSetColLabelValue,void* _EifGetRowLabelValue,void* _EifGetColLabelValue) { return (void*)new ELJGridTable (self, From daniel.trstenjak at gmail.com Sun Apr 5 12:19:13 2015 From: daniel.trstenjak at gmail.com (Daniel Trstenjak) Date: Sun, 5 Apr 2015 14:19:13 +0200 Subject: [Haskell-cafe] Functional dependencies conflict Message-ID: <20150405121913.GA4309@machine> Hi, I'm getting the compile error: Gamgine/Image/PNG/Internal/Parser.hs:14:10: Functional dependencies conflict between instance declarations: instance Monad m => Stream LB.ByteString m Word8 -- Defined at Gamgine/Image/PNG/Internal/Parser.hs:14:10 instance Monad m => Stream LB.ByteString m Char -- Defined in ?Text.Parsec.Prim? The relevant stuff from the parsec 3.1.9 code[1] is: {-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleContexts, UndecidableInstances #-} ... import qualified Data.ByteString.Lazy.Char8 as CL import qualified Data.ByteString.Char8 as C ... 
class (Monad m) => Stream s m t | s -> t where uncons :: s -> m (Maybe (t,s)) instance (Monad m) => Stream CL.ByteString m Char where uncons = return . CL.uncons instance (Monad m) => Stream C.ByteString m Char where uncons = return . C.uncons And from my code[2] is: {-# LANGUAGE BangPatterns, FlexibleInstances, MultiParamTypeClasses, FlexibleContexts #-} ... import qualified Data.ByteString.Lazy as LB ... instance (Monad m) => Stream LB.ByteString m Word8 where uncons = return . LB.uncons As you can see, the instances are for different ByteString types, therefore I don't quite get where GHC sees here any conflicts. Greetings, Daniel [1] https://github.com/aslatter/parsec/blob/master/Text/Parsec/Prim.hs [2] https://github.com/dan-t/Gamgine/blob/master/Gamgine/Image/PNG/Internal/Parser.hs From roma at ro-che.info Sun Apr 5 12:25:01 2015 From: roma at ro-che.info (Roman Cheplyaka) Date: Sun, 05 Apr 2015 15:25:01 +0300 Subject: [Haskell-cafe] Functional dependencies conflict In-Reply-To: <20150405121913.GA4309@machine> References: <20150405121913.GA4309@machine> Message-ID: <5521299D.3020007@ro-che.info> Data.ByteString.Lazy.Char8 exports the same lazy bytestring type as Data.ByteString.Lazy. Only functions and instances differ. On 05/04/15 15:19, Daniel Trstenjak wrote: > > Hi, > > I'm getting the compile error: > > Gamgine/Image/PNG/Internal/Parser.hs:14:10: > Functional dependencies conflict between instance declarations: > instance Monad m => Stream LB.ByteString m Word8 > -- Defined at Gamgine/Image/PNG/Internal/Parser.hs:14:10 > instance Monad m => Stream LB.ByteString m Char > -- Defined in ?Text.Parsec.Prim? > > > > The relevant stuff from the parsec 3.1.9 code[1] is: > > {-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleContexts, UndecidableInstances #-} > > ... > > import qualified Data.ByteString.Lazy.Char8 as CL > import qualified Data.ByteString.Char8 as C > > ... > > class (Monad m) => Stream s m t | s -> t where > uncons :: s -> m (Maybe (t,s)) > > instance (Monad m) => Stream CL.ByteString m Char where > uncons = return . CL.uncons > > instance (Monad m) => Stream C.ByteString m Char where > uncons = return . C.uncons > > > > And from my code[2] is: > > {-# LANGUAGE BangPatterns, FlexibleInstances, MultiParamTypeClasses, FlexibleContexts #-} > > ... > > import qualified Data.ByteString.Lazy as LB > > ... > > instance (Monad m) => Stream LB.ByteString m Word8 where > uncons = return . LB.uncons > > > > As you can see, the instances are for different ByteString types, > therefore I don't quite get where GHC sees here any conflicts. > > > Greetings, > Daniel > > > [1] https://github.com/aslatter/parsec/blob/master/Text/Parsec/Prim.hs > [2] https://github.com/dan-t/Gamgine/blob/master/Gamgine/Image/PNG/Internal/Parser.hs > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > From ivan.miljenovic at gmail.com Sun Apr 5 12:25:58 2015 From: ivan.miljenovic at gmail.com (Ivan Lazar Miljenovic) Date: Sun, 5 Apr 2015 22:25:58 +1000 Subject: [Haskell-cafe] Functional dependencies conflict In-Reply-To: <5521299D.3020007@ro-che.info> References: <20150405121913.GA4309@machine> <5521299D.3020007@ro-che.info> Message-ID: On 5 April 2015 at 22:25, Roman Cheplyaka wrote: > Data.ByteString.Lazy.Char8 exports the same lazy bytestring type as > Data.ByteString.Lazy. Only functions and instances differ. Well, *instances* can't differ... 
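The usual way around this kind of fundep clash -- and it comes up again further down the thread -- is to give the byte-oriented stream its own type via a newtype, so the functional dependency no longer sees the same ByteString on the left-hand side. A rough, untested sketch; the Bytes name is invented:

{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}

import Data.Word (Word8)
import qualified Data.ByteString.Lazy as LB
import Text.Parsec.Prim (Stream (..))

-- A distinct stream type, so it cannot clash with parsec's own
-- Stream CL.ByteString m Char instance for the same ByteString type.
newtype Bytes = Bytes LB.ByteString

instance Monad m => Stream Bytes m Word8 where
    uncons (Bytes bs) = return (fmap (fmap Bytes) (LB.uncons bs))

The parsers then run over Bytes instead of LB.ByteString, and wrapping/unwrapping happens once at the boundary.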
> > On 05/04/15 15:19, Daniel Trstenjak wrote: >> >> Hi, >> >> I'm getting the compile error: >> >> Gamgine/Image/PNG/Internal/Parser.hs:14:10: >> Functional dependencies conflict between instance declarations: >> instance Monad m => Stream LB.ByteString m Word8 >> -- Defined at Gamgine/Image/PNG/Internal/Parser.hs:14:10 >> instance Monad m => Stream LB.ByteString m Char >> -- Defined in ?Text.Parsec.Prim? >> >> >> >> The relevant stuff from the parsec 3.1.9 code[1] is: >> >> {-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleContexts, UndecidableInstances #-} >> >> ... >> >> import qualified Data.ByteString.Lazy.Char8 as CL >> import qualified Data.ByteString.Char8 as C >> >> ... >> >> class (Monad m) => Stream s m t | s -> t where >> uncons :: s -> m (Maybe (t,s)) >> >> instance (Monad m) => Stream CL.ByteString m Char where >> uncons = return . CL.uncons >> >> instance (Monad m) => Stream C.ByteString m Char where >> uncons = return . C.uncons >> >> >> >> And from my code[2] is: >> >> {-# LANGUAGE BangPatterns, FlexibleInstances, MultiParamTypeClasses, FlexibleContexts #-} >> >> ... >> >> import qualified Data.ByteString.Lazy as LB >> >> ... >> >> instance (Monad m) => Stream LB.ByteString m Word8 where >> uncons = return . LB.uncons >> >> >> >> As you can see, the instances are for different ByteString types, >> therefore I don't quite get where GHC sees here any conflicts. >> >> >> Greetings, >> Daniel >> >> >> [1] https://github.com/aslatter/parsec/blob/master/Text/Parsec/Prim.hs >> [2] https://github.com/dan-t/Gamgine/blob/master/Gamgine/Image/PNG/Internal/Parser.hs >> _______________________________________________ >> Haskell-Cafe mailing list >> Haskell-Cafe at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe -- Ivan Lazar Miljenovic Ivan.Miljenovic at gmail.com http://IvanMiljenovic.wordpress.com From michael at snoyman.com Sun Apr 5 12:29:33 2015 From: michael at snoyman.com (Michael Snoyman) Date: Sun, 05 Apr 2015 12:29:33 +0000 Subject: [Haskell-cafe] Generalizing "unlift" functions with monad-control In-Reply-To: References: <2A84F536-F0DE-46B5-9AFD-8C8D1BB68EAD@gmail.com> Message-ID: Finally got around to this, and it worked like a charm. Just cleaning up this package, will probably be on Hackage in a few days. On Tue, Mar 31, 2015 at 6:37 PM Michael Snoyman wrote: > Wow, this looks great, thank you! I have to play around with this a bit > more to see how it fits in with everything, I'll ping back this thread when > I have something worth sharing. > > On Tue, Mar 31, 2015 at 12:06 PM Erik Hesselink > wrote: > >> This looks much better! I don't know why I didn't find it, I did >> experiment with type classes a bit... >> >> Regards, >> >> Erik >> >> On Tue, Mar 31, 2015 at 10:59 AM, Hiromi ISHII >> wrote: >> > Hi Michael, Erik, >> > >> >> The constraints package allows you to define Forall constraints, but >> >> that needs something of kind (* -> Constraint) and I can't figure out >> >> how to get something like (b ~ StT t b) in that form: we don't have >> >> 'data constraints'. 
>> > >> > I think we can do it with 'constraints' package using type class as >> below: >> > >> > ```haskell >> > import Control.Monad.Trans.Control (MonadTransControl (liftWith), StT) >> > import Control.Monad.Trans.Reader (ReaderT) >> > import Data.Constraint ((:-), (\\)) >> > import Data.Constraint.Forall (Forall, inst) >> > >> > class (StT t a ~ a) => Identical t a >> > instance (StT t a ~ a) => Identical t a >> > >> > type Unliftable t = Forall (Identical t) >> > >> > newtype Unlift t = Unlift { unlift :: forall n b. Monad n => t n b -> n >> b } >> > >> > mkUnlift :: forall t m a . (Forall (Identical t), Monad m) >> > => (forall n b. Monad n => t n b -> n (StT t b)) -> t m a -> m >> a >> > mkUnlift r act = r act \\ (inst :: Forall (Identical t) :- Identical t >> a) >> > >> > askRunG :: forall t m. (MonadTransControl t, Unliftable t, Monad m) => >> t m (Unlift t) >> > askRunG = liftWith unlifter >> > where >> > unlifter :: (forall n b. Monad n => t n b -> n (StT t b)) -> m >> (Unlift t) >> > unlifter r = return $ Unlift (mkUnlift r) >> > >> > askRun :: Monad m => ReaderT a m (Unlift (ReaderT a)) >> > askRun = askRunG >> > ``` >> > >> > This compiles successfuly in my environment, and things seems to be >> done correctly, >> > because we can derive ReaderT version from `askRunG`. >> > >> > In above, we defined `Unliftable t` constraint sysnonym for `Forall >> (Identical t)` just for convenience. >> > Using this constraint synonym needs `ConstraintKinds` even for library >> users, it might be >> > appropreate to define it as follows: >> > >> > ``` >> > class Forall (Identical t) => Unliftable t >> > instance Forall (Identical t) => Unliftable t >> > ``` >> > >> > This definiton is the same trick as `Identical` constraint. >> > We can choose which case to take for `Unliftable`, but `Identical` type >> class >> > should be defined in this way, to get `Forall` works correctly. >> > (This is due to 'constarints' package's current implementation; >> > don't ask me why :-)) >> > >> >> 2015/03/31 15:32?Michael Snoyman ????? >> >> >> >> Those are some impressive type gymnastics :) Unfortunately, I still >> don't think I'm able to write the generic function the way I wanted to, so >> I'll be stuck with a typeclass. However, I may be able to provide your >> helper types/functions to make it easier for people to declare their own >> instances. >> >> >> >> On Mon, Mar 30, 2015 at 10:42 PM Erik Hesselink >> wrote: >> >> Hi Michael, >> >> >> >> The problem seems to be that the constraint you want isn't (b ~ StT t >> >> b) for some specific b, you want (forall b. b ~ StT t b). It's not >> >> possible to say this directly, but after some trying I got something >> >> ugly that works. Perhaps it can be the inspiration for something >> >> nicer? >> >> >> >> {-# LANGUAGE RankNTypes #-} >> >> {-# LANGUAGE KindSignatures #-} >> >> {-# LANGUAGE GADTs #-} >> >> {-# LANGUAGE ScopedTypeVariables #-} >> >> >> >> import Control.Monad.Trans.Control >> >> import Control.Monad.Trans.Reader >> >> >> >> newtype Unlift t = Unlift { unlift :: forall n b. Monad n => t n b -> >> n b } >> >> >> >> newtype ForallStId t = ForallStId (forall a. StTEq t a) >> >> >> >> data StTEq (t :: (* -> *) -> * -> *) a where >> >> StTEq :: a ~ StT t a => StTEq t a >> >> >> >> askRun :: Monad m => ReaderT r m (Unlift (ReaderT r)) >> >> askRun = liftWith (return . Unlift) >> >> >> >> askRunG :: forall t m. 
>> >> ( MonadTransControl t >> >> , Monad m >> >> ) >> >> => ForallStId t -> t m (Unlift t) >> >> askRunG x = liftWith $ \runT -> >> >> return $ Unlift ( (case x of ForallStId (StTEq :: StTEq t b) -> >> >> runT) :: forall b n. Monad n => t n b -> n b ) >> >> >> >> askRun' :: Monad m => ReaderT r m (Unlift (ReaderT r)) >> >> askRun' = askRunG (ForallStId StTEq) >> >> >> >> The constraints package allows you to define Forall constraints, but >> >> that needs something of kind (* -> Constraint) and I can't figure out >> >> how to get something like (b ~ StT t b) in that form: we don't have >> >> 'data constraints'. >> >> >> >> Hope this helps you along, and please let me know if you find a nicer >> >> way to do this. >> >> >> >> Regards, >> >> >> >> Erik >> >> >> >> On Mon, Mar 30, 2015 at 7:33 AM, Michael Snoyman >> wrote: >> >> > I'm trying to extract an "unlift" function from monad-control, which >> would >> >> > allow stripping off a layer of a transformer stack in some cases. >> It's easy >> >> > to see that this works well for ReaderT, e.g.: >> >> > >> >> > {-# LANGUAGE RankNTypes #-} >> >> > {-# LANGUAGE TypeFamilies #-} >> >> > import Control.Monad.Trans.Control >> >> > import Control.Monad.Trans.Reader >> >> > >> >> > newtype Unlift t = Unlift { unlift :: forall n b. Monad n => t n b >> -> n b } >> >> > >> >> > askRun :: Monad m => ReaderT r m (Unlift (ReaderT r)) >> >> > askRun = liftWith (return . Unlift) >> >> > >> >> > The reason this works is that the `StT` associated type for >> `ReaderT` just >> >> > returns the original type, i.e. `type instance StT (ReaderT r) m a = >> a`. In >> >> > theory, we should be able to generalize `askRun` to any transformer >> for >> >> > which that applies. However, I can't figure out any way to express >> that >> >> > generalized type signature in a way that GHC accepts it. It seems >> like the >> >> > following should do the trick: >> >> > >> >> > askRunG :: ( MonadTransControl t >> >> > , Monad m >> >> > , b ~ StT t b >> >> > ) >> >> > => t m (Unlift t) >> >> > askRunG = liftWith (return . Unlift) >> >> > >> >> > However, I get the following error message when trying this: >> >> > >> >> > foo.hs:11:12: >> >> > Occurs check: cannot construct the infinite type: b0 ~ StT t b0 >> >> > The type variable ?b0? is ambiguous >> >> > In the ambiguity check for the type signature for ?askRunG?: >> >> > askRunG :: forall (t :: (* -> *) -> * -> *) (m :: * -> *) b. >> >> > (MonadTransControl t, Monad m, b ~ StT t b) => >> >> > t m (Unlift t) >> >> > To defer the ambiguity check to use sites, enable >> AllowAmbiguousTypes >> >> > In the type signature for ?askRunG?: >> >> > askRunG :: (MonadTransControl t, Monad m, b ~ StT t b) => >> >> > t m (Unlift t) >> >> > >> >> > Adding AllowAmbiguousTypes to the mix provides: >> >> > >> >> > foo.hs:17:30: >> >> > Could not deduce (b1 ~ StT t b1) >> >> > from the context (MonadTransControl t, Monad m, b ~ StT t b) >> >> > bound by the type signature for >> >> > askRunG :: (MonadTransControl t, Monad m, b ~ StT t >> b) => >> >> > t m (Unlift t) >> >> > at foo.hs:(12,12)-(16,25) >> >> > ?b1? is a rigid type variable bound by >> >> > the type forall (n1 :: * -> *) b2. >> >> > Monad n1 => >> >> > t n1 b2 -> n1 (StT t b2) >> >> > at foo.hs:1:1 >> >> > Expected type: Run t -> Unlift t >> >> > Actual type: (forall (n :: * -> *) b. Monad n => t n b -> n b) >> >> > -> Unlift t >> >> > Relevant bindings include >> >> > askRunG :: t m (Unlift t) (bound at foo.hs:17:1) >> >> > In the second argument of ?(.)?, namely ?Unlift? 
>> >> > In the first argument of ?liftWith?, namely ?(return . Unlift)? >> >> > >> >> > I've tested with both GHC 7.8.4 and 7.10.1. Any suggestions? >> >> > >> >> > _______________________________________________ >> >> > Haskell-Cafe mailing list >> >> > Haskell-Cafe at haskell.org >> >> > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> >> > >> >> _______________________________________________ >> >> Haskell-Cafe mailing list >> >> Haskell-Cafe at haskell.org >> >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> > >> > ----- ?? ?? --------------------------- >> > konn.jinro at gmail.com >> > ????????????? >> > ???? ???????? >> > ---------------------------------------------- >> > >> > _______________________________________________ >> > Haskell-Cafe mailing list >> > Haskell-Cafe at haskell.org >> > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> _______________________________________________ >> Haskell-Cafe mailing list >> Haskell-Cafe at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From roma at ro-che.info Sun Apr 5 12:30:52 2015 From: roma at ro-che.info (Roman Cheplyaka) Date: Sun, 05 Apr 2015 15:30:52 +0300 Subject: [Haskell-cafe] Functional dependencies conflict In-Reply-To: References: <20150405121913.GA4309@machine> <5521299D.3020007@ro-che.info> Message-ID: <55212AFC.6010002@ro-che.info> To be precise, the sets of instances differ. Eg. the Char8 module exports the IsString instance, which normal Data.ByteString.Lazy doesn't. On 05/04/15 15:25, Ivan Lazar Miljenovic wrote: > On 5 April 2015 at 22:25, Roman Cheplyaka wrote: >> Data.ByteString.Lazy.Char8 exports the same lazy bytestring type as >> Data.ByteString.Lazy. Only functions and instances differ. > > Well, *instances* can't differ... > >> >> On 05/04/15 15:19, Daniel Trstenjak wrote: >>> >>> Hi, >>> >>> I'm getting the compile error: >>> >>> Gamgine/Image/PNG/Internal/Parser.hs:14:10: >>> Functional dependencies conflict between instance declarations: >>> instance Monad m => Stream LB.ByteString m Word8 >>> -- Defined at Gamgine/Image/PNG/Internal/Parser.hs:14:10 >>> instance Monad m => Stream LB.ByteString m Char >>> -- Defined in ?Text.Parsec.Prim? >>> >>> >>> >>> The relevant stuff from the parsec 3.1.9 code[1] is: >>> >>> {-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleContexts, UndecidableInstances #-} >>> >>> ... >>> >>> import qualified Data.ByteString.Lazy.Char8 as CL >>> import qualified Data.ByteString.Char8 as C >>> >>> ... >>> >>> class (Monad m) => Stream s m t | s -> t where >>> uncons :: s -> m (Maybe (t,s)) >>> >>> instance (Monad m) => Stream CL.ByteString m Char where >>> uncons = return . CL.uncons >>> >>> instance (Monad m) => Stream C.ByteString m Char where >>> uncons = return . C.uncons >>> >>> >>> >>> And from my code[2] is: >>> >>> {-# LANGUAGE BangPatterns, FlexibleInstances, MultiParamTypeClasses, FlexibleContexts #-} >>> >>> ... >>> >>> import qualified Data.ByteString.Lazy as LB >>> >>> ... >>> >>> instance (Monad m) => Stream LB.ByteString m Word8 where >>> uncons = return . LB.uncons >>> >>> >>> >>> As you can see, the instances are for different ByteString types, >>> therefore I don't quite get where GHC sees here any conflicts. 
>>> >>> >>> Greetings, >>> Daniel >>> >>> >>> [1] https://github.com/aslatter/parsec/blob/master/Text/Parsec/Prim.hs >>> [2] https://github.com/dan-t/Gamgine/blob/master/Gamgine/Image/PNG/Internal/Parser.hs >>> _______________________________________________ >>> Haskell-Cafe mailing list >>> Haskell-Cafe at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >>> >> >> _______________________________________________ >> Haskell-Cafe mailing list >> Haskell-Cafe at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > > From daniel.trstenjak at gmail.com Sun Apr 5 12:54:36 2015 From: daniel.trstenjak at gmail.com (Daniel Trstenjak) Date: Sun, 5 Apr 2015 14:54:36 +0200 Subject: [Haskell-cafe] Functional dependencies conflict In-Reply-To: <5521299D.3020007@ro-che.info> References: <20150405121913.GA4309@machine> <5521299D.3020007@ro-che.info> Message-ID: <20150405125436.GA15148@machine> On Sun, Apr 05, 2015 at 03:25:01PM +0300, Roman Cheplyaka wrote: > Data.ByteString.Lazy.Char8 exports the same lazy bytestring type as > Data.ByteString.Lazy. Only functions and instances differ. So my only option in this case is to define a newtype wrapper for Data.ByteString.Lazy and then define a Stream instance on this one? Greetings, Daniel From roma at ro-che.info Sun Apr 5 13:04:43 2015 From: roma at ro-che.info (Roman Cheplyaka) Date: Sun, 05 Apr 2015 16:04:43 +0300 Subject: [Haskell-cafe] Functional dependencies conflict In-Reply-To: <20150405125436.GA15148@machine> References: <20150405121913.GA4309@machine> <5521299D.3020007@ro-che.info> <20150405125436.GA15148@machine> Message-ID: <552132EB.6020101@ro-che.info> On 05/04/15 15:54, Daniel Trstenjak wrote: > > On Sun, Apr 05, 2015 at 03:25:01PM +0300, Roman Cheplyaka wrote: >> Data.ByteString.Lazy.Char8 exports the same lazy bytestring type as >> Data.ByteString.Lazy. Only functions and instances differ. > > So my only option in this case is to define a newtype wrapper > for Data.ByteString.Lazy and then define a Stream instance on this one? You might do that. But if I were you, I'd use attoparsec or even binary/cereal to parse PNG. They are better suited for parsing binary data. Roman From aldodavide at gmx.com Sun Apr 5 15:12:16 2015 From: aldodavide at gmx.com (Aldo Davide) Date: Sun, 5 Apr 2015 17:12:16 +0200 Subject: [Haskell-cafe] FFI Ptr Question In-Reply-To: References: Message-ID: Sorry but I am having trouble understanding your post, but here's some information you might find useful: * If you want to pass a callback to a C function, you need to use a FunPtr. In particular, you need to create a "wrapper" foreign import that will allow you to convert a haskell function to a FunPtr that can be then passed to C and called by C. The docs on FunPtr [1] explain how to do this. * Ptr is not a data constructor, its a type constructor, so e.g. `Ptr CInt` is a valid type, but `Ptr 4` is not a valid expression. If you want to store a value in the heap and then create a pointer to it, you can use the `with` function [2]. So e.g. in `with 5 $ \p -> ...`, p is a `Ptr CInt`. Remember that `Ptr`s are to useful for passing callbacks though. [1] https://hackage.haskell.org/package/base-4.8.0.0/docs/Foreign-Ptr.html#t:FunPtr [2] https://hackage.haskell.org/package/base-4.8.0.0/docs/Foreign-Marshal-Utils.html#v:with Michael Jones, wrote: > I am having trouble figuring out how to pass callbacks to C. > > Given the definition below it is not clear how to define a function and pass it. 
I tried: > > getNumberOfRows :: CInt > getNumberOfRows = 1 > > table <- wxcGridTableCreate getNumberOfRows? > > but the compiler barks at me: > > src/MainGui.hs:577:32: Not in scope: data constructor ?Ptr? > > even through I do the imports: > > import Foreign > import Foreign.C > > There is also a _obj as the first value of wxcGridTableCreate and the callbacks, and I am not sure how to make a this* for it. I would have throughout that wxcGridTableCreate would return a Ptr. > > The goal is to pass haskell functions into wxcGridTableCreate so the grid can call back to haskell. > > Can someone show an example of just this one function, how to wrap a Ptr around it, and what to do with _obj? > > > DEFINITION > > wxcGridTableCreate :: Ptr a -> Ptr b -> Ptr c -> Ptr d -> Ptr e -> Ptr f -> Ptr g -> Ptr h -> Ptr i -> Ptr j -> Ptr k -> Ptr l -> Ptr m -> Ptr n -> Ptr o -> Ptr p -> Ptr q -> IO (WXCGridTable ()) > wxcGridTableCreate _obj _EifGetNumberRows _EifGetNumberCols _EifGetValue _EifSetValue _EifIsEmptyCell _EifClear _EifInsertRows _EifAppendRows _EifDeleteRows _EifInsertCols _EifAppendCols _EifDeleteCols _EifSetRowLabelValue _EifSetColLabelValue _EifGetRowLabelValue _EifGetColLabelValue > = withObjectResult $ > wx_ELJGridTable_Create _obj _EifGetNumberRows _EifGetNumberCols _EifGetValue _EifSetValue _EifIsEmptyCell _EifClear > _EifInsertRows _EifAppendRows _EifDeleteRows _EifInsertCols _EifAppendCols _EifDeleteCols > _EifSetRowLabelValue _EifSetColLabelValue _EifGetRowLabelValue _EifGetColLabelValue > foreign import ccall "ELJGridTable_Create" wx_ELJGridTable_Create :: Ptr a -> Ptr b -> Ptr c -> Ptr d -> Ptr e -> Ptr f -> Ptr g -> Ptr h -> Ptr i -> Ptr j -> Ptr k -> Ptr l -> Ptr m -> Ptr n -> Ptr o -> Ptr p -> Ptr q -> IO (Ptr (TWXCGridTable ())) > > typedef int _cdecl (*TGridGetInt)(void* _obj); > typedef int _cdecl (*TGridIsEmpty)(void* _obj, int row, int col); > typedef void* _cdecl (*TGridGetValue)(void* _obj, int row, int col); > typedef void _cdecl (*TGridSetValue)(void* _obj, int row, int col, void* val); > typedef void _cdecl (*TGridClear)(void* _obj); > typedef int _cdecl (*TGridModify)(void* _obj, int pos, int num); > typedef int _cdecl (*TGridMultiModify)(void* _obj, int num); > typedef void _cdecl (*TGridSetLabel)(void* _obj, int idx, void* val); > typedef void* _cdecl (*TGridGetLabel)(void* _obj, int idx); > > EWXWEXPORT(void*,ELJGridTable_Create)(void* self,void* _EifGetNumberRows,void* _EifGetNumberCols,void* _EifGetValue,void* _EifSetValue,void* _EifIsEmptyCell,void* _EifClear,void* _EifInsertRows,void* _EifAppendRows,void* _EifDeleteRows,void* _EifInsertCols,void* _EifAppendCols,void* _EifDeleteCols,void* _EifSetRowLabelValue,void* _EifSetColLabelValue,void* _EifGetRowLabelValue,void* _EifGetColLabelValue) > { > return (void*)new ELJGridTable (self, > > > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > From aldodavide at gmx.com Sun Apr 5 15:14:23 2015 From: aldodavide at gmx.com (Aldo Davide) Date: Sun, 5 Apr 2015 17:14:23 +0200 Subject: [Haskell-cafe] FFI Ptr Question In-Reply-To: References: , Message-ID: Aldo Davide wrote: > `Ptr`s are to useful for passing callbacks though. Typo, I meant: `Ptr`s are *NOT* useful for passing callbacks though. 
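To make the "wrapper" route concrete, a minimal self-contained sketch looks like the following. The C-side callback type and all the names here are illustrative assumptions, not the actual wxHaskell signatures:

{-# LANGUAGE ForeignFunctionInterface #-}

import Foreign
import Foreign.C.Types

-- Assume the C side wants a callback of type  int (*)(void *).
type GetInt = Ptr () -> IO CInt

-- A "wrapper" import turns a Haskell function of that type
-- into a C function pointer.
foreign import ccall "wrapper"
    mkGetInt :: GetInt -> IO (FunPtr GetInt)

numberOfRows :: GetInt
numberOfRows _obj = return 10

main :: IO ()
main = do
    fp <- mkGetInt numberOfRows
    -- pass fp (or castFunPtrToPtr fp, if the binding only accepts Ptr)
    -- to whatever C function registers the callback, then ...
    freeHaskellFunPtr fp  -- ... free it once C can no longer call it

The important part is that the FunPtr stays alive for as long as the C side might invoke it, and is released with freeHaskellFunPtr afterwards.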
From hjgtuyl at chello.nl Sun Apr 5 20:14:45 2015 From: hjgtuyl at chello.nl (Henk-Jan van Tuyl) Date: Sun, 05 Apr 2015 22:14:45 +0200 Subject: [Haskell-cafe] Initializing dynamic libraries Message-ID: L.S., I am using C++ dynamic libraries via a dynamic library written in C. The first thing my Haskell software does, is calling a C function to initiate the C++ libraries; but the C++ libraries generate warnings about not being able to set certain things, before the Haskell program initializes everything. Is there some automatic call to the dynamic libraries done, before execution of the main function in the Haskell program? Regards, Henk-Jan van Tuyl -- Folding at home What if you could share your unused computer power to help find a cure? In just 5 minutes you can join the world's biggest networked computer and get us closer sooner. Watch the video. http://folding.stanford.edu/ http://Van.Tuyl.eu/ http://members.chello.nl/hjgtuyl/tourdemonad.html Haskell programming -- From rasen.dubi at gmail.com Mon Apr 6 02:03:19 2015 From: rasen.dubi at gmail.com (Alexey Shmalko) Date: Mon, 6 Apr 2015 05:03:19 +0300 Subject: [Haskell-cafe] Initializing dynamic libraries In-Reply-To: References: Message-ID: Hello, I don't quite understand what's going on, but answering your question... Yes, depending on the operating system there are mechanisms to have initialization function in dynamic libraries. Search for "dynamic library init function" or something like that. Best regards, Alexey On Sun, Apr 5, 2015 at 11:14 PM, Henk-Jan van Tuyl wrote: > > L.S., > > I am using C++ dynamic libraries via a dynamic library written in C. The > first thing my Haskell software does, is calling a C function to initiate > the C++ libraries; but the C++ libraries generate warnings about not being > able to set certain things, before the Haskell program initializes > everything. Is there some automatic call to the dynamic libraries done, > before execution of the main function in the Haskell program? > > Regards, > Henk-Jan van Tuyl > > > -- > Folding at home > What if you could share your unused computer power to help find a cure? In > just 5 minutes you can join the world's biggest networked computer and get > us closer sooner. Watch the video. > http://folding.stanford.edu/ > > > http://Van.Tuyl.eu/ > http://members.chello.nl/hjgtuyl/tourdemonad.html > Haskell programming > -- > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mike at proclivis.com Mon Apr 6 04:28:49 2015 From: mike at proclivis.com (Michael Jones) Date: Sun, 5 Apr 2015 22:28:49 -0600 Subject: [Haskell-cafe] FFI Ptr Question In-Reply-To: References: Message-ID: <64DF165F-F705-4978-9B6C-4A869C880B13@proclivis.com> Ok, I?ll give some details and since I made it work using your description of Ptr a, I'll describe the solution. And it will leave one question at the end. There is a class in wxWidgets called wxGridTableBase. It has callbacks and definitions of signatures that are C typed. C++ CLASS WITH CALLBACKS extern "C" { typedef int _cdecl (*TGridGetInt)(void* _obj); typedef int _cdecl (*TGridIsEmpty)(void* _obj, int row, int col); ? } class ELJGridTable : public wxGridTableBase { private: void* EiffelObject; TGridGetInt EifGetNumberRows; TGridGetInt EifGetNumberCols; ... 
public: ELJGridTable (void* _obj, void* _EifGetNumberRows, void* _EifGetNumberCols, ? ): wxGridTableBase() { EiffelObject = _obj; EifGetNumberRows = (TGridGetInt)_EifGetNumberRows; EifGetNumberCols = (TGridGetInt)_EifGetNumberCols; ... }; int GetNumberRows() {return EifGetNumberRows(EiffelObject);}; int GetNumberCols() {return EifGetNumberCols(EiffelObject);}; ... }; WXHASKELL WRAPPER wxcGridTableCreate :: Ptr a -> Ptr b -> ... -> IO (WXCGridTable ()) wxcGridTableCreate _obj _EifGetNumberRows _EifGetNumberCols _EifGetValue _... = withObjectResult $ wx_ELJGridTable_Create _obj _EifGetNumberRows _EifGetNumberCols ... foreign import ccall "ELJGridTable_Create" wx_ELJGridTable_Create :: Ptr a -> Ptr b -> Ptr c -> ... -> IO (Ptr (TWXCGridTable ())) The question is how to call wxcGridTableCreate with Haskell functions. So here are a few callbacks: CALLBACK FUNCTIONS getNumberOfRows :: Ptr CInt -> CInt getNumberOfRows p = 1 foreign import ccall "wrapper" wrapNumberOfRows :: (Ptr CInt -> CInt) -> IO (FunPtr (Ptr CInt -> CInt)) getNumberOfCols :: Ptr CInt -> CInt getNumberOfCols p = 1 foreign import ccall "wrapper" wrapNumberOfCols :: (Ptr CInt -> CInt) -> IO (FunPtr (Ptr CInt -> CInt)) getValue :: Ptr CInt -> CInt -> CInt -> CWString getValue p r c = do unsafePerformIO $ newCWString ?Str" foreign import ccall "wrapper" wrapGetValue :: (Ptr CInt -> CInt -> CInt -> CWString) -> IO (FunPtr (Ptr CInt -> CInt -> CInt -> CWString)) MAKE WRAPPERS wGetNumberOfRows <- wrapNumberOfRows getNumberOfRows wGetNumberOfCols <- wrapNumberOfCols getNumberOfCols wGetValue <- wrapGetValue getValue CREATE TABLE table <- wxcGridTableCreate n (castFunPtrToPtr wGetNumberOfRows) (castFunPtrToPtr wGetNumberOfCols) (castFunPtrToPtr wGetValue) The key to making it work was to make a FunPtr and then use the castFunPtrToPtr. This was the main break through. Now the question. There is one function with CWString: getValue :: Ptr CInt -> CInt -> CInt -> CWString getValue p r c = do unsafePerformIO $ newCWString ?Str? The problem is newCWString creates a string that must be freed, and wxWidgets does not free it. This results in a memory leak. In a real application, I?ll probably store a String in a TVar so that some thread can keep it up to date. I could use other TVars to hold the CWString and free it each time a new value is created. But, it would be better if there was some way to guarantee it is freed, even with exceptions, etc. The best would be if when the function returns, it automatically freed the string. Is there a way express the callback function so that it frees the string after the return? Mike On Apr 5, 2015, at 9:12 AM, Aldo Davide wrote: > Sorry but I am having trouble understanding your post, but here's some information you might find useful: > > * If you want to pass a callback to a C function, you need to use a FunPtr. In particular, you need to create a "wrapper" foreign import that will allow you to convert a haskell function to a FunPtr that can be then passed to C and called by C. The docs on FunPtr [1] explain how to do this. > > * Ptr is not a data constructor, its a type constructor, so e.g. `Ptr CInt` is a valid type, but `Ptr 4` is not a valid expression. If you want to store a value in the heap and then create a pointer to it, you can use the `with` function [2]. So e.g. in `with 5 $ \p -> ...`, p is a `Ptr CInt`. Remember that `Ptr`s are to useful for passing callbacks though. 
> > > [1] https://hackage.haskell.org/package/base-4.8.0.0/docs/Foreign-Ptr.html#t:FunPtr > [2] https://hackage.haskell.org/package/base-4.8.0.0/docs/Foreign-Marshal-Utils.html#v:with > > Michael Jones, wrote: >> I am having trouble figuring out how to pass callbacks to C. >> >> Given the definition below it is not clear how to define a function and pass it. I tried: >> >> getNumberOfRows :: CInt >> getNumberOfRows = 1 >> >> table <- wxcGridTableCreate getNumberOfRows? >> >> but the compiler barks at me: >> >> src/MainGui.hs:577:32: Not in scope: data constructor ?Ptr? >> >> even through I do the imports: >> >> import Foreign >> import Foreign.C >> >> There is also a _obj as the first value of wxcGridTableCreate and the callbacks, and I am not sure how to make a this* for it. I would have throughout that wxcGridTableCreate would return a Ptr. >> >> The goal is to pass haskell functions into wxcGridTableCreate so the grid can call back to haskell. >> >> Can someone show an example of just this one function, how to wrap a Ptr around it, and what to do with _obj? >> >> >> DEFINITION >> >> wxcGridTableCreate :: Ptr a -> Ptr b -> Ptr c -> Ptr d -> Ptr e -> Ptr f -> Ptr g -> Ptr h -> Ptr i -> Ptr j -> Ptr k -> Ptr l -> Ptr m -> Ptr n -> Ptr o -> Ptr p -> Ptr q -> IO (WXCGridTable ()) >> wxcGridTableCreate _obj _EifGetNumberRows _EifGetNumberCols _EifGetValue _EifSetValue _EifIsEmptyCell _EifClear _EifInsertRows _EifAppendRows _EifDeleteRows _EifInsertCols _EifAppendCols _EifDeleteCols _EifSetRowLabelValue _EifSetColLabelValue _EifGetRowLabelValue _EifGetColLabelValue >> = withObjectResult $ >> wx_ELJGridTable_Create _obj _EifGetNumberRows _EifGetNumberCols _EifGetValue _EifSetValue _EifIsEmptyCell _EifClear >> _EifInsertRows _EifAppendRows _EifDeleteRows _EifInsertCols _EifAppendCols _EifDeleteCols >> _EifSetRowLabelValue _EifSetColLabelValue _EifGetRowLabelValue _EifGetColLabelValue >> foreign import ccall "ELJGridTable_Create" wx_ELJGridTable_Create :: Ptr a -> Ptr b -> Ptr c -> Ptr d -> Ptr e -> Ptr f -> Ptr g -> Ptr h -> Ptr i -> Ptr j -> Ptr k -> Ptr l -> Ptr m -> Ptr n -> Ptr o -> Ptr p -> Ptr q -> IO (Ptr (TWXCGridTable ())) >> >> typedef int _cdecl (*TGridGetInt)(void* _obj); >> typedef int _cdecl (*TGridIsEmpty)(void* _obj, int row, int col); >> typedef void* _cdecl (*TGridGetValue)(void* _obj, int row, int col); >> typedef void _cdecl (*TGridSetValue)(void* _obj, int row, int col, void* val); >> typedef void _cdecl (*TGridClear)(void* _obj); >> typedef int _cdecl (*TGridModify)(void* _obj, int pos, int num); >> typedef int _cdecl (*TGridMultiModify)(void* _obj, int num); >> typedef void _cdecl (*TGridSetLabel)(void* _obj, int idx, void* val); >> typedef void* _cdecl (*TGridGetLabel)(void* _obj, int idx); >> >> EWXWEXPORT(void*,ELJGridTable_Create)(void* self,void* _EifGetNumberRows,void* _EifGetNumberCols,void* _EifGetValue,void* _EifSetValue,void* _EifIsEmptyCell,void* _EifClear,void* _EifInsertRows,void* _EifAppendRows,void* _EifDeleteRows,void* _EifInsertCols,void* _EifAppendCols,void* _EifDeleteCols,void* _EifSetRowLabelValue,void* _EifSetColLabelValue,void* _EifGetRowLabelValue,void* _EifGetColLabelValue) >> { >> return (void*)new ELJGridTable (self, >> >> >> >> _______________________________________________ >> Haskell-Cafe mailing list >> Haskell-Cafe at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> From frank at fstaals.net Mon Apr 6 08:40:19 2015 From: frank at fstaals.net (Frank Staals) Date: Mon, 06 Apr 2015 10:40:19 +0200 Subject: 
[Haskell-cafe] How to implement a source-sink pattern In-Reply-To: (Roland Lutz's message of "Thu, 2 Apr 2015 19:44:58 +0200 (CEST)") References: Message-ID: Roland Lutz writes: > On Wed, 1 Apr 2015, Frank Staals wrote: >> So essentially you want a data structure for some kind of bipartite graph. > > Yes, with the additional constraint that the vertices in one partite set (the > "sinks") each connect to exactly one edge. > >> The most haskelly way to do that would probably to define the graph to be >> simply define the Bipartite graph to be a pair of Maps, and define functions >> to add/delete nodes and edges to the graph that make sure that the two maps >> keep in sync. > > This was actually my first approach, but I couldn't find appropriate key and > value types to be stored in the map. Since the vertices are well-known global > objects, it doesn't make much sense to store more than a handle here. But how > do I connect the handle back to the data structure? A vertex (source/sink) is not uniquely coupled with a graph; it may be in more than one graph. So, there is no easy way to define a function with type 'Vertex -> Graph'. In other words, you should pass around the graph data structure if you need connectivity information. >> In this model you cannot direclty mix the sources and sinks from >> different modules. I.e. a 'BiGraph MySource MySink' cannot be used to >> also store a (MySecondSource,MySecondSink) pairs. If you do want that, >> you would need some wrapper type that distinguishes between the various >> 'MySink', 'MySecondSink', versions. > > That's one of the points that trouble me. How would such a wrapper > look like? Assuming that the set of different sink types is fixed and known at compile time, simply: data Sink = SinkA ModuleA.Sink | SinkB ModuleB.Sink | ... If the sink types are not known then you need either Existential type or something like an open-union. An existentialtype will look something like: data Sink where Sink :: IsASink t => t -> Sink however, if you have a value of type Sink, you cannot recover what exact type of sink it was (i.e. if it was a ModuleA.Sink or a ModuleB.Sink): you only know that it has the properties specified by the IsASink typeclass. With open-unions you should be able to recover the exact type. However that is a bit more complicated. I don't have concrete experience with them myself, so others might be more helpful on that front. > I experimented a bit with your code (see below). I noticed that I have to > specify "Ord src =>" and "Ord snk =>" in multiple places. Is there a way to > state that type arguments for BiGraph always have to be instances of Ord? > > Roland Depends a bit, if you wish to keep the functions polymorphic in the source an sink types (src and snk) then you have to keep them. If you fill in the concrete types at hand, and you give them Ord instances, then (obviously) you don't have to keep the Ord constraints ;) Last note: If your source and sink types don't have a proper Ordering, you can switch to using (integer) explicit vertexIds and IntMaps. Similar to the API of fgl. However, in that case you have to do the bookkeeping of which vertex has which vertexId yourself. > updateEdge :: Ord src => Ord snk => > (src, snk) -> BiGraph src snk -> BiGraph src snk > updateEdge (src, snk) (BiGraph m1 m2) = FWI: you would normally write multiple class constraints like (Ord src, Ord snk) =>, instead of Ord src => Ord snk =>. I'm kind of surprised the latter is still allowed. 
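A minimal sketch of the pair-of-Maps representation discussed above, written with the usual (Ord src, Ord snk) constraint style. The names are invented here, and re-attaching an already-connected sink would need extra bookkeeping:

import qualified Data.Map as M

-- Each sink is attached to exactly one source;
-- each source has zero or more sinks.
data BiGraph src snk = BiGraph
  { sinkToSource  :: M.Map snk src
  , sourceToSinks :: M.Map src [snk]
  }

emptyGraph :: BiGraph src snk
emptyGraph = BiGraph M.empty M.empty

-- Attach a sink to a source, keeping both maps in sync.
attach :: (Ord src, Ord snk) => src -> snk -> BiGraph src snk -> BiGraph src snk
attach src snk (BiGraph s2src src2s) =
  BiGraph (M.insert snk src s2src)
          (M.insertWith (++) src [snk] src2s)

sinksOf :: Ord src => src -> BiGraph src snk -> [snk]
sinksOf src = M.findWithDefault [] src . sourceToSinks

sourceOf :: Ord snk => snk -> BiGraph src snk -> Maybe src
sourceOf snk = M.lookup snk . sinkToSource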
Regards, -- - Frank From blamario at ciktel.net Mon Apr 6 15:09:33 2015 From: blamario at ciktel.net (=?UTF-8?B?TWFyaW8gQmxhxb5ldmnEhw==?=) Date: Mon, 06 Apr 2015 11:09:33 -0400 Subject: [Haskell-cafe] How to implement a source-sink pattern In-Reply-To: References: Message-ID: <5522A1AD.3040402@ciktel.net> On 04/01/2015 03:31 PM, Roland Lutz wrote: > Hi! > > I'm trying to implement a source-sink type of pattern where there are > a number of sinks, each connected to exactly one source, and a number > of sources, each connected to zero or more sinks. The program > consists of some modules, each defining their own sources and sinks. > To illustrate this, here's what this would look like in C: > > > /* global.h */ > > struct source { > char *state; > /* some more fields */ > } > > struct sink { > struct source *source; > char *state; > /* some more fields */ > } > > struct sink **get_sinks_for_source(struct source *source); > > /* module_a.c */ > > struct source a_source, another_source; > struct sink foo, bar, baz; > > ... > foo.source = &a_source; > ... > One important thing you didn't state is: which parts of these data structures are immutable? The process of moving any data structure from C to Haskell depends on the answer, but otherwise can usually be done in a relatively mechanical fashion: 1. everything that is immutable should be shifted as deeply as possible, the mutable containers containing immutable ones; 2. map the data structures over: struct to a record, immutable array to an immutable array (or list or map or whatever, depending on the access pattern and performance requirements), mutable array to a mutable array; 3. map immutable non-null pointers to just the data structure they're pointing to, other immutable pointers to Maybe, mutable non-null pointers to STRef, other mutable pointers to STRef Maybe. 4. use runST to hide the whole mess behind a pure interface as much as possible. The result is unlikely to be optimal or elegant, but this process can get you a working implementation in Haskell. Once there, start refactoring the algorithms in and you'll likely be able to simplify both the data structure and the whole program. Take care to start with strong types and they will prevent you from doing anything stupid while refactoring. On a related note: I have no idea what Sink and Source are supposed to be for, but it's possible that pipes and conduits already provide it. From donn at avvanta.com Mon Apr 6 16:47:54 2015 From: donn at avvanta.com (Donn Cave) Date: Mon, 6 Apr 2015 09:47:54 -0700 (PDT) Subject: [Haskell-cafe] FFI Ptr Question In-Reply-To: <64DF165F-F705-4978-9B6C-4A869C880B13@proclivis.com> References: <64DF165F-F705-4978-9B6C-4A869C880B13@proclivis.com> Message-ID: <20150406164754.5A81B276CB6@mail.avvanta.com> Quoth Michael Jones , > The problem is newCWString creates a string that must be freed, and > wxWidgets does not free it. This results in a memory leak. > > In a real application, I'll probably store a String in a TVar so that > some thread can keep it up to date. I could use other TVars to hold > the CWString and free it each time a new value is created. But, it > would be better if there was some way to guarantee it is freed, even > with exceptions, etc. The best would be if when the function returns, > it automatically freed the string. > > Is there a way express the callback function so that it frees the string > after the return? Are you sure that's what you want? 
I may be misunderstanding "after the return" - to me, that sounds like right after your getValue callback. That would be approximately the same as freeing it within the callback, right after allocation and initialization. That's not ideal at all. You'll need to store the string in your application, it seems to me, and pass that value via the callback rather than allocating a new copy each time. Donn From mike at proclivis.com Mon Apr 6 17:35:38 2015 From: mike at proclivis.com (Michael Jones) Date: Mon, 6 Apr 2015 11:35:38 -0600 Subject: [Haskell-cafe] FFI Ptr Question In-Reply-To: <20150406164754.5A81B276CB6@mail.avvanta.com> References: <64DF165F-F705-4978-9B6C-4A869C880B13@proclivis.com> <20150406164754.5A81B276CB6@mail.avvanta.com> Message-ID: Donn, My intent was to hold the strings in a TVar, with a thread keeping it up to date, in this case, filtering, adding, etc. I need to look at how TVar works. I assume that a read from a TVar produces a value, and if the read from TVar is done in the callback, the pointer has to be freed when the callback exits, or by the C++ code. I also recall seeing some code in wxWidgets where after the callback to read is made, it calls another callback to set. It may be intended as a way to free memory. If every read is followed by a set, the set could could free the pointer. Either way, the design has dynamic data getting updated by a thread at a rate of change much larger than human interaction (scrolling). Mike On Apr 6, 2015, at 10:47 AM, Donn Cave wrote: > Quoth Michael Jones , > >> The problem is newCWString creates a string that must be freed, and >> wxWidgets does not free it. This results in a memory leak. >> >> In a real application, I'll probably store a String in a TVar so that >> some thread can keep it up to date. I could use other TVars to hold >> the CWString and free it each time a new value is created. But, it >> would be better if there was some way to guarantee it is freed, even >> with exceptions, etc. The best would be if when the function returns, >> it automatically freed the string. >> >> Is there a way express the callback function so that it frees the string >> after the return? > > Are you sure that's what you want? I may be misunderstanding "after > the return" - to me, that sounds like right after your getValue callback. > That would be approximately the same as freeing it within the callback, > right after allocation and initialization. That's not ideal at all. > You'll need to store the string in your application, it seems to me, > and pass that value via the callback rather than allocating a new copy > each time. > > Donn > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe From gershomb at gmail.com Mon Apr 6 20:20:53 2015 From: gershomb at gmail.com (Gershom B) Date: Mon, 6 Apr 2015 16:20:53 -0400 Subject: [Haskell-cafe] Call for Haskell.org committee self-nominations Message-ID: Dear Haskellers, We have been overdue for some time in calling for a new round of nominations to the Haskell.org Committee. We have three members due for retirement -- Jason Dagit, Edward Kmett, and Brent Yorgey. The committee would like to thank them for their excellent service. To nominate yourself, please send an email to committee at haskell.org by 21 April 2015. The retiring members are eligible to re-nominate themselves. 
Please feel free to include any information about yourself that you think will help us to make a decision. Being a member of the committee does not necessarily require a significant amount of time, but committee members should aim to be responsive during discussions when the committee is called upon to make a decision. Strong leadership, communication, and judgement are very important characteristics for committee members. The role is about setting policy, providing direction/guidance for Haskell.org infrastructure, planning for the long term, and being fiscally responsible with the Haskell.org funds (and donations). As overseers for policy regarding the open source side of Haskell, committee members must also be able to set aside personal or business related bias and make decisions with the good of the open source Haskell community in mind. More details about the committee's roles and responsibilities are on *https://wiki.haskell.org/Haskell.org_committee * If you have any questions about the process, please feel free to e-mail us at committee at haskell.org or to contact one of us individually. Regards, Gershom -------------- next part -------------- An HTML attachment was scrubbed... URL: From ekmett at gmail.com Tue Apr 7 00:58:41 2015 From: ekmett at gmail.com (Edward Kmett) Date: Mon, 6 Apr 2015 20:58:41 -0400 Subject: [Haskell-cafe] [haskell.org Google Summer of Code] Call for Mentors Message-ID: We have had a rather large pool of potential students apply for this year's Google Summer of Code, but, ultimately, Google won't let us ask for a slot unless we have a potential mentor assigned in advance. On top of that, one thing we try to do with each project is wherever possible, assign both a primary and a backup mentor, so the available mentoring pool is drawn a little thin. Many hands make for light work, though: If you've mentored or thought about mentoring in years past, I'd encourage you to sign up on google-melange for the Google Summer of Code at: https://www.google-melange.com/gsoc/homepage/google/gsoc2015 and request a connection to haskell.org as a Mentor. Once you've done this you can help us vote on proposals, and should something seem appropriate to you, you can flag yourself as available as a potential mentor or backup mentor for one (or more) of the projects. We have a couple of weeks left to rate proposals and request slots, but it'd be good to make as much progress as we can this week. If you have any questions, feel free to reach out to me, or to Shachaf Ben-Kiki or Gershom Bazerman who have been helping out with organizational issues this year. We also have a #haskell-gsoc channel on irc.freenode.net if you have questions about what is involved. Thank you for your time and consideration, -Edward Kmett -------------- next part -------------- An HTML attachment was scrubbed... URL: From mike at proclivis.com Tue Apr 7 04:57:15 2015 From: mike at proclivis.com (Michael Jones) Date: Mon, 6 Apr 2015 22:57:15 -0600 Subject: [Haskell-cafe] FFI Ptr Question In-Reply-To: References: <64DF165F-F705-4978-9B6C-4A869C880B13@proclivis.com> <20150406164754.5A81B276CB6@mail.avvanta.com> Message-ID: <812E96D9-1EB7-4153-BD15-16466F9A266B@proclivis.com> Turns out Set is not called after Get. 
So here is what I am thinking: getValue :: TVar (ForeignPtr CWchar) -> Ptr CInt -> CInt -> CInt -> IO CWString getValue ptr p r c = do s <- if c == 0 then newCWString "Str 0" else newCWString "Str 1" sPtr <- newForeignPtr finalizerFree s liftIO $ atomically $ writeTVar ptr sPtr return s Create a ForeignPtr and store it in a TVar. After the callback, the CWString will be inside the ForeignPtr until it is no longer referenced. On the second call, TVar will be given a new ForeignPtr, and the previous one will get collected, and finalizerFree will free the CWString. The next step will be to generate the CWString out of another TVar, rather than the fixed string in the example. There might be a way to get a Ptr to the original string rather than generating a new string, but I don?t know what goes on in the TVar, so I am suspect of holding a pointer to anything inside it. But with the above, I essentially copy the string and manage its collection. Thoughts? Mike On Apr 6, 2015, at 11:35 AM, Michael Jones wrote: > Donn, > > My intent was to hold the strings in a TVar, with a thread keeping it up to date, in this case, filtering, adding, etc. > > I need to look at how TVar works. I assume that a read from a TVar produces a value, and if the read from TVar is done in the callback, the pointer has to be freed when the callback exits, or by the C++ code. > > I also recall seeing some code in wxWidgets where after the callback to read is made, it calls another callback to set. It may be intended as a way to free memory. If every read is followed by a set, the set could could free the pointer. > > Either way, the design has dynamic data getting updated by a thread at a rate of change much larger than human interaction (scrolling). > > Mike > > On Apr 6, 2015, at 10:47 AM, Donn Cave wrote: > >> Quoth Michael Jones , >> >>> The problem is newCWString creates a string that must be freed, and >>> wxWidgets does not free it. This results in a memory leak. >>> >>> In a real application, I'll probably store a String in a TVar so that >>> some thread can keep it up to date. I could use other TVars to hold >>> the CWString and free it each time a new value is created. But, it >>> would be better if there was some way to guarantee it is freed, even >>> with exceptions, etc. The best would be if when the function returns, >>> it automatically freed the string. >>> >>> Is there a way express the callback function so that it frees the string >>> after the return? >> >> Are you sure that's what you want? I may be misunderstanding "after >> the return" - to me, that sounds like right after your getValue callback. >> That would be approximately the same as freeing it within the callback, >> right after allocation and initialization. That's not ideal at all. >> You'll need to store the string in your application, it seems to me, >> and pass that value via the callback rather than allocating a new copy >> each time. 
>> >> Donn >> _______________________________________________ >> Haskell-Cafe mailing list >> Haskell-Cafe at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe From arnaud.oqube at gmail.com Tue Apr 7 14:50:49 2015 From: arnaud.oqube at gmail.com (Arnaud Bailly) Date: Tue, 7 Apr 2015 16:50:49 +0200 Subject: [Haskell-cafe] Distributed and persistent events bus in Haskell Message-ID: Hello, I am implementing an application using event sourcing as primary storage for data, which implies I need a way to durably and reliably store streams of events on stable storage. I also need to be able to have an event distribution system on top of that persistent storage so that components can subscribe to stored events. So far I have implemented a simple store, e.g. a flat file, which reuses the format of Apache Kafka (just in case...). Not very robust nor sophisticated but can work for moderate loads. Now I am looking for the event distribution part in the hope of being able to reuse some distributed event bus system that might exist somewhere and not having to roll my own. I have had a look couple of months ago at Vaultaire, Marquise and friends, but I am not sure they are really suited to my use case: They seem to be geared toward very high workload and throughput, like log or huge data streams analysis. Thanks for any pointer you might share, -- Arnaud Bailly twitter: abailly skype: arnaud-bailly linkedin: http://fr.linkedin.com/in/arnaudbailly/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From kyle.marek.spartz at gmail.com Tue Apr 7 15:26:26 2015 From: kyle.marek.spartz at gmail.com (Kyle Marek-Spartz) Date: Tue, 07 Apr 2015 10:26:26 -0500 Subject: [Haskell-cafe] Distributed and persistent events bus in Haskell In-Reply-To: References: Message-ID: <2fcp3s8ue4kli5.fsf@gmail.com> If you do end up going with the Kafka route, there is a native Haskell client: https://github.com/tylerholien/milena Arnaud Bailly writes: > Hello, > > I am implementing an application using event sourcing as primary storage > for data, which implies I need a way to durably and reliably store streams > of events on stable storage. I also need to be able to have an event > distribution system on top of that persistent storage so that components > can subscribe to stored events. > > So far I have implemented a simple store, e.g. a flat file, which reuses > the format of Apache Kafka (just in case...). Not very robust nor > sophisticated but can work for moderate loads. Now I am looking for the > event distribution part in the hope of being able to reuse some distributed > event bus system that might exist somewhere and not having to roll my own. > > I have had a look couple of months ago at Vaultaire, Marquise and friends, > but I am not sure they are really suited to my use case: They seem to be > geared toward very high workload and throughput, like log or huge data > streams analysis. > > Thanks for any pointer you might share, -- Kyle Marek-Spartz From arnaud.oqube at gmail.com Tue Apr 7 15:33:01 2015 From: arnaud.oqube at gmail.com (Arnaud Bailly) Date: Tue, 7 Apr 2015 17:33:01 +0200 Subject: [Haskell-cafe] Distributed and persistent events bus in Haskell In-Reply-To: <2fcp3s8ue4kli5.fsf@gmail.com> References: <2fcp3s8ue4kli5.fsf@gmail.com> Message-ID: Cool! 
however, I would rather avoid having to manage kafka, if something simpler exists :-) I know there is also a mature zeromq client so given I already have the persistent part, I could probably leverage that but I thought somebody might have already treaded that path... Thanks a lot for the pointer, anyway. -- Arnaud Bailly twitter: abailly skype: arnaud-bailly linkedin: http://fr.linkedin.com/in/arnaudbailly/ On Tue, Apr 7, 2015 at 5:26 PM, Kyle Marek-Spartz < kyle.marek.spartz at gmail.com> wrote: > If you do end up going with the Kafka route, there is a native Haskell > client: > > https://github.com/tylerholien/milena > > > > Arnaud Bailly writes: > > > Hello, > > > > I am implementing an application using event sourcing as primary storage > > for data, which implies I need a way to durably and reliably store > streams > > of events on stable storage. I also need to be able to have an event > > distribution system on top of that persistent storage so that components > > can subscribe to stored events. > > > > So far I have implemented a simple store, e.g. a flat file, which reuses > > the format of Apache Kafka (just in case...). Not very robust nor > > sophisticated but can work for moderate loads. Now I am looking for the > > event distribution part in the hope of being able to reuse some > distributed > > event bus system that might exist somewhere and not having to roll my > own. > > > > I have had a look couple of months ago at Vaultaire, Marquise and > friends, > > but I am not sure they are really suited to my use case: They seem to be > > geared toward very high workload and throughput, like log or huge data > > streams analysis. > > > > Thanks for any pointer you might share, > > -- > Kyle Marek-Spartz > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aaron at fpcomplete.com Tue Apr 7 15:51:18 2015 From: aaron at fpcomplete.com (Aaron Contorer) Date: Tue, 7 Apr 2015 08:51:18 -0700 (PDT) Subject: [Haskell-cafe] MinGHC for GHC 7.10.1 available In-Reply-To: References: Message-ID: For those who don't know, MinGHC is a minimal installer of GHC for Windows, including GHC, cabal-install, and MSYS. Further discussion on this announcement is available at http://www.reddit.com/r/haskell/comments/30h52i/minghc_for_ghc_710/ and a short blog post with a bit more info is at https://www.fpcomplete.com/blog/2015/03/minghc-ghc-7-10 . On Friday, March 27, 2015 at 2:01:27 AM UTC-7, Michael Snoyman wrote: > > I've just uploaded a new release of MinGHC, including GHC 7.10.1 and > cabal-install 1.22.2.0. This release can be downloaded from: > > > https://s3.amazonaws.com/download.fpcomplete.com/minghc/minghc-7.10.1-i386.exe > > In the process, I also needed to upload a cabal-install binary for > Windows, which is available at: > > > https://s3.amazonaws.com/download.fpcomplete.com/minghc/cabal-install-1.22.2.0-i386-unknown-mingw32.tar.gz > > I've tested this distribution, but only lightly. Feedback from others > would be useful :) > -------------- next part -------------- An HTML attachment was scrubbed... URL: From voldermort at hotmail.com Tue Apr 7 16:15:17 2015 From: voldermort at hotmail.com (Jeremy) Date: Tue, 7 Apr 2015 09:15:17 -0700 (MST) Subject: [Haskell-cafe] status of deb.haskell.org? Message-ID: <1428423317104-5768430.post@n5.nabble.com> The haskell status page says that deb.haskell.org has been down since mid-February. Are there any plans to bring it back up? 
-- View this message in context: http://haskell.1045720.n5.nabble.com/status-of-deb-haskell-org-tp5768430.html Sent from the Haskell - Haskell-Cafe mailing list archive at Nabble.com. From mrz.vtl at gmail.com Tue Apr 7 16:25:12 2015 From: mrz.vtl at gmail.com (Maurizio Vitale) Date: Tue, 7 Apr 2015 12:25:12 -0400 Subject: [Haskell-cafe] help w/ improving custom Stream for parsec Message-ID: I need a custom stream that supports insertion of include files and expansions of macros. I also want to be able to give nice error messages (think of clang macro-expansion backtrace), so I cannot use the standard trick of concatenating included files and expanded macros to the current input with setInput/getInput (I think I can't; maybe there's a way of keeping a more complex "position", and since the use in producing an error backtrace is rare, it might be worth exploring; if anybody has ideas here, I'm listening). Assuming I need a more complex stream, this is what I have (Macro and File both have a string argument, but it will be more complicated, a list of expansions for Macro for instance). Is there a better way for doing this? What are the performance implications with backtracking? I'll be benchmarking it, but if people see obvious problems, let me know. Thanks a lot, Maurizio

{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE InstanceSigs #-}
{-# LANGUAGE MultiParamTypeClasses #-}

module Parsing where

import Text.Parsec

type Parser s m = ParsecT s () m

data VStream = File String | Macro String deriving Show

newtype StreamStack = StreamStack [VStream] deriving Show

instance (Monad m) => Stream VStream m Char where
  uncons :: VStream -> m (Maybe (Char, VStream))
  uncons (File (a:as)) = return $ Just (a, File as)
  uncons (File []) = return Nothing
  uncons (Macro (a:as)) = return $ Just (a, File as)
  uncons (Macro []) = return Nothing

instance (Monad m) => Stream StreamStack m Char where
  uncons (StreamStack []) = return Nothing
  uncons (StreamStack (s:ss)) =
    case uncons s of
      Nothing -> uncons $ StreamStack ss
      Just Nothing -> uncons $ StreamStack ss
      Just (Just (c, File s')) -> return $ Just (c, StreamStack (File s': ss))
      Just (Just (c, Macro s')) -> return $ Just (c, StreamStack (Macro s':ss))

-------------- next part -------------- An HTML attachment was scrubbed... URL: From johan.tibell at gmail.com Tue Apr 7 21:18:35 2015 From: johan.tibell at gmail.com (Johan Tibell) Date: Tue, 7 Apr 2015 23:18:35 +0200 Subject: [Haskell-cafe] Anyone interested in taking over network-uri? Message-ID: Hi! I find myself with less time to hack lately and I'm trying to reduce the number of libraries I maintain so I can invest time in new things. Would anyone be interested in taking over maintenance of network-uri? It should not be much work at all. Bump a few dependencies and fix a bug or two. The package is very widely used so I suggest that the new maintainer should avoid breaking changes whenever possible*. * In other words, if you want to completely rethink network URI handling, this package is probably not the place for it. -- Johan -------------- next part -------------- An HTML attachment was scrubbed... URL: From rasen.dubi at gmail.com Wed Apr 8 09:38:57 2015 From: rasen.dubi at gmail.com (Alexey Shmalko) Date: Wed, 8 Apr 2015 12:38:57 +0300 Subject: [Haskell-cafe] Anyone interested in taking over network-uri? In-Reply-To: References: Message-ID: Hi, Johan, I have a little spare time and would be interested in taking over maintenance of network-uri.
However I'm not sure if I'm a right person for that. I've never maintained any haskell package before, but would be glad to. I'm experienced with maintenance issues applied to C/C++ but not Haskell. However, I'm sure they are pretty much the same. I looked through github issues; as far as I understand, the maintainer's responsibilities are tracking things work fine with newer versions of dependencies, fixing bugs that doesn't break source-level compatibility and rejecting other proposals. I believe I would handle this. Regards, Alexey On Wed, Apr 8, 2015 at 12:18 AM, Johan Tibell wrote: > Hi! > > I find myself with less time to hack lately and I'm trying to reduce the > number of libraries I maintain so I can invest time in new things. Would > anyone be interested taking over maintenance of network-uri? > > It should not be much work at all. Bump a few dependencies and fix a bug > or two. The package is very widely used so I suggest that the new > maintainer should avoid breaking changes whenever possible*. > > * In other words, if you want to completely rethink network URI handling, > this package is probably not the place for it. > > -- Johan > > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicholls.mark at vimn.com Wed Apr 8 12:33:17 2015 From: nicholls.mark at vimn.com (Nicholls, Mark) Date: Wed, 8 Apr 2015 12:33:17 +0000 Subject: [Haskell-cafe] indexed writer monad In-Reply-To: References: <5512CA1C.5000506@ro-che.info> <5512DC3F.9040301@ro-che.info> <55132913.6090008@ro-che.info> Message-ID: Hmmm...remember me. I have returned to this....after some time. My Haskell is embryonic. Control.Category seems to be available without installing any magic (maybe I've installed that magic before). > import Control.Category Success... An IxMonad seems a harder feat. Installing packages and using them a leap into the unknown for me...I can drive a car, but assembling one is harder. I install the indexed package by using cabal? Cabal install indexed? https://hackage.haskell.org/package/indexed Seemed to do something. Then I want to use that package... So I want to import it.. It all seems to live in a module called Control.Monad.Indexed So... > import Control.Monad.Indexed Would have been my guess Boom...unknown... That?s before I jump through the IxFunctor,IxPointed,IxApplicative,IxMonad hoops. > On 25 Mar 2015, at 21:31, Roman Cheplyaka wrote: > > Alright, this does look like an indexed writer monad. > > Here's the indexed writer monad definition: > > newtype IxWriter c i j a = IxWriter { runIxWriter :: (c i j, a) } > > Here c is a Category, ie. an indexed Monoid. > > Under that constraint, you can make this an indexed monad (from the > 'indexed' package). That should be a good exercise that'll help you to > understand how this all works. > > In practice, you'll probably only need c = (->), so your writer > becomes isomorphic to (i -> j, a). This is similar to the ordinary > writer monad that uses difference lists as its monoid. > > The indexed 'tell' will be > > tell :: a -> IxWriter (->) z (z, a) () tell a = IxWriter (\z -> > (z,a), ()) > > And to run an indexed monad, you apply that i->j function to your end > marker. > > Does this help? > >> On 25/03/15 18:27, Nicholls, Mark wrote: >> Ok... >> >> Well lets just take the indexed writer >> >> So....for a writer we're going... 
>> >> e.g. >> >> (a,[c]) -> (a -> (b,[c])) -> (b,[c]) >> >> If I use the logging use case metaphor... >> So I "log" a few Strings (probably)....and out pops a (t,[String]) ? >> >> (I've never actually used one!) >> >> But what if I want to "log" different types... >> >> I want to "log" a string, then an integer bla bla... >> >> So I won't get the monoid [String] >> >> I should get something like a nest 2 tuple. >> >> do >> log 1 >> log "a" >> log 3 >> log "sdds" >> return 23 >> >> I could get a >> (Integer,(String,(Integer,(String,END)))) >> >> and then I could dereference this indexed monoid(?)...by splitting it into a head of a specific type and a tail...like a list. >> >> Maybe my use of "indexed" isn't correct. >> >> So it seems to me, that writer monad is dependent on the monoid >> append (of lists usually?)....and my "special" monad would be >> dependent on a "special" monoid append of basically >> >> (these are types) >> >> (a,b) ++ (c,d) = (a,b,c,d) >> (a,b,c,d) ++ (e) = (a,b,c,d,e) >> >> Which are encoded as nested 2 tuples (with an End marker) >> >> (a,(b,End) ++ (c,(d,End)) = (a,(b, (c,(d,End))) ? >> >> That sort of implies some sort of type family trickery...which isnt toooo bad (I've dabbled). >> >> Looks like an ixmonad to me....In my pigoeon Haskell I could probably wrestle with some code for a few days....but does such a thing already exist?...I don't want to reinvent the wheel, just mess about with it, and turn it into a cog (and then fail to map it back to my OO world). >> >> -----Original Message----- >> From: Roman Cheplyaka [mailto:roma at ro-che.info] >> Sent: 25 March 2015 4:03 PM >> To: Nicholls, Mark; haskell-cafe at haskell.org >> Subject: Re: [Haskell-cafe] indexed writer monad >> >> Sorry, I didn't mean to scare you off. >> >> By Category, I didn't mean the math concept; I meant simply the class from the Control.Category module. You don't need to learn any category theory to understand it. Since you seem to know already what Monoid is, try to compare those two classes, notice the similarities and see how (and why) one could be called an indexed version of the other. >> >> But if you don't know what "indexed" means, how do you know you need it? >> Perhaps you could describe your problem, and we could tell you what abstraction would fit that use case, be it indexed monad or something else. >> >>> On 25/03/15 17:22, Nicholls, Mark wrote: >>> Ah that assumes I know what a category is! (I did find some code that claimed the same thing)....maths doesn't scare me (much), and I suspect its nothing complicated (sounds like a category is a tuple then! Probably not), but I don't want to read a book on category theory to write a bit of code...yet. >>> >>> Ideally there would be a chapter in something like "learn you an indexed Haskell for the great good". >>> >>> Then I could take the code, use it...mess about with it...break it...put it back together in a slightly different shape and....bingo...it either works...or I find theres a good reason why it doesn't....(or I post a message to the caf?). >>> >>> -----Original Message----- >>> From: Roman Cheplyaka [mailto:roma at ro-che.info] >>> Sent: 25 March 2015 2:46 PM >>> To: Nicholls, Mark; haskell-cafe at haskell.org >>> Subject: Re: [Haskell-cafe] indexed writer monad >>> >>> An indexed monoid is just a Category. >>> >>>> On 25/03/15 16:32, Nicholls, Mark wrote: >>>> Anyone? 
>>>> >>>> >>>> >>>> I can handle monads, but I have something (actually in F#) that >>>> feels like it should be something like a indexed writer monad >>>> (which F# probably wouldn't support). >>>> >>>> >>>> >>>> So I thought I'd do some research in Haskell. >>>> >>>> >>>> >>>> I know little or nothing about indexed monad (though I have built >>>> the indexed state monad in C#). >>>> >>>> >>>> >>>> So I would assume there would be an indexed monoid (that looks at >>>> bit like a tuple?). >>>> >>>> >>>> >>>> e.g. >>>> >>>> >>>> >>>> (a,b) ++ (c,d) = (a,b,c,d) >>>> >>>> (a,b,c,d) ++ (e) = (a,b,c,d,e) >>>> >>>> >>>> >>>> ? >>>> >>>> >>>> >>>> There seems to be some stuff about "update monads", but it doesn't >>>> really look like a writer. >>>> >>>> >>>> >>>> I could do with playing around with an indexed writer, in order to >>>> get my head around what I'm doing..then try and capture what I'm >>>> doing.then try (and fail) to port it back. >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> CONFIDENTIALITY NOTICE >>>> >>>> This e-mail (and any attached files) is confidential and protected >>>> by copyright (and other intellectual property rights). If you are >>>> not the intended recipient please e-mail the sender and then delete >>>> the email and any attached files immediately. Any further use or >>>> dissemination is prohibited. >>>> >>>> While MTV Networks Europe has taken steps to ensure that this email >>>> and any attachments are virus free, it is your responsibility to >>>> ensure that this message and any attachments are virus free and do >>>> not affect your systems / data. >>>> >>>> Communicating by email is not 100% secure and carries risks such as >>>> delay, data corruption, non-delivery, wrongful interception and >>>> unauthorised amendment. If you communicate with us by e-mail, you >>>> acknowledge and assume these risks, and you agree to take >>>> appropriate measures to minimise these risks when e-mailing us. >>>> >>>> MTV Networks International, MTV Networks UK & Ireland, Greenhouse, >>>> Nickelodeon Viacom Consumer Products, VBSi, Viacom Brand Solutions >>>> International, Be Viacom, Viacom International Media Networks and >>>> VIMN and Comedy Central are all trading names of MTV Networks Europe. >>>> MTV Networks Europe is a partnership between MTV Networks Europe Inc. >>>> and Viacom Networks Europe Inc. Address for service in Great >>>> Britain is >>>> 17-29 Hawley Crescent, London, NW1 8TT. >>>> >>>> >>>> >>>> _______________________________________________ >>>> Haskell-Cafe mailing list >>>> Haskell-Cafe at haskell.org >>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >>> >>> CONFIDENTIALITY NOTICE >>> >>> This e-mail (and any attached files) is confidential and protected by copyright (and other intellectual property rights). If you are not the intended recipient please e-mail the sender and then delete the email and any attached files immediately. Any further use or dissemination is prohibited. >>> >>> While MTV Networks Europe has taken steps to ensure that this email and any attachments are virus free, it is your responsibility to ensure that this message and any attachments are virus free and do not affect your systems / data. >>> >>> Communicating by email is not 100% secure and carries risks such as delay, data corruption, non-delivery, wrongful interception and unauthorised amendment. If you communicate with us by e-mail, you acknowledge and assume these risks, and you agree to take appropriate measures to minimise these risks when e-mailing us. 
>>> >>> MTV Networks International, MTV Networks UK & Ireland, Greenhouse, Nickelodeon Viacom Consumer Products, VBSi, Viacom Brand Solutions International, Be Viacom, Viacom International Media Networks and VIMN and Comedy Central are all trading names of MTV Networks Europe. MTV Networks Europe is a partnership between MTV Networks Europe Inc. and Viacom Networks Europe Inc. Address for service in Great Britain is 17-29 Hawley Crescent, London, NW1 8TT. >> >> CONFIDENTIALITY NOTICE >> >> This e-mail (and any attached files) is confidential and protected by copyright (and other intellectual property rights). If you are not the intended recipient please e-mail the sender and then delete the email and any attached files immediately. Any further use or dissemination is prohibited. >> >> While MTV Networks Europe has taken steps to ensure that this email and any attachments are virus free, it is your responsibility to ensure that this message and any attachments are virus free and do not affect your systems / data. >> >> Communicating by email is not 100% secure and carries risks such as delay, data corruption, non-delivery, wrongful interception and unauthorised amendment. If you communicate with us by e-mail, you acknowledge and assume these risks, and you agree to take appropriate measures to minimise these risks when e-mailing us. >> >> MTV Networks International, MTV Networks UK & Ireland, Greenhouse, Nickelodeon Viacom Consumer Products, VBSi, Viacom Brand Solutions International, Be Viacom, Viacom International Media Networks and VIMN and Comedy Central are all trading names of MTV Networks Europe. MTV Networks Europe is a partnership between MTV Networks Europe Inc. and Viacom Networks Europe Inc. Address for service in Great Britain is 17-29 Hawley Crescent, London, NW1 8TT. > CONFIDENTIALITY NOTICE This e-mail (and any attached files) is confidential and protected by copyright (and other intellectual property rights). If you are not the intended recipient please e-mail the sender and then delete the email and any attached files immediately. Any further use or dissemination is prohibited. While MTV Networks Europe has taken steps to ensure that this email and any attachments are virus free, it is your responsibility to ensure that this message and any attachments are virus free and do not affect your systems / data. Communicating by email is not 100% secure and carries risks such as delay, data corruption, non-delivery, wrongful interception and unauthorised amendment. If you communicate with us by e-mail, you acknowledge and assume these risks, and you agree to take appropriate measures to minimise these risks when e-mailing us. MTV Networks International, MTV Networks UK & Ireland, Greenhouse, Nickelodeon Viacom Consumer Products, VBSi, Viacom Brand Solutions International, Be Viacom, Viacom International Media Networks and VIMN and Comedy Central are all trading names of MTV Networks Europe. MTV Networks Europe is a partnership between MTV Networks Europe Inc. and Viacom Networks Europe Inc. Address for service in Great Britain is 17-29 Hawley Crescent, London, NW1 8TT. From magnus at therning.org Wed Apr 8 12:36:52 2015 From: magnus at therning.org (Magnus Therning) Date: Wed, 8 Apr 2015 14:36:52 +0200 Subject: [Haskell-cafe] indexed writer monad In-Reply-To: References: <5512CA1C.5000506@ro-che.info> <5512DC3F.9040301@ro-che.info> <55132913.6090008@ro-che.info> Message-ID: On 8 April 2015 at 14:33, Nicholls, Mark wrote: > Hmmm...remember me. 
> > I have returned to this....after some time. > > My Haskell is embryonic. > > Control.Category seems to be available without installing any magic (maybe I've installed that magic before). > >> import Control.Category > > Success... > > An IxMonad seems a harder feat. > > Installing packages and using them a leap into the unknown for me...I can drive a car, but assembling one is harder. > > I install the indexed package by using cabal? > > Cabal install indexed? > > https://hackage.haskell.org/package/indexed > > Seemed to do something. > > Then I want to use that package... > > So I want to import it.. > It all seems to live in a module called Control.Monad.Indexed > > So... > >> import Control.Monad.Indexed > > Would have been my guess > > Boom...unknown... Possibly silly question but... did you restart `ghci` after installing indexed? /M -- Magnus Therning OpenPGP: 0xAB4DFBA4 email: magnus at therning.org jabber: magnus at therning.org twitter: magthe http://therning.org/magnus From nicholls.mark at vimn.com Wed Apr 8 13:21:17 2015 From: nicholls.mark at vimn.com (Nicholls, Mark) Date: Wed, 8 Apr 2015 13:21:17 +0000 Subject: [Haskell-cafe] indexed writer monad In-Reply-To: References: <5512CA1C.5000506@ro-che.info> <5512DC3F.9040301@ro-che.info> <55132913.6090008@ro-che.info> Message-ID: Genius Maybe this Haskell lark isnt as hard as I thought.... Now for the ix hoops. On 8 April 2015 at 14:33, Nicholls, Mark wrote: > Hmmm...remember me. > > I have returned to this....after some time. > > My Haskell is embryonic. > > Control.Category seems to be available without installing any magic (maybe I've installed that magic before). > >> import Control.Category > > Success... > > An IxMonad seems a harder feat. > > Installing packages and using them a leap into the unknown for me...I can drive a car, but assembling one is harder. > > I install the indexed package by using cabal? > > Cabal install indexed? > > https://hackage.haskell.org/package/indexed > > Seemed to do something. > > Then I want to use that package... > > So I want to import it.. > It all seems to live in a module called Control.Monad.Indexed > > So... > >> import Control.Monad.Indexed > > Would have been my guess > > Boom...unknown... Possibly silly question but... did you restart `ghci` after installing indexed? /M -- Magnus Therning OpenPGP: 0xAB4DFBA4 email: magnus at therning.org jabber: magnus at therning.org twitter: magthe http://therning.org/magnus CONFIDENTIALITY NOTICE This e-mail (and any attached files) is confidential and protected by copyright (and other intellectual property rights). If you are not the intended recipient please e-mail the sender and then delete the email and any attached files immediately. Any further use or dissemination is prohibited. While MTV Networks Europe has taken steps to ensure that this email and any attachments are virus free, it is your responsibility to ensure that this message and any attachments are virus free and do not affect your systems / data. Communicating by email is not 100% secure and carries risks such as delay, data corruption, non-delivery, wrongful interception and unauthorised amendment. If you communicate with us by e-mail, you acknowledge and assume these risks, and you agree to take appropriate measures to minimise these risks when e-mailing us. 
MTV Networks International, MTV Networks UK & Ireland, Greenhouse, Nickelodeon Viacom Consumer Products, VBSi, Viacom Brand Solutions International, Be Viacom, Viacom International Media Networks and VIMN and Comedy Central are all trading names of MTV Networks Europe. MTV Networks Europe is a partnership between MTV Networks Europe Inc. and Viacom Networks Europe Inc. Address for service in Great Britain is 17-29 Hawley Crescent, London, NW1 8TT. From simon at joyful.com Wed Apr 8 14:51:06 2015 From: simon at joyful.com (Simon Michael) Date: Wed, 8 Apr 2015 07:51:06 -0700 Subject: [Haskell-cafe] ANN: hledger 0.25 Message-ID: <89DED389-49F0-4E2A-9AC8-0DD5E7973565@joyful.com> I'm pleased to announce hledger and hledger-web 0.25. This release brings GHC 7.10 compatibility, terminal width awareness, useful averages and totals columns, and a more robust hledger-web add form. Full release notes: http://hledger.org/release-notes#hledger-0.25 . Release contributors: Simon Michael, Julien Moutinho. hledger (http://hledger.org) is a command-line tool and haskell library for tracking financial transactions, which are stored in a human-readable plain text format. It can also read CSV or timelog files. It provides useful reports, and can also help you record new transactions interactively. Add-on commands include hledger-web (a web interface), hledger-irr (for calculating internal rate of return) and hledger-interest (for generating interest transactions). hledger is inspired by and largely compatible with Ledger, and can be used with some Ledger files. Installation: cabal update; cabal install hledger [hledger-web] or see http://hledger.org/download and http://hledger.org/installing for more options. Best! -Simon -------------- next part -------------- An HTML attachment was scrubbed... URL: From michael at snoyman.com Wed Apr 8 15:03:26 2015 From: michael at snoyman.com (Michael Snoyman) Date: Wed, 08 Apr 2015 15:03:26 +0000 Subject: [Haskell-cafe] Anyone interested in taking over network-uri? In-Reply-To: References: Message-ID: Hi Johan and Alexey, I'll volunteer to be a comaintainer with Alexey on this package. Since the majority of upper bound bug reports will likely come from the Stackage team anyway, I won't mind making those maintenance releases. And I agree completely with keeping this package in an all-but-frozen API state. Does that sound like a reasonable setup to both of you? Michael On Wed, Apr 8, 2015 at 12:39 PM Alexey Shmalko wrote: > Hi, Johan, > > I have a little spare time and would be interested in taking over > maintenance of network-uri. > > However I'm not sure if I'm a right person for that. I've never maintained > any haskell package before, but would be glad to. I'm experienced with > maintenance issues applied to C/C++ but not Haskell. However, I'm sure they > are pretty much the same. > > I looked through github issues; as far as I understand, the maintainer's > responsibilities are tracking things work fine with newer versions of > dependencies, fixing bugs that doesn't break source-level compatibility and > rejecting other proposals. I believe I would handle this. > > Regards, > Alexey > > > On Wed, Apr 8, 2015 at 12:18 AM, Johan Tibell > wrote: > >> Hi! >> >> I find myself with less time to hack lately and I'm trying to reduce the >> number of libraries I maintain so I can invest time in new things. Would >> anyone be interested taking over maintenance of network-uri? >> >> It should not be much work at all. Bump a few dependencies and fix a bug >> or two. 
The package is very widely used so I suggest that the new >> maintainer should avoid breaking changes whenever possible*. >> >> * In other words, if you want to completely rethink network URI handling, >> this package is probably not the place for it. >> >> -- Johan >> >> >> _______________________________________________ >> Haskell-Cafe mailing list >> Haskell-Cafe at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> >> > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rasen.dubi at gmail.com Wed Apr 8 15:49:38 2015 From: rasen.dubi at gmail.com (Alexey Shmalko) Date: Wed, 8 Apr 2015 18:49:38 +0300 Subject: [Haskell-cafe] Anyone interested in taking over network-uri? In-Reply-To: References: Message-ID: Hi, Michael, I would be really happy to work along with you. Furthermore I believe I can rely on you when I feel the lack of experience. On Wed, Apr 8, 2015 at 6:03 PM, Michael Snoyman wrote: > Hi Johan and Alexey, > > I'll volunteer to be a comaintainer with Alexey on this package. Since the > majority of upper bound bug reports will likely come from the Stackage team > anyway, I won't mind making those maintenance releases. And I agree > completely with keeping this package in an all-but-frozen API state. > > Does that sound like a reasonable setup to both of you? > > Michael > > On Wed, Apr 8, 2015 at 12:39 PM Alexey Shmalko > wrote: > >> Hi, Johan, >> >> I have a little spare time and would be interested in taking over >> maintenance of network-uri. >> >> However I'm not sure if I'm a right person for that. I've never >> maintained any haskell package before, but would be glad to. I'm >> experienced with maintenance issues applied to C/C++ but not Haskell. >> However, I'm sure they are pretty much the same. >> >> I looked through github issues; as far as I understand, the maintainer's >> responsibilities are tracking things work fine with newer versions of >> dependencies, fixing bugs that doesn't break source-level compatibility and >> rejecting other proposals. I believe I would handle this. >> >> Regards, >> Alexey >> >> >> On Wed, Apr 8, 2015 at 12:18 AM, Johan Tibell >> wrote: >> >>> Hi! >>> >>> I find myself with less time to hack lately and I'm trying to reduce the >>> number of libraries I maintain so I can invest time in new things. Would >>> anyone be interested taking over maintenance of network-uri? >>> >>> It should not be much work at all. Bump a few dependencies and fix a bug >>> or two. The package is very widely used so I suggest that the new >>> maintainer should avoid breaking changes whenever possible*. >>> >>> * In other words, if you want to completely rethink network URI >>> handling, this package is probably not the place for it. >>> >>> -- Johan >>> >>> >>> _______________________________________________ >>> Haskell-Cafe mailing list >>> Haskell-Cafe at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >>> >>> >> _______________________________________________ >> Haskell-Cafe mailing list >> Haskell-Cafe at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From star.tim.star at gmail.com Wed Apr 8 15:59:23 2015 From: star.tim.star at gmail.com (timmy tofu) Date: Wed, 8 Apr 2015 11:59:23 -0400 Subject: [Haskell-cafe] Distributed and persistent events bus in Haskell Message-ID: We're working on something with 0MQ, if you (or anyone else reading) go that route and want to compare notes, hit me off-list. -------------- next part -------------- An HTML attachment was scrubbed... URL:
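Since 0MQ keeps coming up in this thread, here is a rough, untested sketch of what the distribution layer could look like with the zeromq4-haskell bindings. The endpoint addresses and the publisher/subscriber function names are placeholders invented for the example; durability is still expected to come from the append-only log, with the PUB/SUB feed treated as best-effort notification that subscribers can always reconcile against the store.

import Control.Monad (forever)
import Control.Monad.IO.Class (liftIO)
import qualified Data.ByteString.Char8 as B
import System.ZMQ4.Monadic

-- Broadcast events that have already been appended durably to the log.
publisher :: [B.ByteString] -> IO ()
publisher events = runZMQ $ do
  pub <- socket Pub
  bind pub "tcp://*:5555"        -- placeholder endpoint
  mapM_ (send pub []) events

-- Receive the live feed and hand each event to a handler; a subscriber
-- that was offline is expected to catch up by replaying the stored log.
subscriber :: (B.ByteString -> IO ()) -> IO ()
subscriber handle = runZMQ $ do
  sub <- socket Sub
  connect sub "tcp://localhost:5555"
  subscribe sub B.empty          -- empty prefix: subscribe to everything
  forever $ receive sub >>= liftIO . handle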
From omari at smileystation.com Thu Apr 9 01:43:34 2015 From: omari at smileystation.com (Omari Norman) Date: Wed, 8 Apr 2015 21:43:34 -0400 Subject: [Haskell-cafe] Generalized Newtype Deriving not allowed in Safe Haskell Message-ID: When compiling code with Generalized Newtype Deriving and the -fwarn-unsafe flag, I get -XGeneralizedNewtypeDeriving is not allowed in Safe Haskell This happens both in GHC 7.8 and GHC 7.10. I thought I remembered reading somewhere that GNTD is now part of the safe language? The GHC manual used to state that GNTD is not allowed in Safe Haskell: https://downloads.haskell.org/~ghc/7.6.3/docs/html/users_guide/safe-haskell.html#safe-language But this language on GNTD not being part of the safe language was removed in the 7.8 manual: https://downloads.haskell.org/~ghc/7.8.2/docs/html/users_guide/safe-haskell.html#safe-language The GHC release notes don't say anything about this one way or the other. Thoughts? -------------- next part -------------- An HTML attachment was scrubbed... URL:
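For concreteness, a tiny module that reproduces the behaviour described above; the module and newtype names are made up for the example. Compiling it with GHC 7.8 or 7.10 and -fwarn-unsafe triggers the warning, because deriving Num through the newtype requires GeneralizedNewtypeDeriving and that extension causes the module to be inferred as unsafe.

{-# LANGUAGE GeneralizedNewtypeDeriving #-}
{-# OPTIONS_GHC -fwarn-unsafe #-}
-- Hypothetical example module, not taken from the original post.
module AgeExample where

-- Eq, Ord and Show are derivable in plain Haskell; Num on a newtype over
-- Int needs GeneralizedNewtypeDeriving, which is what -fwarn-unsafe flags.
newtype Age = Age Int deriving (Eq, Ord, Show, Num)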
From ken.takusagawa.2 at gmail.com Thu Apr 9 02:49:56 2015 From: ken.takusagawa.2 at gmail.com (Ken Takusagawa II) Date: Wed, 8 Apr 2015 22:49:56 -0400 Subject: [Haskell-cafe] PrimMonad for Control.Monad.ST.Lazy? Message-ID: I notice that the strict ST monad has an instance for PrimMonad but the lazy ST monad does not. Is there a reason why, or is it merely an oversight? (What I Am Really Trying To Do: get a purely lazy stream of random values out of mwc-random.) --ken -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Thu Apr 9 07:56:53 2015 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Thu, 9 Apr 2015 07:56:53 +0000 Subject: [Haskell-cafe] Generalized Newtype Deriving not allowed in Safe Haskell In-Reply-To: References: Message-ID: <0d913dcca48b47abadb7ab6b4125b906@DB4PR30MB030.064d.mgd.msft.net> There is a long discussion on https://ghc.haskell.org/trac/ghc/ticket/8827 about whether the new Coercible story makes GND ok for Safe Haskell. At a type-soundness level, definitely yes. But there are other less-clear-cut issues like "breaking abstractions" to consider. The decision on the ticket (comment:36) seems to be: GND stays out of Safe Haskell for now, but there is room for a better proposal. I don't have an opinion myself. David Terei and David Mazieres are in the driving seat, but I'm sure they'll be responsive to user input. However, I think the user manual may not have kept up with #8827. The sentence "GeneralizedNewtypeDeriving - It can be used to violate constructor access control, by allowing untrusted code to manipulate protected data types in ways the data type author did not intend, breaking invariants they have established." vanished from the 7.8 user manual (links below). Maybe it should be restored. Safe Haskell aficionados, would you like to offer a patch for the manual? And maybe also a less drastic remedy than omitting GND altogether? Simon From: Omari Norman [mailto:omari at smileystation.com] Sent: 09 April 2015 02:44 To: haskell Cafe Subject: Generalized Newtype Deriving not allowed in Safe Haskell When compiling code with Generalized Newtype Deriving and the -fwarn-unsafe flag, I get -XGeneralizedNewtypeDeriving is not allowed in Safe Haskell This happens both in GHC 7.8 and GHC 7.10. I thought I remembered reading somewhere that GNTD is now part of the safe language? The GHC manual used to state that GNTD is not allowed in Safe Haskell: https://downloads.haskell.org/~ghc/7.6.3/docs/html/users_guide/safe-haskell.html#safe-language But this language on GNTD not being part of the safe language was removed in the 7.8 manual: https://downloads.haskell.org/~ghc/7.8.2/docs/html/users_guide/safe-haskell.html#safe-language The GHC release notes don't say anything about this one way or the other. Thoughts? -------------- next part -------------- An HTML attachment was scrubbed... URL: From malcolm.wallace at me.com Thu Apr 9 09:05:24 2015 From: malcolm.wallace at me.com (Malcolm Wallace) Date: Thu, 09 Apr 2015 10:05:24 +0100 Subject: [Haskell-cafe] help w/ improving custom Stream for parsec In-Reply-To: References: Message-ID: I think what you really need is a two-pass parser. The first parser consumes the input stream, and copies it to the output stream with files inserted, and macros expanded. The second parser consumes the already-preprocessed stream, and does whatever you like with it. Regards, Malcolm On 7 Apr 2015, at 17:25, Maurizio Vitale wrote: > I need a custom stream that supports insertion of include files and expansions of macros.
> I also want to be able to give nice error messages (think of clang macro-expansion backtrace), so I cannot use the standard trick of concatenating included files and expanded macros to the current input with setInput/getInput (I think I can't maybe there's a way of keeping a more complex "position" and since the use in producing an error backtrac is rare, it migth be worth exploring; if anybody has ideas here, I'm listening) > > Assuming I need a more compelx stream, this is what I have (Macro and File both have a string argument, but it will be more compicated, a list of expansions for Macro for instance). > > Is there a better way for doing this? > What are the performance implications with backtracking? I'll be benchmarking it, but if people see obvious problems, let me know. > > Thanks a lot, > Maurizio > > {-# LANGUAGE FlexibleInstances #-} > {-# LANGUAGE FlexibleContexts #-} > {-# LANGUAGE InstanceSigs #-} > {-# LANGUAGE MultiParamTypeClasses #-} > > module Parsing where > > import Text.Parsec > > type Parser s m = ParsecT s () m > > data VStream = File String | Macro String deriving Show > > newtype StreamStack = StreamStack [VStream] deriving Show > > instance (Monad m) ? Stream VStream m Char where > uncons ? VStream -> m (Maybe (Char, VStream)) > uncons (File (a:as)) = return $ Just (a, File as) > uncons (File []) = return Nothing > uncons (Macro (a:as)) = return $ Just (a, File as) > uncons (Macro []) = return Nothing > > > > instance (Monad m) => Stream StreamStack m Char where > uncons (StreamStack []) = return Nothing > uncons (StreamStack (s:ss)) = > case uncons s of > Nothing ? uncons $ StreamStack ss > Just Nothing ? uncons $ StreamStack ss > Just (Just (c, File s')) ? return $ Just (c, StreamStack (File s': ss)) > Just (Just (c, Macro s')) ? return $ Just (c, StreamStack (Macro s':ss)) > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe From chrisdone at gmail.com Thu Apr 9 09:23:48 2015 From: chrisdone at gmail.com (Christopher Done) Date: Thu, 9 Apr 2015 11:23:48 +0200 Subject: [Haskell-cafe] PrimMonad for Control.Monad.ST.Lazy? In-Reply-To: References: Message-ID: I would hesitantly say that PrimMonad's types are all unboxed and this means that any instance of it is inherently strict. Lazy state monad relies on the laziness of the tuple it uses to hold the state, for example. On 9 April 2015 at 04:49, Ken Takusagawa II wrote: > I notice that the strict ST monad has an instance for PrimMonad but the > lazy ST monad does not. Is there a reason why, or is merely an oversight? > > (What I Am Really Trying To Do: get a purely lazy stream of random values > out of mwc-random.) > > --ken > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From oleg at okmij.org Thu Apr 9 09:40:24 2015 From: oleg at okmij.org (oleg at okmij.org) Date: Thu, 9 Apr 2015 05:40:24 -0400 (EDT) Subject: [Haskell-cafe] PrimMonad for Control.Monad.ST.Lazy? Message-ID: <20150409094024.64656C382B@www1.g3.pair.com> Ken Takusagawa II wrote: > What I Am Really Trying To Do: get a purely lazy stream of random values Streams of random data is actually somewhat popular structure, and has been asked about before. What was also found out that such lazy streams are fraught with problems. 
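On the original question of a lazy stream of mwc-random values: one workaround that does not need a PrimMonad instance for lazy ST is to run the generator in strict ST and lift each step into lazy ST with strictToLazyST, so elements are produced only on demand. This is an untested sketch (create uses mwc-random's fixed default seed), and the caveats about reasoning with lazy streams mentioned above still apply.

import Control.Monad.ST.Lazy (strictToLazyST)
import qualified Control.Monad.ST.Lazy as L
import System.Random.MWC (create, uniform)

-- An infinite list of Doubles, produced lazily: the generator lives in
-- strict ST, and each individual step is lifted into lazy ST so that an
-- element is only generated when the consumer demands it.
lazyRandoms :: [Double]
lazyRandoms = L.runST $ do
    gen <- strictToLazyST create
    let go = do x  <- strictToLazyST (uniform gen)
                xs <- go
                return (x : xs)
    go

-- e.g. take 5 lazyRandoms forces only the first five draws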
Some are described on this page: http://okmij.org/ftp/continuations/PPYield/index.html#randoms The page also shows two solutions, which are robust and easy to reason about. Lazy evaluation does promote compositionality on one hand, and it also inhibits compositionality. Reasoning about lazy programs is inherently non-compositional, breaking abstraction boundaries, as the page demonstrates. Overall, lazy evaluation has not worked out as it was originally hoped for. There are alternatives that are more compositional and easier to reason about, including reasoning about performance. From tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk Thu Apr 9 09:47:29 2015 From: tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk (Tom Ellis) Date: Thu, 9 Apr 2015 10:47:29 +0100 Subject: [Haskell-cafe] PrimMonad for Control.Monad.ST.Lazy? In-Reply-To: <20150409094024.64656C382B@www1.g3.pair.com> References: <20150409094024.64656C382B@www1.g3.pair.com> Message-ID: <20150409094729.GA31520@weber> On Thu, Apr 09, 2015 at 05:40:24AM -0400, oleg at okmij.org wrote: > There are alternatives that are more compositional and easier to reason > about, including reasoning about performance. What are you hinting at here? From tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk Thu Apr 9 09:51:52 2015 From: tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk (Tom Ellis) Date: Thu, 9 Apr 2015 10:51:52 +0100 Subject: [Haskell-cafe] PrimMonad for Control.Monad.ST.Lazy? In-Reply-To: <20150409094729.GA31520@weber> References: <20150409094024.64656C382B@www1.g3.pair.com> <20150409094729.GA31520@weber> Message-ID: <20150409095152.GC31520@weber> On Thu, Apr 09, 2015 at 10:47:29AM +0100, Tom Ellis wrote: > On Thu, Apr 09, 2015 at 05:40:24AM -0400, oleg at okmij.org wrote: > > There are alternatives that are more compositional and easier to reason > > about, including reasoning about performance. > > What are you hinting at here? Oh, perhaps you're talking about your exposition of lazy streams rather than alternatives to lazy evaluation in general. http://okmij.org/ftp/continuations/PPYield/index.html#randoms From doug at cs.dartmouth.edu Thu Apr 9 13:37:18 2015 From: doug at cs.dartmouth.edu (Doug McIlroy) Date: Thu, 09 Apr 2015 09:37:18 -0400 Subject: [Haskell-cafe] indexed writer monad Message-ID: <201504091337.t39DbI9m020496@coolidge.cs.dartmouth.edu> I infer from the following dialog that ghci nontransparently caches knowledge about the contents of the library. This is doubtless helpful for the flurry of imports that a single "load" or interactive "import" may cause. But the time saved by caching from one such top-level action to the next is probably imperceptible. The hours wasted in the present instance and others like it (the very quick response on cafe suggests that it's not unusual) probably outweigh all the milliseconds that the cache normally saves. Non-transparent caching is inherently evil. While the risk may be justifiable in some cases (e.g. stdio buffers in C), it is very questionable at top level in ghci. Doesn't this warrant a ticket? Doug >> I install the indexed package by using cabal >> >> Cabal install indexed >> >> https://hackage.haskell.org/package/indexed >> >> Seemed to do something. >> >> Then I want to use that package... >> >>> import Control.Monad.Indexed >> >> Boom...unknown... > > Possibly silly question but... did you restart `ghci` after installing indexed? Genius Maybe this Haskell lark isnt as hard as I thought.... 
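Stepping back to the Control.Monad.ST.Lazy thread above: for Ken's immediate goal of a purely lazy stream of random values, a pure generator from System.Random already gives one without touching ST or PrimMonad at all. A minimal sketch (using the standard StdGen rather than mwc-random, so it trades away the speed and statistical quality mwc-random was presumably chosen for):

import System.Random (mkStdGen, randoms)

-- An infinite list of pseudo-random Doubles, produced lazily from a pure
-- generator; only the elements that are demanded are ever computed.
lazyRandoms :: [Double]
lazyRandoms = randoms (mkStdGen 2015)

main :: IO ()
main = print (take 5 lazyRandoms)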
From J.T.Jeuring at uu.nl Thu Apr 9 13:45:09 2015 From: J.T.Jeuring at uu.nl (Johan Jeuring) Date: Thu, 9 Apr 2015 15:45:09 +0200 Subject: [Haskell-cafe] VACANCIES : 3x PhD position in Functional Programming Message-ID: <696C3353-CB1D-4FF2-AF81-966BDAF5B0B4@uu.nl> The research group of Software Technology is part of the Software Systems division of the department of Information and Computer Science at Utrecht University. We focus our research on functional programming, compiler construction, tools for learning and teaching (serious games, intelligent tutoring systems), program analysis, validation, and verification. Financed by the Technology Foundation STW, the EU, and Utrecht University, we currently have job openings for: ** 3x PhD researcher (PhD student) in Functional Programming ** We are looking for PhD students to develop functional programming techniques related to parsing, rewriting, property-based testing, dependently typed programming, or program analysis, and to apply these techniques in several applications, such as distributed systems, applied games, dialogue management systems, or assessment tools. Besides research, the successful candidate will be expected to help supervise MSc students and assist courses. We prefer candidates to start no later than September 2015. --------------------------------- What we are looking for --------------------------------- The candidate should have an MSc in Computer Science with good grades, be highly motivated to pursue a PhD, and speak and write English well. Knowledge of functional programming, such as Haskell or ML, is essential. --------------------------------- What we offer --------------------------------- The candidate is offered a full-time salaried position for four years. The salary is supplemented with a holiday bonus of 8% and an end-of-year bonus of 8,3% per year. In addition we offer: a pension scheme, partially paid parental leave, and flexible employment conditions. Conditions are based on the Collective Labour Agreement of Dutch Universities. The research group will provide the candidate with the necessary support on all aspects of the project. More information is available on the website: Terms and employment: http://bit.ly/1elqpM7 Salary starts at € 2,083.- and increases to € 2,664.- gross per month in the fourth year of the appointment. Utrecht is a great place to live, having been ranked as one of the happiest places in the world, according to BBC Travel. Living in Utrecht: http://bitly.com/HdbL0X --------------------------------- In order to apply --------------------------------- To apply, please attach a letter of motivation, a curriculum vitae, and (email) addresses of two referees. Make sure to also include a transcript of the courses you have followed (at bachelor and master level), with the grades you obtained, and to include a sample of your scientific writing, such as your master thesis. It is possible to apply for this position if you are close to obtaining your Master's. In that case include a letter from your supervisor with an estimate of your progress, and do not forget to include at least a sample of your technical writing skills. Applications are accepted until the positions are filled.
Send your application via email to Johan Jeuring: J.T.Jeuring at uu.nl --------------- Contact person --------------- For further information you can direct your inquiries to: Johan Jeuring phone: +31 (0)30 253 4115/ (0) 6 40010053 e-mail: J.T.Jeuring at uu.nl website: http://www.staff.science.uu.nl/~jeuri101/ From mrz.vtl at gmail.com Thu Apr 9 15:01:10 2015 From: mrz.vtl at gmail.com (Maurizio Vitale) Date: Thu, 9 Apr 2015 08:01:10 -0700 Subject: [Haskell-cafe] help w/ improving custom Stream for parsec In-Reply-To: References: Message-ID: Thanks Malcolm, I did consider the two pass approach (and actually having pass 1 returning a stream of tokens annotated with position information) I'm keeping that option open, especially because for speed one might implement the first pass with Attoparsec and the rest with parsec. How would you keep track of macro expansions and source positions in order to provide nice error messages? Do you know of anything on hackage that does something similar (either the two pass or the custom stream approach)? Again, thanks. I'm still playing with alternatives before implementing the real language (which, for the curious, is SystemVerilog) so my barrier to trying out and benchmark different approaches is at this moment very low. And the real goal is to learn Haskell,don't care much if I'll have a full Verilog parser/elaborator; so playing with alternatives is very much useful. The language has also interesting features that make compiling separate files in parallel very challenging, so that's another area I want to play with before being too invested. On Thu, Apr 9, 2015 at 2:05 AM, Malcolm Wallace wrote: > I think what you really need is a two-pass parser. The first parser > consumes the input stream, and copies it to the output stream with files > inserted, and macros expanded. The second parser consumes the > already-preprocessed stream, and does whatever you like with it. > > Regards, > Malcolm > > On 7 Apr 2015, at 17:25, Maurizio Vitale wrote: > > > I need a custom stream that supports insertion of include files and > expansions of macros. > > I also want to be able to give nice error messages (think of clang > macro-expansion backtrace), so I cannot use the standard trick of > concatenating included files and expanded macros to the current input with > setInput/getInput (I think I can't maybe there's a way of keeping a more > complex "position" and since the use in producing an error backtrac is > rare, it migth be worth exploring; if anybody has ideas here, I'm listening) > > > > Assuming I need a more compelx stream, this is what I have (Macro and > File both have a string argument, but it will be more compicated, a list of > expansions for Macro for instance). > > > > Is there a better way for doing this? > > What are the performance implications with backtracking? I'll be > benchmarking it, but if people see obvious problems, let me know. > > > > Thanks a lot, > > Maurizio > > > > {-# LANGUAGE FlexibleInstances #-} > > {-# LANGUAGE FlexibleContexts #-} > > {-# LANGUAGE InstanceSigs #-} > > {-# LANGUAGE MultiParamTypeClasses #-} > > > > module Parsing where > > > > import Text.Parsec > > > > type Parser s m = ParsecT s () m > > > > data VStream = File String | Macro String deriving Show > > > > newtype StreamStack = StreamStack [VStream] deriving Show > > > > instance (Monad m) ? Stream VStream m Char where > > uncons ? 
VStream -> m (Maybe (Char, VStream)) > > uncons (File (a:as)) = return $ Just (a, File as) > > uncons (File []) = return Nothing > > uncons (Macro (a:as)) = return $ Just (a, File as) > > uncons (Macro []) = return Nothing > > > > > > > > instance (Monad m) => Stream StreamStack m Char where > > uncons (StreamStack []) = return Nothing > > uncons (StreamStack (s:ss)) = > > case uncons s of > > Nothing ? uncons $ StreamStack ss > > Just Nothing ? uncons $ StreamStack ss > > Just (Just (c, File s')) ? return $ Just (c, StreamStack (File s': > ss)) > > Just (Just (c, Macro s')) ? return $ Just (c, StreamStack (Macro > s':ss)) > > > > _______________________________________________ > > Haskell-Cafe mailing list > > Haskell-Cafe at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From malcolm.wallace at me.com Thu Apr 9 15:22:34 2015 From: malcolm.wallace at me.com (Malcolm Wallace) Date: Thu, 09 Apr 2015 16:22:34 +0100 Subject: [Haskell-cafe] help w/ improving custom Stream for parsec In-Reply-To: References: Message-ID: <5603B13E-FEA4-45C8-A1CE-0BD14E9AC85C@me.com> In cpphs (a Haskell implementation of the C preprocessor), there are two passes: the first pass splits the input into lines, and interprets #if and #include directives, leaving all other lines either untouched, or eliminated when in the wrong branch of a #if conditional. Its output is a list of lines, each of which is paired with its original source position (i.e. file and line number). Because #includes are transitive, the state for the first pass contains a little stack of contexts, that is, the file that triggered the inclusion, and its line number, so that once the included file is finished, the annotations can be popped to return to their enclosing position. The second pass accepts the [(Posn,String)] emitted by the first pass, and interprets/expands #define macros and their usage sites. It uses the positional annotations to give better warning/error messages in the case that, for instance, a #define is syntactically not-well-formed. Macro expansion needs to treat the input as words rather than lines, but the macro definitions obviously are lexed and parsed differently, so it is useful to identify them on separate lines before splitting the non-#define lines into words. Regards, Malcolm On 9 Apr 2015, at 16:01, Maurizio Vitale wrote: > Thanks Malcolm, > > I did consider the two pass approach (and actually having pass 1 returning a stream of tokens annotated with position information) > I'm keeping that option open, especially because for speed one might implement the first pass with Attoparsec and the rest with parsec. > How would you keep track of macro expansions and source positions in order to provide nice error messages? > Do you know of anything on hackage that does something similar (either the two pass or the custom stream approach)? > > Again, thanks. I'm still playing with alternatives before implementing the real language (which, for the curious, is SystemVerilog) so my barrier to trying out and benchmark different approaches is at this moment very low. > And the real goal is to learn Haskell,don't care much if I'll have a full Verilog parser/elaborator; so playing with alternatives is very much useful. > The language has also interesting features that make compiling separate files in parallel very challenging, so that's another area I want to play with before being too invested. 
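About the Stream instances quoted above: parsec's uncons returns its result in the underlying monad, so the StreamStack instance needs to bind that result rather than pattern-match on a bare Maybe, and the Macro case presumably wants to keep its tail tagged as Macro rather than File. A possible corrected sketch (same types as in the question; the module name is just for this sketch):

{-# LANGUAGE FlexibleInstances     #-}
{-# LANGUAGE MultiParamTypeClasses #-}

module ParsingSketch where

import Text.Parsec.Prim (Stream (..))

data VStream = File String | Macro String deriving Show

newtype StreamStack = StreamStack [VStream] deriving Show

instance Monad m => Stream VStream m Char where
  uncons (File  (a:as)) = return (Just (a, File  as))
  uncons (File  [])     = return Nothing
  uncons (Macro (a:as)) = return (Just (a, Macro as))  -- keep the Macro tag on the tail
  uncons (Macro [])     = return Nothing

instance Monad m => Stream StreamStack m Char where
  uncons (StreamStack [])     = return Nothing
  uncons (StreamStack (s:ss)) = do
    r <- uncons s                               -- runs in the underlying monad m
    case r of
      Nothing      -> uncons (StreamStack ss)   -- current stream exhausted: pop it
      Just (c, s') -> return (Just (c, StreamStack (s' : ss)))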
> > > On Thu, Apr 9, 2015 at 2:05 AM, Malcolm Wallace wrote: > I think what you really need is a two-pass parser. The first parser consumes the input stream, and copies it to the output stream with files inserted, and macros expanded. The second parser consumes the already-preprocessed stream, and does whatever you like with it. > > Regards, > Malcolm > > On 7 Apr 2015, at 17:25, Maurizio Vitale wrote: > > > I need a custom stream that supports insertion of include files and expansions of macros. > > I also want to be able to give nice error messages (think of clang macro-expansion backtrace), so I cannot use the standard trick of concatenating included files and expanded macros to the current input with setInput/getInput (I think I can't maybe there's a way of keeping a more complex "position" and since the use in producing an error backtrac is rare, it migth be worth exploring; if anybody has ideas here, I'm listening) > > > > Assuming I need a more compelx stream, this is what I have (Macro and File both have a string argument, but it will be more compicated, a list of expansions for Macro for instance). > > > > Is there a better way for doing this? > > What are the performance implications with backtracking? I'll be benchmarking it, but if people see obvious problems, let me know. > > > > Thanks a lot, > > Maurizio > > > > {-# LANGUAGE FlexibleInstances #-} > > {-# LANGUAGE FlexibleContexts #-} > > {-# LANGUAGE InstanceSigs #-} > > {-# LANGUAGE MultiParamTypeClasses #-} > > > > module Parsing where > > > > import Text.Parsec > > > > type Parser s m = ParsecT s () m > > > > data VStream = File String | Macro String deriving Show > > > > newtype StreamStack = StreamStack [VStream] deriving Show > > > > instance (Monad m) ? Stream VStream m Char where > > uncons ? VStream -> m (Maybe (Char, VStream)) > > uncons (File (a:as)) = return $ Just (a, File as) > > uncons (File []) = return Nothing > > uncons (Macro (a:as)) = return $ Just (a, File as) > > uncons (Macro []) = return Nothing > > > > > > > > instance (Monad m) => Stream StreamStack m Char where > > uncons (StreamStack []) = return Nothing > > uncons (StreamStack (s:ss)) = > > case uncons s of > > Nothing ? uncons $ StreamStack ss > > Just Nothing ? uncons $ StreamStack ss > > Just (Just (c, File s')) ? return $ Just (c, StreamStack (File s': ss)) > > Just (Just (c, Macro s')) ? return $ Just (c, StreamStack (Macro s':ss)) > > > > _______________________________________________ > > Haskell-Cafe mailing list > > Haskell-Cafe at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > From nicholls.mark at vimn.com Thu Apr 9 15:31:48 2015 From: nicholls.mark at vimn.com (Nicholls, Mark) Date: Thu, 9 Apr 2015 15:31:48 +0000 Subject: [Haskell-cafe] indexed writer monad In-Reply-To: References: <5512CA1C.5000506@ro-che.info> <5512DC3F.9040301@ro-che.info> <55132913.6090008@ro-che.info> Message-ID: Brilliant...that was actually easier than imagined, I followed the types and it all dropped out the bottom, the hardest thing was constructing a meaningful expression without do notation...but now I've got the do notation working as well. I need to play with this...and then workout...how to apply this new knowledge to my writer like thing. Thanks again. > On 25 Mar 2015, at 21:31, Roman Cheplyaka wrote: > > Alright, this does look like an indexed writer monad. > > Here's the indexed writer monad definition: > > newtype IxWriter c i j a = IxWriter { runIxWriter :: (c i j, a) } > > Here c is a Category, ie. 
an indexed Monoid. > > Under that constraint, you can make this an indexed monad (from the > 'indexed' package). That should be a good exercise that'll help you to > understand how this all works. > > In practice, you'll probably only need c = (->), so your writer > becomes isomorphic to (i -> j, a). This is similar to the ordinary > writer monad that uses difference lists as its monoid. > > The indexed 'tell' will be > > tell :: a -> IxWriter (->) z (z, a) ()
> tell a = IxWriter (\z -> (z,a), ()) > > And to run an indexed monad, you apply that i->j function to your end > marker. > > Does this help? > >> On 25/03/15 18:27, Nicholls, Mark wrote: >> Ok... >> >> Well let's just take the indexed writer >> >> So....for a writer we're going... >> >> e.g. >> >> (a,[c]) -> (a -> (b,[c])) -> (b,[c]) >> >> If I use the logging use case metaphor... >> So I "log" a few Strings (probably)....and out pops a (t,[String]) ? >> >> (I've never actually used one!) >> >> But what if I want to "log" different types... >> >> I want to "log" a string, then an integer bla bla... >> >> So I won't get the monoid [String] >> >> I should get something like a nested 2-tuple. >> >> do >> log 1 >> log "a" >> log 3 >> log "sdds" >> return 23 >> >> I could get a >> (Integer,(String,(Integer,(String,END)))) >> >> and then I could dereference this indexed monoid(?)...by splitting it into a head of a specific type and a tail...like a list. >> >> Maybe my use of "indexed" isn't correct. >> >> So it seems to me, that the writer monad is dependent on the monoid >> append (of lists usually?)....and my "special" monad would be >> dependent on a "special" monoid append of basically >> >> (these are types) >> >> (a,b) ++ (c,d) = (a,b,c,d) >> (a,b,c,d) ++ (e) = (a,b,c,d,e) >> >> Which are encoded as nested 2-tuples (with an End marker) >> >> (a,(b,End)) ++ (c,(d,End)) = (a,(b,(c,(d,End)))) ? >> >> That sort of implies some sort of type family trickery...which isn't toooo bad (I've dabbled). >> >> Looks like an ixmonad to me....In my pidgin Haskell I could probably wrestle with some code for a few days....but does such a thing already exist?...I don't want to reinvent the wheel, just mess about with it, and turn it into a cog (and then fail to map it back to my OO world). >> >> -----Original Message----- >> From: Roman Cheplyaka [mailto:roma at ro-che.info] >> Sent: 25 March 2015 4:03 PM >> To: Nicholls, Mark; haskell-cafe at haskell.org >> Subject: Re: [Haskell-cafe] indexed writer monad >> >> Sorry, I didn't mean to scare you off. >> >> By Category, I didn't mean the math concept; I meant simply the class from the Control.Category module. You don't need to learn any category theory to understand it. Since you seem to know already what Monoid is, try to compare those two classes, notice the similarities and see how (and why) one could be called an indexed version of the other. >> >> But if you don't know what "indexed" means, how do you know you need it? >> Perhaps you could describe your problem, and we could tell you what abstraction would fit that use case, be it an indexed monad or something else. >> >>> On 25/03/15 17:22, Nicholls, Mark wrote: >>> Ah that assumes I know what a category is! (I did find some code that claimed the same thing)....maths doesn't scare me (much), and I suspect it's nothing complicated (sounds like a category is a tuple then! Probably not), but I don't want to read a book on category theory to write a bit of code...yet.
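To make the c = (->) suggestion above concrete, here is a small self-contained sketch with an indexed return and bind written out by hand and a two-entry heterogeneous log (the names ibind, ireturn and End are invented for this sketch; the 'indexed' package spells things differently):

-- The "log" is a function from the not-yet-written tail to the full log,
-- so entries of different types chain up as nested pairs.
newtype IxWriter i j a = IxWriter { runIxWriter :: (i -> j, a) }

ireturn :: a -> IxWriter i i a
ireturn a = IxWriter (id, a)

ibind :: IxWriter i j a -> (a -> IxWriter j k b) -> IxWriter i k b
ibind m f =
  let (g, a) = runIxWriter m      -- g :: i -> j
      (h, b) = runIxWriter (f a)  -- h :: j -> k
  in IxWriter (h . g, b)

-- Log a single value of any type.
tell :: x -> IxWriter t (x, t) ()
tell x = IxWriter (\t -> (x, t), ())

-- Marks the end of the log.
data End = End deriving Show

-- Two differently typed entries in one computation.
example :: IxWriter t (String, (Int, t)) Int
example = tell (1 :: Int) `ibind` \_ ->
          tell "a"        `ibind` \_ ->
          ireturn 23

-- Evaluates to (("a", (1, End)), 23): a nested-pair log whose entries have
-- different types, with the most recent entry at the head.
runExample :: ((String, (Int, End)), Int)
runExample = let (g, a) = runIxWriter example in (g End, a)

That is essentially the nested 2-tuple log asked for above, without any type-family trickery.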
>>> >>> Ideally there would be a chapter in something like "learn you an indexed Haskell for the great good". >>> >>> Then I could take the code, use it...mess about with it...break it...put it back together in a slightly different shape and....bingo...it either works...or I find theres a good reason why it doesn't....(or I post a message to the caf?). >>> >>> -----Original Message----- >>> From: Roman Cheplyaka [mailto:roma at ro-che.info] >>> Sent: 25 March 2015 2:46 PM >>> To: Nicholls, Mark; haskell-cafe at haskell.org >>> Subject: Re: [Haskell-cafe] indexed writer monad >>> >>> An indexed monoid is just a Category. >>> >>>> On 25/03/15 16:32, Nicholls, Mark wrote: >>>> Anyone? >>>> >>>> >>>> >>>> I can handle monads, but I have something (actually in F#) that >>>> feels like it should be something like a indexed writer monad >>>> (which F# probably wouldn't support). >>>> >>>> >>>> >>>> So I thought I'd do some research in Haskell. >>>> >>>> >>>> >>>> I know little or nothing about indexed monad (though I have built >>>> the indexed state monad in C#). >>>> >>>> >>>> >>>> So I would assume there would be an indexed monoid (that looks at >>>> bit like a tuple?). >>>> >>>> >>>> >>>> e.g. >>>> >>>> >>>> >>>> (a,b) ++ (c,d) = (a,b,c,d) >>>> >>>> (a,b,c,d) ++ (e) = (a,b,c,d,e) >>>> >>>> >>>> >>>> ? >>>> >>>> >>>> >>>> There seems to be some stuff about "update monads", but it doesn't >>>> really look like a writer. >>>> >>>> >>>> >>>> I could do with playing around with an indexed writer, in order to >>>> get my head around what I'm doing..then try and capture what I'm >>>> doing.then try (and fail) to port it back. >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> CONFIDENTIALITY NOTICE >>>> >>>> This e-mail (and any attached files) is confidential and protected >>>> by copyright (and other intellectual property rights). If you are >>>> not the intended recipient please e-mail the sender and then delete >>>> the email and any attached files immediately. Any further use or >>>> dissemination is prohibited. >>>> >>>> While MTV Networks Europe has taken steps to ensure that this email >>>> and any attachments are virus free, it is your responsibility to >>>> ensure that this message and any attachments are virus free and do >>>> not affect your systems / data. >>>> >>>> Communicating by email is not 100% secure and carries risks such as >>>> delay, data corruption, non-delivery, wrongful interception and >>>> unauthorised amendment. If you communicate with us by e-mail, you >>>> acknowledge and assume these risks, and you agree to take >>>> appropriate measures to minimise these risks when e-mailing us. >>>> >>>> MTV Networks International, MTV Networks UK & Ireland, Greenhouse, >>>> Nickelodeon Viacom Consumer Products, VBSi, Viacom Brand Solutions >>>> International, Be Viacom, Viacom International Media Networks and >>>> VIMN and Comedy Central are all trading names of MTV Networks Europe. >>>> MTV Networks Europe is a partnership between MTV Networks Europe Inc. >>>> and Viacom Networks Europe Inc. Address for service in Great >>>> Britain is >>>> 17-29 Hawley Crescent, London, NW1 8TT. >>>> >>>> >>>> >>>> _______________________________________________ >>>> Haskell-Cafe mailing list >>>> Haskell-Cafe at haskell.org >>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >>> >>> CONFIDENTIALITY NOTICE >>> >>> This e-mail (and any attached files) is confidential and protected by copyright (and other intellectual property rights). 
If you are not the intended recipient please e-mail the sender and then delete the email and any attached files immediately. Any further use or dissemination is prohibited. >>> >>> While MTV Networks Europe has taken steps to ensure that this email and any attachments are virus free, it is your responsibility to ensure that this message and any attachments are virus free and do not affect your systems / data. >>> >>> Communicating by email is not 100% secure and carries risks such as delay, data corruption, non-delivery, wrongful interception and unauthorised amendment. If you communicate with us by e-mail, you acknowledge and assume these risks, and you agree to take appropriate measures to minimise these risks when e-mailing us. >>> >>> MTV Networks International, MTV Networks UK & Ireland, Greenhouse, Nickelodeon Viacom Consumer Products, VBSi, Viacom Brand Solutions International, Be Viacom, Viacom International Media Networks and VIMN and Comedy Central are all trading names of MTV Networks Europe. MTV Networks Europe is a partnership between MTV Networks Europe Inc. and Viacom Networks Europe Inc. Address for service in Great Britain is 17-29 Hawley Crescent, London, NW1 8TT. >> >> CONFIDENTIALITY NOTICE >> >> This e-mail (and any attached files) is confidential and protected by copyright (and other intellectual property rights). If you are not the intended recipient please e-mail the sender and then delete the email and any attached files immediately. Any further use or dissemination is prohibited. >> >> While MTV Networks Europe has taken steps to ensure that this email and any attachments are virus free, it is your responsibility to ensure that this message and any attachments are virus free and do not affect your systems / data. >> >> Communicating by email is not 100% secure and carries risks such as delay, data corruption, non-delivery, wrongful interception and unauthorised amendment. If you communicate with us by e-mail, you acknowledge and assume these risks, and you agree to take appropriate measures to minimise these risks when e-mailing us. >> >> MTV Networks International, MTV Networks UK & Ireland, Greenhouse, Nickelodeon Viacom Consumer Products, VBSi, Viacom Brand Solutions International, Be Viacom, Viacom International Media Networks and VIMN and Comedy Central are all trading names of MTV Networks Europe. MTV Networks Europe is a partnership between MTV Networks Europe Inc. and Viacom Networks Europe Inc. Address for service in Great Britain is 17-29 Hawley Crescent, London, NW1 8TT. > CONFIDENTIALITY NOTICE This e-mail (and any attached files) is confidential and protected by copyright (and other intellectual property rights). If you are not the intended recipient please e-mail the sender and then delete the email and any attached files immediately. Any further use or dissemination is prohibited. While MTV Networks Europe has taken steps to ensure that this email and any attachments are virus free, it is your responsibility to ensure that this message and any attachments are virus free and do not affect your systems / data. Communicating by email is not 100% secure and carries risks such as delay, data corruption, non-delivery, wrongful interception and unauthorised amendment. If you communicate with us by e-mail, you acknowledge and assume these risks, and you agree to take appropriate measures to minimise these risks when e-mailing us. 
MTV Networks International, MTV Networks UK & Ireland, Greenhouse, Nickelodeon Viacom Consumer Products, VBSi, Viacom Brand Solutions International, Be Viacom, Viacom International Media Networks and VIMN and Comedy Central are all trading names of MTV Networks Europe. MTV Networks Europe is a partnership between MTV Networks Europe Inc. and Viacom Networks Europe Inc. Address for service in Great Britain is 17-29 Hawley Crescent, London, NW1 8TT. _______________________________________________ Haskell-Cafe mailing list Haskell-Cafe at haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe CONFIDENTIALITY NOTICE This e-mail (and any attached files) is confidential and protected by copyright (and other intellectual property rights). If you are not the intended recipient please e-mail the sender and then delete the email and any attached files immediately. Any further use or dissemination is prohibited. While MTV Networks Europe has taken steps to ensure that this email and any attachments are virus free, it is your responsibility to ensure that this message and any attachments are virus free and do not affect your systems / data. Communicating by email is not 100% secure and carries risks such as delay, data corruption, non-delivery, wrongful interception and unauthorised amendment. If you communicate with us by e-mail, you acknowledge and assume these risks, and you agree to take appropriate measures to minimise these risks when e-mailing us. MTV Networks International, MTV Networks UK & Ireland, Greenhouse, Nickelodeon Viacom Consumer Products, VBSi, Viacom Brand Solutions International, Be Viacom, Viacom International Media Networks and VIMN and Comedy Central are all trading names of MTV Networks Europe. MTV Networks Europe is a partnership between MTV Networks Europe Inc. and Viacom Networks Europe Inc. Address for service in Great Britain is 17-29 Hawley Crescent, London, NW1 8TT. From mrz.vtl at gmail.com Thu Apr 9 15:57:14 2015 From: mrz.vtl at gmail.com (Maurizio Vitale) Date: Thu, 9 Apr 2015 08:57:14 -0700 Subject: [Haskell-cafe] help w/ improving custom Stream for parsec In-Reply-To: <5603B13E-FEA4-45C8-A1CE-0BD14E9AC85C@me.com> References: <5603B13E-FEA4-45C8-A1CE-0BD14E9AC85C@me.com> Message-ID: I'll take a look (I did in the past, but for some reason decided that a separate preprocessor was not the way I wanted to go) If you know the answer, otherwise I'll do my own research: If cpphs expands #defines in pass 2, how does pass 1 know what to include/skip? E.g. #define P #ifdef P #include "foo.h" #endif and even if only expansion is done in pass2 (but definition takes effect in pass1), you'd still have problems w/ #define FILE "foo.h" #include FILE so it looks to me that full macro proecessing must be done in the first pass. On Thu, Apr 9, 2015 at 8:22 AM, Malcolm Wallace wrote: > In cpphs (a Haskell implementation of the C preprocessor), there are two > passes: the first pass splits the input into lines, and interprets #if and > #include directives, leaving all other lines either untouched, or > eliminated when in the wrong branch of a #if conditional. Its output is a > list of lines, each of which is paired with its original source position > (i.e. file and line number). Because #includes are transitive, the state > for the first pass contains a little stack of contexts, that is, the file > that triggered the inclusion, and its line number, so that once the > included file is finished, the annotations can be popped to return to their > enclosing position. 
> > The second pass accepts the [(Posn,String)] emitted by the first pass, and > interprets/expands #define macros and their usage sites. It uses the > positional annotations to give better warning/error messages in the case > that, for instance, a #define is syntactically not-well-formed. Macro > expansion needs to treat the input as words rather than lines, but the > macro definitions obviously are lexed and parsed differently, so it is > useful to identify them on separate lines before splitting the non-#define > lines into words. > > Regards, > Malcolm > > On 9 Apr 2015, at 16:01, Maurizio Vitale wrote: > > > Thanks Malcolm, > > > > I did consider the two pass approach (and actually having pass 1 > returning a stream of tokens annotated with position information) > > I'm keeping that option open, especially because for speed one might > implement the first pass with Attoparsec and the rest with parsec. > > How would you keep track of macro expansions and source positions in > order to provide nice error messages? > > Do you know of anything on hackage that does something similar (either > the two pass or the custom stream approach)? > > > > Again, thanks. I'm still playing with alternatives before implementing > the real language (which, for the curious, is SystemVerilog) so my barrier > to trying out and benchmark different approaches is at this moment very low. > > And the real goal is to learn Haskell,don't care much if I'll have a > full Verilog parser/elaborator; so playing with alternatives is very much > useful. > > The language has also interesting features that make compiling separate > files in parallel very challenging, so that's another area I want to play > with before being too invested. > > > > > > On Thu, Apr 9, 2015 at 2:05 AM, Malcolm Wallace > wrote: > > I think what you really need is a two-pass parser. The first parser > consumes the input stream, and copies it to the output stream with files > inserted, and macros expanded. The second parser consumes the > already-preprocessed stream, and does whatever you like with it. > > > > Regards, > > Malcolm > > > > On 7 Apr 2015, at 17:25, Maurizio Vitale wrote: > > > > > I need a custom stream that supports insertion of include files and > expansions of macros. > > > I also want to be able to give nice error messages (think of clang > macro-expansion backtrace), so I cannot use the standard trick of > concatenating included files and expanded macros to the current input with > setInput/getInput (I think I can't maybe there's a way of keeping a more > complex "position" and since the use in producing an error backtrac is > rare, it migth be worth exploring; if anybody has ideas here, I'm listening) > > > > > > Assuming I need a more compelx stream, this is what I have (Macro and > File both have a string argument, but it will be more compicated, a list of > expansions for Macro for instance). > > > > > > Is there a better way for doing this? > > > What are the performance implications with backtracking? I'll be > benchmarking it, but if people see obvious problems, let me know. 
> > > > > > Thanks a lot, > > > Maurizio > > > > > > {-# LANGUAGE FlexibleInstances #-} > > > {-# LANGUAGE FlexibleContexts #-} > > > {-# LANGUAGE InstanceSigs #-} > > > {-# LANGUAGE MultiParamTypeClasses #-} > > > > > > module Parsing where > > > > > > import Text.Parsec > > > > > > type Parser s m = ParsecT s () m > > > > > > data VStream = File String | Macro String deriving Show > > > > > > newtype StreamStack = StreamStack [VStream] deriving Show > > > > > > instance (Monad m) ? Stream VStream m Char where > > > uncons ? VStream -> m (Maybe (Char, VStream)) > > > uncons (File (a:as)) = return $ Just (a, File as) > > > uncons (File []) = return Nothing > > > uncons (Macro (a:as)) = return $ Just (a, File as) > > > uncons (Macro []) = return Nothing > > > > > > > > > > > > instance (Monad m) => Stream StreamStack m Char where > > > uncons (StreamStack []) = return Nothing > > > uncons (StreamStack (s:ss)) = > > > case uncons s of > > > Nothing ? uncons $ StreamStack ss > > > Just Nothing ? uncons $ StreamStack ss > > > Just (Just (c, File s')) ? return $ Just (c, StreamStack (File > s': ss)) > > > Just (Just (c, Macro s')) ? return $ Just (c, StreamStack (Macro > s':ss)) > > > > > > _______________________________________________ > > > Haskell-Cafe mailing list > > > Haskell-Cafe at haskell.org > > > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From malcolm.wallace at me.com Thu Apr 9 16:02:39 2015 From: malcolm.wallace at me.com (Malcolm Wallace) Date: Thu, 09 Apr 2015 17:02:39 +0100 Subject: [Haskell-cafe] help w/ improving custom Stream for parsec In-Reply-To: References: <5603B13E-FEA4-45C8-A1CE-0BD14E9AC85C@me.com> Message-ID: <4149AB02-6620-49F0-8B19-CA078B9DA0AF@me.com> For resolving the conditionals in #if directives, it is not necessary to do full macro processing. There is a restricted language of conditionals which means that macro definitions can be recorded in a simplified fashion during the first pass. Essentially, a conditional can only test (a) definedness, (b) booleans, (c) integer comparisons. Regards, Malcolm On 9 Apr 2015, at 16:57, Maurizio Vitale wrote: > I'll take a look (I did in the past, but for some reason decided that a separate preprocessor was not the way I wanted to go) > > If you know the answer, otherwise I'll do my own research: > If cpphs expands #defines in pass 2, how does pass 1 know what to include/skip? > E.g. > #define P > #ifdef P > #include "foo.h" > #endif > > and even if only expansion is done in pass2 (but definition takes effect in pass1), you'd still have problems w/ > #define FILE "foo.h" > #include FILE > > so it looks to me that full macro proecessing must be done in the first pass. > > On Thu, Apr 9, 2015 at 8:22 AM, Malcolm Wallace wrote: > In cpphs (a Haskell implementation of the C preprocessor), there are two passes: the first pass splits the input into lines, and interprets #if and #include directives, leaving all other lines either untouched, or eliminated when in the wrong branch of a #if conditional. Its output is a list of lines, each of which is paired with its original source position (i.e. file and line number). Because #includes are transitive, the state for the first pass contains a little stack of contexts, that is, the file that triggered the inclusion, and its line number, so that once the included file is finished, the annotations can be popped to return to their enclosing position. 
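The bookkeeping in this quoted explanation is easy to picture as a couple of types: a position, the [(Posn, String)] output of pass one, and the stack of include contexts (the names below are invented for illustration; cpphs's actual internals differ):

-- Position of a line in the original sources.
data Posn = Posn { posnFile :: FilePath, posnLine :: Int } deriving Show

-- Output of pass one: every surviving line paired with where it came from.
type PassOneOutput = [(Posn, String)]

-- Pass-one state: the current position plus a stack of the positions at
-- which pending #includes were triggered.
data IncludeState = IncludeState { current :: Posn, pending :: [Posn] }

-- Entering an #include pushes the current position; finishing a file pops it.
pushInclude :: FilePath -> IncludeState -> IncludeState
pushInclude f st = IncludeState (Posn f 1) (current st : pending st)

popInclude :: IncludeState -> IncludeState
popInclude st = case pending st of
  (p:ps) -> IncludeState p ps
  []     -> st                    -- nothing pending: nothing to restore

Pass two then walks the [(Posn, String)] list, so a malformed #define or a macro-expansion problem can be reported against the original file and line.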
> > The second pass accepts the [(Posn,String)] emitted by the first pass, and interprets/expands #define macros and their usage sites. It uses the positional annotations to give better warning/error messages in the case that, for instance, a #define is syntactically not-well-formed. Macro expansion needs to treat the input as words rather than lines, but the macro definitions obviously are lexed and parsed differently, so it is useful to identify them on separate lines before splitting the non-#define lines into words. > > Regards, > Malcolm > > On 9 Apr 2015, at 16:01, Maurizio Vitale wrote: > > > Thanks Malcolm, > > > > I did consider the two pass approach (and actually having pass 1 returning a stream of tokens annotated with position information) > > I'm keeping that option open, especially because for speed one might implement the first pass with Attoparsec and the rest with parsec. > > How would you keep track of macro expansions and source positions in order to provide nice error messages? > > Do you know of anything on hackage that does something similar (either the two pass or the custom stream approach)? > > > > Again, thanks. I'm still playing with alternatives before implementing the real language (which, for the curious, is SystemVerilog) so my barrier to trying out and benchmark different approaches is at this moment very low. > > And the real goal is to learn Haskell,don't care much if I'll have a full Verilog parser/elaborator; so playing with alternatives is very much useful. > > The language has also interesting features that make compiling separate files in parallel very challenging, so that's another area I want to play with before being too invested. > > > > > > On Thu, Apr 9, 2015 at 2:05 AM, Malcolm Wallace wrote: > > I think what you really need is a two-pass parser. The first parser consumes the input stream, and copies it to the output stream with files inserted, and macros expanded. The second parser consumes the already-preprocessed stream, and does whatever you like with it. > > > > Regards, > > Malcolm > > > > On 7 Apr 2015, at 17:25, Maurizio Vitale wrote: > > > > > I need a custom stream that supports insertion of include files and expansions of macros. > > > I also want to be able to give nice error messages (think of clang macro-expansion backtrace), so I cannot use the standard trick of concatenating included files and expanded macros to the current input with setInput/getInput (I think I can't maybe there's a way of keeping a more complex "position" and since the use in producing an error backtrac is rare, it migth be worth exploring; if anybody has ideas here, I'm listening) > > > > > > Assuming I need a more compelx stream, this is what I have (Macro and File both have a string argument, but it will be more compicated, a list of expansions for Macro for instance). > > > > > > Is there a better way for doing this? > > > What are the performance implications with backtracking? I'll be benchmarking it, but if people see obvious problems, let me know. > > > > > > Thanks a lot, > > > Maurizio > > > > > > {-# LANGUAGE FlexibleInstances #-} > > > {-# LANGUAGE FlexibleContexts #-} > > > {-# LANGUAGE InstanceSigs #-} > > > {-# LANGUAGE MultiParamTypeClasses #-} > > > > > > module Parsing where > > > > > > import Text.Parsec > > > > > > type Parser s m = ParsecT s () m > > > > > > data VStream = File String | Macro String deriving Show > > > > > > newtype StreamStack = StreamStack [VStream] deriving Show > > > > > > instance (Monad m) ? 
Stream VStream m Char where > > > uncons ? VStream -> m (Maybe (Char, VStream)) > > > uncons (File (a:as)) = return $ Just (a, File as) > > > uncons (File []) = return Nothing > > > uncons (Macro (a:as)) = return $ Just (a, File as) > > > uncons (Macro []) = return Nothing > > > > > > > > > > > > instance (Monad m) => Stream StreamStack m Char where > > > uncons (StreamStack []) = return Nothing > > > uncons (StreamStack (s:ss)) = > > > case uncons s of > > > Nothing ? uncons $ StreamStack ss > > > Just Nothing ? uncons $ StreamStack ss > > > Just (Just (c, File s')) ? return $ Just (c, StreamStack (File s': ss)) > > > Just (Just (c, Macro s')) ? return $ Just (c, StreamStack (Macro s':ss)) > > > > > > _______________________________________________ > > > Haskell-Cafe mailing list > > > Haskell-Cafe at haskell.org > > > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > > > > > From plredmond at gmail.com Thu Apr 9 16:44:37 2015 From: plredmond at gmail.com (Patrick Redmond) Date: Thu, 9 Apr 2015 09:44:37 -0700 Subject: [Haskell-cafe] Cabal hell Message-ID: Since learning Haskell I've had the pleasure of finding my way out of a cabal hell or two. I've developed some knowledge to cope with it [1], but mostly concluded that if I can avoid burdening my projects with dependencies that have many of their own dependencies, then cabal hell can be averted. This puts somewhat of a damper on the joy of haskell's composability. With the new release of GHC I've observed a flurry of discussion on haskell mailing lists and from Linux distro maintainers about all the fixing and patching required to keep the haskell ecosystem going. Meanwhile I've learned other languages and used other tools that don't seem to have this problem that haskell does. For example, in the elm-lang community the package management tool enforces strict api-versioning, and in the clojure ecosystem people talk about "repeatability" and achieve it by using mostly exact-version requirements, even including the language (the language version is a dependency of a project, rather than a constraint of the environment). I guess I'm wondering why we don't try something simpler to solve the haskell cabal hell problem. How about using minimum version dependencies only, not ranges, since we can't accurately guess about the future compatibility of our projects. How about automatic api-versioning of our projects to give the version numbers some rigid semantics with regard to package compatibility? [1] http://f06code.com/post/90205977959/cabal-usage-notes -------------- next part -------------- An HTML attachment was scrubbed... URL: From rasen.dubi at gmail.com Thu Apr 9 17:24:10 2015 From: rasen.dubi at gmail.com (Alexey Shmalko) Date: Thu, 9 Apr 2015 20:24:10 +0300 Subject: [Haskell-cafe] Cabal hell In-Reply-To: References: Message-ID: I really love Qt versioning scheme when applied to compatibility [1]. Given version is Major.Minor.Patch: Major releases may break backwards binary and source compatibility, although source compatibility may be maintained. Minor releases are backwards binary and source compatible. Patch releases are both backwards and forwards binary and source compatible. So that you know your package will work with any Qt version that has the same major version and minor version is greater-equal to the one you developed your package with. It would be really great if everyone followed this scheme. As a drawback I can note that this requires strict discipline from a library developer. 
It's also harder to get binary compatibility right for Haskell because of cross-module optimization GHC does. [1] https://wiki.qt.io/Qt-Version-Compatibility On Thu, Apr 9, 2015 at 7:44 PM, Patrick Redmond wrote: > Since learning Haskell I've had the pleasure of finding my way out of a > cabal hell or two. I've developed some knowledge to cope with it [1], but > mostly concluded that if I can avoid burdening my projects > with dependencies that have many of their own dependencies, then cabal hell > can be averted. This puts somewhat of a damper on the joy of haskell's > composability. > > With the new release of GHC I've observed a flurry of discussion on > haskell mailing lists and from Linux distro maintainers about all the > fixing and patching required to keep the haskell ecosystem going. > > Meanwhile I've learned other languages and used other tools that don't > seem to have this problem that haskell does. For example, in the elm-lang > community the package management tool enforces strict api-versioning, and > in the clojure ecosystem people talk about "repeatability" and achieve it > by using mostly exact-version requirements, even including the language > (the language version is a dependency of a project, rather than a > constraint of the environment). > > I guess I'm wondering why we don't try something simpler to solve the > haskell cabal hell problem. How about using minimum version dependencies > only, not ranges, since we can't accurately guess about the future > compatibility of our projects. How about automatic api-versioning of our > projects to give the version numbers some rigid semantics with regard to > package compatibility? > > [1] http://f06code.com/post/90205977959/cabal-usage-notes > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mantkiew at gsd.uwaterloo.ca Thu Apr 9 17:28:01 2015 From: mantkiew at gsd.uwaterloo.ca (Michal Antkiewicz) Date: Thu, 9 Apr 2015 13:28:01 -0400 Subject: [Haskell-cafe] Cabal hell In-Reply-To: References: Message-ID: Well if the package X you depend on adheres to PVP [1] then you depend on it like this to allow newer minor versions: x == A.B.* where "*A.B* is known as the *major* version number, and *C* the *minor* version number." If only every package maintainer adhered to this, we'd be in much better shape. Michal [1] https://wiki.haskell.org/Package_versioning_policy On Thu, Apr 9, 2015 at 1:24 PM, Alexey Shmalko wrote: > I really love Qt versioning scheme when applied to compatibility [1]. > > Given version is Major.Minor.Patch: > > Major releases may break backwards binary and source compatibility, > although source compatibility may be maintained. > Minor releases are backwards binary and source compatible. > Patch releases are both backwards and forwards binary and source > compatible. > > So that you know your package will work with any Qt version that has the > same major version and minor version is greater-equal to the one you > developed your package with. It would be really great if everyone followed > this scheme. > > As a drawback I can note that this requires strict discipline from a > library developer. It's also harder to get binary compatibility right for > Haskell because of cross-module optimization GHC does. 
> > [1] https://wiki.qt.io/Qt-Version-Compatibility > > On Thu, Apr 9, 2015 at 7:44 PM, Patrick Redmond > wrote: > >> Since learning Haskell I've had the pleasure of finding my way out of a >> cabal hell or two. I've developed some knowledge to cope with it [1], but >> mostly concluded that if I can avoid burdening my projects >> with dependencies that have many of their own dependencies, then cabal hell >> can be averted. This puts somewhat of a damper on the joy of haskell's >> composability. >> >> With the new release of GHC I've observed a flurry of discussion on >> haskell mailing lists and from Linux distro maintainers about all the >> fixing and patching required to keep the haskell ecosystem going. >> >> Meanwhile I've learned other languages and used other tools that don't >> seem to have this problem that haskell does. For example, in the elm-lang >> community the package management tool enforces strict api-versioning, and >> in the clojure ecosystem people talk about "repeatability" and achieve it >> by using mostly exact-version requirements, even including the language >> (the language version is a dependency of a project, rather than a >> constraint of the environment). >> >> I guess I'm wondering why we don't try something simpler to solve the >> haskell cabal hell problem. How about using minimum version dependencies >> only, not ranges, since we can't accurately guess about the future >> compatibility of our projects. How about automatic api-versioning of our >> projects to give the version numbers some rigid semantics with regard to >> package compatibility? >> >> [1] http://f06code.com/post/90205977959/cabal-usage-notes >> >> _______________________________________________ >> Haskell-Cafe mailing list >> Haskell-Cafe at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> >> > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rasen.dubi at gmail.com Thu Apr 9 17:37:14 2015 From: rasen.dubi at gmail.com (Alexey Shmalko) Date: Thu, 9 Apr 2015 20:37:14 +0300 Subject: [Haskell-cafe] Cabal hell In-Reply-To: References: Message-ID: As far as I understand PVP, you can't rely on any C. Only which are greater-equal to one you tested your package with. It's because minor version can introduce changes that are not forward-compatible. So basically, PVP is Qt versioning scheme with two first numbers being Major, third one being Minor and the rest is Patch. If such convention already exists, what's left is to invent the way to enforce it. On Thu, Apr 9, 2015 at 8:28 PM, Michal Antkiewicz wrote: > Well if the package X you depend on adheres to PVP [1] > > then you depend on it like this to allow newer minor versions: > > x == A.B.* > > where > > "*A.B* is known as the *major* version number, and *C* the *minor* > version number." > > If only every package maintainer adhered to this, we'd be in much better > shape. > > Michal > > [1] https://wiki.haskell.org/Package_versioning_policy > > On Thu, Apr 9, 2015 at 1:24 PM, Alexey Shmalko > wrote: > >> I really love Qt versioning scheme when applied to compatibility [1]. >> >> Given version is Major.Minor.Patch: >> >> Major releases may break backwards binary and source compatibility, >> although source compatibility may be maintained. >> Minor releases are backwards binary and source compatible. 
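In .cabal terms, the PVP reading being discussed comes out as bounds like the following (a made-up dependency stanza, purely for illustration):

-- A.B is the PVP major version; anything below the next major
-- version is assumed to be compatible.
build-depends:
  base >= 4.7 && < 4.8,
  text >= 1.2.0.4 && < 1.3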
>> Patch releases are both backwards and forwards binary and source >> compatible. >> >> So that you know your package will work with any Qt version that has the >> same major version and minor version is greater-equal to the one you >> developed your package with. It would be really great if everyone followed >> this scheme. >> >> As a drawback I can note that this requires strict discipline from a >> library developer. It's also harder to get binary compatibility right for >> Haskell because of cross-module optimization GHC does. >> >> [1] https://wiki.qt.io/Qt-Version-Compatibility >> >> On Thu, Apr 9, 2015 at 7:44 PM, Patrick Redmond >> wrote: >> >>> Since learning Haskell I've had the pleasure of finding my way out of a >>> cabal hell or two. I've developed some knowledge to cope with it [1], but >>> mostly concluded that if I can avoid burdening my projects >>> with dependencies that have many of their own dependencies, then cabal hell >>> can be averted. This puts somewhat of a damper on the joy of haskell's >>> composability. >>> >>> With the new release of GHC I've observed a flurry of discussion on >>> haskell mailing lists and from Linux distro maintainers about all the >>> fixing and patching required to keep the haskell ecosystem going. >>> >>> Meanwhile I've learned other languages and used other tools that don't >>> seem to have this problem that haskell does. For example, in the elm-lang >>> community the package management tool enforces strict api-versioning, and >>> in the clojure ecosystem people talk about "repeatability" and achieve it >>> by using mostly exact-version requirements, even including the language >>> (the language version is a dependency of a project, rather than a >>> constraint of the environment). >>> >>> I guess I'm wondering why we don't try something simpler to solve the >>> haskell cabal hell problem. How about using minimum version dependencies >>> only, not ranges, since we can't accurately guess about the future >>> compatibility of our projects. How about automatic api-versioning of our >>> projects to give the version numbers some rigid semantics with regard to >>> package compatibility? >>> >>> [1] http://f06code.com/post/90205977959/cabal-usage-notes >>> >>> _______________________________________________ >>> Haskell-Cafe mailing list >>> Haskell-Cafe at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >>> >>> >> >> _______________________________________________ >> Haskell-Cafe mailing list >> Haskell-Cafe at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mantkiew at gsd.uwaterloo.ca Thu Apr 9 17:48:51 2015 From: mantkiew at gsd.uwaterloo.ca (Michal Antkiewicz) Date: Thu, 9 Apr 2015 13:48:51 -0400 Subject: [Haskell-cafe] Cabal hell In-Reply-To: References: Message-ID: Yes, there was a proposal to follow how it's done in Elm: Cabal PVP compliance checker [1]. There's also an outdated check-pvp package [2] but it requires base < 4.7. Michal [1] http://blog.johantibell.com/2015/03/google-summer-of-code-2015-project-ideas.html [2] http://hackage.haskell.org/package/check-pvp On Thu, Apr 9, 2015 at 1:37 PM, Alexey Shmalko wrote: > As far as I understand PVP, you can't rely on any C. Only which are > greater-equal to one you tested your package with. It's because minor > version can introduce changes that are not forward-compatible. 
> > So basically, PVP is Qt versioning scheme with two first numbers being > Major, third one being Minor and the rest is Patch. > > If such convention already exists, what's left is to invent the way to > enforce it. > > On Thu, Apr 9, 2015 at 8:28 PM, Michal Antkiewicz < > mantkiew at gsd.uwaterloo.ca> wrote: > >> Well if the package X you depend on adheres to PVP [1] >> >> then you depend on it like this to allow newer minor versions: >> >> x == A.B.* >> >> where >> >> "*A.B* is known as the *major* version number, and *C* the *minor* >> version number." >> >> If only every package maintainer adhered to this, we'd be in much better >> shape. >> >> Michal >> >> [1] https://wiki.haskell.org/Package_versioning_policy >> >> On Thu, Apr 9, 2015 at 1:24 PM, Alexey Shmalko >> wrote: >> >>> I really love Qt versioning scheme when applied to compatibility [1]. >>> >>> Given version is Major.Minor.Patch: >>> >>> Major releases may break backwards binary and source compatibility, >>> although source compatibility may be maintained. >>> Minor releases are backwards binary and source compatible. >>> Patch releases are both backwards and forwards binary and source >>> compatible. >>> >>> So that you know your package will work with any Qt version that has the >>> same major version and minor version is greater-equal to the one you >>> developed your package with. It would be really great if everyone followed >>> this scheme. >>> >>> As a drawback I can note that this requires strict discipline from a >>> library developer. It's also harder to get binary compatibility right for >>> Haskell because of cross-module optimization GHC does. >>> >>> [1] https://wiki.qt.io/Qt-Version-Compatibility >>> >>> On Thu, Apr 9, 2015 at 7:44 PM, Patrick Redmond >>> wrote: >>> >>>> Since learning Haskell I've had the pleasure of finding my way out of a >>>> cabal hell or two. I've developed some knowledge to cope with it [1], but >>>> mostly concluded that if I can avoid burdening my projects >>>> with dependencies that have many of their own dependencies, then cabal hell >>>> can be averted. This puts somewhat of a damper on the joy of haskell's >>>> composability. >>>> >>>> With the new release of GHC I've observed a flurry of discussion on >>>> haskell mailing lists and from Linux distro maintainers about all the >>>> fixing and patching required to keep the haskell ecosystem going. >>>> >>>> Meanwhile I've learned other languages and used other tools that don't >>>> seem to have this problem that haskell does. For example, in the elm-lang >>>> community the package management tool enforces strict api-versioning, and >>>> in the clojure ecosystem people talk about "repeatability" and achieve it >>>> by using mostly exact-version requirements, even including the language >>>> (the language version is a dependency of a project, rather than a >>>> constraint of the environment). >>>> >>>> I guess I'm wondering why we don't try something simpler to solve the >>>> haskell cabal hell problem. How about using minimum version dependencies >>>> only, not ranges, since we can't accurately guess about the future >>>> compatibility of our projects. How about automatic api-versioning of our >>>> projects to give the version numbers some rigid semantics with regard to >>>> package compatibility? 
>>>> [1] http://f06code.com/post/90205977959/cabal-usage-notes

From rasen.dubi at gmail.com  Thu Apr 9 18:07:44 2015
From: rasen.dubi at gmail.com (Alexey Shmalko)
Date: Thu, 9 Apr 2015 21:07:44 +0300
Subject: [Haskell-cafe] Cabal hell
In-Reply-To:
References:
Message-ID:

check-pvp only checks that you specified your versions right; it assumes
everyone follows PVP. The real problem is that this assumption isn't
correct, and exactly this problem is what should be addressed.

The GSoC project is basically the same. It would be great to develop a tool
that can tell whether you must bump the major or minor version. Then put it
on Hackage and reject any package version that doesn't do versioning right.

On Thu, Apr 9, 2015 at 8:48 PM, Michal Antkiewicz wrote:
> Yes, there was a proposal to follow how it's done in Elm: Cabal PVP
> compliance checker [1]. There's also an outdated check-pvp package [2] but
> it requires base < 4.7.
>
> Michal
>
> [1]
> http://blog.johantibell.com/2015/03/google-summer-of-code-2015-project-ideas.html
> [2] http://hackage.haskell.org/package/check-pvp

From douglas.mcclean at gmail.com  Thu Apr 9 19:36:07 2015
From: douglas.mcclean at gmail.com (Douglas McClean)
Date: Thu, 9 Apr 2015 15:36:07 -0400
Subject: [Haskell-cafe] ANN: exact-pi 0.1
Message-ID:

I'm announcing the release of the new exact-pi package.

It provides a type that exactly represents all rational multiples of
integer powers of pi. Because it's closed under multiplication and taking
of reciprocals, it's useful for computing exact conversion factors between
physical units. In order to provide full Num and Floating instances there
is also a representation for approximate values.

I'm not sure if this will be of use to anyone else, but it is nice and
self-contained so I thought I would put it out there.

-Doug McClean
-------------- next part --------------
An HTML attachment was scrubbed...
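To make the closure property concrete, here is a minimal self-contained sketch of the underlying idea. The names and representation below are illustrative guesses, not the actual exact-pi API:

```
import Data.Ratio ((%))

-- A value (Exact z q) denotes q * pi^z, with an Integer exponent z
-- and a Rational coefficient q.
data ExactPi = Exact Integer Rational
  deriving Show

-- Closed under multiplication: exponents add, coefficients multiply.
multExact :: ExactPi -> ExactPi -> ExactPi
multExact (Exact z q) (Exact z' q') = Exact (z + z') (q * q')

-- Closed under reciprocals, too.
recipExact :: ExactPi -> ExactPi
recipExact (Exact z q) = Exact (negate z) (recip q)

-- Only at the very end do we approximate, e.g. to a Double.
approx :: ExactPi -> Double
approx (Exact z q) = fromRational q * pi ^^ z

-- Example: the exact conversion factor from degrees to radians is pi/180,
-- and multiplying it by an exact 180 recovers pi itself.
degreeToRadian :: ExactPi
degreeToRadian = Exact 1 (1 % 180)

main :: IO ()
main = print (approx (multExact degreeToRadian (Exact 0 180)))
```

The point of the announcement is that values like degreeToRadian stay exact through any chain of multiplications and reciprocals; as noted above, the package additionally keeps a representation for approximate values in order to cover the rest of Num and Floating.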
URL: From semen at trygub.com Thu Apr 9 22:59:36 2015 From: semen at trygub.com (Semen Trygubenko / =?utf-8?B?0KHQtdC80LXQvSDQotGA0LjQs9GD0LHQtdC9?= =?utf-8?B?0LrQvg==?=) Date: Thu, 9 Apr 2015 23:59:36 +0100 Subject: [Haskell-cafe] Haskell Weekly News: Issue 324 In-Reply-To: <20150326222444.GA91822@inanna.trygub.com> References: <20150326222444.GA91822@inanna.trygub.com> Message-ID: <20150409225936.GA1070@inanna.trygub.com> Talks How to Sell Excellence by Michael Church Slides for presentation given at Chicago Haskell Meetup. Author urges us not to write arbitrary code as reasoning about arbitrary code is mathematically impossible, argues that code quality is determined by what people do and not by what language enables (which puts functional programming and Haskell at the top), links code entropy to deadline culture (a.k.a. "Agile"), explains why we suck at selling functional programming and has a decent go at selling Haskell himself. https://docs.google.com/presentation/d/1a4GvI0dbL8sfAlnTUwVxhq4_j-QiDlz02_t0XZJXnzY/preview?sle=true&slide=id.p http://www.reddit.com/r/haskell/comments/31awxs/how_to_sell_excellence/ https://lobste.rs/s/9oq7ra/how_to_sell_excellence http://www.quora.com/Why-do-some-developers-at-strong-companies-like-Google-consider-Agile-development-to-be-nonsense/answer/Michael-O-Church Reflex: Practical Functional Reactive Programming FRP talk and an awesome FRP demo. https://www.youtube.com/watch?v=mYvkcskJbc4 https://www.youtube.com/watch?v=3qfc9XFVo2c https://obsidian.systems/reflex-nyhug Discussion Necessity/utility of dependent types? A comprehensive response by tel with explanations of how to expose invariants at type level and how to relate invariants of two types. http://www.reddit.com/r/haskell/comments/31lru4/necessityutility_of_dependent_types/ How is the error function implemented in Haskell? A comprehensive reply by Tikhon Jelvis. "Semantically, error results in ? ("bottom") just like an infinite loop, it just happens to be better-behaved in a practical sense and easier to debug." https://www.quora.com/How-is-the-error-function-implemented-in-Haskell Why are we naming types instead of instances when we have multiple instances per type? "Type classes only work really well when they are coherent, so there can only be one per type." (bss03) http://www.reddit.com/r/haskell/comments/31zagw/why_are_we_naming_types_instead_of_instances_when/ http://blog.ezyang.com/2014/07/type-classes-confluence-coherence-global-uniqueness/ Quotes of the Week "The best way to move Haskell forward is to build something important like opaleye, wreq, haskellonheroku, pipes, cloud haskell or yesod and then sell that instead; write some amazing documentation and tell as many people as you can... And then probably try very, very hard to get a job at one of these newfangled startups that use Haskell :)" (rehno-lindeque) http://www.reddit.com/r/haskell/comments/31awxs/how_to_sell_excellence/cq0aze5 "The reason Haskell is great is because its systematic faithfulness to mathematical abstractions, and as soon as you start talking about math, the masses leave the room." 
(dnkndnts) http://www.reddit.com/r/haskell/comments/31awxs/how_to_sell_excellence/cpzwlu7 "productivity is impossible to measure (the idea that it can be measured brought us that "Agile"/Scrum shit that I hate more than the worst-designed programming language)" (michaelochurch) http://www.reddit.com/r/haskell/comments/31awxs/how_to_sell_excellence/cq0jhjv "why doesn't anyone ever explain that natural transformations make a tent shape?" (chrisamaphone) https://twitter.com/chrisamaphone/status/585499255772676097 "Haskell is _fast as hell_." (Michael O. Church) "Haskell crushes imprecision of thought." (Michael O. Church) "Clojure is beautiful. Python is easy to learn. These are fine languages! But they lack a feature (compile-time typing) that I'd demand if I were building out a 5000+ LoC project ? Haskell isn't the only functional programming language. ? but Haskell is the only major language whose type system can verify the lack of side effects." (Michael O. Church) https://docs.google.com/presentation/d/1a4GvI0dbL8sfAlnTUwVxhq4_j-QiDlz02_t0XZJXnzY/preview?sle=true&slide=id.p "Monads are just burritos, man. Burritos ain't scary. http://chrisdone.com/posts/monads-are-burritos" (kyllo) http://www.reddit.com/r/haskell/comments/31awxs/how_to_sell_excellence/cq0jv6c "Coq has no backdoors as far as I'm aware. In fact, I don't believe it has any doors at all." (tactics) http://www.reddit.com/r/haskell/comments/31awxs/how_to_sell_excellence/cq0f5r0 "haskell is like pizza. Even when it's bad it's still good." (deech) https://twitter.com/deech/status/583087866248454144 -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 181 bytes Desc: not available URL: From andrew.gibiansky at gmail.com Thu Apr 9 23:50:38 2015 From: andrew.gibiansky at gmail.com (Andrew Gibiansky) Date: Thu, 9 Apr 2015 16:50:38 -0700 Subject: [Haskell-cafe] ANN: exact-pi 0.1 In-Reply-To: References: Message-ID: Why is this hard-coded to pi? Is there a particular reason it cannot be used for any irrational number? On Thu, Apr 9, 2015 at 12:36 PM, Douglas McClean wrote: > I'm announcing the release of the new exact-pi package. > > It provides a type that exactly represents all rational multiples of > integer powers of pi. Because it's closed under multiplication and taking > of reciprocals, it's useful for computing exact conversion factors between > physical units. In order to provide full Num and Floating instances there > is also a representation for approximate values. > > I'm not sure if this will be of use to anyone else, but it is nice and > self-contained so I thought I would put it out there. > > -Doug McClean > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From douglas.mcclean at gmail.com Fri Apr 10 00:04:16 2015 From: douglas.mcclean at gmail.com (Douglas McClean) Date: Thu, 9 Apr 2015 20:04:16 -0400 Subject: [Haskell-cafe] ANN: exact-pi 0.1 In-Reply-To: References: Message-ID: You could definitely use the same approach for any one irrational number you were interested in. With a tuple of integers you could track a handful of irrationals. If you need more than that you would probably be better served by something like the cyclotomic package. The reason it is hard-coded to pi is twofold. 
First, that's the one I need to track, because it appears in conversion factors between units of angle. Second, because pi appears in the Floating instance, which makes it notationally more convenient to have the type specialized for pi. If you have a use case for the extra generality, I could see an approach where its parametrized by a Symbol. It would still be convenient to have the Floating instance specialized for the type that tracks pi, but that would be achievable. On Thu, Apr 9, 2015 at 7:50 PM, Andrew Gibiansky wrote: > Why is this hard-coded to pi? Is there a particular reason it cannot be > used for any irrational number? > > On Thu, Apr 9, 2015 at 12:36 PM, Douglas McClean < > douglas.mcclean at gmail.com> wrote: > >> I'm announcing the release of the new exact-pi package. >> >> It provides a type that exactly represents all rational multiples of >> integer powers of pi. Because it's closed under multiplication and taking >> of reciprocals, it's useful for computing exact conversion factors between >> physical units. In order to provide full Num and Floating instances there >> is also a representation for approximate values. >> >> I'm not sure if this will be of use to anyone else, but it is nice and >> self-contained so I thought I would put it out there. >> >> -Doug McClean >> >> _______________________________________________ >> Haskell-Cafe mailing list >> Haskell-Cafe at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> >> > -- J. Douglas McClean (781) 561-5540 (cell) -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Fri Apr 10 02:56:27 2015 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Thu, 9 Apr 2015 22:56:27 -0400 Subject: [Haskell-cafe] PrimMonad for Control.Monad.ST.Lazy? In-Reply-To: References: Message-ID: seems like an oversight, i've opened a ticket to add it On Wed, Apr 8, 2015 at 10:49 PM, Ken Takusagawa II < ken.takusagawa.2 at gmail.com> wrote: > I notice that the strict ST monad has an instance for PrimMonad but the > lazy ST monad does not. Is there a reason why, or is merely an oversight? > > (What I Am Really Trying To Do: get a purely lazy stream of random values > out of mwc-random.) > > --ken > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jerzy.karczmarczuk at unicaen.fr Fri Apr 10 07:50:07 2015 From: jerzy.karczmarczuk at unicaen.fr (Jerzy Karczmarczuk) Date: Fri, 10 Apr 2015 09:50:07 +0200 Subject: [Haskell-cafe] ANN: exact-pi 0.1 In-Reply-To: References: Message-ID: <552780AF.1060404@unicaen.fr> About exact manipulation of PI and its powers... Le 10/04/2015 02:04, Douglas McClean a ?crit : > You could definitely use the same approach for any one irrational > number you were interested in. With a tuple of integers you could > track a handful of irrationals. There are many differences between "just" *irrational* and *transcendental* numbers. The work with algebraic / transcendental extensions is a bit different. No polynomial of PI with rational coefficients will vanish. If you replace PI by sqrt(2), well, you know the answer. 
Jerzy Karczmarczuk

From simonpj at microsoft.com  Fri Apr 10 08:48:52 2015
From: simonpj at microsoft.com (Simon Peyton Jones)
Date: Fri, 10 Apr 2015 08:48:52 +0000
Subject: [Haskell-cafe] Generalized Newtype Deriving not allowed in Safe Haskell
In-Reply-To:
References: <0d913dcca48b47abadb7ab6b4125b906@DB4PR30MB030.064d.mgd.msft.net>
Message-ID:

| prefer, such as only exporting the Coerce instance if all the
| constructors are exported, it seems that the ship sailed on these

Coercible is relatively recent; I don't think we should regard it as cast in stone.

But yes, the Coercible instance of a newtype is only available when the data constructor for the newtype is lexically in scope.

Simon

| -----Original Message-----
| From: davidterei at gmail.com [mailto:davidterei at gmail.com] On Behalf Of
| David Terei
| Sent: 10 April 2015 09:38
| To: Simon Peyton Jones
| Cc: Omari Norman; haskell Cafe; ghc-devs at haskell.org
| Subject: Re: Generalized Newtype Deriving not allowed in Safe Haskell
|
| I'll prepare a patch for the userguide soon.
|
| As for something better, yes I think we can and should. It's on my todo
| list :) Basically, the new-GND design has all the mechanisms to be
| safe, but sadly the defaults are rather worrying. Without explicit
| annotations from the user, module abstractions are broken. This is why
| we left GND out of Safe Haskell for the moment as it is a subtle and
| easy mistake to make.
|
| If the module contained explicit role annotations then it could be
| allowed. The discussion in
| https://ghc.haskell.org/trac/ghc/ticket/8827 has other solutions that I
| prefer, such as only exporting the Coerce instance if all the
| constructors are exported, it seems that the ship sailed on these
| bigger changes sadly.
|
| Cheers,
| David
|
| On 9 April 2015 at 00:56, Simon Peyton Jones
| wrote:
| > There is a long discussion on
| > https://ghc.haskell.org/trac/ghc/ticket/8827
| > about whether the new Coercible story makes GND ok for Safe Haskell.
| > At a type-soundness level, definitely yes. But there are other
| > less-clear-cut issues like "breaking abstractions" to consider. The
| > decision on the ticket
| > (comment:36) seems to be: GND stays out of Safe Haskell for now, but
| > there is room for a better proposal.
| >
| > I don't have an opinion myself. David Terei and David Mazieres are in
| > the driving seat, but I'm sure they'll be responsive to user input.
| >
| > However, I think the user manual may not have kept up with #8827. The
| > sentence "GeneralizedNewtypeDeriving ... It can be used to violate
| > constructor access control, by allowing untrusted code to manipulate
| > protected data types in ways the data type author did not intend,
| > breaking invariants they have established." vanished from the 7.8
| > user manual (links below). Maybe it should be restored.
| >
| > Safe Haskell aficionados, would you like to offer a patch for the manual?
| > And maybe also a less drastic remedy than omitting GND altogether?
| >
| > Simon
| >
| > From: Omari Norman [mailto:omari at smileystation.com]
| > Sent: 09 April 2015 02:44
| > To: haskell Cafe
| > Subject: Generalized Newtype Deriving not allowed in Safe Haskell
| >
| > When compiling code with Generalized Newtype Deriving and the
| > -fwarn-unsafe flag, I get
| >
| > -XGeneralizedNewtypeDeriving is not allowed in Safe Haskell
| >
| > This happens both in GHC 7.8 and GHC 7.10.
| > | > | > | > I thought I remembered reading somewhere that GNTD is now part of the | > safe language? The GHC manual used to state that GNTD is not allowed | > in Safe | > Haskell: | > | > | > | > https://downloads.haskell.org/~ghc/7.6.3/docs/html/users_guide/safe- | ha | > skell.html#safe-language | > | > | > | > But this language on GNTD not being part of the safe language was | > removed in the 7.8 manual: | > | > | > | > https://downloads.haskell.org/~ghc/7.8.2/docs/html/users_guide/safe- | ha | > skell.html#safe-language | > | > | > | > The GHC release notes don't say anything about this one way or the | other. | > Thoughts? | > | > | > _______________________________________________ | > ghc-devs mailing list | > ghc-devs at haskell.org | > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs | > From marcin.jan.mrotek at gmail.com Fri Apr 10 08:51:03 2015 From: marcin.jan.mrotek at gmail.com (Marcin Mrotek) Date: Fri, 10 Apr 2015 10:51:03 +0200 Subject: [Haskell-cafe] PrimMonad for Control.Monad.ST.Lazy? In-Reply-To: <20150409094024.64656C382B@www1.g3.pair.com> References: <20150409094024.64656C382B@www1.g3.pair.com> Message-ID: Hello, > Streams of random data is actually somewhat popular structure, and has > been asked about before. What was also found out that such lazy > streams are fraught with problems. Some are described on this page: > > http://okmij.org/ftp/continuations/PPYield/index.html#randoms > The page also shows two solutions, which are robust and easy to reason > about. And from that page: >The monadic stream ListM is not a Haskell list. We have to write our own ListM-processing functions like headL, replicateL, appendL, etc. (There is probably a Hackage package for it.) Could it be Pipes? Best regards, Marcin Mrotek From eir at cis.upenn.edu Fri Apr 10 12:07:05 2015 From: eir at cis.upenn.edu (Richard Eisenberg) Date: Fri, 10 Apr 2015 13:07:05 +0100 Subject: [Haskell-cafe] Generalized Newtype Deriving not allowed in Safe Haskell In-Reply-To: References: <0d913dcca48b47abadb7ab6b4125b906@DB4PR30MB030.064d.mgd.msft.net> Message-ID: <86F7E6C0-6CD5-40CD-BAF9-1CA07FA80872@cis.upenn.edu> Here's an idea: For a module to be Safe, then for each exported datatype, one of the following must hold: 1) The datatype comes with a role annotation. 2) The module exports all of the datatype's constructors. 3) If the datatype is defined in a place other than the current module, the current module exports no fewer data constructors than are exported in the datatype's defining module. Why? 1) The role annotation, even if it has no effect, shows that the programmer has considered roles. Any mistake here is clearly the programmer's fault. 2) This datatype is clearly meant not to be abstract. `Coercible` then gives clients no more power than they already have. 3) This is subtler. It is a common idiom to export a datatype's constructors from a package-internal module, but then never to export the constructors beyond the package. If such a datatype has a role annotation (in its defining module, of course), then we're fine, even if it is exported abstractly later. However, suppose we are abstractly re-exporting a datatype that exports its constructors from its defining module. If there is no role annotation on the datatype, we're in trouble and should fail. BUT, if the datatype were exported abstractly in its defining module, then we don't need to fail on re-export, because nothing has changed. Actually, we could simplify the conditions. 
Change (2) to: 2') The module exports all of the datatype's visible constructors. I think explaining in terms of separate rules (2) and (3) is a little clearer, because the re-export case is slightly subtle, and this subtlety can be lost in (2'). This proposal would require tracking (in interface files, too) whether or not a datatype comes with a role annotation. This isn't hard, though. It might even help in pretty-printing. An alternative would be to have a way of setting roles differently on export than internally. I don't think this breaks the type system, but it's yet another thing to specify and support. And we'd have to consider the possibility that some module will import a datatype from multiple re-exporting modules, each with different ascribed role annotations. Is this an error? Does GHC take some sort of least upper bound? I prefer not to go here, but there's nothing terribly wrong with this approach. Richard On Apr 10, 2015, at 9:37 AM, David Terei wrote: > I'll prepare a patch for the userguide soon. > > As for something better, yes I think we can and should. It's on my > todo list :) Basically, the new-GND design has all the mechanisms to > be safe, but sadly the defaults are rather worrying. Without explicit > annotations from the user, module abstractions are broken. This is why > we left GND out of Safe Haskell for the moment as it is a subtle and > easy mistake to make. > > If the module contained explicit role annotations then it could be > allowed. The discussion in > https://ghc.haskell.org/trac/ghc/ticket/8827 has other solutions that > I prefer, such as only exporting the Coerce instance if all the > constructors are exported, it seems that the ship sailed on these > bigger changes sadly. > > Cheers, > David > > On 9 April 2015 at 00:56, Simon Peyton Jones wrote: >> There is a long discussion on https://ghc.haskell.org/trac/ghc/ticket/8827 >> about whether the new Coercible story makes GND ok for Safe Haskell. At a >> type-soundness level, definitely yes. But there are other less-clear-cut >> issues like ?breaking abstractions? to consider. The decision on the ticket >> (comment:36) seems to be: GND stays out of Safe Haskell for now, but there >> is room for a better proposal. >> >> >> >> I don?t have an opinion myself. David Terei and David Mazieres are in the >> driving seat, but I?m sure they?ll be responsive to user input. >> >> >> >> However, I think the user manual may not have kept up with #8827. The >> sentence ?GeneralizedNewtypeDeriving ? It can be used to violate constructor >> access control, by allowing untrusted code to manipulate protected data >> types in ways the data type author did not intend, breaking invariants they >> have established.? vanished from the 7.8 user manual (links below). Maybe >> it should be restored. >> >> >> >> Safe Haskell aficionados, would you like to offer a patch for the manual? >> And maybe also a less drastic remedy than omitting GND altogether? >> >> >> >> Simon >> >> >> >> From: Omari Norman [mailto:omari at smileystation.com] >> Sent: 09 April 2015 02:44 >> To: haskell Cafe >> Subject: Generalized Newtype Deriving not allowed in Safe Haskell >> >> >> >> When compiling code with Generalized Newtype Deriving and the -fwarn-unsafe >> flag, I get >> >> >> >> -XGeneralizedNewtypeDeriving is not allowed in Safe Haskell >> >> >> >> This happens both in GHC 7.8 and GHC 7.10. >> >> >> >> I thought I remembered reading somewhere that GNTD is now part of the safe >> language? 
The GHC manual used to state that GNTD is not allowed in Safe >> Haskell: >> >> >> >> https://downloads.haskell.org/~ghc/7.6.3/docs/html/users_guide/safe-haskell.html#safe-language >> >> >> >> But this language on GNTD not being part of the safe language was removed in >> the 7.8 manual: >> >> >> >> https://downloads.haskell.org/~ghc/7.8.2/docs/html/users_guide/safe-haskell.html#safe-language >> >> >> >> The GHC release notes don't say anything about this one way or the other. >> Thoughts? >> >> >> _______________________________________________ >> ghc-devs mailing list >> ghc-devs at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > _______________________________________________ > ghc-devs mailing list > ghc-devs at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > From ky3 at atamo.com Fri Apr 10 12:38:28 2015 From: ky3 at atamo.com (Kim-Ee Yeoh) Date: Fri, 10 Apr 2015 19:38:28 +0700 Subject: [Haskell-cafe] PrimMonad for Control.Monad.ST.Lazy? In-Reply-To: <20150409094729.GA31520@weber> References: <20150409094024.64656C382B@www1.g3.pair.com> <20150409094729.GA31520@weber> Message-ID: On Thu, Apr 9, 2015 at 4:47 PM, Tom Ellis < tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk> wrote: > There are alternatives that are more compositional and easier to reason > > about, including reasoning about performance. > > What are you hinting at here? I understand Oleg's comment as a set up to a longer essay reappraising the points raised in Hughes' "Why FP matters." I do hope that essay transpires. -- Kim-Ee -------------- next part -------------- An HTML attachment was scrubbed... URL: From colinpauladams at gmail.com Fri Apr 10 13:00:33 2015 From: colinpauladams at gmail.com (Colin Adams) Date: Fri, 10 Apr 2015 14:00:33 +0100 Subject: [Haskell-cafe] Haskell on BBC Radio 4 Message-ID: Haskell featured strongly today in the last Episode of Radio 4's "Codes that changed the world": http://www.bbc.co.uk/programmes/b05prkh7 -------------- next part -------------- An HTML attachment was scrubbed... URL: From douglas.mcclean at gmail.com Fri Apr 10 13:05:51 2015 From: douglas.mcclean at gmail.com (Douglas McClean) Date: Fri, 10 Apr 2015 09:05:51 -0400 Subject: [Haskell-cafe] Generalized Newtype Deriving not allowed in Safe Haskell In-Reply-To: <86F7E6C0-6CD5-40CD-BAF9-1CA07FA80872@cis.upenn.edu> References: <0d913dcca48b47abadb7ab6b4125b906@DB4PR30MB030.064d.mgd.msft.net> <86F7E6C0-6CD5-40CD-BAF9-1CA07FA80872@cis.upenn.edu> Message-ID: I don't think that 2+3 is equivalent to 2', because an explicit import list or hiding list could've brought only some of the datatype's constructors into visibility. On Fri, Apr 10, 2015 at 8:07 AM, Richard Eisenberg wrote: > Here's an idea: For a module to be Safe, then for each exported datatype, > one of the following must hold: > 1) The datatype comes with a role annotation. > 2) The module exports all of the datatype's constructors. > 3) If the datatype is defined in a place other than the current module, > the current module exports no fewer data constructors than are exported in > the datatype's defining module. > > Why? > 1) The role annotation, even if it has no effect, shows that the > programmer has considered roles. Any mistake here is clearly the > programmer's fault. > 2) This datatype is clearly meant not to be abstract. `Coercible` then > gives clients no more power than they already have. > 3) This is subtler. 
It is a common idiom to export a datatype's > constructors from a package-internal module, but then never to export the > constructors beyond the package. If such a datatype has a role annotation > (in its defining module, of course), then we're fine, even if it is > exported abstractly later. However, suppose we are abstractly re-exporting > a datatype that exports its constructors from its defining module. If there > is no role annotation on the datatype, we're in trouble and should fail. > BUT, if the datatype were exported abstractly in its defining module, then > we don't need to fail on re-export, because nothing has changed. > > > Actually, we could simplify the conditions. Change (2) to: > > 2') The module exports all of the datatype's visible constructors. > > I think explaining in terms of separate rules (2) and (3) is a little > clearer, because the re-export case is slightly subtle, and this subtlety > can be lost in (2'). > > This proposal would require tracking (in interface files, too) whether or > not a datatype comes with a role annotation. This isn't hard, though. It > might even help in pretty-printing. > > > An alternative would be to have a way of setting roles differently on > export than internally. I don't think this breaks the type system, but it's > yet another thing to specify and support. And we'd have to consider the > possibility that some module will import a datatype from multiple > re-exporting modules, each with different ascribed role annotations. Is > this an error? Does GHC take some sort of least upper bound? I prefer not > to go here, but there's nothing terribly wrong with this approach. > > Richard > > On Apr 10, 2015, at 9:37 AM, David Terei wrote: > > > I'll prepare a patch for the userguide soon. > > > > As for something better, yes I think we can and should. It's on my > > todo list :) Basically, the new-GND design has all the mechanisms to > > be safe, but sadly the defaults are rather worrying. Without explicit > > annotations from the user, module abstractions are broken. This is why > > we left GND out of Safe Haskell for the moment as it is a subtle and > > easy mistake to make. > > > > If the module contained explicit role annotations then it could be > > allowed. The discussion in > > https://ghc.haskell.org/trac/ghc/ticket/8827 has other solutions that > > I prefer, such as only exporting the Coerce instance if all the > > constructors are exported, it seems that the ship sailed on these > > bigger changes sadly. > > > > Cheers, > > David > > > > On 9 April 2015 at 00:56, Simon Peyton Jones > wrote: > >> There is a long discussion on > https://ghc.haskell.org/trac/ghc/ticket/8827 > >> about whether the new Coercible story makes GND ok for Safe Haskell. > At a > >> type-soundness level, definitely yes. But there are other > less-clear-cut > >> issues like ?breaking abstractions? to consider. The decision on the > ticket > >> (comment:36) seems to be: GND stays out of Safe Haskell for now, but > there > >> is room for a better proposal. > >> > >> > >> > >> I don?t have an opinion myself. David Terei and David Mazieres are in > the > >> driving seat, but I?m sure they?ll be responsive to user input. > >> > >> > >> > >> However, I think the user manual may not have kept up with #8827. The > >> sentence ?GeneralizedNewtypeDeriving ? 
It can be used to violate > constructor > >> access control, by allowing untrusted code to manipulate protected data > >> types in ways the data type author did not intend, breaking invariants > they > >> have established.? vanished from the 7.8 user manual (links below). > Maybe > >> it should be restored. > >> > >> > >> > >> Safe Haskell aficionados, would you like to offer a patch for the > manual? > >> And maybe also a less drastic remedy than omitting GND altogether? > >> > >> > >> > >> Simon > >> > >> > >> > >> From: Omari Norman [mailto:omari at smileystation.com] > >> Sent: 09 April 2015 02:44 > >> To: haskell Cafe > >> Subject: Generalized Newtype Deriving not allowed in Safe Haskell > >> > >> > >> > >> When compiling code with Generalized Newtype Deriving and the > -fwarn-unsafe > >> flag, I get > >> > >> > >> > >> -XGeneralizedNewtypeDeriving is not allowed in Safe Haskell > >> > >> > >> > >> This happens both in GHC 7.8 and GHC 7.10. > >> > >> > >> > >> I thought I remembered reading somewhere that GNTD is now part of the > safe > >> language? The GHC manual used to state that GNTD is not allowed in Safe > >> Haskell: > >> > >> > >> > >> > https://downloads.haskell.org/~ghc/7.6.3/docs/html/users_guide/safe-haskell.html#safe-language > >> > >> > >> > >> But this language on GNTD not being part of the safe language was > removed in > >> the 7.8 manual: > >> > >> > >> > >> > https://downloads.haskell.org/~ghc/7.8.2/docs/html/users_guide/safe-haskell.html#safe-language > >> > >> > >> > >> The GHC release notes don't say anything about this one way or the > other. > >> Thoughts? > >> > >> > >> _______________________________________________ > >> ghc-devs mailing list > >> ghc-devs at haskell.org > >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > >> > > _______________________________________________ > > ghc-devs mailing list > > ghc-devs at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs > > > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > -- J. Douglas McClean (781) 561-5540 (cell) -------------- next part -------------- An HTML attachment was scrubbed... URL: From amindfv at gmail.com Fri Apr 10 13:23:56 2015 From: amindfv at gmail.com (Tom Murphy) Date: Fri, 10 Apr 2015 09:23:56 -0400 Subject: [Haskell-cafe] Haskell on BBC Radio 4 In-Reply-To: References: Message-ID: If flash player isn't your thing, you can find mp3s here: http://www.bbc.co.uk/podcasts/series/r4codes (episode "Babel") Tom On Fri, Apr 10, 2015 at 9:00 AM, Colin Adams wrote: > Haskell featured strongly today in the last Episode of Radio 4's "Codes > that changed the world": > http://www.bbc.co.uk/programmes/b05prkh7 > > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicholls.mark at vimn.com Fri Apr 10 13:32:36 2015 From: nicholls.mark at vimn.com (Nicholls, Mark) Date: Fri, 10 Apr 2015 13:32:36 +0000 Subject: [Haskell-cafe] indexed writer monad References: <5512CA1C.5000506@ro-che.info> <5512DC3F.9040301@ro-che.info> <55132913.6090008@ro-che.info> Message-ID: Looking at this again.... I've done that, but noted that my "monoid" was constructed backwards... e.g. 
> myWriter2@(IxWriter (logs,a2)) = do > x <- (ireturn 3) > tell x > y <- (ireturn 5) > tell y > tell (show y) > return (x + y) Gives logs :: j -> (((j, Integer), Integer), String) if I wanted to access the head (i.e. the 1 inner inner inner Integer), I have to evaluate the whole expression (pretty much). This made me try to work out how to build the monoid the other way around.... i.e. logs :: j -> (Integer,(Integer,(String,j))) but that doesn?t seem to be consistent with the signature of ibind (or at least not in my head). Don't I want a Control.Effect instead? Then I think I can construct the correct signature in the type family Plus? I'll have a go, but it would be nice to know I'm not disappearing into a dead end. > On 25 Mar 2015, at 21:31, Roman Cheplyaka wrote: > > Alright, this does look like an indexed writer monad. > > Here's the indexed writer monad definition: > > newtype IxWriter c i j a = IxWriter { runIxWriter :: (c i j, a) } > > Here c is a Category, ie. an indexed Monoid. > > Under that constraint, you can make this an indexed monad (from the > 'indexed' package). That should be a good exercise that'll help you to > understand how this all works. > > In practice, you'll probably only need c = (->), so your writer > becomes isomorphic to (i -> j, a). This is similar to the ordinary > writer monad that uses difference lists as its monoid. > > The indexed 'tell' will be > > tell :: a -> IxWriter (->) z (z, a) () tell a = IxWriter (\z -> > (z,a), ()) > > And to run an indexed monad, you apply that i->j function to your end > marker. > > Does this help? > >> On 25/03/15 18:27, Nicholls, Mark wrote: >> Ok... >> >> Well lets just take the indexed writer >> >> So....for a writer we're going... >> >> e.g. >> >> (a,[c]) -> (a -> (b,[c])) -> (b,[c]) >> >> If I use the logging use case metaphor... >> So I "log" a few Strings (probably)....and out pops a (t,[String]) ? >> >> (I've never actually used one!) >> >> But what if I want to "log" different types... >> >> I want to "log" a string, then an integer bla bla... >> >> So I won't get the monoid [String] >> >> I should get something like a nest 2 tuple. >> >> do >> log 1 >> log "a" >> log 3 >> log "sdds" >> return 23 >> >> I could get a >> (Integer,(String,(Integer,(String,END)))) >> >> and then I could dereference this indexed monoid(?)...by splitting it into a head of a specific type and a tail...like a list. >> >> Maybe my use of "indexed" isn't correct. >> >> So it seems to me, that writer monad is dependent on the monoid >> append (of lists usually?)....and my "special" monad would be >> dependent on a "special" monoid append of basically >> >> (these are types) >> >> (a,b) ++ (c,d) = (a,b,c,d) >> (a,b,c,d) ++ (e) = (a,b,c,d,e) >> >> Which are encoded as nested 2 tuples (with an End marker) >> >> (a,(b,End) ++ (c,(d,End)) = (a,(b, (c,(d,End))) ? >> >> That sort of implies some sort of type family trickery...which isnt toooo bad (I've dabbled). >> >> Looks like an ixmonad to me....In my pigoeon Haskell I could probably wrestle with some code for a few days....but does such a thing already exist?...I don't want to reinvent the wheel, just mess about with it, and turn it into a cog (and then fail to map it back to my OO world). >> >> -----Original Message----- >> From: Roman Cheplyaka [mailto:roma at ro-che.info] >> Sent: 25 March 2015 4:03 PM >> To: Nicholls, Mark; haskell-cafe at haskell.org >> Subject: Re: [Haskell-cafe] indexed writer monad >> >> Sorry, I didn't mean to scare you off. 
>> >> By Category, I didn't mean the math concept; I meant simply the class from the Control.Category module. You don't need to learn any category theory to understand it. Since you seem to know already what Monoid is, try to compare those two classes, notice the similarities and see how (and why) one could be called an indexed version of the other. >> >> But if you don't know what "indexed" means, how do you know you need it? >> Perhaps you could describe your problem, and we could tell you what abstraction would fit that use case, be it indexed monad or something else. >> >>> On 25/03/15 17:22, Nicholls, Mark wrote: >>> Ah that assumes I know what a category is! (I did find some code that claimed the same thing)....maths doesn't scare me (much), and I suspect its nothing complicated (sounds like a category is a tuple then! Probably not), but I don't want to read a book on category theory to write a bit of code...yet. >>> >>> Ideally there would be a chapter in something like "learn you an indexed Haskell for the great good". >>> >>> Then I could take the code, use it...mess about with it...break it...put it back together in a slightly different shape and....bingo...it either works...or I find theres a good reason why it doesn't....(or I post a message to the caf?). >>> >>> -----Original Message----- >>> From: Roman Cheplyaka [mailto:roma at ro-che.info] >>> Sent: 25 March 2015 2:46 PM >>> To: Nicholls, Mark; haskell-cafe at haskell.org >>> Subject: Re: [Haskell-cafe] indexed writer monad >>> >>> An indexed monoid is just a Category. >>> >>>> On 25/03/15 16:32, Nicholls, Mark wrote: >>>> Anyone? >>>> >>>> >>>> >>>> I can handle monads, but I have something (actually in F#) that >>>> feels like it should be something like a indexed writer monad >>>> (which F# probably wouldn't support). >>>> >>>> >>>> >>>> So I thought I'd do some research in Haskell. >>>> >>>> >>>> >>>> I know little or nothing about indexed monad (though I have built >>>> the indexed state monad in C#). >>>> >>>> >>>> >>>> So I would assume there would be an indexed monoid (that looks at >>>> bit like a tuple?). >>>> >>>> >>>> >>>> e.g. >>>> >>>> >>>> >>>> (a,b) ++ (c,d) = (a,b,c,d) >>>> >>>> (a,b,c,d) ++ (e) = (a,b,c,d,e) >>>> >>>> >>>> >>>> ? >>>> >>>> >>>> >>>> There seems to be some stuff about "update monads", but it doesn't >>>> really look like a writer. >>>> >>>> >>>> >>>> I could do with playing around with an indexed writer, in order to >>>> get my head around what I'm doing..then try and capture what I'm >>>> doing.then try (and fail) to port it back. >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> CONFIDENTIALITY NOTICE >>>> >>>> This e-mail (and any attached files) is confidential and protected >>>> by copyright (and other intellectual property rights). If you are >>>> not the intended recipient please e-mail the sender and then delete >>>> the email and any attached files immediately. Any further use or >>>> dissemination is prohibited. >>>> >>>> While MTV Networks Europe has taken steps to ensure that this email >>>> and any attachments are virus free, it is your responsibility to >>>> ensure that this message and any attachments are virus free and do >>>> not affect your systems / data. >>>> >>>> Communicating by email is not 100% secure and carries risks such as >>>> delay, data corruption, non-delivery, wrongful interception and >>>> unauthorised amendment. 
If you communicate with us by e-mail, you >>>> acknowledge and assume these risks, and you agree to take >>>> appropriate measures to minimise these risks when e-mailing us. >>>> >>>> MTV Networks International, MTV Networks UK & Ireland, Greenhouse, >>>> Nickelodeon Viacom Consumer Products, VBSi, Viacom Brand Solutions >>>> International, Be Viacom, Viacom International Media Networks and >>>> VIMN and Comedy Central are all trading names of MTV Networks Europe. >>>> MTV Networks Europe is a partnership between MTV Networks Europe Inc. >>>> and Viacom Networks Europe Inc. Address for service in Great >>>> Britain is >>>> 17-29 Hawley Crescent, London, NW1 8TT. >>>> >>>> >>>> >>>> _______________________________________________ >>>> Haskell-Cafe mailing list >>>> Haskell-Cafe at haskell.org >>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >>> >>> CONFIDENTIALITY NOTICE >>> >>> This e-mail (and any attached files) is confidential and protected by copyright (and other intellectual property rights). If you are not the intended recipient please e-mail the sender and then delete the email and any attached files immediately. Any further use or dissemination is prohibited. >>> >>> While MTV Networks Europe has taken steps to ensure that this email and any attachments are virus free, it is your responsibility to ensure that this message and any attachments are virus free and do not affect your systems / data. >>> >>> Communicating by email is not 100% secure and carries risks such as delay, data corruption, non-delivery, wrongful interception and unauthorised amendment. If you communicate with us by e-mail, you acknowledge and assume these risks, and you agree to take appropriate measures to minimise these risks when e-mailing us. >>> >>> MTV Networks International, MTV Networks UK & Ireland, Greenhouse, Nickelodeon Viacom Consumer Products, VBSi, Viacom Brand Solutions International, Be Viacom, Viacom International Media Networks and VIMN and Comedy Central are all trading names of MTV Networks Europe. MTV Networks Europe is a partnership between MTV Networks Europe Inc. and Viacom Networks Europe Inc. Address for service in Great Britain is 17-29 Hawley Crescent, London, NW1 8TT. >> >> CONFIDENTIALITY NOTICE >> >> This e-mail (and any attached files) is confidential and protected by copyright (and other intellectual property rights). If you are not the intended recipient please e-mail the sender and then delete the email and any attached files immediately. Any further use or dissemination is prohibited. >> >> While MTV Networks Europe has taken steps to ensure that this email and any attachments are virus free, it is your responsibility to ensure that this message and any attachments are virus free and do not affect your systems / data. >> >> Communicating by email is not 100% secure and carries risks such as delay, data corruption, non-delivery, wrongful interception and unauthorised amendment. If you communicate with us by e-mail, you acknowledge and assume these risks, and you agree to take appropriate measures to minimise these risks when e-mailing us. >> >> MTV Networks International, MTV Networks UK & Ireland, Greenhouse, Nickelodeon Viacom Consumer Products, VBSi, Viacom Brand Solutions International, Be Viacom, Viacom International Media Networks and VIMN and Comedy Central are all trading names of MTV Networks Europe. MTV Networks Europe is a partnership between MTV Networks Europe Inc. and Viacom Networks Europe Inc. 
Address for service in Great Britain is 17-29 Hawley Crescent, London, NW1 8TT. > CONFIDENTIALITY NOTICE This e-mail (and any attached files) is confidential and protected by copyright (and other intellectual property rights). If you are not the intended recipient please e-mail the sender and then delete the email and any attached files immediately. Any further use or dissemination is prohibited. While MTV Networks Europe has taken steps to ensure that this email and any attachments are virus free, it is your responsibility to ensure that this message and any attachments are virus free and do not affect your systems / data. Communicating by email is not 100% secure and carries risks such as delay, data corruption, non-delivery, wrongful interception and unauthorised amendment. If you communicate with us by e-mail, you acknowledge and assume these risks, and you agree to take appropriate measures to minimise these risks when e-mailing us. MTV Networks International, MTV Networks UK & Ireland, Greenhouse, Nickelodeon Viacom Consumer Products, VBSi, Viacom Brand Solutions International, Be Viacom, Viacom International Media Networks and VIMN and Comedy Central are all trading names of MTV Networks Europe. MTV Networks Europe is a partnership between MTV Networks Europe Inc. and Viacom Networks Europe Inc. Address for service in Great Britain is 17-29 Hawley Crescent, London, NW1 8TT. _______________________________________________ Haskell-Cafe mailing list Haskell-Cafe at haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe CONFIDENTIALITY NOTICE This e-mail (and any attached files) is confidential and protected by copyright (and other intellectual property rights). If you are not the intended recipient please e-mail the sender and then delete the email and any attached files immediately. Any further use or dissemination is prohibited. While MTV Networks Europe has taken steps to ensure that this email and any attachments are virus free, it is your responsibility to ensure that this message and any attachments are virus free and do not affect your systems / data. Communicating by email is not 100% secure and carries risks such as delay, data corruption, non-delivery, wrongful interception and unauthorised amendment. If you communicate with us by e-mail, you acknowledge and assume these risks, and you agree to take appropriate measures to minimise these risks when e-mailing us. MTV Networks International, MTV Networks UK & Ireland, Greenhouse, Nickelodeon Viacom Consumer Products, VBSi, Viacom Brand Solutions International, Be Viacom, Viacom International Media Networks and VIMN and Comedy Central are all trading names of MTV Networks Europe. MTV Networks Europe is a partnership between MTV Networks Europe Inc. and Viacom Networks Europe Inc. Address for service in Great Britain is 17-29 Hawley Crescent, London, NW1 8TT. From nicholls.mark at vimn.com Fri Apr 10 15:12:19 2015 From: nicholls.mark at vimn.com (Nicholls, Mark) Date: Fri, 10 Apr 2015 15:12:19 +0000 Subject: [Haskell-cafe] indexed writer monad References: <5512CA1C.5000506@ro-che.info> <5512DC3F.9040301@ro-che.info> <55132913.6090008@ro-che.info> Message-ID: And Control.Effect.WriteOnceWriter Is what I'm after I think. -----Original Message----- From: Nicholls, Mark Sent: 10 April 2015 2:33 PM To: 'Roman Cheplyaka' Cc: 'haskell-cafe at haskell.org' Subject: RE: [Haskell-cafe] indexed writer monad Looking at this again.... I've done that, but noted that my "monoid" was constructed backwards... e.g. 
> myWriter2@(IxWriter (logs,a2)) = do > x <- (ireturn 3) > tell x > y <- (ireturn 5) > tell y > tell (show y) > return (x + y) Gives logs :: j -> (((j, Integer), Integer), String) if I wanted to access the head (i.e. the 1 inner inner inner Integer), I have to evaluate the whole expression (pretty much). This made me try to work out how to build the monoid the other way around.... i.e. logs :: j -> (Integer,(Integer,(String,j))) but that doesn?t seem to be consistent with the signature of ibind (or at least not in my head). Don't I want a Control.Effect instead? Then I think I can construct the correct signature in the type family Plus? I'll have a go, but it would be nice to know I'm not disappearing into a dead end. > On 25 Mar 2015, at 21:31, Roman Cheplyaka wrote: > > Alright, this does look like an indexed writer monad. > > Here's the indexed writer monad definition: > > newtype IxWriter c i j a = IxWriter { runIxWriter :: (c i j, a) } > > Here c is a Category, ie. an indexed Monoid. > > Under that constraint, you can make this an indexed monad (from the > 'indexed' package). That should be a good exercise that'll help you to > understand how this all works. > > In practice, you'll probably only need c = (->), so your writer > becomes isomorphic to (i -> j, a). This is similar to the ordinary > writer monad that uses difference lists as its monoid. > > The indexed 'tell' will be > > tell :: a -> IxWriter (->) z (z, a) () tell a = IxWriter (\z -> > (z,a), ()) > > And to run an indexed monad, you apply that i->j function to your end > marker. > > Does this help? > >> On 25/03/15 18:27, Nicholls, Mark wrote: >> Ok... >> >> Well lets just take the indexed writer >> >> So....for a writer we're going... >> >> e.g. >> >> (a,[c]) -> (a -> (b,[c])) -> (b,[c]) >> >> If I use the logging use case metaphor... >> So I "log" a few Strings (probably)....and out pops a (t,[String]) ? >> >> (I've never actually used one!) >> >> But what if I want to "log" different types... >> >> I want to "log" a string, then an integer bla bla... >> >> So I won't get the monoid [String] >> >> I should get something like a nest 2 tuple. >> >> do >> log 1 >> log "a" >> log 3 >> log "sdds" >> return 23 >> >> I could get a >> (Integer,(String,(Integer,(String,END)))) >> >> and then I could dereference this indexed monoid(?)...by splitting it into a head of a specific type and a tail...like a list. >> >> Maybe my use of "indexed" isn't correct. >> >> So it seems to me, that writer monad is dependent on the monoid >> append (of lists usually?)....and my "special" monad would be >> dependent on a "special" monoid append of basically >> >> (these are types) >> >> (a,b) ++ (c,d) = (a,b,c,d) >> (a,b,c,d) ++ (e) = (a,b,c,d,e) >> >> Which are encoded as nested 2 tuples (with an End marker) >> >> (a,(b,End) ++ (c,(d,End)) = (a,(b, (c,(d,End))) ? >> >> That sort of implies some sort of type family trickery...which isnt toooo bad (I've dabbled). >> >> Looks like an ixmonad to me....In my pigoeon Haskell I could probably wrestle with some code for a few days....but does such a thing already exist?...I don't want to reinvent the wheel, just mess about with it, and turn it into a cog (and then fail to map it back to my OO world). >> >> -----Original Message----- >> From: Roman Cheplyaka [mailto:roma at ro-che.info] >> Sent: 25 March 2015 4:03 PM >> To: Nicholls, Mark; haskell-cafe at haskell.org >> Subject: Re: [Haskell-cafe] indexed writer monad >> >> Sorry, I didn't mean to scare you off. 
>> >> By Category, I didn't mean the math concept; I meant simply the class from the Control.Category module. You don't need to learn any category theory to understand it. Since you seem to know already what Monoid is, try to compare those two classes, notice the similarities and see how (and why) one could be called an indexed version of the other. >> >> But if you don't know what "indexed" means, how do you know you need it? >> Perhaps you could describe your problem, and we could tell you what abstraction would fit that use case, be it indexed monad or something else. >> >>> On 25/03/15 17:22, Nicholls, Mark wrote: >>> Ah that assumes I know what a category is! (I did find some code that claimed the same thing)....maths doesn't scare me (much), and I suspect its nothing complicated (sounds like a category is a tuple then! Probably not), but I don't want to read a book on category theory to write a bit of code...yet. >>> >>> Ideally there would be a chapter in something like "learn you an indexed Haskell for the great good". >>> >>> Then I could take the code, use it...mess about with it...break it...put it back together in a slightly different shape and....bingo...it either works...or I find theres a good reason why it doesn't....(or I post a message to the caf?). >>> >>> -----Original Message----- >>> From: Roman Cheplyaka [mailto:roma at ro-che.info] >>> Sent: 25 March 2015 2:46 PM >>> To: Nicholls, Mark; haskell-cafe at haskell.org >>> Subject: Re: [Haskell-cafe] indexed writer monad >>> >>> An indexed monoid is just a Category. >>> >>>> On 25/03/15 16:32, Nicholls, Mark wrote: >>>> Anyone? >>>> >>>> >>>> >>>> I can handle monads, but I have something (actually in F#) that >>>> feels like it should be something like a indexed writer monad >>>> (which F# probably wouldn't support). >>>> >>>> >>>> >>>> So I thought I'd do some research in Haskell. >>>> >>>> >>>> >>>> I know little or nothing about indexed monad (though I have built >>>> the indexed state monad in C#). >>>> >>>> >>>> >>>> So I would assume there would be an indexed monoid (that looks at >>>> bit like a tuple?). >>>> >>>> >>>> >>>> e.g. >>>> >>>> >>>> >>>> (a,b) ++ (c,d) = (a,b,c,d) >>>> >>>> (a,b,c,d) ++ (e) = (a,b,c,d,e) >>>> >>>> >>>> >>>> ? >>>> >>>> >>>> >>>> There seems to be some stuff about "update monads", but it doesn't >>>> really look like a writer. >>>> >>>> >>>> >>>> I could do with playing around with an indexed writer, in order to >>>> get my head around what I'm doing..then try and capture what I'm >>>> doing.then try (and fail) to port it back. >>>> >>>> >>>> >>>> >>>> >>>> >>>> >>>> CONFIDENTIALITY NOTICE >>>> >>>> This e-mail (and any attached files) is confidential and protected >>>> by copyright (and other intellectual property rights). If you are >>>> not the intended recipient please e-mail the sender and then delete >>>> the email and any attached files immediately. Any further use or >>>> dissemination is prohibited. >>>> >>>> While MTV Networks Europe has taken steps to ensure that this email >>>> and any attachments are virus free, it is your responsibility to >>>> ensure that this message and any attachments are virus free and do >>>> not affect your systems / data. >>>> >>>> Communicating by email is not 100% secure and carries risks such as >>>> delay, data corruption, non-delivery, wrongful interception and >>>> unauthorised amendment. 

_______________________________________________
Haskell-Cafe mailing list
Haskell-Cafe at haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe

From tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk Fri Apr 10 16:41:02 2015
From: tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk (Tom Ellis)
Date: Fri, 10 Apr 2015 17:41:02 +0100
Subject: [Haskell-cafe] Eta-reducing case branches
Message-ID: <20150410164102.GL31520@weber>

Has anyone ever considered permitting case branches to be eta reduced?  For example, it is often nice to rewrite

    foo x = bar baz x

as

    foo = bar baz

Likewise, I have often wanted to rewrite

    case m of
        Nothing -> n
        Just x -> quux x

as

    case m of
        Nothing -> n
        Just -> quux

Am I missing an obvious reason this wouldn't work?
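(To be concrete, for constructors with more than one field I'm imagining the obvious generalisation: the branch body is simply applied to the constructor's fields in order.  With a made-up type

    data T = C Int String

the branch

    C -> f

would be shorthand for

    C x y -> f x y

so the right-hand side of an eta-reduced branch for an n-field constructor would have to be an n-ary function.)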
Tom From frank at fstaals.net Fri Apr 10 18:35:14 2015 From: frank at fstaals.net (Frank Staals) Date: Fri, 10 Apr 2015 20:35:14 +0200 Subject: [Haskell-cafe] Eta-reducing case branches In-Reply-To: <20150410164102.GL31520@weber> (Tom Ellis's message of "Fri, 10 Apr 2015 17:41:02 +0100") References: <20150410164102.GL31520@weber> Message-ID: Tom Ellis writes: > Has anyone ever considered permitting case branches to be eta reduced? For > example, it is often nice to rewrite > > foo x = bar baz x > > as > > foo = bar baz > > Likewise, I have often wanted to rewrite > > case m of > Nothing -> n > Just x -> quux x > > as > case m of > Nothing -> n > Just -> quux > > Am I missing an obvious reason this wouldn't work? > > Tom I would think that is a bit weird since Nothing and Just have different types. -- - Frank From chrisdone at gmail.com Fri Apr 10 18:54:23 2015 From: chrisdone at gmail.com (Christopher Done) Date: Fri, 10 Apr 2015 20:54:23 +0200 Subject: [Haskell-cafe] Eta-reducing case branches In-Reply-To: References: <20150410164102.GL31520@weber> Message-ID: Well, I think his proposal was that for a constructor C with n slots, the RHS of the case alt would be expected to be an n-ary function where arg*i* for *i..n* would have type T*i* for the types of each slot in C. I think it adds up logically... but it is rather odd. On 10 April 2015 at 20:35, Frank Staals wrote: > Tom Ellis writes: > > > Has anyone ever considered permitting case branches to be eta reduced? > For > > example, it is often nice to rewrite > > > > foo x = bar baz x > > > > as > > > > foo = bar baz > > > > Likewise, I have often wanted to rewrite > > > > case m of > > Nothing -> n > > Just x -> quux x > > > > as > > case m of > > Nothing -> n > > Just -> quux > > > > Am I missing an obvious reason this wouldn't work? > > > > Tom > > I would think that is a bit weird since Nothing and Just have different > types. > > -- > > - Frank > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.feuer at gmail.com Fri Apr 10 22:54:18 2015 From: david.feuer at gmail.com (David Feuer) Date: Fri, 10 Apr 2015 18:54:18 -0400 Subject: [Haskell-cafe] Generalized Newtype Deriving not allowed in Safe Haskell In-Reply-To: References: <0d913dcca48b47abadb7ab6b4125b906@DB4PR30MB030.064d.mgd.msft.net> <86F7E6C0-6CD5-40CD-BAF9-1CA07FA80872@cis.upenn.edu> Message-ID: I think a module exporting some but not all data constructors of a type is fundamentally broken behavior. I would generally be in favor of prohibiting it altogether, and I would be strongly opposed to letting continued support for it break anything else. On Apr 10, 2015 9:05 AM, "Douglas McClean" wrote: > I don't think that 2+3 is equivalent to 2', because an explicit import > list or hiding list could've brought only some of the datatype's > constructors into visibility. > > On Fri, Apr 10, 2015 at 8:07 AM, Richard Eisenberg > wrote: > >> Here's an idea: For a module to be Safe, then for each exported datatype, >> one of the following must hold: >> 1) The datatype comes with a role annotation. >> 2) The module exports all of the datatype's constructors. >> 3) If the datatype is defined in a place other than the current module, >> the current module exports no fewer data constructors than are exported in >> the datatype's defining module. >> >> Why? 
>> 1) The role annotation, even if it has no effect, shows that the >> programmer has considered roles. Any mistake here is clearly the >> programmer's fault. >> 2) This datatype is clearly meant not to be abstract. `Coercible` then >> gives clients no more power than they already have. >> 3) This is subtler. It is a common idiom to export a datatype's >> constructors from a package-internal module, but then never to export the >> constructors beyond the package. If such a datatype has a role annotation >> (in its defining module, of course), then we're fine, even if it is >> exported abstractly later. However, suppose we are abstractly re-exporting >> a datatype that exports its constructors from its defining module. If there >> is no role annotation on the datatype, we're in trouble and should fail. >> BUT, if the datatype were exported abstractly in its defining module, then >> we don't need to fail on re-export, because nothing has changed. >> >> >> Actually, we could simplify the conditions. Change (2) to: >> >> 2') The module exports all of the datatype's visible constructors. >> >> I think explaining in terms of separate rules (2) and (3) is a little >> clearer, because the re-export case is slightly subtle, and this subtlety >> can be lost in (2'). >> >> This proposal would require tracking (in interface files, too) whether or >> not a datatype comes with a role annotation. This isn't hard, though. It >> might even help in pretty-printing. >> >> >> An alternative would be to have a way of setting roles differently on >> export than internally. I don't think this breaks the type system, but it's >> yet another thing to specify and support. And we'd have to consider the >> possibility that some module will import a datatype from multiple >> re-exporting modules, each with different ascribed role annotations. Is >> this an error? Does GHC take some sort of least upper bound? I prefer not >> to go here, but there's nothing terribly wrong with this approach. >> >> Richard >> >> On Apr 10, 2015, at 9:37 AM, David Terei wrote: >> >> > I'll prepare a patch for the userguide soon. >> > >> > As for something better, yes I think we can and should. It's on my >> > todo list :) Basically, the new-GND design has all the mechanisms to >> > be safe, but sadly the defaults are rather worrying. Without explicit >> > annotations from the user, module abstractions are broken. This is why >> > we left GND out of Safe Haskell for the moment as it is a subtle and >> > easy mistake to make. >> > >> > If the module contained explicit role annotations then it could be >> > allowed. The discussion in >> > https://ghc.haskell.org/trac/ghc/ticket/8827 has other solutions that >> > I prefer, such as only exporting the Coerce instance if all the >> > constructors are exported, it seems that the ship sailed on these >> > bigger changes sadly. >> > >> > Cheers, >> > David >> > >> > On 9 April 2015 at 00:56, Simon Peyton Jones >> wrote: >> >> There is a long discussion on >> https://ghc.haskell.org/trac/ghc/ticket/8827 >> >> about whether the new Coercible story makes GND ok for Safe Haskell. >> At a >> >> type-soundness level, definitely yes. But there are other >> less-clear-cut >> >> issues like ?breaking abstractions? to consider. The decision on the >> ticket >> >> (comment:36) seems to be: GND stays out of Safe Haskell for now, but >> there >> >> is room for a better proposal. >> >> >> >> >> >> >> >> I don?t have an opinion myself. 
David Terei and David Mazieres are in >> the >> >> driving seat, but I?m sure they?ll be responsive to user input. >> >> >> >> >> >> >> >> However, I think the user manual may not have kept up with #8827. The >> >> sentence ?GeneralizedNewtypeDeriving ? It can be used to violate >> constructor >> >> access control, by allowing untrusted code to manipulate protected data >> >> types in ways the data type author did not intend, breaking invariants >> they >> >> have established.? vanished from the 7.8 user manual (links below). >> Maybe >> >> it should be restored. >> >> >> >> >> >> >> >> Safe Haskell aficionados, would you like to offer a patch for the >> manual? >> >> And maybe also a less drastic remedy than omitting GND altogether? >> >> >> >> >> >> >> >> Simon >> >> >> >> >> >> >> >> From: Omari Norman [mailto:omari at smileystation.com] >> >> Sent: 09 April 2015 02:44 >> >> To: haskell Cafe >> >> Subject: Generalized Newtype Deriving not allowed in Safe Haskell >> >> >> >> >> >> >> >> When compiling code with Generalized Newtype Deriving and the >> -fwarn-unsafe >> >> flag, I get >> >> >> >> >> >> >> >> -XGeneralizedNewtypeDeriving is not allowed in Safe Haskell >> >> >> >> >> >> >> >> This happens both in GHC 7.8 and GHC 7.10. >> >> >> >> >> >> >> >> I thought I remembered reading somewhere that GNTD is now part of the >> safe >> >> language? The GHC manual used to state that GNTD is not allowed in >> Safe >> >> Haskell: >> >> >> >> >> >> >> >> >> https://downloads.haskell.org/~ghc/7.6.3/docs/html/users_guide/safe-haskell.html#safe-language >> >> >> >> >> >> >> >> But this language on GNTD not being part of the safe language was >> removed in >> >> the 7.8 manual: >> >> >> >> >> >> >> >> >> https://downloads.haskell.org/~ghc/7.8.2/docs/html/users_guide/safe-haskell.html#safe-language >> >> >> >> >> >> >> >> The GHC release notes don't say anything about this one way or the >> other. >> >> Thoughts? >> >> >> >> >> >> _______________________________________________ >> >> ghc-devs mailing list >> >> ghc-devs at haskell.org >> >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> >> >> > _______________________________________________ >> > ghc-devs mailing list >> > ghc-devs at haskell.org >> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs >> > >> >> _______________________________________________ >> Haskell-Cafe mailing list >> Haskell-Cafe at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> > > > > -- > J. Douglas McClean > > (781) 561-5540 (cell) > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mle+hs at mega-nerd.com Sat Apr 11 08:22:22 2015 From: mle+hs at mega-nerd.com (Erik de Castro Lopo) Date: Sat, 11 Apr 2015 01:22:22 -0700 Subject: [Haskell-cafe] Eta-reducing case branches In-Reply-To: <20150410164102.GL31520@weber> References: <20150410164102.GL31520@weber> Message-ID: <20150411012222.83164fd0131ec57e59a0c9c3@mega-nerd.com> Tom Ellis wrote: > Likewise, I have often wanted to rewrite > > case m of > Nothing -> n > Just x -> quux x Why not emply the maybe function (b -> (a -> b) -> Maybe a -> b) maybe n quux m Erik -- ---------------------------------------------------------------------- Erik de Castro Lopo http://www.mega-nerd.com/ From mle+hs at mega-nerd.com Sat Apr 11 08:38:00 2015 From: mle+hs at mega-nerd.com (Erik de Castro Lopo) Date: Sat, 11 Apr 2015 01:38:00 -0700 Subject: [Haskell-cafe] Anyone interested in taking over network-uri? In-Reply-To: References: Message-ID: <20150411013800.953d4769771214526517a6af@mega-nerd.com> Michael Snoyman wrote: > Does that sound like a reasonable setup to both of you? That sounds like an ideal combination of a relative newcomer with someone with a lot of experience and expertise. Thanks to both of you! Erik -- ---------------------------------------------------------------------- Erik de Castro Lopo http://www.mega-nerd.com/ From brendan.g.hay at gmail.com Sun Apr 12 07:55:05 2015 From: brendan.g.hay at gmail.com (Brendan Hay) Date: Sun, 12 Apr 2015 09:55:05 +0200 Subject: [Haskell-cafe] Dealing with GHC 7.10 prelude imports Message-ID: Hi, I've run into a couple of cases when attempting to support multiple GHC versions in my libraries (7.6.3 -> 7.10) where I've repeatedly made mistakes due to being warned about unused imports for various modules that are now part of the Prelude such as Data.Monoid, Data.Foldable, etc. which I subsequently remove either manually or via editor automation sufficiently indistinguishable from magic. This then results in successful compilation on 7.10 and failure on earlier versions of GHC due to missing imports (ie. Data.Monoid (mappend, mempty)), which prior to my current workflow of manually building on multiple versions of GHC before releasing a new version, manifested once or twice only after uploading to Hackage. Now this is all user/workflow error on my part, but I wondered if others have some kind of trick up their sleeve for avoiding these kind of issues? I could attempt to tailor the compiler's warning flags appropriately, but it bodes ill for spotting genuinely unused imports. Cheers, Brendan -------------- next part -------------- An HTML attachment was scrubbed... URL: From alan.zimm at gmail.com Sun Apr 12 08:14:19 2015 From: alan.zimm at gmail.com (Alan & Kim Zimmerman) Date: Sun, 12 Apr 2015 10:14:19 +0200 Subject: [Haskell-cafe] Dealing with GHC 7.10 prelude imports In-Reply-To: References: Message-ID: See https://ghc.haskell.org/trac/ghc/wiki/Migration/7.10. I have just solved one of these by doing {-# LANGUAGE CPP #-} #if __GLASGOW_HASKELL__ < 709 import Data.Monoid hiding ((<>)) #endif Alan On Sun, Apr 12, 2015 at 9:55 AM, Brendan Hay wrote: > Hi, > > I've run into a couple of cases when attempting to support multiple GHC > versions in my libraries (7.6.3 -> 7.10) where I've repeatedly made > mistakes > due to being warned about unused imports for various modules that are now > part > of the Prelude such as Data.Monoid, Data.Foldable, etc. which I > subsequently > remove either manually or via editor automation sufficiently > indistinguishable > from magic. 
> > This then results in successful compilation on 7.10 and failure on earlier > versions of GHC due to missing imports (ie. Data.Monoid (mappend, mempty)), > which prior to my current workflow of manually building on multiple > versions of > GHC before releasing a new version, manifested once or twice only after > uploading to Hackage. > > Now this is all user/workflow error on my part, but I wondered if others > have > some kind of trick up their sleeve for avoiding these kind of issues? I > could > attempt to tailor the compiler's warning flags appropriately, but it bodes > ill for > spotting genuinely unused imports. > > Cheers, > Brendan > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nukasu.kanaka at gmail.com Sun Apr 12 09:41:15 2015 From: nukasu.kanaka at gmail.com (Nikita Volkov) Date: Sun, 12 Apr 2015 12:41:15 +0300 Subject: [Haskell-cafe] Dealing with GHC 7.10 prelude imports In-Reply-To: References: Message-ID: You can use a custom Prelude. E.g., the "base-prelude" project takes care of this problem: http://hackage.haskell.org/package/base-prelude 2015-04-12 11:14 GMT+03:00 Alan & Kim Zimmerman : > See https://ghc.haskell.org/trac/ghc/wiki/Migration/7.10. > > I have just solved one of these by doing > > {-# LANGUAGE CPP #-} > #if __GLASGOW_HASKELL__ < 709 > import Data.Monoid hiding ((<>)) > #endif > > Alan > > > On Sun, Apr 12, 2015 at 9:55 AM, Brendan Hay > wrote: > >> Hi, >> >> I've run into a couple of cases when attempting to support multiple GHC >> versions in my libraries (7.6.3 -> 7.10) where I've repeatedly made >> mistakes >> due to being warned about unused imports for various modules that are now >> part >> of the Prelude such as Data.Monoid, Data.Foldable, etc. which I >> subsequently >> remove either manually or via editor automation sufficiently >> indistinguishable >> from magic. >> >> This then results in successful compilation on 7.10 and failure on earlier >> versions of GHC due to missing imports (ie. Data.Monoid (mappend, >> mempty)), >> which prior to my current workflow of manually building on multiple >> versions of >> GHC before releasing a new version, manifested once or twice only after >> uploading to Hackage. >> >> Now this is all user/workflow error on my part, but I wondered if others >> have >> some kind of trick up their sleeve for avoiding these kind of issues? I >> could >> attempt to tailor the compiler's warning flags appropriately, but it >> bodes ill for >> spotting genuinely unused imports. >> >> Cheers, >> Brendan >> >> _______________________________________________ >> Haskell-Cafe mailing list >> Haskell-Cafe at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> >> > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From benno.fuenfstueck at gmail.com Sun Apr 12 10:37:44 2015 From: benno.fuenfstueck at gmail.com (=?UTF-8?B?QmVubm8gRsO8bmZzdMO8Y2s=?=) Date: Sun, 12 Apr 2015 10:37:44 +0000 Subject: [Haskell-cafe] Dealing with GHC 7.10 prelude imports In-Reply-To: References: Message-ID: > I've run into a couple of cases when attempting to support multiple GHC versions in my libraries (7.6.3 -> 7.10) where I've repeatedly made mistakes due to being warned about unused imports for various modules that are now part of the Prelude such as Data.Monoid, Data.Foldable, etc. which I subsequently remove either manually or via editor automation sufficiently indistinguishable from magic. A trick is to import Prelude at the very end, like: import Control.Applicative import Data.Monoid ... import Prelude Since the unused import check is done from top to bottom, and you almost always use *something* from the Prelude, this will suppress the warning. There are some problems with qualified/explicit import lists if I recall correctly though. But it works for me most of the time. Regards, Benno -------------- next part -------------- An HTML attachment was scrubbed... URL: From brendan.g.hay at gmail.com Sun Apr 12 19:45:22 2015 From: brendan.g.hay at gmail.com (Brendan Hay) Date: Sun, 12 Apr 2015 21:45:22 +0200 Subject: [Haskell-cafe] Dealing with GHC 7.10 prelude imports In-Reply-To: References: Message-ID: Thanks for the tips! On 12 April 2015 at 12:37, Benno F?nfst?ck wrote: > > I've run into a couple of cases when attempting to support multiple GHC > versions in my libraries (7.6.3 -> 7.10) where I've repeatedly made > mistakes due to being warned about unused imports for various modules that > are now part of the Prelude such as Data.Monoid, Data.Foldable, etc. which > I subsequently remove either manually or via editor automation sufficiently > indistinguishable from magic. > > A trick is to import Prelude at the very end, like: > > import Control.Applicative > import Data.Monoid > ... > import Prelude > > Since the unused import check is done from top to bottom, and you almost > always use *something* from the Prelude, this will suppress the warning. > > There are some problems with qualified/explicit import lists if I recall > correctly though. But it works for me most of the time. > > Regards, > Benno > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eir at cis.upenn.edu Sun Apr 12 20:01:22 2015 From: eir at cis.upenn.edu (Richard Eisenberg) Date: Sun, 12 Apr 2015 21:01:22 +0100 Subject: [Haskell-cafe] Generalized Newtype Deriving not allowed in Safe Haskell In-Reply-To: References: <0d913dcca48b47abadb7ab6b4125b906@DB4PR30MB030.064d.mgd.msft.net> Message-ID: <2C81A9E3-EA90-46E6-8A52-7B0300F1E046@cis.upenn.edu> On Apr 12, 2015, at 9:51 AM, David Terei wrote: > > Ideally I'd like to find a way forward that works for everyone and > isn't just a Safe Haskell mode setting. Agreed. I'm not convinced this can be done, but it's certainly worth trying. > > I think the first question is, are there situations where you'd want > to use `coerce` internally to a module but disallow it externally? The > role mechanism is a little awkward as it doesn't allow this (although > it does for newtype's). If yes, then I think we should start there. 
Yes, the ability to use `coerce` within one module but not elsewhere would be nice. This can currently be simulated (without too much difficulty) with newtypes. A datatype D can have a more permissive role signature than an equivalent newtype N (where `newtype N = MkN D`). The package then exports N (without its constructor). This effectively allows local uses of `coerce`, even for datatypes. A more direct mechanism would be better, but I don't think we should bend over backwards for it. > > If it seems we don't need external vs internal control, then we could > simply change the default to be that GHC sets referential type > parameters to nominal and allows them to be weakened to referential > through role annotations. We could use hackage to test how much > breakage this would cause. I worry that the breakage would be significant. But, now that authors have had a chance to put in role annotations, maybe it wouldn't be so bad. The change to GHC to make this happen is trivial: just change default_role in TcTyDecls.initialRoleEnv1. I don't have the infrastructure around to make an all-of-Hackage test, but I'm happy to support someone else who does. Richard From aditya.siram at gmail.com Sun Apr 12 23:53:39 2015 From: aditya.siram at gmail.com (aditya siram) Date: Sun, 12 Apr 2015 18:53:39 -0500 Subject: [Haskell-cafe] [Announcement] FLTKHS - Bindings to the FLTK GUI Toolkit Message-ID: I'm pleased to announce the first release of Haskell bindings [1] to the FLTK GUI [2] toolkit. It now works smoothly on Windows (64-bit), Linux and Mac allowing you to create truly cross-platform native GUI applications in pure Haskell and deploy statically linked executables with no dependencies. Most of the FLTK API is covered except for a few minor widgets which I plan to get to in the next release. Motivation behind the package and installation instructions are found in the Haddocks [3]. And to get you started it ships with a number of demos. If you have any issues please report them on the Github [4] page. I'd also love any other feedback so feel free to comment here or email me at the address listed on the Hackage [5] page. Hope you enjoy! [1] https://hackage.haskell.org/package/fltkhs-0.1.0.2 [2] http://fltk.org [3] https://hackage.haskell.org/package/fltkhs-0.1.0.2/docs/Graphics-UI-FLTK-LowLevel-FLTKHS.html [4] http://github.com/deech/fltkhs [5] https://hackage.haskell.org/package/fltkhs-0.1.0.2 -------------- next part -------------- An HTML attachment was scrubbed... URL: From mantkiew at gsd.uwaterloo.ca Mon Apr 13 02:06:28 2015 From: mantkiew at gsd.uwaterloo.ca (Michal Antkiewicz) Date: Sun, 12 Apr 2015 22:06:28 -0400 Subject: [Haskell-cafe] Dealing with GHC 7.10 prelude imports In-Reply-To: References: Message-ID: Hi, the only problem with the "import Prelude" trick is that it does not help when you explicitly list what you import. For example, in one of my files I had: import Control.Applicative ((<$>)) which will still result in a warning. Simply change it to import Control.Applicative import Prelude However, sometimes there are conflicting names due to reexports. For example, forM is both in Data.Traversable and Control.Monad. In such a case, use "hiding" for one of them. Also change import () to import hiding () if you want to avoid CPP at all cost. I too went through a few cycles 7.8.4 <-> 7.10.1 trying to make the code warning free on both (I don't use Travis but having minghc on Windows or using HVR's PPA on Ubuntu helps a lot). 
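For what it's worth, the import block I converged on looks roughly like this (only a sketch, using the modules mentioned above; adjust the explicit and hiding lists to what your module actually uses):

    import Control.Applicative          -- <$>, <*> etc. on GHC < 7.10; harmless overlap with Prelude on 7.10
    import Data.Monoid                  -- mempty, mappend on GHC < 7.10
    import Data.Traversable (forM)      -- take forM from here...
    import Control.Monad hiding (forM)  -- ...and hide the Control.Monad one to avoid the clash
    import Prelude                      -- imported last, so the imports above aren't flagged as redundant on 7.10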
Michal On Sun, Apr 12, 2015 at 3:45 PM, Brendan Hay wrote: > Thanks for the tips! > > On 12 April 2015 at 12:37, Benno F?nfst?ck > wrote: > >> > I've run into a couple of cases when attempting to support multiple GHC >> versions in my libraries (7.6.3 -> 7.10) where I've repeatedly made >> mistakes due to being warned about unused imports for various modules that >> are now part of the Prelude such as Data.Monoid, Data.Foldable, etc. which >> I subsequently remove either manually or via editor automation sufficiently >> indistinguishable from magic. >> >> A trick is to import Prelude at the very end, like: >> >> import Control.Applicative >> import Data.Monoid >> ... >> import Prelude >> >> Since the unused import check is done from top to bottom, and you almost >> always use *something* from the Prelude, this will suppress the warning. >> >> There are some problems with qualified/explicit import lists if I recall >> correctly though. But it works for me most of the time. >> >> Regards, >> Benno >> >> _______________________________________________ >> Haskell-Cafe mailing list >> Haskell-Cafe at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> >> > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From simonpj at microsoft.com Mon Apr 13 07:10:59 2015 From: simonpj at microsoft.com (Simon Peyton Jones) Date: Mon, 13 Apr 2015 07:10:59 +0000 Subject: [Haskell-cafe] Generalized Newtype Deriving not allowed in Safe Haskell In-Reply-To: References: <0d913dcca48b47abadb7ab6b4125b906@DB4PR30MB030.064d.mgd.msft.net> Message-ID: David If you would like to lead a debate, and drive it to a conclusion, that would be most helpful. Usually it's constructive to write a wiki page that sets out the design choices, with examples to illustrate their consequences, to set the terms of the debate. Otherwise you risk misunderstandings, with red herrings being discussed repeatedly. Thanks Simon | -----Original Message----- | From: davidterei at gmail.com [mailto:davidterei at gmail.com] On Behalf Of | David Terei | Sent: 12 April 2015 09:52 | To: Simon Peyton Jones | Cc: Omari Norman; ghc-devs at haskell.org; haskell Cafe | Subject: Re: Generalized Newtype Deriving not allowed in Safe Haskell | | On 10 April 2015 at 01:48, Simon Peyton Jones | wrote: | > | prefer, such as only exporting the Coerce instance if all the | > | constructors are exported, it seems that the ship sailed on these | > | > Coercible is relatively recent; I don't think we should regard it as | cast in stone. | > | > But yes, the Coerbible instance of a newtype is only available when | the data constructor for the newtype is lexically in scope. | | Yes, so as you point out in the paper, this is done to preserve | abstractions, but the same rule isn't applied to data types since some | types like IORef don't even have constructors that can be in scope. | | Ideally I'd like to find a way forward that works for everyone and | isn't just a Safe Haskell mode setting. | | I think the first question is, are there situations where you'd want to | use `coerce` internally to a module but disallow it externally? The | role mechanism is a little awkward as it doesn't allow this (although | it does for newtype's). If yes, then I think we should start there. 
| | If it seems we don't need external vs internal control, then we could | simply change the default to be that GHC sets referential type | parameters to nominal and allows them to be weakened to referential | through role annotations. We could use hackage to test how much | breakage this would cause. | | The third option is something Safe Haskell specific, so probably | applying the newtype constructor rule to data types. | | > | > Simon | > | > | -----Original Message----- | > | From: davidterei at gmail.com [mailto:davidterei at gmail.com] On Behalf | > | Of David Terei | > | Sent: 10 April 2015 09:38 | > | To: Simon Peyton Jones | > | Cc: Omari Norman; haskell Cafe; ghc-devs at haskell.org | > | Subject: Re: Generalized Newtype Deriving not allowed in Safe | > | Haskell | > | | > | I'll prepare a patch for the userguide soon. | > | | > | As for something better, yes I think we can and should. It's on my | > | todo list :) Basically, the new-GND design has all the mechanisms | > | to be safe, but sadly the defaults are rather worrying. Without | > | explicit annotations from the user, module abstractions are | broken. | > | This is why we left GND out of Safe Haskell for the moment as it | is | > | a subtle and easy mistake to make. | > | | > | If the module contained explicit role annotations then it could be | > | allowed. The discussion in | > | https://ghc.haskell.org/trac/ghc/ticket/8827 has other solutions | > | that I prefer, such as only exporting the Coerce instance if all | > | the constructors are exported, it seems that the ship sailed on | > | these bigger changes sadly. | > | | > | Cheers, | > | David | > | | > | On 9 April 2015 at 00:56, Simon Peyton Jones | > | | > | wrote: | > | > There is a long discussion on | > | > https://ghc.haskell.org/trac/ghc/ticket/8827 | > | > about whether the new Coercible story makes GND ok for Safe | Haskell. | > | > At a type-soundness level, definitely yes. But there are other | > | > less-clear-cut issues like ?breaking abstractions? to consider. | > | The > decision on the ticket > (comment:36) seems to be: GND | stays | > | out of Safe Haskell for now, but > there is room for a better | > | proposal. | > | > | > | > | > | > | > | > I don?t have an opinion myself. David Terei and David Mazieres | > | are in > the driving seat, but I?m sure they?ll be responsive to | user input. | > | > | > | > | > | > | > | > However, I think the user manual may not have kept up with | #8827. | > | The | > | > sentence ?GeneralizedNewtypeDeriving ? It can be used to violate | > | > constructor access control, by allowing untrusted code to | > | manipulate > protected data types in ways the data type author did | > | not intend, > breaking invariants they have established.? | vanished | > | from the 7.8 > user manual (links below). Maybe it should be | restored. | > | > | > | > | > | > | > | > Safe Haskell aficionados, would you like to offer a patch for | the | > | manual? | > | > And maybe also a less drastic remedy than omitting GND | altogether? | > | > | > | > | > | > | > | > Simon | > | > | > | > | > | > | > | > From: Omari Norman [mailto:omari at smileystation.com] > Sent: 09 | > | April 2015 02:44 > To: haskell Cafe > Subject: Generalized | Newtype | > | Deriving not allowed in Safe Haskell > > > > When compiling | code | > | with Generalized Newtype Deriving and the > -fwarn-unsafe flag, I | > | get > > > > -XGeneralizedNewtypeDeriving is not allowed in Safe | > | Haskell > > > > This happens both in GHC 7.8 and GHC 7.10. 
| > | > | > | > | > | > | > | > I thought I remembered reading somewhere that GNTD is now part | of | > | the > safe language? The GHC manual used to state that GNTD is | not | > | allowed > in Safe > Haskell: | > | > | > | > | > | > | > | > | > | | https://downloads.haskell.org/~ghc/7.6.3/docs/html/users_guide/safe- | > | ha | > | > skell.html#safe-language | > | > | > | > | > | > | > | > But this language on GNTD not being part of the safe language | was | > | > removed in the 7.8 manual: | > | > | > | > | > | > | > | > | > | | https://downloads.haskell.org/~ghc/7.8.2/docs/html/users_guide/safe- | > | ha | > | > skell.html#safe-language | > | > | > | > | > | > | > | > The GHC release notes don't say anything about this one way or | > | the other. | > | > Thoughts? | > | > | > | > | > | > _______________________________________________ | > | > ghc-devs mailing list | > | > ghc-devs at haskell.org | > | > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs | > | > | > _______________________________________________ | > ghc-devs mailing list | > ghc-devs at haskell.org | > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs From michael at snoyman.com Mon Apr 13 08:54:14 2015 From: michael at snoyman.com (Michael Snoyman) Date: Mon, 13 Apr 2015 08:54:14 +0000 Subject: [Haskell-cafe] Static executables in minimal Docker containers Message-ID: I'm trying to put together a minimal Docker container consisting of nothing but GHC-compiled static executables. Below you can see a full interaction I've had with GHC and Docker. The high level summary is that: * When compiled statically, the executable runs just fine in both my host OS (Ubuntu 14.04) and an Ubuntu 14.04 Docker image * That same executable run from a busybox (or a "scratch" image, not shown here since it's slightly longer to set up) hangs and then runs out of memory I've watched the process in top, and it uses up a huge amount of CPU and memory. I get the same behavior whether I compiled with or without optimizations or the multithreaded runtime. I also get identical behavior with both GHC 7.8.4 and 7.10.1. I'm not sure how best to proceed with trying to debug this, any suggestions? vagrant at vagrant-ubuntu-trusty-64:~/Desktop$ cat > hello.hs < From magnus at therning.org Mon Apr 13 09:07:49 2015 From: magnus at therning.org (Magnus Therning) Date: Mon, 13 Apr 2015 11:07:49 +0200 Subject: [Haskell-cafe] Static executables in minimal Docker containers In-Reply-To: References: Message-ID: On 13 April 2015 at 10:54, Michael Snoyman wrote: > I'm trying to put together a minimal Docker container consisting of nothing > but GHC-compiled static executables. Below you can see a full interaction > I've had with GHC and Docker. The high level summary is that: > > * When compiled statically, the executable runs just fine in both my host OS > (Ubuntu 14.04) and an Ubuntu 14.04 Docker image > * That same executable run from a busybox (or a "scratch" image, not shown > here since it's slightly longer to set up) hangs and then runs out of memory >From what I remember busybox allows for quite a bit of configurability, so "a busybox" might need a bit more details mabye. Have you attempted running it under `strace` and/or `ltrace`? 
/M -- Magnus Therning OpenPGP: 0xAB4DFBA4 email: magnus at therning.org jabber: magnus at therning.org twitter: magthe http://therning.org/magnus From michael at snoyman.com Mon Apr 13 09:11:14 2015 From: michael at snoyman.com (Michael Snoyman) Date: Mon, 13 Apr 2015 09:11:14 +0000 Subject: [Haskell-cafe] Static executables in minimal Docker containers In-Reply-To: References: Message-ID: On Mon, Apr 13, 2015 at 12:07 PM Magnus Therning wrote: > On 13 April 2015 at 10:54, Michael Snoyman wrote: > > I'm trying to put together a minimal Docker container consisting of > nothing > > but GHC-compiled static executables. Below you can see a full interaction > > I've had with GHC and Docker. The high level summary is that: > > > > * When compiled statically, the executable runs just fine in both my > host OS > > (Ubuntu 14.04) and an Ubuntu 14.04 Docker image > > * That same executable run from a busybox (or a "scratch" image, not > shown > > here since it's slightly longer to set up) hangs and then runs out of > memory > > From what I remember busybox allows for quite a bit of > configurability, so "a busybox" might need a bit more details mabye. > > Have you attempted running it under `strace` and/or `ltrace`? > > > Sorry, I left off a word: I meant "a busybox image." There's a standard busybox Docker image which I'm testing with via the command I pasted below. There are certainly ways to configure busybox itself, but it should be trivial for others to reproduce this error simply by running my series of commands. Unfortunately, strace and ltrace aren't available in that Docker image, but it's a good idea to see if I can get them running there somehow. Michael -------------- next part -------------- An HTML attachment was scrubbed... URL: From michael at snoyman.com Mon Apr 13 10:02:45 2015 From: michael at snoyman.com (Michael Snoyman) Date: Mon, 13 Apr 2015 10:02:45 +0000 Subject: [Haskell-cafe] Improvements to package hosting and security Message-ID: Many of you saw the blog post Mathieu wrote[1] about having more composable community infrastructure, which in particular focused on improvements to Hackage. I've been discussing some of these ideas with both Mathieu and others in the community working on some similar thoughts. I've also separately spent some time speaking with Chris about package signing[2]. Through those discussions, it's become apparent to me that there are in fact two core pieces of functionality we're relying on Hackage for today: * A centralized location for accessing package metadata (i.e., the cabal files) and the package contents themselves (i.e., the sdist tarballs) * A central authority for deciding who is allowed to make releases of packages, and make revisions to cabal files In my opinion, fixing the first problem is in fact very straightforward to do today using existing tools. FP Complete already hosts a full Hackage mirror[3] backed by S3, for instance, and having the metadata mirrored to a Git repository as well is not a difficult technical challenge. This is the core of what Mathieu was proposing as far as composable infrastructure, corresponding to next actions 1 and 3 at the end of his blog post (step 2, modifying Hackage, is not a prerequesite). In my opinion, such a system would far surpass in usability, reliability, and extensibility our current infrastructure, and could be rolled out in a few days at most. However, that second point- the central authority- is the more interesting one. 
As it stands, our entire package ecosystem is placing a huge level of trust in Hackage, without any serious way to vet what's going on there. Attack vectors abound, e.g.: * Man in the middle attacks: as we are all painfully aware, cabal-install does not support HTTPS, so a MITM attack on downloads from Hackage is trivial * A breach of the Hackage Server codebase would allow anyone to upload nefarious code[4] * Any kind of system level vulnerability could allow an attacker to compromise the server in the same way Chris's package signing work addresses most of these vulnerabilities, by adding a layer of cryptographic signatures on top of Hackage as the central authority. I'd like to propose taking this a step further: removing Hackage as the central authority, and instead relying entirely on cryptographic signatures to release new packages. I wrote up a strawman proposal last week[5] which clearly needs work to be a realistic option. My question is: are people interested in moving forward on this? If there's no interest, and everyone is satisfied with continuing with the current Hackage-central-authority, then we can proceed with having reliable and secure services built around Hackage. But if others- like me- would like to see a more secure system built from the ground up, please say so and let's continue that conversation. [1] https://www.fpcomplete.com/blog/2015/03/composable-community-infrastructure [2] https://github.com/commercialhaskell/commercialhaskell/wiki/Package-signing-detailed-propsal [3] https://www.fpcomplete.com/blog/2015/03/hackage-mirror [4] I don't think this is just a theoretical possibility for some point in the future. I have reported an easily trigerrable DoS attack on the current Hackage Server codebase, which has been unresolved for 1.5 months now [5] https://gist.github.com/snoyberg/732aa47a5dd3864051b9 -------------- next part -------------- An HTML attachment was scrubbed... URL: From michael at snoyman.com Mon Apr 13 10:28:33 2015 From: michael at snoyman.com (Michael Snoyman) Date: Mon, 13 Apr 2015 10:28:33 +0000 Subject: [Haskell-cafe] Improvements to package hosting and security In-Reply-To: References: Message-ID: Also, since it's relevant, here's a Github repo with all of the cabal files from Hackage which (thanks to a cron job and Travis CI) automatically updates every 30 minutes: https://github.com/commercialhaskell/all-cabal-files On Mon, Apr 13, 2015 at 1:02 PM Michael Snoyman wrote: > Many of you saw the blog post Mathieu wrote[1] about having more > composable community infrastructure, which in particular focused on > improvements to Hackage. I've been discussing some of these ideas with both > Mathieu and others in the community working on some similar thoughts. I've > also separately spent some time speaking with Chris about package > signing[2]. Through those discussions, it's become apparent to me that > there are in fact two core pieces of functionality we're relying on Hackage > for today: > > * A centralized location for accessing package metadata (i.e., the cabal > files) and the package contents themselves (i.e., the sdist tarballs) > * A central authority for deciding who is allowed to make releases of > packages, and make revisions to cabal files > > In my opinion, fixing the first problem is in fact very straightforward to > do today using existing tools. FP Complete already hosts a full Hackage > mirror[3] backed by S3, for instance, and having the metadata mirrored to a > Git repository as well is not a difficult technical challenge. 
This is the > core of what Mathieu was proposing as far as composable infrastructure, > corresponding to next actions 1 and 3 at the end of his blog post (step 2, > modifying Hackage, is not a prerequesite). In my opinion, such a system > would far surpass in usability, reliability, and extensibility our current > infrastructure, and could be rolled out in a few days at most. > > However, that second point- the central authority- is the more interesting > one. As it stands, our entire package ecosystem is placing a huge level of > trust in Hackage, without any serious way to vet what's going on there. > Attack vectors abound, e.g.: > > * Man in the middle attacks: as we are all painfully aware, cabal-install > does not support HTTPS, so a MITM attack on downloads from Hackage is > trivial > * A breach of the Hackage Server codebase would allow anyone to upload > nefarious code[4] > * Any kind of system level vulnerability could allow an attacker to > compromise the server in the same way > > Chris's package signing work addresses most of these vulnerabilities, by > adding a layer of cryptographic signatures on top of Hackage as the central > authority. I'd like to propose taking this a step further: removing Hackage > as the central authority, and instead relying entirely on cryptographic > signatures to release new packages. > > I wrote up a strawman proposal last week[5] which clearly needs work to be > a realistic option. My question is: are people interested in moving forward > on this? If there's no interest, and everyone is satisfied with continuing > with the current Hackage-central-authority, then we can proceed with having > reliable and secure services built around Hackage. But if others- like me- > would like to see a more secure system built from the ground up, please say > so and let's continue that conversation. > > [1] > https://www.fpcomplete.com/blog/2015/03/composable-community-infrastructure > > [2] > https://github.com/commercialhaskell/commercialhaskell/wiki/Package-signing-detailed-propsal > > [3] https://www.fpcomplete.com/blog/2015/03/hackage-mirror > [4] I don't think this is just a theoretical possibility for some point in > the future. I have reported an easily trigerrable DoS attack on the current > Hackage Server codebase, which has been unresolved for 1.5 months now > [5] https://gist.github.com/snoyberg/732aa47a5dd3864051b9 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aeroboy94 at gmail.com Mon Apr 13 11:18:55 2015 From: aeroboy94 at gmail.com (Arian van Putten) Date: Mon, 13 Apr 2015 13:18:55 +0200 Subject: [Haskell-cafe] Improvements to package hosting and security In-Reply-To: References: Message-ID: Without adding much to the discussion myself, I just want to drop this link here: http://www.cs.arizona.edu/stork/packagemanagersecurity/ . It addresses some interesting issues concerning package repositories. Anyhow I personally think the current state of hackage (not even https) is unacceptable and I'm really excited that people seem to be working on this. 
On Mon, Apr 13, 2015 at 12:28 PM, Michael Snoyman wrote: > Also, since it's relevant, here's a Github repo with all of the cabal > files from Hackage which (thanks to a cron job and Travis CI) automatically > updates every 30 minutes: > > https://github.com/commercialhaskell/all-cabal-files > > On Mon, Apr 13, 2015 at 1:02 PM Michael Snoyman > wrote: > >> Many of you saw the blog post Mathieu wrote[1] about having more >> composable community infrastructure, which in particular focused on >> improvements to Hackage. I've been discussing some of these ideas with both >> Mathieu and others in the community working on some similar thoughts. I've >> also separately spent some time speaking with Chris about package >> signing[2]. Through those discussions, it's become apparent to me that >> there are in fact two core pieces of functionality we're relying on Hackage >> for today: >> >> * A centralized location for accessing package metadata (i.e., the cabal >> files) and the package contents themselves (i.e., the sdist tarballs) >> * A central authority for deciding who is allowed to make releases of >> packages, and make revisions to cabal files >> >> In my opinion, fixing the first problem is in fact very straightforward >> to do today using existing tools. FP Complete already hosts a full Hackage >> mirror[3] backed by S3, for instance, and having the metadata mirrored to a >> Git repository as well is not a difficult technical challenge. This is the >> core of what Mathieu was proposing as far as composable infrastructure, >> corresponding to next actions 1 and 3 at the end of his blog post (step 2, >> modifying Hackage, is not a prerequesite). In my opinion, such a system >> would far surpass in usability, reliability, and extensibility our current >> infrastructure, and could be rolled out in a few days at most. >> >> However, that second point- the central authority- is the more >> interesting one. As it stands, our entire package ecosystem is placing a >> huge level of trust in Hackage, without any serious way to vet what's going >> on there. Attack vectors abound, e.g.: >> >> * Man in the middle attacks: as we are all painfully aware, cabal-install >> does not support HTTPS, so a MITM attack on downloads from Hackage is >> trivial >> * A breach of the Hackage Server codebase would allow anyone to upload >> nefarious code[4] >> * Any kind of system level vulnerability could allow an attacker to >> compromise the server in the same way >> >> Chris's package signing work addresses most of these vulnerabilities, by >> adding a layer of cryptographic signatures on top of Hackage as the central >> authority. I'd like to propose taking this a step further: removing Hackage >> as the central authority, and instead relying entirely on cryptographic >> signatures to release new packages. >> >> I wrote up a strawman proposal last week[5] which clearly needs work to >> be a realistic option. My question is: are people interested in moving >> forward on this? If there's no interest, and everyone is satisfied with >> continuing with the current Hackage-central-authority, then we can proceed >> with having reliable and secure services built around Hackage. But if >> others- like me- would like to see a more secure system built from the >> ground up, please say so and let's continue that conversation. 
>> >> [1] >> https://www.fpcomplete.com/blog/2015/03/composable-community-infrastructure >> >> [2] >> https://github.com/commercialhaskell/commercialhaskell/wiki/Package-signing-detailed-propsal >> >> [3] https://www.fpcomplete.com/blog/2015/03/hackage-mirror >> [4] I don't think this is just a theoretical possibility for some point >> in the future. I have reported an easily trigerrable DoS attack on the >> current Hackage Server codebase, which has been unresolved for 1.5 months >> now >> [5] https://gist.github.com/snoyberg/732aa47a5dd3864051b9 >> > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > -- Groetjes, Arian -------------- next part -------------- An HTML attachment was scrubbed... URL: From fa-ml at ariis.it Mon Apr 13 12:18:48 2015 From: fa-ml at ariis.it (Francesco Ariis) Date: Mon, 13 Apr 2015 14:18:48 +0200 Subject: [Haskell-cafe] Improvements to package hosting and security In-Reply-To: References: Message-ID: <20150413121848.GA3834@x60s.casa> On Mon, Apr 13, 2015 at 10:02:45AM +0000, Michael Snoyman wrote: > I wrote up a strawman proposal last week[5] which clearly needs work to be > a realistic option. My question is: are people interested in moving forward > on this? If there's no interest, and everyone is satisfied with continuing > with the current Hackage-central-authority, then we can proceed with having > reliable and secure services built around Hackage. But if others- like me- > would like to see a more secure system built from the ground up, please say > so and let's continue that conversation. I finished reading the proposal, the only minor remark I have is on this sentence: " Each signature may be revoked using standard GPG revokation. It is the /key/ being revoked really, not the single signature (in our case it would mean revoking every-package-version-or-revision-signed-by-that-key). This in turn highlights the need for a well defined process on how to handle "key transitions" (task left to the single implementators). A distributed and secure hackage sounds like a dream, I really hope this comes to life! From michael at snoyman.com Mon Apr 13 14:52:54 2015 From: michael at snoyman.com (Michael Snoyman) Date: Mon, 13 Apr 2015 14:52:54 +0000 Subject: [Haskell-cafe] Improvements to package hosting and security In-Reply-To: <20150413121848.GA3834@x60s.casa> References: <20150413121848.GA3834@x60s.casa> Message-ID: On Mon, Apr 13, 2015 at 3:21 PM Francesco Ariis wrote: > On Mon, Apr 13, 2015 at 10:02:45AM +0000, Michael Snoyman wrote: > > I wrote up a strawman proposal last week[5] which clearly needs work to > be > > a realistic option. My question is: are people interested in moving > forward > > on this? If there's no interest, and everyone is satisfied with > continuing > > with the current Hackage-central-authority, then we can proceed with > having > > reliable and secure services built around Hackage. But if others- like > me- > > would like to see a more secure system built from the ground up, please > say > > so and let's continue that conversation. > > I finished reading the proposal, the only minor remark I have is on this > sentence: > > " Each signature may be revoked using standard GPG revokation. > > It is the /key/ being revoked really, not the single signature (in our case > it would mean revoking > every-package-version-or-revision-signed-by-that-key). 
> This in turn highlights the need for a well defined process on how to > handle "key transitions" (task left to the single implementators). > > > I think I was just wrong at that part of the proposal; it wouldn't be "standard GPG revokation" since, as you point out, that's for revoking a key. We'd need a custom revocation mechanism to make this work. But as to your more general point: there was an added layer of indirection that I considered but didn't write up, and which I happen to like. The idea would be that all of the authorization lists would work based off of an identifier (e.g., an email address). We would then have a separate mapping between email addresses and GPG public keys, which would follow the same signature scheme that all of the other files in the repo follow. The downside to this is that it redoes the basic GPG keysigning mechanism to some extent, but it does address key transitions more easily. Another possibility would be to encode the release date of a package/version and package/version/revision and use that date for checking validity of keys. That way, old signatures remain valid in perpetuity. I'll admit to my relative lack of experience with GPG, so there's probably some built-in mechanism for addressing this kind of situation which would be better to follow. > A distributed and secure hackage sounds like a dream, I really hope this > comes to life! > > -- > You received this message because you are subscribed to the Google Groups > "Commercial Haskell" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to commercialhaskell+unsubscribe at googlegroups.com. > To post to this group, send email to commercialhaskell at googlegroups.com. > To view this discussion on the web visit > https://groups.google.com/d/msgid/commercialhaskell/20150413121848.GA3834%40x60s.casa > . > For more options, visit https://groups.google.com/d/optout. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From michael at snoyman.com Mon Apr 13 14:59:59 2015 From: michael at snoyman.com (Michael Snoyman) Date: Mon, 13 Apr 2015 14:59:59 +0000 Subject: [Haskell-cafe] Improvements to package hosting and security In-Reply-To: <4487776e-b862-429c-adae-477813e560f3@googlegroups.com> References: <4487776e-b862-429c-adae-477813e560f3@googlegroups.com> Message-ID: I purposely didn't get into those details in this document, as it can be layered on top of the setup I described here. The way I'd say this should be answered is twofold: * FP Complete already hosts all packages on S3, and we intend to continue hosting all packages there in the future * People in the community are welcome (and encouraged) to make redundant copies of packages, and then add hash-to-URL mappings to the main repo giving those redundant copies as additional download locations. In that sense, the FP Complete S3 copy would simply be one of potentially many redundant copies that could exist. On Mon, Apr 13, 2015 at 5:57 PM Dennis J. McWherter, Jr. < dennis at deathbytape.com> wrote: > This proposal looks great. The one thing I am failing to understand (and I > recognize the proposal is in early stages) is how to ensure redundancy in > the system. As far as I can tell, much of this proposal discusses the > centralized authority of the system (i.e. ensuring secure distribution) and > only references (with little detail) the distributed store.
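To make the indirection described just above a little more concrete (authorization lists keyed by an identifier such as an email address, plus a separate, independently signed mapping from identifiers to GPG keys with validity dates), here is one hypothetical way the data could be modelled in Haskell. Every name and field below is invented for illustration; none of it comes from the strawman proposal.

```
import           Data.Map  (Map)
import qualified Data.Map  as Map
import           Data.Time (UTCTime)

type PackageName = String
type Identifier  = String   -- e.g. an email address
type Fingerprint = String   -- a GPG key fingerprint

-- Who may release or revise a given package, by identifier rather than by raw key.
type Maintainers = Map PackageName [Identifier]

-- Separately signed mapping from identifiers to keys, each with a validity window
-- (this is where the "release date" idea would plug in).
data KeyRecord = KeyRecord
    { keyFingerprint :: Fingerprint
    , validFrom      :: UTCTime
    , validUntil     :: Maybe UTCTime   -- Nothing = not yet rotated out
    }

type KeyRing = Map Identifier [KeyRecord]

-- A signature on a release is acceptable if the signer is listed for the package
-- and the signing key was valid for that identifier at the release date.
authorized :: Maintainers -> KeyRing -> PackageName -> Identifier -> Fingerprint -> UTCTime -> Bool
authorized ms ks pkg who fpr releasedAt =
       who `elem` Map.findWithDefault [] pkg ms
    && any ok (Map.findWithDefault [] who ks)
  where
    ok k = keyFingerprint k == fpr
        && validFrom k <= releasedAt
        && maybe True (releasedAt <=) (validUntil k)
```

Under a scheme like this, a key transition is just appending a new KeyRecord and closing out the old one in a signed commit, without invalidating signatures that were made while the old key was still current.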
For instance, > say I host a package on a personal server and one day I decide to shut that > server down; is this package now lost forever? I do see this line: "backup > download links to S3" but this implies that the someone is willing to pay > for S3 storage for all of the packages. > > Are there plans to adopt a P2P-like model or something similar to support > any sort of replication? Public resources like this seem to come and go, so > it would be nice to avoid some of the problems associated with high churn > in the network. That said, there is an obvious cost to replication. > Likewise, the central authority would have to be updated with new, relevant > locations to find the file (as it is currently proposed). > > In any case, as I said before, the proposal looks great! I am looking > forward to this. > > > On Monday, April 13, 2015 at 5:02:46 AM UTC-5, Michael Snoyman wrote: >> >> Many of you saw the blog post Mathieu wrote[1] about having more >> composable community infrastructure, which in particular focused on >> improvements to Hackage. I've been discussing some of these ideas with both >> Mathieu and others in the community working on some similar thoughts. I've >> also separately spent some time speaking with Chris about package >> signing[2]. Through those discussions, it's become apparent to me that >> there are in fact two core pieces of functionality we're relying on Hackage >> for today: >> >> * A centralized location for accessing package metadata (i.e., the cabal >> files) and the package contents themselves (i.e., the sdist tarballs) >> * A central authority for deciding who is allowed to make releases of >> packages, and make revisions to cabal files >> >> In my opinion, fixing the first problem is in fact very straightforward >> to do today using existing tools. FP Complete already hosts a full Hackage >> mirror[3] backed by S3, for instance, and having the metadata mirrored to a >> Git repository as well is not a difficult technical challenge. This is the >> core of what Mathieu was proposing as far as composable infrastructure, >> corresponding to next actions 1 and 3 at the end of his blog post (step 2, >> modifying Hackage, is not a prerequesite). In my opinion, such a system >> would far surpass in usability, reliability, and extensibility our current >> infrastructure, and could be rolled out in a few days at most. >> >> However, that second point- the central authority- is the more >> interesting one. As it stands, our entire package ecosystem is placing a >> huge level of trust in Hackage, without any serious way to vet what's going >> on there. Attack vectors abound, e.g.: >> >> * Man in the middle attacks: as we are all painfully aware, cabal-install >> does not support HTTPS, so a MITM attack on downloads from Hackage is >> trivial >> * A breach of the Hackage Server codebase would allow anyone to upload >> nefarious code[4] >> * Any kind of system level vulnerability could allow an attacker to >> compromise the server in the same way >> >> Chris's package signing work addresses most of these vulnerabilities, by >> adding a layer of cryptographic signatures on top of Hackage as the central >> authority. I'd like to propose taking this a step further: removing Hackage >> as the central authority, and instead relying entirely on cryptographic >> signatures to release new packages. >> >> I wrote up a strawman proposal last week[5] which clearly needs work to >> be a realistic option. My question is: are people interested in moving >> forward on this? 
If there's no interest, and everyone is satisfied with >> continuing with the current Hackage-central-authority, then we can proceed >> with having reliable and secure services built around Hackage. But if >> others- like me- would like to see a more secure system built from the >> ground up, please say so and let's continue that conversation. >> >> [1] >> https://www.fpcomplete.com/blog/2015/03/composable-community-infrastructure >> >> [2] >> https://github.com/commercialhaskell/commercialhaskell/wiki/Package-signing-detailed-propsal >> >> [3] https://www.fpcomplete.com/blog/2015/03/hackage-mirror >> [4] I don't think this is just a theoretical possibility for some point >> in the future. I have reported an easily trigerrable DoS attack on the >> current Hackage Server codebase, which has been unresolved for 1.5 months >> now >> [5] https://gist.github.com/snoyberg/732aa47a5dd3864051b9 >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dennis at deathbytape.com Mon Apr 13 14:55:31 2015 From: dennis at deathbytape.com (Dennis J. McWherter, Jr.) Date: Mon, 13 Apr 2015 07:55:31 -0700 (PDT) Subject: [Haskell-cafe] Improvements to package hosting and security In-Reply-To: References: Message-ID: <4487776e-b862-429c-adae-477813e560f3@googlegroups.com> This proposal looks great. The one thing I am failing to understand (and I recognize the proposal is in early stages) is how to ensure redundancy in the system. As far as I can tell, much of this proposal discusses the centralized authority of the system (i.e. ensuring secure distribution) and only references (with little detail) the distributed store. For instance, say I host a package on a personal server and one day I decide to shut that server down; is this package now lost forever? I do see this line: "backup download links to S3" but this implies that the someone is willing to pay for S3 storage for all of the packages. Are there plans to adopt a P2P-like model or something similar to support any sort of replication? Public resources like this seem to come and go, so it would be nice to avoid some of the problems associated with high churn in the network. That said, there is an obvious cost to replication. Likewise, the central authority would have to be updated with new, relevant locations to find the file (as it is currently proposed). In any case, as I said before, the proposal looks great! I am looking forward to this. On Monday, April 13, 2015 at 5:02:46 AM UTC-5, Michael Snoyman wrote: > > Many of you saw the blog post Mathieu wrote[1] about having more > composable community infrastructure, which in particular focused on > improvements to Hackage. I've been discussing some of these ideas with both > Mathieu and others in the community working on some similar thoughts. I've > also separately spent some time speaking with Chris about package > signing[2]. Through those discussions, it's become apparent to me that > there are in fact two core pieces of functionality we're relying on Hackage > for today: > > * A centralized location for accessing package metadata (i.e., the cabal > files) and the package contents themselves (i.e., the sdist tarballs) > * A central authority for deciding who is allowed to make releases of > packages, and make revisions to cabal files > > In my opinion, fixing the first problem is in fact very straightforward to > do today using existing tools. 
FP Complete already hosts a full Hackage > mirror[3] backed by S3, for instance, and having the metadata mirrored to a > Git repository as well is not a difficult technical challenge. This is the > core of what Mathieu was proposing as far as composable infrastructure, > corresponding to next actions 1 and 3 at the end of his blog post (step 2, > modifying Hackage, is not a prerequesite). In my opinion, such a system > would far surpass in usability, reliability, and extensibility our current > infrastructure, and could be rolled out in a few days at most. > > However, that second point- the central authority- is the more interesting > one. As it stands, our entire package ecosystem is placing a huge level of > trust in Hackage, without any serious way to vet what's going on there. > Attack vectors abound, e.g.: > > * Man in the middle attacks: as we are all painfully aware, cabal-install > does not support HTTPS, so a MITM attack on downloads from Hackage is > trivial > * A breach of the Hackage Server codebase would allow anyone to upload > nefarious code[4] > * Any kind of system level vulnerability could allow an attacker to > compromise the server in the same way > > Chris's package signing work addresses most of these vulnerabilities, by > adding a layer of cryptographic signatures on top of Hackage as the central > authority. I'd like to propose taking this a step further: removing Hackage > as the central authority, and instead relying entirely on cryptographic > signatures to release new packages. > > I wrote up a strawman proposal last week[5] which clearly needs work to be > a realistic option. My question is: are people interested in moving forward > on this? If there's no interest, and everyone is satisfied with continuing > with the current Hackage-central-authority, then we can proceed with having > reliable and secure services built around Hackage. But if others- like me- > would like to see a more secure system built from the ground up, please say > so and let's continue that conversation. > > [1] > https://www.fpcomplete.com/blog/2015/03/composable-community-infrastructure > > [2] > https://github.com/commercialhaskell/commercialhaskell/wiki/Package-signing-detailed-propsal > > [3] https://www.fpcomplete.com/blog/2015/03/hackage-mirror > [4] I don't think this is just a theoretical possibility for some point in > the future. I have reported an easily trigerrable DoS attack on the current > Hackage Server codebase, which has been unresolved for 1.5 months now > [5] https://gist.github.com/snoyberg/732aa47a5dd3864051b9 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From douglas.mcclean at gmail.com Mon Apr 13 15:31:30 2015 From: douglas.mcclean at gmail.com (Douglas McClean) Date: Mon, 13 Apr 2015 11:31:30 -0400 Subject: [Haskell-cafe] Why no Floating instance for Data.Fixed / Data.Fixed.Binary Message-ID: I'm wondering why the decision was made not to have a Floating instance for Data.Fixed. I understand that -- being a fixed point number type -- it doesn't line up with the name of the Floating typeclass. But it also seems that Floating is misnamed, because none of the functions there really have anything to do with floating point representation. This is why we still have Floating instances for, say, exact reals or symbolic numbers, neither of which has a floating point representation. It seems that the actual criterion for membership in Floating is something like "has a canonical choice for degree of approximation". 
Double and Float are in because we round to the nearest value. Exact reals are in because we don't have to approximate, symbolic numbers too. Rational is out because there's no clear choice of how to approximate. But by this criterion, Data.Fixed should be in. And indeed it would be useful for it to be in. If I choose to represent values from the real world by a fixed point approximation instead of a floating point approximation, shouldn't I still be able to take sines and cosines etc? Obviously I can fix this by using a newtype. I'm just curious if am I missing a reason why the current definition is more desirable? -Doug McClean -------------- next part -------------- An HTML attachment was scrubbed... URL: From greg at gregweber.info Mon Apr 13 16:09:42 2015 From: greg at gregweber.info (Greg Weber) Date: Mon, 13 Apr 2015 09:09:42 -0700 Subject: [Haskell-cafe] Static executables in minimal Docker containers In-Reply-To: <86ca2603-37f2-4645-9cd2-f09703f2be67@googlegroups.com> References: <86ca2603-37f2-4645-9cd2-f09703f2be67@googlegroups.com> Message-ID: Haskell is not that great at producing statically linked libraries independent of the OS. The issue you are running into would likely show up in another non-ubuntu image (or even possibly a different version of an ubuntu image), so you could probably use a Fedora image that has tracing. How are you addressing the linker warning about needing a particular glibc version at runtime? On Mon, Apr 13, 2015 at 3:28 AM, Sharif Olorin wrote: > Unfortunately, strace and ltrace aren't available in that Docker image, >> but it's a good idea to see if I can get them running there somehow. >> > > Failing that, you might be able to get useful information of the same kind > by running docker (the server, not the `docker run` command) under perf[0] > and then running your busybox container. It should at least give you an > idea of what it's doing when it explodes. > > Sharif > > [0]: https://perf.wiki.kernel.org/index.php/Tutorial > > -- > You received this message because you are subscribed to the Google Groups > "Commercial Haskell" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to commercialhaskell+unsubscribe at googlegroups.com. > To post to this group, send email to commercialhaskell at googlegroups.com. > To view this discussion on the web visit > https://groups.google.com/d/msgid/commercialhaskell/86ca2603-37f2-4645-9cd2-f09703f2be67%40googlegroups.com > > . > > For more options, visit https://groups.google.com/d/optout. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From voldermort at hotmail.com Mon Apr 13 16:54:20 2015 From: voldermort at hotmail.com (Jeremy) Date: Mon, 13 Apr 2015 09:54:20 -0700 (MST) Subject: [Haskell-cafe] Static executables in minimal Docker containers In-Reply-To: References: Message-ID: <1428944060784-5768730.post@n5.nabble.com> Greg Weber wrote > Haskell is not that great at producing statically linked libraries > independent of the OS. > The issue you are running into would likely show up in another non-ubuntu > image (or even possibly a different version of an ubuntu image), so you > could probably use a Fedora image that has tracing. You could try compiling on Debian and running it in https://registry.hub.docker.com/u/accursoft/micro-jessie/. Much bigger than busybox, but 33Mb may be small enough for you. (Remember that the OS layer will be shared between multiple images - I don't know if that helps for your scenario.) 
-- View this message in context: http://haskell.1045720.n5.nabble.com/Static-executables-in-minimal-Docker-containers-tp5768703p5768730.html Sent from the Haskell - Haskell-Cafe mailing list archive at Nabble.com. From michael at snoyman.com Mon Apr 13 18:50:27 2015 From: michael at snoyman.com (Michael Snoyman) Date: Mon, 13 Apr 2015 18:50:27 +0000 Subject: [Haskell-cafe] Static executables in minimal Docker containers In-Reply-To: References: <86ca2603-37f2-4645-9cd2-f09703f2be67@googlegroups.com> Message-ID: I'm not sure if this issue would show up, but I can try it in Fedora tomorrow. I didn't address the linker warning at all right now, it seems to not have been triggered, though I suppose it is possible that it's the cause of this issue. On Mon, Apr 13, 2015 at 7:10 PM Greg Weber wrote: > Haskell is not that great at producing statically linked libraries > independent of the OS. > The issue you are running into would likely show up in another non-ubuntu > image (or even possibly a different version of an ubuntu image), so you > could probably use a Fedora image that has tracing. > > How are you addressing the linker warning about needing a particular glibc > version at runtime? > > On Mon, Apr 13, 2015 at 3:28 AM, Sharif Olorin > wrote: > >> Unfortunately, strace and ltrace aren't available in that Docker image, >>> but it's a good idea to see if I can get them running there somehow. >>> >> >> Failing that, you might be able to get useful information of the same >> kind by running docker (the server, not the `docker run` command) under >> perf[0] and then running your busybox container. It should at least give >> you an idea of what it's doing when it explodes. >> >> Sharif >> >> [0]: https://perf.wiki.kernel.org/index.php/Tutorial >> >> -- >> You received this message because you are subscribed to the Google Groups >> "Commercial Haskell" group. >> To unsubscribe from this group and stop receiving emails from it, send an >> email to commercialhaskell+unsubscribe at googlegroups.com. >> To post to this group, send email to commercialhaskell at googlegroups.com. >> To view this discussion on the web visit >> https://groups.google.com/d/msgid/commercialhaskell/86ca2603-37f2-4645-9cd2-f09703f2be67%40googlegroups.com >> >> . >> >> For more options, visit https://groups.google.com/d/optout. >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From noteed at gmail.com Mon Apr 13 19:16:01 2015 From: noteed at gmail.com (Vo Minh Thu) Date: Mon, 13 Apr 2015 21:16:01 +0200 Subject: [Haskell-cafe] Static executables in minimal Docker containers In-Reply-To: References: <86ca2603-37f2-4645-9cd2-f09703f2be67@googlegroups.com> Message-ID: I missed this thread but I guess I tried something similar last month: https://gist.github.com/noteed/4155ffad2b1d13ab17ee 2015-04-13 20:50 GMT+02:00 Michael Snoyman : > I'm not sure if this issue would show up, but I can try it in Fedora > tomorrow. I didn't address the linker warning at all right now, it seems to > not have been triggered, though I suppose it is possible that it's the cause > of this issue. > > > On Mon, Apr 13, 2015 at 7:10 PM Greg Weber wrote: >> >> Haskell is not that great at producing statically linked libraries >> independent of the OS. >> The issue you are running into would likely show up in another non-ubuntu >> image (or even possibly a different version of an ubuntu image), so you >> could probably use a Fedora image that has tracing. 
>> >> How are you addressing the linker warning about needing a particular glibc >> version at runtime? >> >> On Mon, Apr 13, 2015 at 3:28 AM, Sharif Olorin >> wrote: >>>> >>>> Unfortunately, strace and ltrace aren't available in that Docker image, >>>> but it's a good idea to see if I can get them running there somehow. >>> >>> >>> Failing that, you might be able to get useful information of the same >>> kind by running docker (the server, not the `docker run` command) under >>> perf[0] and then running your busybox container. It should at least give you >>> an idea of what it's doing when it explodes. >>> >>> Sharif >>> >>> [0]: https://perf.wiki.kernel.org/index.php/Tutorial >>> >>> -- >>> You received this message because you are subscribed to the Google Groups >>> "Commercial Haskell" group. >>> To unsubscribe from this group and stop receiving emails from it, send an >>> email to commercialhaskell+unsubscribe at googlegroups.com. >>> To post to this group, send email to commercialhaskell at googlegroups.com. >>> To view this discussion on the web visit >>> https://groups.google.com/d/msgid/commercialhaskell/86ca2603-37f2-4645-9cd2-f09703f2be67%40googlegroups.com. >>> >>> For more options, visit https://groups.google.com/d/optout. >> >> > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > From vilevin at gmail.com Mon Apr 13 19:22:13 2015 From: vilevin at gmail.com (Aaron Levin) Date: Mon, 13 Apr 2015 19:22:13 +0000 Subject: [Haskell-cafe] Static executables in minimal Docker containers In-Reply-To: References: <86ca2603-37f2-4645-9cd2-f09703f2be67@googlegroups.com> Message-ID: FWIW: I experienced something similar to this (hanging) with the standard Debian image. On Mon, Apr 13, 2015 at 3:16 PM Vo Minh Thu wrote: > I missed this thread but I guess I tried something similar last month: > https://gist.github.com/noteed/4155ffad2b1d13ab17ee > > > 2015-04-13 20:50 GMT+02:00 Michael Snoyman : > > I'm not sure if this issue would show up, but I can try it in Fedora > > tomorrow. I didn't address the linker warning at all right now, it seems > to > > not have been triggered, though I suppose it is possible that it's the > cause > > of this issue. > > > > > > On Mon, Apr 13, 2015 at 7:10 PM Greg Weber wrote: > >> > >> Haskell is not that great at producing statically linked libraries > >> independent of the OS. > >> The issue you are running into would likely show up in another > non-ubuntu > >> image (or even possibly a different version of an ubuntu image), so you > >> could probably use a Fedora image that has tracing. > >> > >> How are you addressing the linker warning about needing a particular > glibc > >> version at runtime? > >> > >> On Mon, Apr 13, 2015 at 3:28 AM, Sharif Olorin > > >> wrote: > >>>> > >>>> Unfortunately, strace and ltrace aren't available in that Docker > image, > >>>> but it's a good idea to see if I can get them running there somehow. > >>> > >>> > >>> Failing that, you might be able to get useful information of the same > >>> kind by running docker (the server, not the `docker run` command) under > >>> perf[0] and then running your busybox container. It should at least > give you > >>> an idea of what it's doing when it explodes. > >>> > >>> Sharif > >>> > >>> [0]: https://perf.wiki.kernel.org/index.php/Tutorial > >>> > >>> -- > >>> You received this message because you are subscribed to the Google > Groups > >>> "Commercial Haskell" group. 
> >>> To unsubscribe from this group and stop receiving emails from it, send > an > >>> email to commercialhaskell+unsubscribe at googlegroups.com. > >>> To post to this group, send email to > commercialhaskell at googlegroups.com. > >>> To view this discussion on the web visit > >>> > https://groups.google.com/d/msgid/commercialhaskell/86ca2603-37f2-4645-9cd2-f09703f2be67%40googlegroups.com > . > >>> > >>> For more options, visit https://groups.google.com/d/optout. > >> > >> > > > > _______________________________________________ > > Haskell-Cafe mailing list > > Haskell-Cafe at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Mon Apr 13 20:12:53 2015 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Mon, 13 Apr 2015 16:12:53 -0400 Subject: [Haskell-cafe] [Announcement] FLTKHS - Bindings to the FLTK GUI Toolkit In-Reply-To: References: Message-ID: very cool! do you have any example applications you can share that use the library? cheers -Carter On Sun, Apr 12, 2015 at 7:53 PM, aditya siram wrote: > I'm pleased to announce the first release of Haskell bindings [1] to the > FLTK GUI [2] toolkit. > > It now works smoothly on Windows (64-bit), Linux and Mac allowing you to > create truly cross-platform native GUI applications in pure Haskell and > deploy statically linked executables with no dependencies. > > Most of the FLTK API is covered except for a few minor widgets which I > plan to get to in the next release. > > Motivation behind the package and installation instructions are found in > the Haddocks [3]. And to get you started it ships with a number of demos. > > If you have any issues please report them on the Github [4] page. > > I'd also love any other feedback so feel free to comment here or email me > at the address listed on the Hackage [5] page. > > Hope you enjoy! > > [1] https://hackage.haskell.org/package/fltkhs-0.1.0.2 > [2] http://fltk.org > [3] > https://hackage.haskell.org/package/fltkhs-0.1.0.2/docs/Graphics-UI-FLTK-LowLevel-FLTKHS.html > [4] http://github.com/deech/fltkhs > [5] https://hackage.haskell.org/package/fltkhs-0.1.0.2 > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Mon Apr 13 20:13:49 2015 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Mon, 13 Apr 2015 16:13:49 -0400 Subject: [Haskell-cafe] [Announcement] FLTKHS - Bindings to the FLTK GUI Toolkit In-Reply-To: References: Message-ID: woops, i see now https://github.com/deech/fltkhs/tree/master/src/Examples thanks! -Carter On Mon, Apr 13, 2015 at 4:12 PM, Carter Schonwald < carter.schonwald at gmail.com> wrote: > very cool! > do you have any example applications you can share that use the library? > cheers > -Carter > > On Sun, Apr 12, 2015 at 7:53 PM, aditya siram > wrote: > >> I'm pleased to announce the first release of Haskell bindings [1] to the >> FLTK GUI [2] toolkit. 
>> >> It now works smoothly on Windows (64-bit), Linux and Mac allowing you to >> create truly cross-platform native GUI applications in pure Haskell and >> deploy statically linked executables with no dependencies. >> >> Most of the FLTK API is covered except for a few minor widgets which I >> plan to get to in the next release. >> >> Motivation behind the package and installation instructions are found in >> the Haddocks [3]. And to get you started it ships with a number of demos. >> >> If you have any issues please report them on the Github [4] page. >> >> I'd also love any other feedback so feel free to comment here or email me >> at the address listed on the Hackage [5] page. >> >> Hope you enjoy! >> >> [1] https://hackage.haskell.org/package/fltkhs-0.1.0.2 >> [2] http://fltk.org >> [3] >> https://hackage.haskell.org/package/fltkhs-0.1.0.2/docs/Graphics-UI-FLTK-LowLevel-FLTKHS.html >> [4] http://github.com/deech/fltkhs >> [5] https://hackage.haskell.org/package/fltkhs-0.1.0.2 >> >> _______________________________________________ >> Haskell-Cafe mailing list >> Haskell-Cafe at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From trebla at vex.net Mon Apr 13 21:39:39 2015 From: trebla at vex.net (Albert Y. C. Lai) Date: Mon, 13 Apr 2015 17:39:39 -0400 Subject: [Haskell-cafe] Static executables in minimal Docker containers In-Reply-To: References: <86ca2603-37f2-4645-9cd2-f09703f2be67@googlegroups.com> Message-ID: <552C379B.8080601@vex.net> I wonder whether you already know the following, and whether it is relevant to begin with. (Plus, my knowledge is fairly sketchy.) Even though you statically link glibc, its code will, at run time, dlopen a certain part of glibc. Why: To provide a really uniform abstraction layer over user account queries, e.g., man 3 getpwnam, regardless of whether the accounts are from /etc/passwd, LDAP, or whatever. Therefore, during run time, glibc first reads some config files of the host to see what kind of user account database the host uses. If it's /etc/passwd, then dlopen the implementation of getpwnam and friends for /etc/passwd; else, if it's LDAP, then dlopen the implementation of getpwnam and friends for LDAP; etc etc. So that later when you call getpwnam, it will happen to "do the right thing". This demands the required *.so files to be accessible during run time. Moreoever, if you statically link glibc, this also demands the required *.so files to version-match the glibc you statically link. (It is the main reason why most people give up on statically linking glibc.) From trebla at vex.net Mon Apr 13 21:49:00 2015 From: trebla at vex.net (Albert Y. C. Lai) Date: Mon, 13 Apr 2015 17:49:00 -0400 Subject: [Haskell-cafe] Why no Floating instance for Data.Fixed / Data.Fixed.Binary In-Reply-To: References: Message-ID: <552C39CC.8020909@vex.net> On 2015-04-13 11:31 AM, Douglas McClean wrote: > I'm wondering why the decision was made not to have a Floating > instance for Data.Fixed. I have always found economics to be a powerful answer to this kind of questions. That is, perhaps simply, there has not been sufficient incentive for anyone to do the work. For example, do you want to do it? From qdunkan at gmail.com Mon Apr 13 21:53:16 2015 From: qdunkan at gmail.com (Evan Laforge) Date: Mon, 13 Apr 2015 14:53:16 -0700 Subject: [Haskell-cafe] ANNOUNCE: fast-tags 1.0 Message-ID: fast-tags makes vi (and now emacs) tags from haskell source. 
It does so quickly and incrementally. I have it bound to vim's BufWrite to keep tags always updated, and it's basically instant. It works with hsc, literate haskell, and mid-edit files with syntax errors. I've announced this before, but I'm doing it again because it's been a long time and there was a major update. I bumped to 1.0 to reflect that it's not dead, it just does everything I need it to do, so further development is unlikely unless someone finds a bug. That's just for me of course, if someone else wants to add features they are welcome to do so. The major update was almost entirely thanks to Sergey Vinokurov and a few others who contributed a bunch of patches. Details and changelog are at http://hackage.haskell.org/package/fast-tags-1.0 Source is at https://github.com/elaforge/fast-tags From ky3 at atamo.com Tue Apr 14 02:02:52 2015 From: ky3 at atamo.com (Kim-Ee Yeoh) Date: Tue, 14 Apr 2015 09:02:52 +0700 Subject: [Haskell-cafe] Haskell Weekly News Message-ID: *Top picks:* - Aditya Siram announces the first release of Haskell bindings to the C++-based FLTK cross-platform GUI library. FLTK can be used to build a modern windows-and-widgets-based desktop app and comes with a UI builder called FLUID. - The startup called Helium releases a "homegrown Webmachine-inspired web framework in Haskell." Airship is very small (just under 1000 LoC) and extremely unopinionated: it works with any WAI-compatible web server and any templating language (including none at all!). Reddit discussion. - Adam Chlipala releases a new Ur/Web library for producing custom event-planning web apps quickly. All you need to do is "assemble highly parametrized components." - Abe Voelker finds himself wanting algebraic data types and pattern matching when writing Ruby. He presents an evolution of a file upload validator, culminating in a finale that uses an Either monad provided by a Ruby gem called Kleisli. - Eitan Chatav shares a correct-by-construction JSON serializer/deserializer using lens-json. - Wouldn't it be neat to get profiling info without stopping the program and pissing off your users? Mark Wotton has filed exactly such a feature request. - Is Call Arity optimization to blame for your 7.10 compilation slowdowns? Joachim Breitner investigates. - Ever felt Haskell on Windows is 2nd class, even though it's Tier-1 according to the Platform? Well, installing hmatrix on Win requires additional steps, as Redditor wrvn kindly explains. - Do you program in Haskell using emacs? You must be using haskell-mode then. Here's monthly news straight from the haskell-mode development team. - Dominic Steinitz raises awareness about the brokenness of System.Random. Solution? Use tf-random for now. - Big number exponentiation segfaults, in this reddit discussion. Turns out it's a bug involving the GNU Multi-precision Library. Make sure you have the latest GMP version 6. - Devan Stormont creates his first hackage library that obtains weather forecast data via a web-based API. He writes, "The really brilliant part is in being able to completely replace a core piece of an app within a single day and having complete confidence in the result. It?s moments like this that make you really happy to be working with such a powerful language as Haskell." - Carl reminds us that GADT can always be pattern-matched in a case expression. "It's let expressions that cause GHC to provide amusing messages about its brain exploding." - Zohaib Rauf publishes a monad tutorial. 
He explains that the 'M' in 'M a' is "some metadata wrapped around 'a'." *Tweets of the week:* - John Carmack: If I had to write software that my life depended on, I would seriously consider using Haskell. - shanelogsdon: tried the #haskell web framework http://www.spock.li/ last night with a meaningless micro benchmark. ~38k req/s is pretty quick - AlexanderKatt: 'it is entirely unnecessary to understand category theory in order to understand monads in #haskell' said the guys who know category theory - justusadam_: I'm warming up to the idea of using #Haskell more. Wonderful language but the syntax and some of the concepts were difficult to understand - least_nathan: Algebraic Data Types Considered Harmful: once you use them, every language lacking them drives you to madness. #LangSec - chwthewke: #Haskell is a very hot programming language :) No, really, an hour of it and my laptop is on the verge of becoming a brown dwarf :D - robinbateboerop: Troll tries to get banned from #Haskell IRC channel, decides to learn Haskell instead. - stephan_gfx: Cool, cabal will multithread installation/build if you run it with the -j option. cabal install -j - Pythux: One thing I learned at Haskell's meetup yesterday: it's possible to create a start-up with Haskell full stack! #Haskell #Awesome *Quote of the week:* - Lisp, Haskell and other FP are currently killing it in the quiet world of DSLs, especially Mining, Oil & Gas industries which need to parse petaflops of seismic data and reservoir simulations, need rapid prototyping, need formally verified drilling platform components that field engineers can interact with easily, and HPC algorithms for financial trading. -- Source -- Kim-Ee -------------- next part -------------- An HTML attachment was scrubbed... URL: From douglas.mcclean at gmail.com Tue Apr 14 02:08:42 2015 From: douglas.mcclean at gmail.com (Douglas McClean) Date: Mon, 13 Apr 2015 22:08:42 -0400 Subject: [Haskell-cafe] Why no Floating instance for Data.Fixed / Data.Fixed.Binary In-Reply-To: <552C39CC.8020909@vex.net> References: <552C39CC.8020909@vex.net> Message-ID: I'd certainly be happy to do it, I'm just concerned that it would be actively unwanted for a reason that I can't see. I will look in to what the procedures are for contributing to base. It wasn't my intention to beg the internet to do it for me. On Mon, Apr 13, 2015 at 5:49 PM, Albert Y. C. Lai wrote: > On 2015-04-13 11:31 AM, Douglas McClean wrote: > >> I'm wondering why the decision was made not to have a Floating instance >> for Data.Fixed. >> > > I have always found economics to be a powerful answer to this kind of > questions. That is, perhaps simply, there has not been sufficient incentive > for anyone to do the work. For example, do you want to do it? > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > -- J. Douglas McClean (781) 561-5540 (cell) -------------- next part -------------- An HTML attachment was scrubbed... 
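Since the thread never spells out what such an instance would look like, here is a minimal sketch of the obvious approach of round-tripping through Double, written as an orphan instance purely for illustration (a real library would use a newtype, as noted above). It inherits all of Double's precision limits, which is essentially the objection raised in the next message.

```
import Data.Fixed (Fixed, HasResolution)

-- Lift a function on Double to Fixed by round-tripping through Double.
-- Anything beyond Double's 53 bits of precision is silently lost.
viaDouble :: HasResolution r => (Double -> Double) -> Fixed r -> Fixed r
viaDouble f = realToFrac . f . realToFrac

-- Orphan instance, for illustration only.
instance HasResolution r => Floating (Fixed r) where
  pi    = realToFrac (pi :: Double)
  exp   = viaDouble exp
  log   = viaDouble log
  sin   = viaDouble sin
  cos   = viaDouble cos
  asin  = viaDouble asin
  acos  = viaDouble acos
  atan  = viaDouble atan
  sinh  = viaDouble sinh
  cosh  = viaDouble cosh
  asinh = viaDouble asinh
  acosh = viaDouble acosh
  atanh = viaDouble atanh
```

With this in scope, sin (pi / 2 :: Pico) type-checks and is computed at Double precision, which is fine for modest arguments and exactly the weak spot for the very large ones discussed below.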
URL: From 2haskell at pkturner.org Tue Apr 14 04:19:10 2015 From: 2haskell at pkturner.org (Scott Turner) Date: Tue, 14 Apr 2015 00:19:10 -0400 Subject: [Haskell-cafe] Why no Floating instance for Data.Fixed / Data.Fixed.Binary In-Reply-To: References: <552C39CC.8020909@vex.net> Message-ID: <552C953E.1090503@pkturner.org> On 2015-04-13 22:08, Douglas McClean wrote: > I'd certainly be happy to do it, I'm just concerned that it would be > actively unwanted for a reason that I can't see. > > I will look in to what the procedures are for contributing to base. > > It wasn't my intention to beg the internet to do it for me. > > On Mon, Apr 13, 2015 at 5:49 PM, Albert Y. C. Lai > wrote: > > On 2015-04-13 11:31 AM, Douglas McClean wrote: > > I'm wondering why the decision was made not to have a Floating > instance for Data.Fixed. > > > I have always found economics to be a powerful answer to this kind > of questions. That is, perhaps simply, there has not been > sufficient incentive for anyone to do the work. For example, do > you want to do it? > It looks hairy to me. The big-number cases would need approaches quite different from floating point. sin(31415926.535897932384::Pico) sin(31415926535897932384.626433832795::Pico) sin(314159265358979323846264338327950288419.716939937510::Pico) To return the correct value (0) from each of these examples, and their successors, requires an implementation able to calculate pi to an unbounded precision. The value of exp(50::Uni) requires more precision than a double can provide. How far do you go? exp(100::Uni)? I hope this doesn't scare you off. Perhaps there's literature on acceptable limitations when implementing/using such fixed point transcendental functions. In any case an implementation would be interesting even if it doesn't provide correct results in the extreme cases. -------------- next part -------------- An HTML attachment was scrubbed... URL: From michael at snoyman.com Tue Apr 14 05:01:15 2015 From: michael at snoyman.com (Michael Snoyman) Date: Tue, 14 Apr 2015 05:01:15 +0000 Subject: [Haskell-cafe] Improvements to package hosting and security In-Reply-To: References: <4487776e-b862-429c-adae-477813e560f3@googlegroups.com> Message-ID: That could work in theory. My concern with such an approach is that- AFAIK- the tooling around that kind of stuff is not very well developed, as opposed to an approach using Git, SHA512, and GPG, which should be easy to combine. But I could be completely mistaken on this point; if existing, well vetted technology exists for this, I'm not opposed to using it. On Mon, Apr 13, 2015 at 6:04 PM Arnaud Bailly | Capital Match < arnaud at capital-match.com> wrote: > Just thinking aloud but wouldn't it be possible to take advantage of > cryptographic ledgers a la Bitcoin for authenticating packages and tracking > the history of change ? This would provide redundancy as the transactions > log is distributed and "naturally" create a web of trust or at least > authenticate transactions. People uploading or modifying a package would > have to sign a transactions with someone having enough karma to allow this. > > Then packages themselves could be completely and rather safely distributed > through standard p2p file sharing. > > I am not a specialist of crypto money, though. > > My 50 cts > Arnaud > > Le lundi 13 avril 2015, Dennis J. McWherter, Jr. > a ?crit : > >> This proposal looks great. 
The one thing I am failing to understand (and >> I recognize the proposal is in early stages) is how to ensure redundancy in >> the system. As far as I can tell, much of this proposal discusses the >> centralized authority of the system (i.e. ensuring secure distribution) and >> only references (with little detail) the distributed store. For instance, >> say I host a package on a personal server and one day I decide to shut that >> server down; is this package now lost forever? I do see this line: "backup >> download links to S3" but this implies that the someone is willing to pay >> for S3 storage for all of the packages. >> >> Are there plans to adopt a P2P-like model or something similar to support >> any sort of replication? Public resources like this seem to come and go, so >> it would be nice to avoid some of the problems associated with high churn >> in the network. That said, there is an obvious cost to replication. >> Likewise, the central authority would have to be updated with new, relevant >> locations to find the file (as it is currently proposed). >> >> In any case, as I said before, the proposal looks great! I am looking >> forward to this. >> >> On Monday, April 13, 2015 at 5:02:46 AM UTC-5, Michael Snoyman wrote: >>> >>> Many of you saw the blog post Mathieu wrote[1] about having more >>> composable community infrastructure, which in particular focused on >>> improvements to Hackage. I've been discussing some of these ideas with both >>> Mathieu and others in the community working on some similar thoughts. I've >>> also separately spent some time speaking with Chris about package >>> signing[2]. Through those discussions, it's become apparent to me that >>> there are in fact two core pieces of functionality we're relying on Hackage >>> for today: >>> >>> * A centralized location for accessing package metadata (i.e., the cabal >>> files) and the package contents themselves (i.e., the sdist tarballs) >>> * A central authority for deciding who is allowed to make releases of >>> packages, and make revisions to cabal files >>> >>> In my opinion, fixing the first problem is in fact very straightforward >>> to do today using existing tools. FP Complete already hosts a full Hackage >>> mirror[3] backed by S3, for instance, and having the metadata mirrored to a >>> Git repository as well is not a difficult technical challenge. This is the >>> core of what Mathieu was proposing as far as composable infrastructure, >>> corresponding to next actions 1 and 3 at the end of his blog post (step 2, >>> modifying Hackage, is not a prerequesite). In my opinion, such a system >>> would far surpass in usability, reliability, and extensibility our current >>> infrastructure, and could be rolled out in a few days at most. >>> >>> However, that second point- the central authority- is the more >>> interesting one. As it stands, our entire package ecosystem is placing a >>> huge level of trust in Hackage, without any serious way to vet what's going >>> on there. 
Attack vectors abound, e.g.: >>> >>> * Man in the middle attacks: as we are all painfully aware, >>> cabal-install does not support HTTPS, so a MITM attack on downloads from >>> Hackage is trivial >>> * A breach of the Hackage Server codebase would allow anyone to upload >>> nefarious code[4] >>> * Any kind of system level vulnerability could allow an attacker to >>> compromise the server in the same way >>> >>> Chris's package signing work addresses most of these vulnerabilities, by >>> adding a layer of cryptographic signatures on top of Hackage as the central >>> authority. I'd like to propose taking this a step further: removing Hackage >>> as the central authority, and instead relying entirely on cryptographic >>> signatures to release new packages. >>> >>> I wrote up a strawman proposal last week[5] which clearly needs work to >>> be a realistic option. My question is: are people interested in moving >>> forward on this? If there's no interest, and everyone is satisfied with >>> continuing with the current Hackage-central-authority, then we can proceed >>> with having reliable and secure services built around Hackage. But if >>> others- like me- would like to see a more secure system built from the >>> ground up, please say so and let's continue that conversation. >>> >>> [1] https://www.fpcomplete.com/blog/2015/03/composable- >>> community-infrastructure >>> [2] https://github.com/commercialhaskell/commercialhaskell/wiki/ >>> Package-signing-detailed-propsal >>> [3] https://www.fpcomplete.com/blog/2015/03/hackage-mirror >>> [4] I don't think this is just a theoretical possibility for some point >>> in the future. I have reported an easily trigerrable DoS attack on the >>> current Hackage Server codebase, which has been unresolved for 1.5 months >>> now >>> [5] https://gist.github.com/snoyberg/732aa47a5dd3864051b9 >>> >> -- >> > You received this message because you are subscribed to the Google Groups >> "Commercial Haskell" group. >> To unsubscribe from this group and stop receiving emails from it, send an >> email to commercialhaskell+unsubscribe at googlegroups.com. >> To post to this group, send email to commercialhaskell at googlegroups.com. >> > To view this discussion on the web visit >> https://groups.google.com/d/msgid/commercialhaskell/4487776e-b862-429c-adae-477813e560f3%40googlegroups.com >> >> . > > >> For more options, visit https://groups.google.com/d/optout. >> > > > -- > *Arnaud Bailly* > > CTO | Capital Match > > CapitalMatch > > 71 Ayer Rajah Crescent | #06-16 | Singapore 139951 > > (FR) +33 617 121 978 / (SG) +65 8408 7973 | arnaud at capital-match.com | > www.capital-match.com > > Disclaimer: > > *Capital Match Platform Pte. Ltd. (the "Company") registered in Singapore > (Co. Reg. No. 201501788H), a subsidiary of Capital Match Holdings Pte. Ltd. > (Co. Reg. No. 201418682W), provides services that involve arranging for > multiple parties to enter into loan and invoice discounting agreements. The > Company does not provide any form of investment advice or recommendations > regarding any listings on its platform. In providing its services, the > Company's role is limited to an administrative function and the Company > does not and will not assume any advisory, fiduciary or other duties to > clients of its services.* > > -------------- next part -------------- An HTML attachment was scrubbed... 
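As a side note on why "Git, SHA512, and GPG" already provide the verifiable-history property that the ledger suggestion is after, here is a hypothetical sketch of an append-only, hash-chained log of signed release records; it uses the cryptonite package, and the record fields are invented for illustration rather than taken from the proposal.

```
import           Crypto.Hash           (Digest, SHA512, hash)
import qualified Data.ByteString.Char8 as BC

-- One release event: who signed what, plus the hash of the previous entry.
data Entry = Entry
    { package   :: String
    , version   :: String
    , signature :: String   -- ASCII-armoured GPG signature (placeholder)
    , parent    :: String   -- hex digest of the previous entry, "" for the first
    } deriving Show

entryDigest :: Entry -> String
entryDigest e =
    show (hash (BC.pack (package e ++ version e ++ signature e ++ parent e)) :: Digest SHA512)

-- Append a new release on top of the existing chain.
append :: [Entry] -> Entry -> [Entry]
append chain e = e { parent = tip } : chain
  where
    tip = case chain of
            []      -> ""
            (x : _) -> entryDigest x

-- Anyone can re-check that nobody rewrote history: each entry must name
-- the digest of the entry before it.
wellFormed :: [Entry] -> Bool
wellFormed []           = True
wellFormed [e]          = parent e == ""
wellFormed (e : p : es) = parent e == entryDigest p && wellFormed (p : es)
```

Git commits form exactly this kind of chain (content-addressed, each naming its parent), which is why rewriting old history is detectable by every clone; roughly speaking, Bitcoin-style proof of work only adds a way to choose between competing chains without trusting anyone, which a maintainership model does not need.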
URL: From michael at snoyman.com Tue Apr 14 06:25:22 2015 From: michael at snoyman.com (Michael Snoyman) Date: Tue, 14 Apr 2015 06:25:22 +0000 Subject: [Haskell-cafe] Static executables in minimal Docker containers In-Reply-To: <552C379B.8080601@vex.net> References: <86ca2603-37f2-4645-9cd2-f09703f2be67@googlegroups.com> <552C379B.8080601@vex.net> Message-ID: I have a bit more information about this. In particular: I'm able to reproduce this using chroot (no Docker required), and it's reproducing with a dynamically linked executable too. Steps I used to reproduce: 1. Write a minimal "foo.hs" containing `main = putStrLn "Hello World"` 2. Compile that executable and put it in an empty directory 3. Run `ldd` on it and copy all necessary libraries inside that directory 4. Run `sudo strace -o log.txt . /foo` I've uploaded the logs to: https://gist.github.com/snoyberg/095efb17e36acc1d6360 Note that, due to size of the output, I killed the process just a few seconds after starting it, but when I let the output run much longer, I didn't see any difference in the results. I'll continue poking at this a bit, but most likely I'll open a GHC Trac ticket about it later today. On Tue, Apr 14, 2015 at 12:39 AM Albert Y. C. Lai wrote: > I wonder whether you already know the following, and whether it is > relevant to begin with. (Plus, my knowledge is fairly sketchy.) > > Even though you statically link glibc, its code will, at run time, > dlopen a certain part of glibc. > > Why: To provide a really uniform abstraction layer over user account > queries, e.g., man 3 getpwnam, regardless of whether the accounts are > from /etc/passwd, LDAP, or whatever. > > Therefore, during run time, glibc first reads some config files of the > host to see what kind of user account database the host uses. If it's > /etc/passwd, then dlopen the implementation of getpwnam and friends for > /etc/passwd; else, if it's LDAP, then dlopen the implementation of > getpwnam and friends for LDAP; etc etc. > > So that later when you call getpwnam, it will happen to "do the right > thing". > > This demands the required *.so files to be accessible during run time. > Moreoever, if you statically link glibc, this also demands the required > *.so files to version-match the glibc you statically link. > > (It is the main reason why most people give up on statically linking > glibc.) > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johan.tibell at gmail.com Tue Apr 14 06:28:32 2015 From: johan.tibell at gmail.com (Johan Tibell) Date: Tue, 14 Apr 2015 08:28:32 +0200 Subject: [Haskell-cafe] Anyone interested in taking over network-uri? In-Reply-To: <20150411013800.953d4769771214526517a6af@mega-nerd.com> References: <20150411013800.953d4769771214526517a6af@mega-nerd.com> Message-ID: Hi everyone! I must say I got more volunteers than I thought. :) I'm sure any of you would do a great job. Since Ezra was first to volunteer I will give the maintenance to him. Ezra, if you like to have a co-maintainer just let the others know. -- Johan -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dct25-561bs at mythic-beasts.com Tue Apr 14 06:42:46 2015 From: dct25-561bs at mythic-beasts.com (David Turner) Date: Tue, 14 Apr 2015 07:42:46 +0100 Subject: [Haskell-cafe] Improvements to package hosting and security In-Reply-To: References: <4487776e-b862-429c-adae-477813e560f3@googlegroups.com> Message-ID: The cryptocurrency model is interesting, certainly, but it's solving a quite different problem: by giving authority to the majority of computational power, it allows users to trust the network without needing to break anonymity. Anonymity is really hard, and not needed here. Without it, the cryptocurrency model is basically just Git: a sequence of transactions that can be cryptographically verified. Stick with the Git + GPG plan IMO. On 14 April 2015 at 06:01, Michael Snoyman wrote: > That could work in theory. My concern with such an approach is that- AFAIK- > the tooling around that kind of stuff is not very well developed, as opposed > to an approach using Git, SHA512, and GPG, which should be easy to combine. > But I could be completely mistaken on this point; if existing, well vetted > technology exists for this, I'm not opposed to using it. > > On Mon, Apr 13, 2015 at 6:04 PM Arnaud Bailly | Capital Match > wrote: >> >> Just thinking aloud but wouldn't it be possible to take advantage of >> cryptographic ledgers a la Bitcoin for authenticating packages and tracking >> the history of change ? This would provide redundancy as the transactions >> log is distributed and "naturally" create a web of trust or at least >> authenticate transactions. People uploading or modifying a package would >> have to sign a transactions with someone having enough karma to allow this. >> >> Then packages themselves could be completely and rather safely distributed >> through standard p2p file sharing. >> >> I am not a specialist of crypto money, though. >> >> My 50 cts >> Arnaud >> >> Le lundi 13 avril 2015, Dennis J. McWherter, Jr. >> a ?crit : >>> >>> This proposal looks great. The one thing I am failing to understand (and >>> I recognize the proposal is in early stages) is how to ensure redundancy in >>> the system. As far as I can tell, much of this proposal discusses the >>> centralized authority of the system (i.e. ensuring secure distribution) and >>> only references (with little detail) the distributed store. For instance, >>> say I host a package on a personal server and one day I decide to shut that >>> server down; is this package now lost forever? I do see this line: "backup >>> download links to S3" but this implies that the someone is willing to pay >>> for S3 storage for all of the packages. >>> >>> Are there plans to adopt a P2P-like model or something similar to support >>> any sort of replication? Public resources like this seem to come and go, so >>> it would be nice to avoid some of the problems associated with high churn in >>> the network. That said, there is an obvious cost to replication. Likewise, >>> the central authority would have to be updated with new, relevant locations >>> to find the file (as it is currently proposed). >>> >>> In any case, as I said before, the proposal looks great! I am looking >>> forward to this. >>> >>> On Monday, April 13, 2015 at 5:02:46 AM UTC-5, Michael Snoyman wrote: >>>> >>>> Many of you saw the blog post Mathieu wrote[1] about having more >>>> composable community infrastructure, which in particular focused on >>>> improvements to Hackage. 
I've been discussing some of these ideas with both >>>> Mathieu and others in the community working on some similar thoughts. I've >>>> also separately spent some time speaking with Chris about package >>>> signing[2]. Through those discussions, it's become apparent to me that there >>>> are in fact two core pieces of functionality we're relying on Hackage for >>>> today: >>>> >>>> * A centralized location for accessing package metadata (i.e., the cabal >>>> files) and the package contents themselves (i.e., the sdist tarballs) >>>> * A central authority for deciding who is allowed to make releases of >>>> packages, and make revisions to cabal files >>>> >>>> In my opinion, fixing the first problem is in fact very straightforward >>>> to do today using existing tools. FP Complete already hosts a full Hackage >>>> mirror[3] backed by S3, for instance, and having the metadata mirrored to a >>>> Git repository as well is not a difficult technical challenge. This is the >>>> core of what Mathieu was proposing as far as composable infrastructure, >>>> corresponding to next actions 1 and 3 at the end of his blog post (step 2, >>>> modifying Hackage, is not a prerequesite). In my opinion, such a system >>>> would far surpass in usability, reliability, and extensibility our current >>>> infrastructure, and could be rolled out in a few days at most. >>>> >>>> However, that second point- the central authority- is the more >>>> interesting one. As it stands, our entire package ecosystem is placing a >>>> huge level of trust in Hackage, without any serious way to vet what's going >>>> on there. Attack vectors abound, e.g.: >>>> >>>> * Man in the middle attacks: as we are all painfully aware, >>>> cabal-install does not support HTTPS, so a MITM attack on downloads from >>>> Hackage is trivial >>>> * A breach of the Hackage Server codebase would allow anyone to upload >>>> nefarious code[4] >>>> * Any kind of system level vulnerability could allow an attacker to >>>> compromise the server in the same way >>>> >>>> Chris's package signing work addresses most of these vulnerabilities, by >>>> adding a layer of cryptographic signatures on top of Hackage as the central >>>> authority. I'd like to propose taking this a step further: removing Hackage >>>> as the central authority, and instead relying entirely on cryptographic >>>> signatures to release new packages. >>>> >>>> I wrote up a strawman proposal last week[5] which clearly needs work to >>>> be a realistic option. My question is: are people interested in moving >>>> forward on this? If there's no interest, and everyone is satisfied with >>>> continuing with the current Hackage-central-authority, then we can proceed >>>> with having reliable and secure services built around Hackage. But if >>>> others- like me- would like to see a more secure system built from the >>>> ground up, please say so and let's continue that conversation. >>>> >>>> [1] >>>> https://www.fpcomplete.com/blog/2015/03/composable-community-infrastructure >>>> [2] >>>> https://github.com/commercialhaskell/commercialhaskell/wiki/Package-signing-detailed-propsal >>>> [3] https://www.fpcomplete.com/blog/2015/03/hackage-mirror >>>> [4] I don't think this is just a theoretical possibility for some point >>>> in the future. 
I have reported an easily trigerrable DoS attack on the >>>> current Hackage Server codebase, which has been unresolved for 1.5 months >>>> now >>>> [5] https://gist.github.com/snoyberg/732aa47a5dd3864051b9 >>> >>> -- >>> >>> You received this message because you are subscribed to the Google Groups >>> "Commercial Haskell" group. >>> To unsubscribe from this group and stop receiving emails from it, send an >>> email to commercialhaskell+unsubscribe at googlegroups.com. >>> To post to this group, send email to commercialhaskell at googlegroups.com. >>> >>> To view this discussion on the web visit >>> https://groups.google.com/d/msgid/commercialhaskell/4487776e-b862-429c-adae-477813e560f3%40googlegroups.com. >>> >>> >>> For more options, visit https://groups.google.com/d/optout. >> >> >> >> -- >> Arnaud Bailly >> >> CTO | Capital Match >> >> CapitalMatch >> >> 71 Ayer Rajah Crescent | #06-16 | Singapore 139951 >> >> (FR) +33 617 121 978 / (SG) +65 8408 7973 | arnaud at capital-match.com | >> www.capital-match.com >> >> Disclaimer: >> >> Capital Match Platform Pte. Ltd. (the "Company") registered in Singapore >> (Co. Reg. No. 201501788H), a subsidiary of Capital Match Holdings Pte. Ltd. >> (Co. Reg. No. 201418682W), provides services that involve arranging for >> multiple parties to enter into loan and invoice discounting agreements. The >> Company does not provide any form of investment advice or recommendations >> regarding any listings on its platform. In providing its services, the >> Company's role is limited to an administrative function and the Company does >> not and will not assume any advisory, fiduciary or other duties to clients >> of its services. >> >> > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > From martin.drautzburg at web.de Tue Apr 14 06:49:57 2015 From: martin.drautzburg at web.de (martin) Date: Tue, 14 Apr 2015 08:49:57 +0200 Subject: [Haskell-cafe] Semantics of temporal data In-Reply-To: <56D3AFD7-3666-40D6-92FC-FB9C609671C6@iki.fi> References: <5519B0B5.1080509@web.de> <20150331004152.GA6032@mintha> <551A2D24.1090908@web.de> <56D3AFD7-3666-40D6-92FC-FB9C609671C6@iki.fi> Message-ID: <552CB895.5000709@web.de> Am 03/31/2015 um 10:27 AM schrieb Oleg Grenrus: > Your Temporal type looks semantically very similar to FRP?s Behaviour. > > Following http://conal.net/papers/push-pull-frp/ : > > We could specify the time domain as `Maybe Time`, where - `Nothing` is ?before everything - distant past?, needed > for ?temporal default?, - `Just t` are finite values of time > > newtype Behaviour a = Behaviour (Maybe Time -> a) > > It?s quite straightforward to verify that your Temporal and this Behaviour are isomorphic, assuming that `Change`s > are sorted by time, and there aren?t consecutive duplicates. > > I find this ?higher-order? formulation easier to reason about, as it?s `(->) Maybe Time`, for which we have lot?s > of machinery already defined (e.g. Monad). I am certainly not happy with my definitions type Time = Integer data Change a = Chg { ct :: Time, -- "change time" cv :: a -- "change value" } deriving (Eq,Show) data Temporal a = Temporal { td :: a, -- "temporal default" tc :: [Change a] -- "temporal changes" } deriving (Eq, Show) because writing join turned out to be such a nightmare. I like your approach of wrapping a function instead of data, but the problem is, I can no longer retrieve the times (I cannot print a "resulting schedule"). 
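To make the comparison concrete, here is one direction of the isomorphism Oleg mentions, written as a minimal sketch against the Temporal and Change definitions above (repeated so the snippet stands alone). It assumes, as Oleg stipulated, that the change list is sorted by time with no consecutive duplicates; note that the resulting function really has forgotten the change times, which is exactly the problem just described.

type Time = Integer

data Change a = Chg { ct :: Time, cv :: a } deriving (Eq, Show)

data Temporal a = Temporal { td :: a, tc :: [Change a] } deriving (Eq, Show)

-- Sample a Temporal at a point in time. Nothing plays the role of the
-- distant past, so it always yields the temporal default.
toBehaviour :: Temporal a -> (Maybe Time -> a)
toBehaviour (Temporal def _)   Nothing  = def
toBehaviour (Temporal def chs) (Just t) = go def chs
  where
    go acc []                = acc
    go acc (Chg t' v : rest)
      | t' <= t              = go v rest   -- change at or before t: take its value
      | otherwise            = acc         -- first change after t: stop here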
But that could easily be fixed by also returning the time of the next Change: newtype Behaviour a = Behaviour (Maybe Time -> (a, Time)) However that gets me in trouble with the last change, where there is no next change. Returning Nothing in that case won't help either, because then Nothing would have two meanings (distant past and distant future). So I need a different notion of Time, which includes these two special cases. I will give that a try. Still I don't understand why my original definition gave me so much trouble. I mean it all looks quite innocent. I tried to find flaws from 10000 feet above and the only thing I could find is that the type a is mentioned in both Change and Temporal and Change alone does not make much sense. Maybe you find some more flaws. From michael at snoyman.com Tue Apr 14 06:52:51 2015 From: michael at snoyman.com (Michael Snoyman) Date: Tue, 14 Apr 2015 06:52:51 +0000 Subject: [Haskell-cafe] Static executables in minimal Docker containers In-Reply-To: References: <86ca2603-37f2-4645-9cd2-f09703f2be67@googlegroups.com> <552C379B.8080601@vex.net> Message-ID: Actually, I seem to have found the problem: open("/usr/lib/x86_64-linux-gnu/gconv/gconv-modules.cache", O_RDONLY) = -1 ENOENT (No such file or directory) open("/usr/lib/x86_64-linux-gnu/gconv/gconv-modules", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory) I found that I needed to copy over the following files to make my program complete: /usr/lib/x86_64-linux-gnu/gconv/gconv-modules /usr/lib/x86_64-linux-gnu/gconv/UTF-32.so Once I did that, I could get the executable to run in the chroot. However, even running the statically linked executable still required most of the shared libraries to be present inside the chroot. So it seems that: * We can come up with a list of a few files that need to be present inside a Docker image to provide for minimal GHC-compiled executables * There's a bug in the RTS that results in an infinite loop I'm going to try to put together a semi-robust solution for the first problem, and I'll report the RTS issue on Trac. On Tue, Apr 14, 2015 at 9:25 AM Michael Snoyman wrote: > I have a bit more information about this. In particular: I'm able to > reproduce this using chroot (no Docker required), and it's reproducing with > a dynamically linked executable too. Steps I used to reproduce: > > 1. Write a minimal "foo.hs" containing `main = putStrLn "Hello World"` > 2. Compile that executable and put it in an empty directory > 3. Run `ldd` on it and copy all necessary libraries inside that directory > 4. Run `sudo strace -o log.txt ./foo` > > I've uploaded the logs to: > > https://gist.github.com/snoyberg/095efb17e36acc1d6360 > > Note that, due to size of the output, I killed the process just a few > seconds after starting it, but when I let the output run much longer, I > didn't see any difference in the results. I'll continue poking at this a > bit, but most likely I'll open a GHC Trac ticket about it later today. > > On Tue, Apr 14, 2015 at 12:39 AM Albert Y. C. Lai wrote: > >> I wonder whether you already know the following, and whether it is >> relevant to begin with. (Plus, my knowledge is fairly sketchy.) >> >> Even though you statically link glibc, its code will, at run time, >> dlopen a certain part of glibc. >> >> Why: To provide a really uniform abstraction layer over user account >> queries, e.g., man 3 getpwnam, regardless of whether the accounts are >> from /etc/passwd, LDAP, or whatever.
>> >> Therefore, during run time, glibc first reads some config files of the >> host to see what kind of user account database the host uses. If it's >> /etc/passwd, then dlopen the implementation of getpwnam and friends for >> /etc/passwd; else, if it's LDAP, then dlopen the implementation of >> getpwnam and friends for LDAP; etc etc. >> >> So that later when you call getpwnam, it will happen to "do the right >> thing". >> >> This demands the required *.so files to be accessible during run time. >> Moreoever, if you statically link glibc, this also demands the required >> *.so files to version-match the glibc you statically link. >> >> (It is the main reason why most people give up on statically linking >> glibc.) >> _______________________________________________ >> Haskell-Cafe mailing list >> Haskell-Cafe at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From magnus at therning.org Tue Apr 14 07:25:50 2015 From: magnus at therning.org (Magnus Therning) Date: Tue, 14 Apr 2015 09:25:50 +0200 Subject: [Haskell-cafe] Static executables in minimal Docker containers In-Reply-To: References: <86ca2603-37f2-4645-9cd2-f09703f2be67@googlegroups.com> <552C379B.8080601@vex.net> Message-ID: On 14 April 2015 at 08:52, Michael Snoyman wrote: > Actually, I seem to have found the problem: > > open("/usr/lib/x86_64-linux-gnu/gconv/gconv-modules.cache", O_RDONLY) = -1 > ENOENT (No such file or directory) > open("/usr/lib/x86_64-linux-gnu/gconv/gconv-modules", O_RDONLY|O_CLOEXEC) = > -1 ENOENT (No such file or directory) > > I found that I needed to copy over the following files to make my program > complete: > > /usr/lib/x86_64-linux-gnu/gconv/gconv-modules > /usr/lib/x86_64-linux-gnu/gconv/UTF-32.so > > Once I did that, I could get the executable to run in the chroot. However, > even running the statically linked executable still required most of the > shared libraries to be present inside the chroot. So it seems that: > > * We can come up with a list of a few files that need to be present inside a > Docker image to provide for minimal GHC-compiled executables > * There's a bug in the RTS that results in an infinite loop > > I'm going to try to put together a semi-robust solution for the first > problem, and I'll report the RTS issue on Trac. Excellent that you found the issue. Is there some way of controlling which libc ghc links with? There are quite a few alternatives out there, and maybe it'd be easier to create a *true* static binary by using another libc? /M -- Magnus Therning OpenPGP: 0xAB4DFBA4 email: magnus at therning.org jabber: magnus at therning.org twitter: magthe http://therning.org/magnus From michael at snoyman.com Tue Apr 14 08:43:13 2015 From: michael at snoyman.com (Michael Snoyman) Date: Tue, 14 Apr 2015 08:43:13 +0000 Subject: [Haskell-cafe] Static executables in minimal Docker containers In-Reply-To: References: <86ca2603-37f2-4645-9cd2-f09703f2be67@googlegroups.com> <552C379B.8080601@vex.net> Message-ID: Trac ticket created: https://ghc.haskell.org/trac/ghc/ticket/10298#ticket I've also put together a Docker image called snoyberg/haskell-scratch (source at https://github.com/snoyberg/haskell-scratch), which seems to be working for me. 
Here's a minimal test I've put together which seems to be succeeding (note that I've also tried some real life programs): #!/bin/bash set -e set -x cat > tiny.hs < Dockerfile < wrote: > Actually, I seem to have found the problem: > > open("/usr/lib/x86_64-linux-gnu/gconv/gconv-modules.cache", O_RDONLY) = -1 > ENOENT (No such file or directory) > open("/usr/lib/x86_64-linux-gnu/gconv/gconv-modules", O_RDONLY|O_CLOEXEC) > = -1 ENOENT (No such file or directory) > > I found that I needed to copy over the following files to make my program > complete: > > /usr/lib/x86_64-linux-gnu/gconv/gconv-modules > /usr/lib/x86_64-linux-gnu/gconv/UTF-32.so > > Once I did that, I could get the executable to run in the chroot. However, > even running the statically linked executable still required most of the > shared libraries to be present inside the chroot. So it seems that: > > * We can come up with a list of a few files that need to be present inside > a Docker image to provide for minimal GHC-compiled executables > * There's a bug in the RTS that results in an infinite loop > > I'm going to try to put together a semi-robust solution for the first > problem, and I'll report the RTS issue on Trac. > > On Tue, Apr 14, 2015 at 9:25 AM Michael Snoyman > wrote: > >> I have a bit more information about this. In particular: I'm able to >> reproduce this using chroot (no Docker required), and it's reproducing with >> a dynamically linked executable too. Steps I used to reproduce: >> >> 1. Write a minimal "foo.hs" containing `main = putStrLn "Hello World"` >> 2. Compile that executable and put it in an empty directory >> 3. Run `ldd` on it and copy all necessary libraries inside that directory >> 4. Run `sudo strace -o log.txt . /foo` >> >> I've uploaded the logs to: >> >> https://gist.github.com/snoyberg/095efb17e36acc1d6360 >> >> Note that, due to size of the output, I killed the process just a few >> seconds after starting it, but when I let the output run much longer, I >> didn't see any difference in the results. I'll continue poking at this a >> bit, but most likely I'll open a GHC Trac ticket about it later today. >> >> On Tue, Apr 14, 2015 at 12:39 AM Albert Y. C. Lai wrote: >> >>> I wonder whether you already know the following, and whether it is >>> relevant to begin with. (Plus, my knowledge is fairly sketchy.) >>> >>> Even though you statically link glibc, its code will, at run time, >>> dlopen a certain part of glibc. >>> >>> Why: To provide a really uniform abstraction layer over user account >>> queries, e.g., man 3 getpwnam, regardless of whether the accounts are >>> from /etc/passwd, LDAP, or whatever. >>> >>> Therefore, during run time, glibc first reads some config files of the >>> host to see what kind of user account database the host uses. If it's >>> /etc/passwd, then dlopen the implementation of getpwnam and friends for >>> /etc/passwd; else, if it's LDAP, then dlopen the implementation of >>> getpwnam and friends for LDAP; etc etc. >>> >>> So that later when you call getpwnam, it will happen to "do the right >>> thing". >>> >>> This demands the required *.so files to be accessible during run time. >>> Moreoever, if you statically link glibc, this also demands the required >>> *.so files to version-match the glibc you statically link. >>> >>> (It is the main reason why most people give up on statically linking >>> glibc.) 
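For reference, a minimal tiny.hs for this kind of test, given here as a hypothetical stand-in rather than the script's original contents, only needs to write to stdout: Handle output goes through GHC's iconv-based locale encoding, which on glibc is what ends up loading the gconv files listed earlier.

-- tiny.hs (hypothetical stand-in): even plain putStrLn initialises the
-- locale encoding, so this already exercises the gconv code path.
main :: IO ()
main = putStrLn "Hello World from a minimal container"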
>>> _______________________________________________ >>> Haskell-Cafe mailing list >>> Haskell-Cafe at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From haskell at jschneider.net Tue Apr 14 09:34:10 2015 From: haskell at jschneider.net (Jon Schneider) Date: Tue, 14 Apr 2015 10:34:10 +0100 Subject: [Haskell-cafe] Static executables in minimal Docker containers In-Reply-To: References: <86ca2603-37f2-4645-9cd2-f09703f2be67@googlegroups.com> <552C379B.8080601@vex.net> Message-ID: <18828cd64877199c60b57f9d7290b544.squirrel@mail.jschneider.net> These ring a bell. I had pain with exactly this shared library (or lack of) when playing with the output of cross-compiling ghc. > /usr/lib/x86_64-linux-gnu/gconv/UTF-32.so Jon From austin at well-typed.com Tue Apr 14 19:58:04 2015 From: austin at well-typed.com (Austin Seipp) Date: Tue, 14 Apr 2015 14:58:04 -0500 Subject: [Haskell-cafe] Help wanted: working on the GHC webpage In-Reply-To: <551E9EDA.40301@sigrlami.eu> References: <551E92A8.7020901@sigrlami.eu> <87y4m9conm.fsf@gmail.com> <551E9EDA.40301@sigrlami.eu> Message-ID: Hey Sergey, Sorry for the delay - thanks for all your changes! A few other people have stepped up. But we still need more help of course. :) I'm incorporating your changes into the main Git repository as we speak, and I greatly appreciate it! I'm also incorporating changes from others. Please watch the repo and let me know if you have questions! PS: As for Trac, I agree that at the minimum there should be some kind of syndication or something from the homepage to Trac... the Weekly News *is* user-focused, but the homepage has been severely lacking. I'd appreciate comments here - make an issue about it on the bug tracker! On Fri, Apr 3, 2015 at 9:08 AM, Sergey Bushnyak wrote: > >> Why would it be easier? What's difficult about publishing on >> https://ghc.haskell.org/trac/ghc/blog? > > I'm actually don't know how it's published on track. From my standpoint as > newcomer it's better to see what's happening from one place, with one > design, have some shared git repo where people contribute in markdown. >> >> Moreover, the GHC weekly news are intimately linked to Trac, as they >> reference Trac-tickets and Git commits, which Trac is able to annotate >> with meta-data (ticket-type, -status, and -title for Ticket references, >> as well as part of the Git commit msg for Git-commit refs). > > > Ok, it was just a suggestion. Maybe it's a bad idea, doesn't know about > annotation. > > Anyway, still can help on updating ghc home page. > -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk Tue Apr 14 21:04:15 2015 From: tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk (Tom Ellis) Date: Tue, 14 Apr 2015 22:04:15 +0100 Subject: [Haskell-cafe] Eta-reducing case branches In-Reply-To: <20150411012222.83164fd0131ec57e59a0c9c3@mega-nerd.com> References: <20150410164102.GL31520@weber> <20150411012222.83164fd0131ec57e59a0c9c3@mega-nerd.com> Message-ID: <20150414210414.GP31520@weber> On Sat, Apr 11, 2015 at 01:22:22AM -0700, Erik de Castro Lopo wrote: > Tom Ellis wrote: > > > Likewise, I have often wanted to rewrite > > > > case m of > > Nothing -> n > > Just x -> quux x > > Why not emply the maybe function (b -> (a -> b) -> Maybe a -> b) > > maybe n quux m This is a fine idea which I often use on simple examples like this. 
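For readers following the thread, the combinator in question has type maybe :: b -> (a -> b) -> Maybe a -> b, so the default comes first and the Just-branch function second; the sketch below (with a hypothetical quux, purely for illustration) spells out the correspondence, since the argument-order slip comes up again just below.

-- maybe :: b -> (a -> b) -> Maybe a -> b
quux :: Int -> String          -- hypothetical, only to make the example concrete
quux x = "got " ++ show x

example :: Maybe Int -> String
example m = maybe "n" quux m   -- same as: case m of { Nothing -> "n"; Just x -> quux x }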
My proposal has some additional benefits * Closer connection between the names of the constructors and the alternatives (I always think "is it `maybe n quux` or `maybe quux n`"? In fact I made this mistake just today.) * No need to hand write such a function for each ADT you define (arguably they could be autogenerated, but that's a different story). * You could nest matches: data Foo = Foo (Maybe Int) Bool | Bar String case m of Foo (Just 1) -> f Foo Nothing -> g Bar "Hello" -> h From olf at aatal-apotheke.de Tue Apr 14 21:12:35 2015 From: olf at aatal-apotheke.de (Olaf Klinke) Date: Tue, 14 Apr 2015 23:12:35 +0200 Subject: [Haskell-cafe] FunDeps and type inference Message-ID: Dear cafe, I want to write an evaluation function that uncurries its function argument as necessary. Examples should include: eval :: (a -> b) -> a -> b eval :: (a -> b -> c) -> a -> (b -> c) eval :: (a -> b -> c) -> (a,b) -> c and hence have both eval (+) 4 5 eval (+) (4,5) typecheck. My approach is to use a type class: class Uncurry f a b where eval :: f -> a -> b instance Uncurry (a -> b) a b where eval = ($) instance (Uncurry f b c) => Uncurry ((->) a f) (a,b) c where eval f (a,b) = eval (f a) b This works, but not for polymorphic arguments. One must annotate function and argument with concrete types when calling, otherwise the compiler does not know which instance to use. Type inference on ($) is able to infer the type of either of f, a or b in the expression b = f $ a if the types of two of them are known. Thus I am tempted to add functional dependencies class Uncurry f a b | f a -> b, a b -> f, f b -> a but I get scary errors: With only the first of the three dependencies, the coverage condition fails. Adding UndecidableInstances, the code compiles. Now type inference on the return type b works, but one can not use e.g. (+) as function argument. Adding the second dependency results in the compiler rejecting the code claiming "Functional dependencies conflict between instance declarations". I can not quite see where they would, and the compiler does not tell me its counterexample. I can see that eval max (True,False) :: Bool -- by second instance declaration, -- when max :: Bool -> Bool -> Bool eval max (True,False) :: (Bool,Bool) -> (Bool,Bool) -- by first instance declaration -- when max :: (Bool,Bool) -> (Bool,Bool) -> (Bool,Bool) but this ambiguity is precisely what the dependency a b -> f should help to avoid, isn't it? Judging by the number of coverage condition posts on this list this one is easy to get wrong and the compiler messages are not always helpful. Is this a kind problem? Would anyone care to elaborate? Thanks, Olaf From martin.drautzburg at web.de Tue Apr 14 22:09:39 2015 From: martin.drautzburg at web.de (martin) Date: Wed, 15 Apr 2015 00:09:39 +0200 Subject: [Haskell-cafe] Do all "fromList" functions redirect to List operations? Message-ID: <552D9023.4080406@web.de> Hello all, I have a datatype like this: data Time = DPast | T Integer deriving (Eq, Show) data Temporal a = Temporal { at :: Time -> (a, Maybe Time) } and I wrote a fromList function to create one. The fromList function implements its own "at" function and wraps it in a Temporal. IIUC, this means that whenever I invoke "at" on a Temporal created from a List I actually operate on that List. Alternatively I can directly create a Temporal as in exNat = Temporal f where f DPast = (0, Just $ T 1) f (T t) = (t, Just $ T $ t+1) whose "at" function is a lot faster. Do all "fromList" functions behave this way, i.e.
they redirect operations on the new type to List operations? Is there a way to make a Temporal, created via fromList "forget" its List heritage? From douglas.mcclean at gmail.com Wed Apr 15 00:25:59 2015 From: douglas.mcclean at gmail.com (Douglas McClean) Date: Tue, 14 Apr 2015 20:25:59 -0400 Subject: [Haskell-cafe] Why no Floating instance for Data.Fixed / Data.Fixed.Binary In-Reply-To: <552C953E.1090503@pkturner.org> References: <552C39CC.8020909@vex.net> <552C953E.1090503@pkturner.org> Message-ID: Great point, thanks Scott. I will investigate that. Possible avenues include: - living with it, as you say, and identifiying bounds - moving forward only with Data.Fixed.Binary where the values have bounded size, and imposing contexts that verify that the values can't be too large/small for the implementation to work - moving forward with a newtype around Fixed and an arbitrary precision implementation On first look, the second one probably is the best match for my target applications, which are ultimately embedded. Cheers, Doug On Tue, Apr 14, 2015 at 12:19 AM, Scott Turner <2haskell at pkturner.org> wrote: > On 2015-04-13 22:08, Douglas McClean wrote: > > I'd certainly be happy to do it, I'm just concerned that it would be > actively unwanted for a reason that I can't see. > > I will look in to what the procedures are for contributing to base. > > It wasn't my intention to beg the internet to do it for me. > > On Mon, Apr 13, 2015 at 5:49 PM, Albert Y. C. Lai wrote: > >> On 2015-04-13 11:31 AM, Douglas McClean wrote: >> >>> I'm wondering why the decision was made not to have a Floating instance >>> for Data.Fixed. >>> >> >> I have always found economics to be a powerful answer to this kind of >> questions. That is, perhaps simply, there has not been sufficient incentive >> for anyone to do the work. For example, do you want to do it? >> > > It looks hairy to me. The big-number cases would need approaches quite > different from floating point. > sin(31415926.535897932384::Pico) > sin(31415926535897932384.626433832795::Pico) > sin(314159265358979323846264338327950288419.716939937510::Pico) > To return the correct value (0) from each of these examples, and their > successors, requires an implementation able to calculate pi to an unbounded > precision. > > The value of exp(50::Uni) requires more precision than a double can > provide. How far do you go? exp(100::Uni)? > > I hope this doesn't scare you off. Perhaps there's literature on > acceptable limitations when implementing/using such fixed point > transcendental functions. In any case an implementation would be > interesting even if it doesn't provide correct results in the extreme cases. > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > -- J. Douglas McClean (781) 561-5540 (cell) -------------- next part -------------- An HTML attachment was scrubbed... 
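To make Scott's precision point concrete, the naive route through Double (shown below only to illustrate the pitfall, not as a proposed implementation) silently discards exactly the digits that matter for large arguments, which is why a genuine Floating instance for Fixed would need its own argument reduction against an arbitrary-precision pi.

import Data.Fixed (Pico)

-- A deliberately naive sin for Pico via Double. Double carries fewer than
-- 16 significant decimal digits, so for arguments the size of Scott's
-- examples the pico-level digits are lost before sin is even applied,
-- and the "correct value (0)" he mentions cannot be recovered this way.
naiveSinPico :: Pico -> Pico
naiveSinPico = realToFrac . sin . (realToFrac :: Pico -> Double)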
URL: From carter.schonwald at gmail.com Wed Apr 15 02:56:31 2015 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Tue, 14 Apr 2015 19:56:31 -0700 (PDT) Subject: [Haskell-cafe] Improvements to package hosting and security In-Reply-To: References: <4487776e-b862-429c-adae-477813e560f3@googlegroups.com> Message-ID: <33c89d4a-12b9-495b-a151-7e317177b061@googlegroups.com> any use of cryptographic primitives of any form NEEDS to articulate what the trust model is, and what the threat model is likewise, i'm trying to understand who the proposed feature set is meant to serve. Several groups are in the late stages of building prototypes at varying points in the design space for improving package hosting right now for haskell, and I'm personally inclined to let those various parties release the tools, and then experiment with them all, before trying to push heavily for any particular design that hasn't had larger community experimentation. I actually care most about being able to have the full package set be git cloneable, both for low pain on premise hackage hosting for corporate intranets, and also for when i'm on a plane or boat and have no wifi. At my current job, ANY "host packages via s3" approach is totally untenable, and i'm sure among haskell using teams/organizations, this isn't a unique problem! The Author authentication/signing model question in an important one, but I"m uncomfortable with just saying "SHA512 and GPG address that". Theres A LOT of subtlety to designing a signing protocol thats properly audit-able and secure! Indeed, GPG isn't even a darn asymmetric crypto algorithm, its a program that happens to IMPLEMENT many of these algorithms. If we are serious about having robust auditing/signing, handwaving about the cryptographic parts while saying its important is ... kinda irresponsible. And frustrating because it makes it hard to evaluate the hardest parts of the whole engineering problem! The rest of the design is crucially dependent on details of these choices, and yet its that part which isn't specified. to repeat myself: there is a pretty rich design space for how we can evolve future hackage, and i worry that speccing things out and design by committee is going to be less effective than encouraging various parties to build prototypes for their own visions of future hackage, and THEN come together to combine the best parts of everyones ideas/designs. Theres so much diversity in how different people use hackage, i worry that any other way will run into failing to serve the full range of haskell users! cheers On Tuesday, April 14, 2015 at 1:01:17 AM UTC-4, Michael Snoyman wrote: > > That could work in theory. My concern with such an approach is that- > AFAIK- the tooling around that kind of stuff is not very well developed, as > opposed to an approach using Git, SHA512, and GPG, which should be easy to > combine. But I could be completely mistaken on this point; if existing, > well vetted technology exists for this, I'm not opposed to using it. > > On Mon, Apr 13, 2015 at 6:04 PM Arnaud Bailly | Capital Match < > arn... at capital-match.com > wrote: > >> Just thinking aloud but wouldn't it be possible to take advantage of >> cryptographic ledgers a la Bitcoin for authenticating packages and tracking >> the history of change ? This would provide redundancy as the transactions >> log is distributed and "naturally" create a web of trust or at least >> authenticate transactions. 
People uploading or modifying a package would >> have to sign a transactions with someone having enough karma to allow this. >> >> Then packages themselves could be completely and rather safely >> distributed through standard p2p file sharing. >> >> I am not a specialist of crypto money, though. >> >> My 50 cts >> Arnaud >> >> Le lundi 13 avril 2015, Dennis J. McWherter, Jr. > > a ?crit : >> >>> This proposal looks great. The one thing I am failing to understand (and >>> I recognize the proposal is in early stages) is how to ensure redundancy in >>> the system. As far as I can tell, much of this proposal discusses the >>> centralized authority of the system (i.e. ensuring secure distribution) and >>> only references (with little detail) the distributed store. For instance, >>> say I host a package on a personal server and one day I decide to shut that >>> server down; is this package now lost forever? I do see this line: "backup >>> download links to S3" but this implies that the someone is willing to pay >>> for S3 storage for all of the packages. >>> >>> Are there plans to adopt a P2P-like model or something similar to >>> support any sort of replication? Public resources like this seem to come >>> and go, so it would be nice to avoid some of the problems associated with >>> high churn in the network. That said, there is an obvious cost to >>> replication. Likewise, the central authority would have to be updated with >>> new, relevant locations to find the file (as it is currently proposed). >>> >>> In any case, as I said before, the proposal looks great! I am looking >>> forward to this. >>> >>> On Monday, April 13, 2015 at 5:02:46 AM UTC-5, Michael Snoyman wrote: >>>> >>>> Many of you saw the blog post Mathieu wrote[1] about having more >>>> composable community infrastructure, which in particular focused on >>>> improvements to Hackage. I've been discussing some of these ideas with both >>>> Mathieu and others in the community working on some similar thoughts. I've >>>> also separately spent some time speaking with Chris about package >>>> signing[2]. Through those discussions, it's become apparent to me that >>>> there are in fact two core pieces of functionality we're relying on Hackage >>>> for today: >>>> >>>> * A centralized location for accessing package metadata (i.e., the >>>> cabal files) and the package contents themselves (i.e., the sdist tarballs) >>>> * A central authority for deciding who is allowed to make releases of >>>> packages, and make revisions to cabal files >>>> >>>> In my opinion, fixing the first problem is in fact very straightforward >>>> to do today using existing tools. FP Complete already hosts a full Hackage >>>> mirror[3] backed by S3, for instance, and having the metadata mirrored to a >>>> Git repository as well is not a difficult technical challenge. This is the >>>> core of what Mathieu was proposing as far as composable infrastructure, >>>> corresponding to next actions 1 and 3 at the end of his blog post (step 2, >>>> modifying Hackage, is not a prerequesite). In my opinion, such a system >>>> would far surpass in usability, reliability, and extensibility our current >>>> infrastructure, and could be rolled out in a few days at most. >>>> >>>> However, that second point- the central authority- is the more >>>> interesting one. As it stands, our entire package ecosystem is placing a >>>> huge level of trust in Hackage, without any serious way to vet what's going >>>> on there. 
Attack vectors abound, e.g.: >>>> >>>> * Man in the middle attacks: as we are all painfully aware, >>>> cabal-install does not support HTTPS, so a MITM attack on downloads from >>>> Hackage is trivial >>>> * A breach of the Hackage Server codebase would allow anyone to upload >>>> nefarious code[4] >>>> * Any kind of system level vulnerability could allow an attacker to >>>> compromise the server in the same way >>>> >>>> Chris's package signing work addresses most of these vulnerabilities, >>>> by adding a layer of cryptographic signatures on top of Hackage as the >>>> central authority. I'd like to propose taking this a step further: removing >>>> Hackage as the central authority, and instead relying entirely on >>>> cryptographic signatures to release new packages. >>>> >>>> I wrote up a strawman proposal last week[5] which clearly needs work to >>>> be a realistic option. My question is: are people interested in moving >>>> forward on this? If there's no interest, and everyone is satisfied with >>>> continuing with the current Hackage-central-authority, then we can proceed >>>> with having reliable and secure services built around Hackage. But if >>>> others- like me- would like to see a more secure system built from the >>>> ground up, please say so and let's continue that conversation. >>>> >>>> [1] https://www.fpcomplete.com/blog/2015/03/composable- >>>> community-infrastructure >>>> [2] https://github.com/commercialhaskell/commercialhaskell/wiki/ >>>> Package-signing-detailed-propsal >>>> [3] https://www.fpcomplete.com/blog/2015/03/hackage-mirror >>>> [4] I don't think this is just a theoretical possibility for some point >>>> in the future. I have reported an easily trigerrable DoS attack on the >>>> current Hackage Server codebase, which has been unresolved for 1.5 months >>>> now >>>> [5] https://gist.github.com/snoyberg/732aa47a5dd3864051b9 >>>> >>> -- >>> >> You received this message because you are subscribed to the Google Groups >>> "Commercial Haskell" group. >>> To unsubscribe from this group and stop receiving emails from it, send >>> an email to commercialhaskell+unsubscribe at googlegroups.com. >>> To post to this group, send email to commercialhaskell at googlegroups.com. >>> >> To view this discussion on the web visit >>> https://groups.google.com/d/msgid/commercialhaskell/4487776e-b862-429c-adae-477813e560f3%40googlegroups.com >>> >>> . >> >> >>> For more options, visit https://groups.google.com/d/optout. >>> >> >> >> -- >> *Arnaud Bailly* >> >> CTO | Capital Match >> >> CapitalMatch >> >> 71 Ayer Rajah Crescent | #06-16 | Singapore 139951 >> >> (FR) +33 617 121 978 / (SG) +65 8408 7973 | arn... at capital-match.com >> | www.capital-match.com >> >> Disclaimer: >> >> *Capital Match Platform Pte. Ltd. (the "Company") registered in Singapore >> (Co. Reg. No. 201501788H), a subsidiary of Capital Match Holdings Pte. Ltd. >> (Co. Reg. No. 201418682W), provides services that involve arranging for >> multiple parties to enter into loan and invoice discounting agreements. The >> Company does not provide any form of investment advice or recommendations >> regarding any listings on its platform. In providing its services, the >> Company's role is limited to an administrative function and the Company >> does not and will not assume any advisory, fiduciary or other duties to >> clients of its services.* >> >> -------------- next part -------------- An HTML attachment was scrubbed... 
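Carter's point about articulating the trust model up front can be made concrete in a few lines. The sketch below is purely illustrative (it is not the strawman proposal, nor any of the prototypes alluded to above, and every name in it is hypothetical); it encodes only the minimal rule, discussed further down the thread as a "chain of custody", that whoever signed a release decides which keys may sign the next one.

type KeyId = String   -- hypothetical stand-in for a real key fingerprint

data Release = Release
  { relVersion  :: String
  , relSignedBy :: KeyId
  , relNextKeys :: [KeyId]  -- keys this uploader authorises for the next release
  }

-- Chain-of-custody check: every release after the first must be signed by a
-- key authorised by its immediate predecessor. Signature verification itself
-- is elided; only the authorisation rule is shown.
validChain :: [Release] -> Bool
validChain (r1 : r2 : rest) =
  relSignedBy r2 `elem` relNextKeys r1 && validChain (r2 : rest)
validChain _ = True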
URL: From hawu.bnu at gmail.com Wed Apr 15 03:33:26 2015 From: hawu.bnu at gmail.com (Jean Lopes) Date: Wed, 15 Apr 2015 00:33:26 -0300 Subject: [Haskell-cafe] cabal install glade Message-ID: Hello, I am trying to install the Glade package from hackage, and I keep getting exit failure... Hope someone can help me solve it! What I did: $ mkdir ~/haskell/project $ cd ~/haskell/project $ cabal sandbox init $ cabal update $ cabal install alex $ cabal install happy $ cabal install gtk2hs-buildtools $ cabal install gtk #successful until here $ cabal install glade The last statement gave me the following error: $ [1 of 2] Compiling SetupWrapper ( /tmp/cairo-0.12.5.3-5133/cairo-0.12.5.3/SetupWrapper.hs, /tmp/cairo-0.12.5.3-5133/cairo-0.12.5.3/dist/dist-sandbox-acbd4b7/setup/SetupWrapper.o ) $ $ /tmp/cairo-0.12.5.3-5133/cairo-0.12.5.3/SetupWrapper.hs:91:17: $ Ambiguous occurrence ?die? $ It could refer to either ?Distribution.Simple.Utils.die?, $ imported from ?Distribution.Simple.Utils? at /tmp/cairo-0.12.5.3-5133/cairo-0.12.5.3/SetupWrapper.hs:8:1-32 $ or ?System.Exit.die?, $ imported from ?System.Exit? at /tmp/cairo-0.12.5.3-5133/cairo-0.12.5.3/SetupWrapper.hs:21:1-18 $ Failed to install cairo-0.12.5.3 $ [1 of 2] Compiling SetupWrapper ( /tmp/glib-0.12.5.4-5133/glib-0.12.5.4/SetupWrapper.hs, /tmp/glib-0.12.5.4-5133/glib-0.12.5.4/dist/dist-sandbox-acbd4b7/setup/SetupWrapper.o ) $ $ /tmp/glib-0.12.5.4-5133/glib-0.12.5.4/SetupWrapper.hs:91:17: $ Ambiguous occurrence ?die? $ It could refer to either ?Distribution.Simple.Utils.die?, $ imported from ?Distribution.Simple.Utils? at /tmp/glib-0.12.5.4-5133/glib-0.12.5.4/SetupWrapper.hs:8:1-32 $ or ?System.Exit.die?, $ imported from ?System.Exit? at /tmp/glib-0.12.5.4-5133/glib-0.12.5.4/SetupWrapper.hs:21:1-18 $ Failed to install glib-0.12.5.4 $ cabal: Error: some packages failed to install: $ cairo-0.12.5.3 failed during the configure step. The exception was: $ ExitFailure 1 $ gio-0.12.5.3 depends on glib-0.12.5.4 which failed to install. $ glade-0.12.5.0 depends on glib-0.12.5.4 which failed to install. $ glib-0.12.5.4 failed during the configure step. The exception was: $ ExitFailure 1 $ gtk-0.12.5.7 depends on glib-0.12.5.4 which failed to install. $ pango-0.12.5.3 depends on glib-0.12.5.4 which failed to install. Important: You can assume I don't know much. I'm rather new to Haskell/cabal From gershomb at gmail.com Wed Apr 15 04:07:25 2015 From: gershomb at gmail.com (Gershom B) Date: Wed, 15 Apr 2015 00:07:25 -0400 Subject: [Haskell-cafe] [haskell-infrastructure] Improvements to package hosting and security In-Reply-To: <33c89d4a-12b9-495b-a151-7e317177b061@googlegroups.com> References: <4487776e-b862-429c-adae-477813e560f3@googlegroups.com> <33c89d4a-12b9-495b-a151-7e317177b061@googlegroups.com> Message-ID: So I want to focus just on the idea of a ?trust model? to hackage packages. I don?t think we even have a clear spec of the problem we?re trying to solve here in terms of security. In particular, the basic thing hackage is a central authority for is ?packages listed on hackage? ? it provides a namespace, and on top of that provides the ability to explore the code under each segment of the namespace, including docs and code listings. Along with that it provides the ability to search through that namespace for things like package descriptions and names. Now, how does security fit into this? Well, at the moment we can prevent packages from being uploaded by people who are not authorized. 
And whoever is authorized is the first person who uploaded the package, or people they delegate to, or people otherwise added by hackage admins via e.g. the orphaned package takeover process. A problem is this is less a guarantee than we would like since e.g. accounts may be compromised, we could be MITMed (or the upload could be) etc. Hence comes the motivation for some form of signing. Now, I think the proposal suggested is the wrong one ? it says ?this is a trustworthy package? for some notion of a web of trust of something. Webs of trust are hard and work poorly except in the small. It would be better, I think, to have something _orthogonal_ to hackage or any other package distribution system that attempts a _much simpler_ guarantee ? that e.g. the person who signed a package as being ?theirs? is either the same person that signed the prior version of the package, or was delegated by them (or hackage admins). Now, on top of that, we could also have a system that allowed for individual users, if they had some notion of ?a person?s signature? such that they believed it corresponded to a person, to verify that _actual_ signature was used. But there is no web of trust, no idea given of who a user does or doesn?t believe is who they say they are or anything like that. We don?t attempt to guarantee anything more than a ?chain of custody,? which is all we now have (weaker) mechanisms to enforce. In my mind, the key elements of such a system are that it is orthogonal to how code is distributed and that it is opt-in/out. One model to look at might be Apple?s ? distribute signing keys widely, but allow centralized revocation of a malicious actor is found. Another notion, somewhat similar, is ssl certificates. Anybody, including a malicious actor, can get such a certificate. But at least we have the guarantee that once we start talking to some party, malicious or otherwise, no other party will ?swap in? for them midstream. In general, what I?m urging is to limit the scope of what we aim for. We need to give users the tools to enforce the level of trust that they want to enforce, and to verify certain specific claims. But if we shoot for more, we will either have difficult to use system, or will fail in some fashion. And furthermore I think we should have this discussion _independent_ of hackage, which serves a whole number of functions, and until recently hasn?t even _purported_ to even weakly enforce any guarantees about who uploaded the code it hosts. Cheers, Gershom On April 14, 2015 at 10:57:00 PM, Carter Schonwald (carter.schonwald at gmail.com) wrote: > any use of cryptographic primitives of any form NEEDS to articulate what > the trust model is, and what the threat model is > > likewise, i'm trying to understand who the proposed feature set is meant to > serve. > > Several groups are in the late stages of building prototypes at varying > points in the design space for improving package hosting right now for > haskell, and I'm personally inclined to let those various parties release > the tools, and then experiment with them all, before trying to push heavily > for any particular design that hasn't had larger community experimentation. > > I actually care most about being able to have the full package set be git > cloneable, both for low pain on premise hackage hosting for corporate > intranets, and also for when i'm on a plane or boat and have no wifi. 
At > my current job, ANY "host packages via s3" approach is totally untenable, > and i'm sure among haskell using teams/organizations, this isn't a unique > problem! > > The Author authentication/signing model question in an important one, but > I"m uncomfortable with just saying "SHA512 and GPG address that". Theres A > LOT of subtlety to designing a signing protocol thats properly audit-able > and secure! Indeed, GPG isn't even a darn asymmetric crypto algorithm, its > a program that happens to IMPLEMENT many of these algorithms. If we are > serious about having robust auditing/signing, handwaving about the > cryptographic parts while saying its important is ... kinda irresponsible. > And frustrating because it makes it hard to evaluate the hardest parts of > the whole engineering problem! The rest of the design is crucially > dependent on details of these choices, and yet its that part which isn't > specified. > > to repeat myself: there is a pretty rich design space for how we can evolve > future hackage, and i worry that speccing things out and design by > committee is going to be less effective than encouraging various parties to > build prototypes for their own visions of future hackage, and THEN come > together to combine the best parts of everyones ideas/designs. Theres so > much diversity in how different people use hackage, i worry that any other > way will run into failing to serve the full range of haskell users! > > cheers > > On Tuesday, April 14, 2015 at 1:01:17 AM UTC-4, Michael Snoyman wrote: > > > > That could work in theory. My concern with such an approach is that- > > AFAIK- the tooling around that kind of stuff is not very well developed, as > > opposed to an approach using Git, SHA512, and GPG, which should be easy to > > combine. But I could be completely mistaken on this point; if existing, > > well vetted technology exists for this, I'm not opposed to using it. > > > > On Mon, Apr 13, 2015 at 6:04 PM Arnaud Bailly | Capital Match < > > arn... at capital-match.com > wrote: > > > >> Just thinking aloud but wouldn't it be possible to take advantage of > >> cryptographic ledgers a la Bitcoin for authenticating packages and tracking > >> the history of change ? This would provide redundancy as the transactions > >> log is distributed and "naturally" create a web of trust or at least > >> authenticate transactions. People uploading or modifying a package would > >> have to sign a transactions with someone having enough karma to allow this. > >> > >> Then packages themselves could be completely and rather safely > >> distributed through standard p2p file sharing. > >> > >> I am not a specialist of crypto money, though. > >> > >> My 50 cts > >> Arnaud > >> > >> Le lundi 13 avril 2015, Dennis J. McWherter, Jr. > >> > a ?crit : > >> > >>> This proposal looks great. The one thing I am failing to understand (and > >>> I recognize the proposal is in early stages) is how to ensure redundancy in > >>> the system. As far as I can tell, much of this proposal discusses the > >>> centralized authority of the system (i.e. ensuring secure distribution) and > >>> only references (with little detail) the distributed store. For instance, > >>> say I host a package on a personal server and one day I decide to shut that > >>> server down; is this package now lost forever? I do see this line: "backup > >>> download links to S3" but this implies that the someone is willing to pay > >>> for S3 storage for all of the packages. 
> >>> > >>> Are there plans to adopt a P2P-like model or something similar to > >>> support any sort of replication? Public resources like this seem to come > >>> and go, so it would be nice to avoid some of the problems associated with > >>> high churn in the network. That said, there is an obvious cost to > >>> replication. Likewise, the central authority would have to be updated with > >>> new, relevant locations to find the file (as it is currently proposed). > >>> > >>> In any case, as I said before, the proposal looks great! I am looking > >>> forward to this. > >>> > >>> On Monday, April 13, 2015 at 5:02:46 AM UTC-5, Michael Snoyman wrote: > >>>> > >>>> Many of you saw the blog post Mathieu wrote[1] about having more > >>>> composable community infrastructure, which in particular focused on > >>>> improvements to Hackage. I've been discussing some of these ideas with both > >>>> Mathieu and others in the community working on some similar thoughts. I've > >>>> also separately spent some time speaking with Chris about package > >>>> signing[2]. Through those discussions, it's become apparent to me that > >>>> there are in fact two core pieces of functionality we're relying on Hackage > >>>> for today: > >>>> > >>>> * A centralized location for accessing package metadata (i.e., the > >>>> cabal files) and the package contents themselves (i.e., the sdist tarballs) > >>>> * A central authority for deciding who is allowed to make releases of > >>>> packages, and make revisions to cabal files > >>>> > >>>> In my opinion, fixing the first problem is in fact very straightforward > >>>> to do today using existing tools. FP Complete already hosts a full Hackage > >>>> mirror[3] backed by S3, for instance, and having the metadata mirrored to a > >>>> Git repository as well is not a difficult technical challenge. This is the > >>>> core of what Mathieu was proposing as far as composable infrastructure, > >>>> corresponding to next actions 1 and 3 at the end of his blog post (step 2, > >>>> modifying Hackage, is not a prerequesite). In my opinion, such a system > >>>> would far surpass in usability, reliability, and extensibility our current > >>>> infrastructure, and could be rolled out in a few days at most. > >>>> > >>>> However, that second point- the central authority- is the more > >>>> interesting one. As it stands, our entire package ecosystem is placing a > >>>> huge level of trust in Hackage, without any serious way to vet what's going > >>>> on there. Attack vectors abound, e.g.: > >>>> > >>>> * Man in the middle attacks: as we are all painfully aware, > >>>> cabal-install does not support HTTPS, so a MITM attack on downloads from > >>>> Hackage is trivial > >>>> * A breach of the Hackage Server codebase would allow anyone to upload > >>>> nefarious code[4] > >>>> * Any kind of system level vulnerability could allow an attacker to > >>>> compromise the server in the same way > >>>> > >>>> Chris's package signing work addresses most of these vulnerabilities, > >>>> by adding a layer of cryptographic signatures on top of Hackage as the > >>>> central authority. I'd like to propose taking this a step further: removing > >>>> Hackage as the central authority, and instead relying entirely on > >>>> cryptographic signatures to release new packages. > >>>> > >>>> I wrote up a strawman proposal last week[5] which clearly needs work to > >>>> be a realistic option. My question is: are people interested in moving > >>>> forward on this? 
If there's no interest, and everyone is satisfied with > >>>> continuing with the current Hackage-central-authority, then we can proceed > >>>> with having reliable and secure services built around Hackage. But if > >>>> others- like me- would like to see a more secure system built from the > >>>> ground up, please say so and let's continue that conversation. > >>>> > >>>> [1] https://www.fpcomplete.com/blog/2015/03/composable- > >>>> community-infrastructure > >>>> [2] https://github.com/commercialhaskell/commercialhaskell/wiki/ > >>>> Package-signing-detailed-propsal > >>>> [3] https://www.fpcomplete.com/blog/2015/03/hackage-mirror > >>>> [4] I don't think this is just a theoretical possibility for some point > >>>> in the future. I have reported an easily trigerrable DoS attack on the > >>>> current Hackage Server codebase, which has been unresolved for 1.5 months > >>>> now > >>>> [5] https://gist.github.com/snoyberg/732aa47a5dd3864051b9 > >>>> > >>> -- > >>> > >> You received this message because you are subscribed to the Google Groups > >>> "Commercial Haskell" group. > >>> To unsubscribe from this group and stop receiving emails from it, send > >>> an email to commercialhaskell+unsubscribe at googlegroups.com. > >>> To post to this group, send email to commercialhaskell at googlegroups.com. > >>> > >> To view this discussion on the web visit > >>> https://groups.google.com/d/msgid/commercialhaskell/4487776e-b862-429c-adae-477813e560f3%40googlegroups.com > >>> > >>> . > >> > >> > >>> For more options, visit https://groups.google.com/d/optout. > >>> > >> > >> > >> -- > >> *Arnaud Bailly* > >> > >> CTO | Capital Match > >> > >> CapitalMatch > >> > >> 71 Ayer Rajah Crescent | #06-16 | Singapore 139951 > >> > >> (FR) +33 617 121 978 / (SG) +65 8408 7973 | arn... at capital-match.com > >> | www.capital-match.com > >> > >> Disclaimer: > >> > >> *Capital Match Platform Pte. Ltd. (the "Company") registered in Singapore > >> (Co. Reg. No. 201501788H), a subsidiary of Capital Match Holdings Pte. Ltd. > >> (Co. Reg. No. 201418682W), provides services that involve arranging for > >> multiple parties to enter into loan and invoice discounting agreements. The > >> Company does not provide any form of investment advice or recommendations > >> regarding any listings on its platform. In providing its services, the > >> Company's role is limited to an administrative function and the Company > >> does not and will not assume any advisory, fiduciary or other duties to > >> clients of its services.* > >> > >> _______________________________________________ > haskell-infrastructure mailing list > haskell-infrastructure at community.galois.com > http://community.galois.com/mailman/listinfo/haskell-infrastructure > From greg at gregweber.info Wed Apr 15 04:12:00 2015 From: greg at gregweber.info (Greg Weber) Date: Tue, 14 Apr 2015 21:12:00 -0700 Subject: [Haskell-cafe] Improvements to package hosting and security In-Reply-To: References: Message-ID: What security guarantees do we get from this proposal that are not present from Chris's package signing work? Part of the goal of the package signing is that we no longer need to trust Hackage. If it is compromised and packages are compromised, then anyone using signing tools should automatically reject the compromised packages. Right now I think the answer is: that this provides a security model for revisions: it limits what can be done and formalizes the trust of this process in a cryptographic way. 
Whereas with Chris's work there is no concept of a (trusted) revision and a new package must be released? On Mon, Apr 13, 2015 at 3:02 AM, Michael Snoyman wrote: > Many of you saw the blog post Mathieu wrote[1] about having more > composable community infrastructure, which in particular focused on > improvements to Hackage. I've been discussing some of these ideas with both > Mathieu and others in the community working on some similar thoughts. I've > also separately spent some time speaking with Chris about package > signing[2]. Through those discussions, it's become apparent to me that > there are in fact two core pieces of functionality we're relying on Hackage > for today: > > * A centralized location for accessing package metadata (i.e., the cabal > files) and the package contents themselves (i.e., the sdist tarballs) > * A central authority for deciding who is allowed to make releases of > packages, and make revisions to cabal files > > In my opinion, fixing the first problem is in fact very straightforward to > do today using existing tools. FP Complete already hosts a full Hackage > mirror[3] backed by S3, for instance, and having the metadata mirrored to a > Git repository as well is not a difficult technical challenge. This is the > core of what Mathieu was proposing as far as composable infrastructure, > corresponding to next actions 1 and 3 at the end of his blog post (step 2, > modifying Hackage, is not a prerequesite). In my opinion, such a system > would far surpass in usability, reliability, and extensibility our current > infrastructure, and could be rolled out in a few days at most. > > However, that second point- the central authority- is the more interesting > one. As it stands, our entire package ecosystem is placing a huge level of > trust in Hackage, without any serious way to vet what's going on there. > Attack vectors abound, e.g.: > > * Man in the middle attacks: as we are all painfully aware, cabal-install > does not support HTTPS, so a MITM attack on downloads from Hackage is > trivial > * A breach of the Hackage Server codebase would allow anyone to upload > nefarious code[4] > * Any kind of system level vulnerability could allow an attacker to > compromise the server in the same way > > Chris's package signing work addresses most of these vulnerabilities, by > adding a layer of cryptographic signatures on top of Hackage as the central > authority. I'd like to propose taking this a step further: removing Hackage > as the central authority, and instead relying entirely on cryptographic > signatures to release new packages. > > I wrote up a strawman proposal last week[5] which clearly needs work to be > a realistic option. My question is: are people interested in moving forward > on this? If there's no interest, and everyone is satisfied with continuing > with the current Hackage-central-authority, then we can proceed with having > reliable and secure services built around Hackage. But if others- like me- > would like to see a more secure system built from the ground up, please say > so and let's continue that conversation. > > [1] > https://www.fpcomplete.com/blog/2015/03/composable-community-infrastructure > > [2] > https://github.com/commercialhaskell/commercialhaskell/wiki/Package-signing-detailed-propsal > > [3] https://www.fpcomplete.com/blog/2015/03/hackage-mirror > [4] I don't think this is just a theoretical possibility for some point in > the future. 
I have reported an easily trigerrable DoS attack on the current > Hackage Server codebase, which has been unresolved for 1.5 months now > [5] https://gist.github.com/snoyberg/732aa47a5dd3864051b9 > > -- > You received this message because you are subscribed to the Google Groups > "Commercial Haskell" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to commercialhaskell+unsubscribe at googlegroups.com. > To post to this group, send email to commercialhaskell at googlegroups.com. > To view this discussion on the web visit > https://groups.google.com/d/msgid/commercialhaskell/CAKA2JgL4MviHic52_S3P8RqxyJndkj3oFA%2BPVG11AAgMhMJksw%40mail.gmail.com > > . > For more options, visit https://groups.google.com/d/optout. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From michael at snoyman.com Wed Apr 15 04:34:45 2015 From: michael at snoyman.com (Michael Snoyman) Date: Wed, 15 Apr 2015 04:34:45 +0000 Subject: [Haskell-cafe] Improvements to package hosting and security In-Reply-To: <33c89d4a-12b9-495b-a151-7e317177b061@googlegroups.com> References: <4487776e-b862-429c-adae-477813e560f3@googlegroups.com> <33c89d4a-12b9-495b-a151-7e317177b061@googlegroups.com> Message-ID: On Wed, Apr 15, 2015 at 5:56 AM Carter Schonwald wrote: > any use of cryptographic primitives of any form NEEDS to articulate what > the trust model is, and what the threat model is > > likewise, i'm trying to understand who the proposed feature set is meant > to serve. > > Several groups are in the late stages of building prototypes at varying > points in the design space for improving package hosting right now for > haskell, and I'm personally inclined to let those various parties release > the tools, and then experiment with them all, before trying to push heavily > for any particular design that hasn't had larger community experimentation. > > I'd be fine with that, if there was public discussion of what those projects are trying to solve. Of the ones that I have asked questions about, I haven't heard any of them trying to address the trust/security issues I've raised here, which is why I'm asking the mailing list if there's interest. I'm not OK with simply stalling any community process for improving our situation because "someone's working on something related, and it'll be done Real Soon Now(tm)." That's a recipe for stagnation. > I actually care most about being able to have the full package set be git > cloneable, both for low pain on premise hackage hosting for corporate > intranets, and also for when i'm on a plane or boat and have no wifi. At > my current job, ANY "host packages via s3" approach is totally untenable, > and i'm sure among haskell using teams/organizations, this isn't a unique > problem! > > I agree completely. And similarly, hosting all packages in a Git repository is *also* unusable in other situations, such as normal users wanting to get a minimal set of downloads to get started on a project. That's why I left the download information in this proposal at URL; you can add different URLs to support Git repository contents as well. It would also be pretty easy to modify the all-cabal-files repo I pointed to and create a repository containing the tarballs themselves. I don't know if Github would like hosting that much content, but I have no problem helping roll that out. > The Author authentication/signing model question in an important one, but > I"m uncomfortable with just saying "SHA512 and GPG address that". 
> The Author authentication/signing model question is an important one, but > I'm uncomfortable with just saying "SHA512 and GPG address that". There's a > LOT of subtlety to designing a signing protocol that's properly audit-able > and secure! Indeed, GPG isn't even a darn asymmetric crypto algorithm, it's > a program that happens to IMPLEMENT many of these algorithms. If we are > serious about having robust auditing/signing, handwaving about the > cryptographic parts while saying it's important is ... kinda irresponsible. > And frustrating because it makes it hard to evaluate the hardest parts of > the whole engineering problem! The rest of the design is crucially > dependent on details of these choices, and yet it's that part which isn't > specified. > > I think you're assuming that my "proposal" was more than a point of discussion. It's not. When starting this thread, I tried to make it clear that this is to gauge interest in creating a real solution. If there's interest, we should figure out these points. If there's no interest, then I'm glad I didn't invest weeks in coming up with a more robust proposal. > to repeat myself: there is a pretty rich design space for how we can > evolve future hackage, and I worry that speccing things out and design by > committee is going to be less effective than encouraging various parties to > build prototypes for their own visions of future hackage, and THEN come > together to combine the best parts of everyone's ideas/designs. There's so > much diversity in how different people use hackage, I worry that any other > way will run into failing to serve the full range of Haskell users! > > I disagree here pretty strongly. Something with a strong social element requires discussion upfront, not someone creating a complete solution and then trying to impose it on everyone else. There are certainly things that *can* be done without discussion. Hosting cabal and tar.gz files in a Git repo, or mirroring to S3, are orthogonal actions that require no coordination, for instance. But tweaking the way we view the trust model of Hackage is pretty central, and needs discussion. Michael > cheers > > On Tuesday, April 14, 2015 at 1:01:17 AM UTC-4, Michael Snoyman wrote: > >> That could work in theory. My concern with such an approach is that- >> AFAIK- the tooling around that kind of stuff is not very well developed, as >> opposed to an approach using Git, SHA512, and GPG, which should be easy to >> combine. But I could be completely mistaken on this point; if existing, >> well vetted technology exists for this, I'm not opposed to using it. >> >> On Mon, Apr 13, 2015 at 6:04 PM Arnaud Bailly | Capital Match < >> arn... at capital-match.com> wrote: >> > Just thinking aloud but wouldn't it be possible to take advantage of >>> cryptographic ledgers a la Bitcoin for authenticating packages and tracking >>> the history of change? This would provide redundancy as the transaction >>> log is distributed and "naturally" create a web of trust or at least >>> authenticate transactions. People uploading or modifying a package would >>> have to sign a transaction with someone having enough karma to allow this. >>> >>> Then packages themselves could be completely and rather safely >>> distributed through standard p2p file sharing. >>> >>> I am not a specialist of crypto money, though. >>> >>> My 50 cts >>> Arnaud >>> >>> On Monday, 13 April 2015, Dennis J. McWherter, Jr. >>> wrote: >>>> This proposal looks great. The one thing I am failing to understand >>>> (and I recognize the proposal is in early stages) is how to ensure >>>> redundancy in the system.
As far as I can tell, much of this proposal >>>> discusses the centralized authority of the system (i.e. ensuring secure >>>> distribution) and only references (with little detail) the distributed >>>> store. For instance, say I host a package on a personal server and one day >>>> I decide to shut that server down; is this package now lost forever? I do >>>> see this line: "backup download links to S3" but this implies that >>>> someone is willing to pay for S3 storage for all of the packages. >>>> >>>> Are there plans to adopt a P2P-like model or something similar to >>>> support any sort of replication? Public resources like this seem to come >>>> and go, so it would be nice to avoid some of the problems associated with >>>> high churn in the network. That said, there is an obvious cost to >>>> replication. Likewise, the central authority would have to be updated with >>>> new, relevant locations to find the file (as it is currently proposed). >>>> >>>> In any case, as I said before, the proposal looks great! I am looking >>>> forward to this. >>>> >>>> On Monday, April 13, 2015 at 5:02:46 AM UTC-5, Michael Snoyman wrote: >>>>> [...]
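Dennis's redundancy question and the strawman's "hash plus several download locations" idea both come down, on the client side, to a loop of roughly this shape. The sketch below is illustrative only: the mirror list is invented, and the use of the SHA and http-conduit packages is one plausible choice, not something the proposal specifies.

```
module FetchVerified where

import           Control.Exception    (SomeException, try)
import qualified Data.ByteString.Lazy as L
import           Data.Digest.Pure.SHA (sha512, showDigest)  -- package "SHA"
import           Network.HTTP.Conduit (simpleHttp)          -- package "http-conduit"

-- Try each mirror in turn and accept the first download whose SHA-512
-- matches the digest recorded in the (separately obtained) metadata.
fetchVerified :: String     -- ^ expected SHA-512, hex encoded
              -> [String]   -- ^ candidate URLs: Hackage, an S3 mirror, a Git mirror, ...
              -> IO (Maybe L.ByteString)
fetchVerified _        []           = return Nothing
fetchVerified expected (url : rest) = do
  result <- try (simpleHttp url) :: IO (Either SomeException L.ByteString)
  case result of
    Right bytes | showDigest (sha512 bytes) == expected -> return (Just bytes)
    _ -> fetchVerified expected rest  -- unreachable mirror or wrong bytes: try the next one
```

A dead personal server then only costs availability of that one URL, not integrity: any mirror that still has the right bytes will do, because the hash, not the host, is what is trusted.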
From michael at snoyman.com Wed Apr 15 04:47:56 2015 From: michael at snoyman.com (Michael Snoyman) Date: Wed, 15 Apr 2015 04:47:56 +0000 Subject: [Haskell-cafe] [haskell-infrastructure] Improvements to package hosting and security In-Reply-To: References: <4487776e-b862-429c-adae-477813e560f3@googlegroups.com> <33c89d4a-12b9-495b-a151-7e317177b061@googlegroups.com> Message-ID: I'd like to ignore features of Hackage like "browsing code" for purposes of this discussion. That's clearly something that can be a feature layered on top of a real package store by a web interface. I'm focused on just that lower level of actually creating a coherent set of packages. In that realm, I think you've understated what trust we're putting in Hackage today. We have to trust it to: * Properly authenticate users * Keep authorization lists of who can make uploads/revisions (and who can grant those rights) * Allow safe uploads of packages and metadata * Distribute packages and metadata to users safely I think we agree, but I'll say it outright: Hackage currently *cannot* succeed at the last two points, since all interactions with it from cabal-install are occurring over non-secure HTTP connections, making it vulnerable to MITM attacks on both upload and download. The package signing work- if completely adopted by the community- would address that. What I'm raising here is the first two points. And even those points have an impact on the other two points. To draw this out a bit more clearly: * Currently, authorized uploaders are identified by a user name and a password on Hackage. How do we correlate that to a GPG key? Ideally, the central upload authority would be collecting GPG public keys for all uploaders so that signature verification can happen correctly. * There's no way for an outside authority to vet the 00-index.tar.gz file downloaded from Hackage; it's a completely opaque, black box. Having the set of authorization rules be publicly viewable, auditable, and verifiable overcomes that. I'd really like to make sure that we're separating two questions here: (1) Is there a problem with the way we're trusting Hackage today? (2) Is the strawman proposal I sent anywhere close to a real solution? I feel strongly about (1), and very weakly about (2).
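One very literal reading of "publicly viewable, auditable, and verifiable" authorization rules, offered only as an illustration and not as anything the thread has agreed on, is a plain text file kept alongside the metadata, mapping each package to the key fingerprints allowed to sign for it. The file format and names below are invented.

```
module Authorization where

import qualified Data.Map.Strict as Map

type PackageName = String
type Fingerprint = String  -- e.g. a GPG key fingerprint, hex encoded

-- One line per package: "<package> <fingerprint1> <fingerprint2> ...".
-- A '#' starts a comment.  The format itself is hypothetical.
parseAuthFile :: String -> Map.Map PackageName [Fingerprint]
parseAuthFile = Map.fromListWith (++) . concatMap parseLine . lines
  where
    parseLine l = case words (takeWhile (/= '#') l) of
      (pkg : fps@(_ : _)) -> [(pkg, fps)]
      _                   -> []

-- The check a client, or an auditor replaying the Git history of the file,
-- would run for every signed upload or revision.
mayUpload :: Map.Map PackageName [Fingerprint] -> PackageName -> Fingerprint -> Bool
mayUpload auth pkg fp = fp `elem` Map.findWithDefault [] pkg auth
```

Because such a file lives in ordinary version control, "who was allowed to upload what, when, and who granted that right" becomes a question about Git history rather than about the internal state of Hackage.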
On Wed, Apr 15, 2015 at 7:07 AM Gershom B wrote: > So I want to focus just on the idea of a "trust model" to hackage packages. > > I don't think we even have a clear spec of the problem we're trying to > solve here in terms of security. In particular, the basic thing hackage is > a central authority for is "packages listed on hackage" -- it provides a > namespace, and on top of that provides the ability to explore the code > under each segment of the namespace, including docs and code listings. > Along with that it provides the ability to search through that namespace > for things like package descriptions and names. > > Now, how does security fit into this? Well, at the moment we can prevent > packages from being uploaded by people who are not authorized. And whoever > is authorized is the first person who uploaded the package, or people they > delegate to, or people otherwise added by hackage admins via e.g. the > orphaned package takeover process. A problem is this is less a guarantee > than we would like since e.g. accounts may be compromised, we could be > MITMed (or the upload could be) etc. > > Hence comes the motivation for some form of signing. Now, I think the > proposal suggested is the wrong one -- it says "this is a trustworthy > package" for some notion of a web of trust or something. Webs of trust are > hard and work poorly except in the small. It would be better, I think, to > have something _orthogonal_ to hackage or any other package distribution > system that attempts a _much simpler_ guarantee -- that e.g. the person who > signed a package as being "theirs" is either the same person that signed > the prior version of the package, or was delegated by them (or hackage > admins). Now, on top of that, we could also have a system that allowed for > individual users, if they had some notion of "a person's signature" such > that they believed it corresponded to a person, to verify that _actual_ > signature was used. But there is no web of trust, no idea given of who a > user does or doesn't believe is who they say they are or anything like > that. We don't attempt to guarantee anything more than a "chain of > custody," which is all we now have (weaker) mechanisms to enforce. > > In my mind, the key elements of such a system are that it is orthogonal to > how code is distributed and that it is opt-in/out. > > One model to look at might be Apple's -- distribute signing keys widely, > but allow centralized revocation of a malicious actor if one is found. Another > notion, somewhat similar, is ssl certificates. Anybody, including a > malicious actor, can get such a certificate. But at least we have the > guarantee that once we start talking to some party, malicious or otherwise, > no other party will "swap in" for them midstream. > > In general, what I'm urging is to limit the scope of what we aim for. We > need to give users the tools to enforce the level of trust that they want > to enforce, and to verify certain specific claims. But if we shoot for > more, we will either have a difficult-to-use system, or will fail in some > fashion. And furthermore I think we should have this discussion > _independent_ of hackage, which serves a whole number of functions, and > until recently hasn't even _purported_ to even weakly enforce any > guarantees about who uploaded the code it hosts. > > Cheers, > Gershom > > > On April 14, 2015 at 10:57:00 PM, Carter Schonwald ( > carter.schonwald at gmail.com) wrote: > > [...]
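The "chain of custody" guarantee described above is small enough to write down as a pure check. A minimal sketch, with all types and field names invented for the purpose; in particular it assumes the cryptographic signature on each release has already been verified by other means.

```
module ChainOfCustody where

-- One release of a package as a verifier sees it, after the signature
-- itself has been checked cryptographically.
data Release = Release
  { relVersion   :: [Int]     -- e.g. [1,2,0]
  , relSigner    :: String    -- fingerprint of the key that signed this release
  , relDelegates :: [String]  -- keys this signer has explicitly delegated to
  }

-- Walk the releases oldest to newest and insist that every signer is either
-- the previous signer or someone the previous signer delegated to.
custodyIntact :: [Release] -> Bool
custodyIntact rels = and (zipWith ok rels (drop 1 rels))
  where
    ok prev next = relSigner next == relSigner prev
                     || relSigner next `elem` relDelegates prev
```

Nothing in this check says who the signer "really" is; it only says the name has not silently changed hands, which is exactly the weaker, orthogonal guarantee being argued for here.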
From michael at snoyman.com Wed Apr 15 04:50:51 2015 From: michael at snoyman.com (Michael Snoyman) Date: Wed, 15 Apr 2015 04:50:51 +0000 Subject: [Haskell-cafe] Improvements to package hosting and security In-Reply-To: References: Message-ID: Yes, I think you've summarized the security aspects of this nicely. There's also the reliability and availability guarantees we get from a distributed system, but that's outside the realm of security (unless you're talking about denial of service). On Wed, Apr 15, 2015 at 7:12 AM Greg Weber wrote: > What security guarantees do we get from this proposal that are not present > from Chris's package signing work? > Part of the goal of the package signing is that we no longer need to trust > Hackage. If it is compromised and packages are compromised, then anyone > using signing tools should automatically reject the compromised packages. > > Right now I think the answer is that this provides a security model for > revisions: it limits what can be done and formalizes the trust of this > process in a cryptographic way. Whereas with Chris's work there is no > concept of a (trusted) revision and a new package must be released? > > On Mon, Apr 13, 2015 at 3:02 AM, Michael Snoyman wrote: > >> [...]
From greg at gregweber.info Wed Apr 15 05:01:53 2015 From: greg at gregweber.info (Greg Weber) Date: Tue, 14 Apr 2015 22:01:53 -0700 Subject: [Haskell-cafe] Improvements to package hosting and security In-Reply-To: References: Message-ID: On Tue, Apr 14, 2015 at 9:50 PM, Michael Snoyman wrote: > Yes, I think you've summarized the security aspects of this nicely. > There's also the reliability and availability guarantees we get from a > distributed system, but that's outside the realm of security (unless you're > talking about denial of service). > Is it possible to separate out the concept of trusted revisions from a distributed hackage (into 2 separate proposals) then? If Hackage wanted to, it could implement trusted revisions. Or some other (distributed or non-distributed) package service could implement it (as long as the installer tool knows to check for revisions there; perhaps this would be added to Chris's signing tooling). > > On Wed, Apr 15, 2015 at 7:12 AM Greg Weber wrote: > >> [...]
From michael at snoyman.com Wed Apr 15 05:08:43 2015 From: michael at snoyman.com (Michael Snoyman) Date: Wed, 15 Apr 2015 05:08:43 +0000 Subject: [Haskell-cafe] Improvements to package hosting and security In-Reply-To: References: Message-ID: On Wed, Apr 15, 2015 at 8:02 AM Greg Weber wrote: > [...] > Is it possible to separate out the concept of trusted revisions from a > distributed hackage (into 2 separate proposals) then? > If Hackage wanted to, it could implement trusted revisions. Or some other > (distributed or non-distributed) package service could implement it (as > long as the installer tool knows to check for revisions there, perhaps this > would be added to Chris's signing tooling). > > It would be a fundamental shift away from how Hackage does things today. I think the necessary steps would be: 1. Hackage ships all revisions to cabal files somehow (personally, I think it should be doing this anyway). 2. We have a list of trustees who are allowed to edit metadata. The signing work already has to recapture that information for allowed uploaders since Hackage doesn't collect GPG keys. 3. Every time a revision is made, the person making the revision would need to sign the new revision. I'm open to other ideas, this is just what came to mind first. Michael
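Step 3 above ("every revision is signed by the person making it") can be prototyped with nothing more than the gpg command line. A rough sketch; the file names and the "DEADBEEF" key id are placeholders, and shelling out to gpg from Haskell is just one possible shape for the tooling.

```
module SignRevision where

import System.Exit    (ExitCode (..))
import System.Process (readProcessWithExitCode)

-- Produce a detached, ASCII-armoured signature (e.g. foo.cabal.asc) next to
-- the revised cabal file, signed with the given key.
signRevision :: FilePath -> String -> IO Bool
signRevision cabalFile keyId = do
  (code, _out, err) <- readProcessWithExitCode "gpg"
    ["--batch", "--yes", "--local-user", keyId, "--armor", "--detach-sign", cabalFile] ""
  case code of
    ExitSuccess   -> return True
    ExitFailure _ -> putStrLn err >> return False

-- What a client (or trustee tooling) would run before accepting the revision.
verifyRevision :: FilePath -> FilePath -> IO Bool
verifyRevision sigFile cabalFile = do
  (code, _, _) <- readProcessWithExitCode "gpg" ["--verify", sigFile, cabalFile] ""
  return (code == ExitSuccess)
```

For example, signRevision "warp.cabal" "DEADBEEF" would leave warp.cabal.asc beside the file; which keys count as trustee keys is precisely the open policy question in point 2.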
From vogt.adam at gmail.com Wed Apr 15 05:09:10 2015 From: vogt.adam at gmail.com (adam vogt) Date: Wed, 15 Apr 2015 01:09:10 -0400 Subject: [Haskell-cafe] FunDeps and type inference In-Reply-To: References: Message-ID: Hi Olaf, You can use ~ to let instances get selected before ghc has deduced that two types are equal. https://gist.github.com/aavogt/1cb0ca6f1654b09111d3 is closer to what you're looking for, except "eval (+) (4,5)" doesn't work unless the result type is given. Besides the ghc manual, it might also help to look at http://okmij.org/ftp/Haskell/typecast.html Regards, Adam On Tue, Apr 14, 2015 at 5:12 PM, Olaf Klinke wrote: > Dear cafe, > > I want to write an evaluation function that uncurries its function > argument as necessary. Examples should include: > > eval :: (a -> b) -> a -> b > eval :: (a -> b -> c) -> a -> (b -> c) > eval :: (a -> b -> c) -> (a,b) -> c > and hence have both > eval (+) 4 5 > eval (+) (4,5) > typecheck. > > My approach is to use a type class: > > class Uncurry f a b where > eval :: f -> a -> b > instance Uncurry (a -> b) a b where > eval = ($) > instance (Uncurry f b c) => Uncurry ((->) a f) (a,b) c where > eval f (a,b) = eval (f a) b > > This works, but not for polymorphic arguments. One must annotate function > and argument with concrete types when calling, otherwise the runtime does > not know what instance to use. > > Type inference on ($) is able to infer the type of either of f, a or b in > the expression b = f $ a if the types of the other two are known. Thus I am tempted to > add functional dependencies > > class Uncurry f a b | f a -> b, a b -> f, f b -> a > > but I get scary errors: With only the first of the three dependencies, the > coverage condition fails. Adding UndecidableInstances, the code compiles. > Now type inference on the return type b works, but one cannot use e.g. (+) > as function argument. Adding the second dependency results in the compiler > rejecting the code claiming "Functional dependencies conflict between > instance declarations". I cannot quite see where they would, and the > compiler does not tell me its counterexample. > I can see that > eval max (True,False) :: Bool > -- by second instance declaration, > -- when max :: Bool -> Bool -> Bool > eval max (True,False) :: (Bool,Bool) -> (Bool,Bool) > -- by first instance declaration > -- when max :: (Bool,Bool) -> (Bool,Bool) -> (Bool,Bool) > but this ambiguity is precisely what the dependency a b -> f should help > to avoid, isn't it? > > Judging by the number of coverage condition posts on this list this one is > easy to get wrong and the compiler messages are not always helpful. Is this > a kind problem? Would anyone care to elaborate? > > Thanks, Olaf
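For the FunDeps sub-thread: the "~" trick Adam mentions can be shown independently of his gist with a toy class. This is not Olaf's Uncurry, and the extension list may be more than strictly necessary.

```
{-# LANGUAGE FlexibleContexts, FlexibleInstances, TypeFamilies, UndecidableInstances #-}
module EqualityTrick where

class Blank a where
  blank :: a

-- The head covers *every* list type, so the instance is selected as soon as
-- GHC knows the wanted type is a list; only then does (a ~ Char) pin the
-- element type down.  With "instance Blank String" instead, the element type
-- would have to be known to be Char before the instance could be chosen.
instance (a ~ Char) => Blank [a] where
  blank = "N/A"

demo :: Int
demo = length (reverse blank)  -- the element type is never written anywhere,
                               -- yet the instance still fires and demo == 3
```

Adam's caveat still applies to the original problem, of course: as he notes, for eval (+) (4,5) the pair case can only be resolved once the result type is given.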
From gershomb at gmail.com Wed Apr 15 05:18:29 2015 From: gershomb at gmail.com (Gershom B) Date: Wed, 15 Apr 2015 01:18:29 -0400 Subject: [Haskell-cafe] [haskell-infrastructure] Improvements to package hosting and security In-Reply-To: References: <4487776e-b862-429c-adae-477813e560f3@googlegroups.com> <33c89d4a-12b9-495b-a151-7e317177b061@googlegroups.com> Message-ID: Ok, to narrow it down, you are concerned about the ability to > * Properly authenticate users > * Keep authorization lists of who can make uploads/revisions (and who can grant those rights) and more specifically: > * Currently, authorized uploaders are identified by a user name and a > password on Hackage. How do we correlate that to a GPG key? Ideally, the > central upload authority would be collecting GPG public keys for all > uploaders so that signature verification can happen correctly. > * There's no way for an outside authority to vet the 00-index.tar.gz file > downloaded from Hackage; it's a completely opaque, black box. Having the > set of authorization rules be publicly viewable, auditable, and verifiable > overcomes that. On 1) now you have the problem "what if the central upload authority's store of GPG keys is violated?". You've just kicked the can. "Web of Trust" is not a tractable answer. My answer is simpler: I can verify that the signer of version 1 of a package is the same as the signer of version 0.1. This is no small trick. And I can do so orthogonal to hackage. Now, if I really want to verify that the signer of version 1 is the person who is "Michael Snoyman" and is in fact the exact Michael Snoyman I intend, then I need to get your key by some entirely other mechanism. And that is my problem, and, definitionally, no centralized store can help me in that regard unless I trust it absolutely -- which is precisely what I don't want to do. On 2) I would like to understand more of what your concern with regards to "auditing" is. What specific information would you like to know that you do not? Improved audit logs seem again orthogonal to any of these other security concerns, unless you are simply worried about a "metadata only" attack vector. In any case, we can incorporate the same signing practices for metadata as for packages -- orthogonal to hackage or any other particular storage mechanism. It is simply an unrelated question. And, honestly, compared to all the other issues we face I feel it is relatively minor (the signing component, not a better audit trail). In any case, your account of the first two points reveals some of the confusion I think that remains: > * Allow safe uploads of packages and metadata > * Distribute packages and metadata to users safely What is the definition of "safe" here? My understanding is that in the field of security one doesn't talk about "safe" in general, but with regards to a particular profile of a sort of attacker, and always only as a difference of degree, not kind. So who do we want to prevent from doing what? How "safe" is "safe"? Safe from what? From a malicious script-kid, from a malicious collective "in it for the lulz," from a targeted attack against a particular end-client, from just poorly/incompetently written code? What are we "trusting"? What concrete guarantees would we like to make about user interactions with packages and package repositories? While I'm interrogating language, let me pick out one other thing I don't understand: "creating a coherent set of packages" -- what do you mean by "coherent"? Is this something we can specify? Hackage isn't supposed to be coherent -- it is supposed to be everything. Within that "everything" we are now attempting to manage metadata to provide accurate dependency information, at a local level. But we have no claims about any global coherence conditions on the resultant graphs. Certainly we intend to be coherent in the sense that the combination of a name/version/revision should indicate one and only one thing (and that all revisions of a version should differ at most in dependency constraints in their cabal file) -- but this is a fairly minimal criterion. And in fact, it is one that is nearly orthogonal to security concerns altogether. What I'm driving at is -- it sounds like we _mainly_ want new decentralized security mechanisms, at the cabal level, but we also want, potentially, a few centralized mechanisms. However, centralization is a weakness from a security standpoint. So, ideally, we want as few centralized mechanisms as possible, and we want the consequences of those mechanisms being broken to be "recoverable" at the point of local verification. Let me spell out a threat model where that makes sense. An adversary takes control of the entire hackage server through some zero-day Linux exploit we have no control over -- or perhaps they are an employee at the datacenter where we host hackage and secure control via more direct means, etc. They have total and complete control over the box. They can accept anything they want, and they can serve anything they want. And they are sophisticated enough to be undetected for say a week. Now, we want it to be the case that _whatever_ this adversary does, they cannot "trick" someone who types "cabal install warp" into instead cabal installing something malicious. How do we do so? _Now_ we have a security problem that is concrete enough to discuss. And furthermore, I would claim that if we don't have at least some story for this threat model, then we haven't established anything much "safer" at all. This points towards a large design space, and a lot of potential ideas, all of which feel entirely different than the "strawman" proposal, since the emphasis there is towards the changes to a centralized mechanism (even if in turn, the product of that mechanism itself is then distributed and git cloneable or whatever).
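Whatever concrete design falls out of that threat model, the client-side decision it forces has roughly this shape. A sketch only, with invented record fields; the crucial property is that every input is either computed locally or obtained out of band, never taken from the possibly hostile server on faith.

```
module ClientCheck where

-- What the client has in hand after downloading a package and its metadata
-- from a server it does not trust.
data Fetched = Fetched
  { tarballHash    :: String  -- hash the client computed itself from the bytes
  , indexHash      :: String  -- hash the signed index entry claims for this version
  , indexSignedBy  :: String  -- fingerprint of the key that signed the index entry
  , signatureValid :: Bool    -- result of the actual cryptographic verification
  }

-- Keys the client chose to trust, obtained out of band, never from the server.
type TrustedKeys = [String]

okToInstall :: TrustedKeys -> Fetched -> Bool
okToInstall trusted f =
     signatureValid f                -- the metadata really was signed ...
  && indexSignedBy f `elem` trusted  -- ... by a key this client already trusts ...
  && tarballHash f == indexHash f    -- ... and it vouches for exactly these bytes
```

If every cabal-install run ends in a check like this, a fully compromised server can refuse to serve packages, but it cannot substitute its own.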
Cheers, Gershom On April 15, 2015 at 12:47:58 AM, Michael Snoyman (michael at snoyman.com) wrote: > [...]

From greg at gregweber.info Wed Apr 15 05:20:09 2015 From: greg at gregweber.info (Greg Weber) Date: Tue, 14 Apr 2015 22:20:09 -0700 Subject: [Haskell-cafe] Improvements to package hosting and security In-Reply-To: References: Message-ID: On Tue, Apr 14, 2015 at 10:08 PM, Michael Snoyman wrote: > It would be a fundamental shift away from how Hackage does things today. I > think the necessary steps would be: > > 1. Hackage ships all revisions to cabal files somehow (personally, I think > it should be doing this anyway). > 2. We have a list of trustees who are allowed to edit metadata. The > signing work already has to recapture that information for allowed > uploaders since Hackage doesn't collect GPG keys > 3. Every time a revision is made, the person making the revision would > need to sign the new revision > > I'm open to other ideas, this is just what came to mind first. > Perhaps this is not really doable, but I was thinking there should be a proposal for a specification for trusted revisions. These are integration details for Hackage, just as the current proposal includes implementation details for a distributed package service. I actually think the easiest way to make revisions secure with Hackage is to precisely limit what can be revised. If one can only change an upper bound of an existing dependency, that greatly limits the attack vectors.
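The "only upper bounds may change" restriction at the end of the previous message is mechanical enough to check automatically. A rough sketch over a deliberately simplified picture of a build-depends list; real tooling would work on parsed .cabal files via the Cabal library, and the policy details are exactly what is still undecided.

```
module RevisionCheck where

import qualified Data.Map.Strict as Map

type PackageName = String
type Version     = [Int]

-- A deliberately simplified view of one build-depends entry.
data Bounds = Bounds
  { lowerBound :: Maybe Version
  , upperBound :: Maybe Version
  } deriving (Eq, Show)

type Deps = Map.Map PackageName Bounds

-- A revision is acceptable under the "upper bounds only" rule when it keeps
-- exactly the same set of dependencies, leaves every lower bound alone, and
-- therefore differs at most in upper bounds.
upperBoundsOnly :: Deps -> Deps -> Bool
upperBoundsOnly before after =
     Map.keys before == Map.keys after
  && and (Map.elems (Map.intersectionWith sameLower before after))
  where
    sameLower old new = lowerBound old == lowerBound new
```

Whether an upper bound may only be relaxed, only tightened, or both is a policy choice this sketch deliberately leaves open.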
From michael at snoyman.com Wed Apr 15 05:43:40 2015 From: michael at snoyman.com (Michael Snoyman) Date: Wed, 15 Apr 2015 05:43:40 +0000 Subject: [Haskell-cafe] [haskell-infrastructure] Improvements to package hosting and security In-Reply-To: References: <4487776e-b862-429c-adae-477813e560f3@googlegroups.com> <33c89d4a-12b9-495b-a151-7e317177b061@googlegroups.com> Message-ID: On Wed, Apr 15, 2015 at 8:19 AM Gershom B wrote: > Ok, to narrow it down, you are concerned about the ability to > > > * Properly authenticate users > > * Keep authorization lists of who can make uploads/revisions (and who can > grant those rights) > > and more specifically: > > > * Currently, authorized uploaders are identified by a user name and a > > password on Hackage. How do we correlate that to a GPG key? Ideally, the > > central upload authority would be collecting GPG public keys for all > > uploaders so that signature verification can happen correctly. > > * There's no way for an outside authority to vet the 00-index.tar.gz file > > downloaded from Hackage; it's a completely opaque, black box. Having the > > set of authorization rules be publicly viewable, auditable, and verifiable > > overcomes that. > > On 1) now you have the problem "what if the central upload authority's > store of GPG keys is violated?". You've just kicked the can. "Web of Trust" > is not a tractable answer. My answer is simpler: I can verify that the > signer of version 1 of a package is the same as the signer of version 0.1. > This is no small trick. And I can do so orthogonal to hackage. Now, if I > really want to verify that the signer of version 1 is the person who is > "Michael Snoyman" and is in fact the exact Michael Snoyman I intend, then I > need to get your key by some entirely other mechanism. And that is my > problem, and, definitionally, no centralized store can help me in that > regard unless I trust it absolutely -- which is precisely what I don't want > to do. > > You've ruled out all known solutions to the problem, therefore no solution exists ;) To elaborate slightly: the issue of obtaining people's keys is a problem that exists in general, and has two main resolutions: a central authority, and a web of trust. You've somehow written off completely the web of trust (I'm not sure *why* you think that's a good idea, you haven't explained it), and then stated that- since the only remaining option is a central authority- it's no better than Hackage. I disagree: 1. Maintaining security of a single GPG key is much simpler than maintaining the security of an entire web application, as is currently needed by Hackage.
There's no reason we need an either/or setup: we can have a central authority sign keys. If user's wish to trust that authority, they may do so, and thereby get access to other keys. If that central authority is compromised, we revoke that authority and move on to another one. Importantly: we haven't put all our eggs in one basket, as is done today. > On 2) I would like to understand more of what your concern with regards to > ?auditing? is. What specific information would you like to know that you do > not? Improved audit logs seem again orthogonal to any of these other > security concerns, unless you are simply worried about a ?metadata only? > attack vector. In any case, we can incorporate the same signing practices > for metadata as for packages ? orthogonal to hackage or any other > particular storage mechanism. It is simply an unrelated question. And, > honestly, compared to all the other issues we face I feel it is relatively > minor (the signing component, not a better audit trail). > > There's a lot of stuff going on inside of Hackage which we have no insight into or control over. The simplest is that we can't review a log of revisions. Improving that is a good thing, and I hope Hackage does so. Nonetheless, I'd still prefer a fully open, auditable system, which isn't possible with "just tack it on to Hackage." > In any case, your account of the first two points reveals some of the > confusion I think that remains: > > > * Allow safe uploads of packages and metadata > > * Distribute packages and metadata to users safely > > What is the definition of ?safe? here? My understanding is that in the > field of security one doesn?t talk about ?safe? in general, but with > regards to a particular profile of a sort of attacker, and always only as a > difference of degree, not kind. > > I didn't think this needed diving into, because the problems seem so fundamental they weren't worth explaining. Examples of safety issues are: * An attacker sitting between an uploader and Hackage can replace the package contents with something nefarious, corrupting the package for all downloaders * An attacker sitting between a downloader and Hackage can replace the package contents with something nefarious, corrupting the package for that downloader * This doesn't even have to be a conscious attack; I saw someone on Reddit report that they tried to download a package at an airport WiFi, and instead ended up downloading the HTML "please log in" page * Eavesdropping attacks on uploaders: it's possible to capture packets indicating upload headers to Hackage, such as when using open WiFi (think the airport example again). Those headers include authorization headers. Thanks to Hackage now using digest authentication, this doesn't lead to an immediate attack, but digest authentication is based on MD5, which is not the most robust hash function * Normal issues with password based authentication: insecure passwords, keyloggers, etc. * Vulnerabilities in the Hackage codebase or its hosting that expose passwords and/or allow arbitrary uploads > So who do we want to prevent from doing what? How ?safe? is ?safe?? Safe > from what? From a malicious script-kid, from a malicious collective ?in it > for the lulz,? from a targeted attack against a particular end-client, from > just poorly/incompetently written code? What are we ?trusting?? What > concrete guarantees would we like to make about user interactions with > packages and package repositories? 
> > While I?m interrogating language, let me pick out one other thing I don?t > understand: "creating a coherent set of packages? ? what do you mean by > ?coherent?? Is this something we can specify? Hackage isn?t supposed to be > coherent ? it is supposed to be everything. Within that ?everything? we are > now attempting to manage metadata to provide accurate dependency > information, at a local level. But we have no claims about any global > coherence conditions on the resultant graphs. Certainly we intend to be > coherent in the sense that the combination of a name/version/revision > should indicate one and only one thing (and that all revisions of a version > should differ at most in dependency constraints in their cabal file) ? but > this is a fairly minimal criteria. And in fact, it is one that is nearly > orthogonal to security concerns altogether. > > All I meant is a set of packages uploaded by an approved set of uploaders, as opposed to allowing in arbitrary modifications used by others. > What I?m driving at is ? it sounds like we _mainly_ want new decentralized > security mechanisms, at the cabal level, but we also want, potentially, a > few centralized mechanisms. However, centralization is weakness from a > security standpoint. So, ideally, we want as few centralized mechanisms as > possible, and we want the consequences of those mechanisms being broken to > be ?recoverable? at the point of local verification. > > Yes, that's exactly the kind of goal I'm aiming towards. > Let me spell out a threat model where that makes sense. An adversary takes > control of the entire hackage server through some zero day linux exploit we > have no control over ? or perhaps they are an employee at the datacenter > where we host hackage and secure control via more direct means, etc. They > have total and complete control over the box. They can accept anything they > want, and they can serve anything they want. And they are sophisticated > enough to be undetected for say a week. > > Now, we want it to be the case that _whatever_ this adversary does, they > cannot ?trick? someone who types ?cabal install warp? into instead cabal > installing something malicious. How do we do so? _Now_ we have a security > problem that is concrete enough to discuss. And furthermore, I would claim > that if we don?t have at least some story for this threat model, then we > haven?t established anything much ?safer? at all. > > This points towards a large design space, and a lot of potential ideas, > all of which feel entirely different than the ?strawman? proposal, since > the emphasis there is towards the changes to a centralized mechanism (even > if in turn, the product of that mechanism itself is then distributed and > git cloneable or whatever). > > If we have agreement that the problem exists, I'm quite happy to flesh out other kinds of attack vectors and then discuss solutions. Again, my proposal is purely meant to be a starting point for discussion, not an answer to the problems. Michael -------------- next part -------------- An HTML attachment was scrubbed... 
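(Illustrative aside, not from the original thread: one concrete shape the "a compromised server cannot trick someone who types `cabal install warp`" property can take is an end-to-end check that runs entirely on the client, against a key obtained out of band. The sketch below simply shells out to gpg; it assumes a detached `.sig` file published alongside the tarball and an author key already in the local keyring, and the file names are made up — none of this is an existing Hackage or cabal-install convention.)

```haskell
import System.Exit    (ExitCode (..))
import System.Process (readProcessWithExitCode)

-- Sketch only: accept a downloaded tarball only if its detached signature
-- verifies against a key the user already trusts. gpg exits non-zero when
-- the signature does not match or the key is unknown.
verifyTarball :: FilePath -> FilePath -> IO Bool
verifyTarball sigFile tarball = do
  (code, _out, _err) <-
    readProcessWithExitCode "gpg" ["--verify", sigFile, tarball] ""
  return (code == ExitSuccess)

main :: IO ()
main = do
  -- hypothetical file names, for illustration only
  ok <- verifyTarball "warp-3.0.tar.gz.sig" "warp-3.0.tar.gz"
  putStrLn (if ok then "signature OK" else "REJECT: bad or missing signature")
```

The only point of the sketch is where the trust lives: the check depends on a key the user already holds, so a fully compromised server can still serve bad bytes, but it cannot make them verify.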
URL: From gershomb at gmail.com Wed Apr 15 05:50:14 2015 From: gershomb at gmail.com (Gershom B) Date: Wed, 15 Apr 2015 01:50:14 -0400 Subject: [Haskell-cafe] [haskell-infrastructure] Improvements to package hosting and security In-Reply-To: References: <4487776e-b862-429c-adae-477813e560f3@googlegroups.com> <33c89d4a-12b9-495b-a151-7e317177b061@googlegroups.com> Message-ID: On April 15, 2015 at 1:43:42 AM, Michael Snoyman (michael at snoyman.com) wrote: > > There's a lot of stuff going on inside of Hackage which we have > no insight into or control over. The simplest is that we can't > review a log of revisions. Improving that is a good thing, and > I hope Hackage does so. Nonetheless, I'd still prefer a fully > open, auditable system, which isn't possible with "just tack > it on to Hackage.? Ok, I?m going to ignore everything else and just focus on this, because it seems to be the only thing related to hackage, and therefore should be thought of separately from everything else. What _else_ goes on that ?we have no insight or control over?? Can we document the full list. Can we specify what we mean by insight? I take that to mean auditability. Can we specify what we mean by ?control? (There I have no idea). (With regards to revision logs, revisions are still a relatively new feature and there?s lots of bits and bobs missing, and I agree this is low hanging fruit to improve). ?Gershom From michael at snoyman.com Wed Apr 15 05:57:06 2015 From: michael at snoyman.com (Michael Snoyman) Date: Wed, 15 Apr 2015 05:57:06 +0000 Subject: [Haskell-cafe] [haskell-infrastructure] Improvements to package hosting and security In-Reply-To: References: <4487776e-b862-429c-adae-477813e560f3@googlegroups.com> <33c89d4a-12b9-495b-a151-7e317177b061@googlegroups.com> Message-ID: On Wed, Apr 15, 2015 at 8:50 AM Gershom B wrote: > On April 15, 2015 at 1:43:42 AM, Michael Snoyman (michael at snoyman.com) > wrote: > > > There's a lot of stuff going on inside of Hackage which we have > > no insight into or control over. The simplest is that we can't > > review a log of revisions. Improving that is a good thing, and > > I hope Hackage does so. Nonetheless, I'd still prefer a fully > > open, auditable system, which isn't possible with "just tack > > it on to Hackage.? > > Ok, I?m going to ignore everything else and just focus on this, because it > seems to be the only thing related to hackage, and therefore should be > thought of separately from everything else. > > What _else_ goes on that ?we have no insight or control over?? Can we > document the full list. Can we specify what we mean by insight? I take that > to mean auditability. Can we specify what we mean by ?control? (There I > have no idea). > > (With regards to revision logs, revisions are still a relatively new > feature and there?s lots of bits and bobs missing, and I agree this is low > hanging fruit to improve). > > > I'm not intimately familiar with the Hackage API, so I can't give a point-by-point description of what information is and is not auditable. However, *all* of that is predicated on trusting Hackage to properly authenticate users and be immune to attacks. For example, even if I can ask Hackage who uploaded a certain package/version, there's no way I can audit that that's actually the case, besides going and asking that person. And I can't even do *that* reliably, since the only identification for an uploader is the Hackage username, and I can't verify that someone actually owns that username without asking for his/her password also. 
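(Another illustrative aside: the "authorization rules encoded in a freely viewable file" idea can be made concrete with a small data model. Everything below is an invented sketch, not an existing Hackage or cabal format; the signature field is a placeholder for whatever real scheme would be chosen, and checking it is elided.)

```haskell
-- Sketch of publicly auditable authorization data: who may upload a package,
-- plus a signed record of every change to that list. Types and field names
-- are invented for illustration.
type PackageName = String
type KeyId       = String   -- e.g. a fingerprint of an uploader's public key

data AuthRule = AuthRule
  { rulePackage     :: PackageName
  , ruleMaintainers :: [KeyId]     -- keys allowed to upload or revise
  } deriving (Show, Eq)

data LogEntry = LogEntry
  { entryRule   :: AuthRule        -- the rule as of this change
  , entrySigner :: KeyId           -- who made the change
  , entrySig    :: String          -- signature over the entry (placeholder)
  } deriving (Show, Eq)

-- An outside auditor replays the public log and asks "is this key currently
-- allowed to touch this package?", assuming each entrySig has already been
-- verified (the actual crypto is elided here).
allowedNow :: [LogEntry] -> PackageName -> KeyId -> Bool
allowedNow entries pkg key =
  case [ r | LogEntry r _ _ <- entries, rulePackage r == pkg ] of
    []    -> False
    rules -> key `elem` ruleMaintainers (last rules)
```

With rules stored as data like this, questions such as "was this key allowed to upload foobar, and who granted that right?" become pure functions over a public log rather than queries against a trusted server. It does not settle how a key gets tied to a person in the first place, which is the identity problem discussed above.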
One feature Hackage could add that would make the latter a bit better would be to verify identity claims from people (ala OpenID), though that still leaves us in the position of needing to fully trust Hackage. Michael -------------- next part -------------- An HTML attachment was scrubbed... URL: From gershomb at gmail.com Wed Apr 15 06:14:01 2015 From: gershomb at gmail.com (Gershom B) Date: Wed, 15 Apr 2015 02:14:01 -0400 Subject: [Haskell-cafe] [haskell-infrastructure] Improvements to package hosting and security In-Reply-To: References: <4487776e-b862-429c-adae-477813e560f3@googlegroups.com> <33c89d4a-12b9-495b-a151-7e317177b061@googlegroups.com> Message-ID: On April 15, 2015 at 1:57:07 AM, Michael Snoyman (michael at snoyman.com) wrote: > I'm not intimately familiar with the Hackage API, so I can't give a > point-by-point description of what information is and is not auditable. Okay, then why did you write "There's a lot of stuff going on inside of Hackage which we have no insight into or control over.?? I would very much like to have a clarifying discussion, as you are gesturing towards some issue we should think about. But it is difficult when you make broad claims, and are not able to explain what they mean. Cheers, Gershom From paul at paulsamways.com Wed Apr 15 06:19:58 2015 From: paul at paulsamways.com (Paul Samways) Date: Wed, 15 Apr 2015 06:19:58 +0000 Subject: [Haskell-cafe] cabal install glade In-Reply-To: References: Message-ID: Hi Jean, How did you install ghc? It looks (and this is coming from someone relatively new to Haskell) that you've installed GHC 7.10 which the Glade package isn't compatible with yet. I'm assuming this based on the ambiguous occurrence of 'die' which was a new function added to System.Exit in base-4.8 (GHC 7.10). You could try rolling back to GHC 7.8 or clone the Glade package and fix the imports. Cheers, Paul. On Wed, Apr 15, 2015 at 1:33 PM Jean Lopes wrote: > Hello, I am trying to install the Glade package from hackage, and I > keep getting exit failure... > > Hope someone can help me solve it! > > What I did: > $ mkdir ~/haskell/project > $ cd ~/haskell/project > $ cabal sandbox init > $ cabal update > $ cabal install alex > $ cabal install happy > $ cabal install gtk2hs-buildtools > $ cabal install gtk #successful until here > $ cabal install glade > > The last statement gave me the following error: > > $ [1 of 2] Compiling SetupWrapper ( > /tmp/cairo-0.12.5.3-5133/cairo-0.12.5.3/SetupWrapper.hs, > > /tmp/cairo-0.12.5.3-5133/cairo-0.12.5.3/dist/dist-sandbox-acbd4b7/setup/SetupWrapper.o > ) > $ > $ /tmp/cairo-0.12.5.3-5133/cairo-0.12.5.3/SetupWrapper.hs:91:17: > $ Ambiguous occurrence ?die? > $ It could refer to either ?Distribution.Simple.Utils.die?, > $ imported from > ?Distribution.Simple.Utils? at > /tmp/cairo-0.12.5.3-5133/cairo-0.12.5.3/SetupWrapper.hs:8:1-32 > $ or ?System.Exit.die?, > $ imported from ?System.Exit? at > /tmp/cairo-0.12.5.3-5133/cairo-0.12.5.3/SetupWrapper.hs:21:1-18 > $ Failed to install cairo-0.12.5.3 > $ [1 of 2] Compiling SetupWrapper ( > /tmp/glib-0.12.5.4-5133/glib-0.12.5.4/SetupWrapper.hs, > /tmp/glib-0.12.5.4-5133/glib-0.12.5.4/dist/dist-sandbox- > acbd4b7/setup/SetupWrapper.o > ) > $ > $ /tmp/glib-0.12.5.4-5133/glib-0.12.5.4/SetupWrapper.hs:91:17: > $ Ambiguous occurrence ?die? > $ It could refer to either ?Distribution.Simple.Utils.die?, > $ imported from > ?Distribution.Simple.Utils? at > /tmp/glib-0.12.5.4-5133/glib-0.12.5.4/SetupWrapper.hs:8:1-32 > $ or ?System.Exit.die?, > $ imported from ?System.Exit? 
at > /tmp/glib-0.12.5.4-5133/glib-0.12.5.4/SetupWrapper.hs:21:1-18 > $ Failed to install glib-0.12.5.4 > $ cabal: Error: some packages failed to install: > $ cairo-0.12.5.3 failed during the configure step. The exception was: > $ ExitFailure 1 > $ gio-0.12.5.3 depends on glib-0.12.5.4 which failed to install. > $ glade-0.12.5.0 depends on glib-0.12.5.4 which failed to install. > $ glib-0.12.5.4 failed during the configure step. The exception was: > $ ExitFailure 1 > $ gtk-0.12.5.7 depends on glib-0.12.5.4 which failed to install. > $ pango-0.12.5.3 depends on glib-0.12.5.4 which failed to install. > > Important: You can assume I don't know much. I'm rather new to > Haskell/cabal > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gautier.difolco at gmail.com Wed Apr 15 06:34:49 2015 From: gautier.difolco at gmail.com (Gautier DI FOLCO) Date: Wed, 15 Apr 2015 06:34:49 +0000 Subject: [Haskell-cafe] Coding katas/dojos and functional programming introduction Message-ID: Hello, Some days ago I have participated to a coding dojo* which aimed to be an introduction to functional programming. I have also facilitate 3 of events like this and do several talks on this subject. But I'm kinda disappointed because each time there is a common pattern: 1. Unveil the problem which will be treated 2. Let the attendees solve it 3. Show the FP-ish solution (generally a bunch of map/fold) I think it's frustrating for the attendees (because they don't try to solve it) and gives a false illusion of knowledge. I don't consider myself as a "FP guru" or anything but for me FP is a matter of types and expressions, so when someone illustrate FP via map/fold, it's kind of irritating. Ironically, the best workshop I have done was on functional generalization (you begin by two hard coded functions, sum and product, and you extract Foldable and Monoid), but again, it's not satisfying. We can do better, we should do better. Have you got any feedback/subjet ideas/examples on how to introduce "real" FP to beginners in a short amount of time (like 1-3 hours)? Thanks in advance for your help. Regards. * Basically you have 2 to 3 hours, a problem and you try to solve with in iteration with different constraints -------------- next part -------------- An HTML attachment was scrubbed... URL: From michael at snoyman.com Wed Apr 15 06:47:14 2015 From: michael at snoyman.com (Michael Snoyman) Date: Wed, 15 Apr 2015 06:47:14 +0000 Subject: [Haskell-cafe] [haskell-infrastructure] Improvements to package hosting and security In-Reply-To: References: <4487776e-b862-429c-adae-477813e560f3@googlegroups.com> <33c89d4a-12b9-495b-a151-7e317177b061@googlegroups.com> Message-ID: On Wed, Apr 15, 2015 at 9:14 AM Gershom B wrote: > On April 15, 2015 at 1:57:07 AM, Michael Snoyman (michael at snoyman.com) > wrote: > > I'm not intimately familiar with the Hackage API, so I can't give a > > point-by-point description of what information is and is not auditable. > > Okay, then why did you write "There's a lot of stuff going on inside of > Hackage which we have no insight into or control over.?? > > I would very much like to have a clarifying discussion, as you are > gesturing towards some issue we should think about. But it is difficult > when you make broad claims, and are not able to explain what they mean. 
> > Cheers, > Gershom > I think you're reading too much into my claims, and specifically on the unimportant aspects of them. I can clarify these points, but I think drilling down deeper is a waste of time. To answer this specific question: * There's no clarity on *why* change was approved. I see that person X uploaded a revision, but why was person X allowed to do so? * I know of no way to see the history of authorization rules. * Was JohnDoe always a maintainer of foobar, or was that added at some point? * Who added this person as a maintainer? * Who gave this other person trustee power? Who took it away? All of these things would come for free with an open system where authorization rules are required to be encoded in a freely viewable file, and signature are used to verify the data. And to be clear, to make sure no one thinks I'm saying otherwise: I don't think Hackage has done anything wrong by approaching things the way it has until now. I probably would have come up with a very similar system. I'm talking about new functionality and requirements that weren't stated for the original system. Don't take this as "Hackage is bad," but rather, "time to batten down the hatches." Michael -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthewtpickering at gmail.com Wed Apr 15 08:48:34 2015 From: matthewtpickering at gmail.com (Matthew Pickering) Date: Wed, 15 Apr 2015 09:48:34 +0100 Subject: [Haskell-cafe] cabal install glade In-Reply-To: References: Message-ID: Hi Jean, You can try cloning my branch until a push gets accepted upstream. https://github.com/mpickering/glade The fixes to get it working with 7.10 were fairly minimal. Matt On Wed, Apr 15, 2015 at 4:33 AM, Jean Lopes wrote: > Hello, I am trying to install the Glade package from hackage, and I > keep getting exit failure... > > Hope someone can help me solve it! > > What I did: > $ mkdir ~/haskell/project > $ cd ~/haskell/project > $ cabal sandbox init > $ cabal update > $ cabal install alex > $ cabal install happy > $ cabal install gtk2hs-buildtools > $ cabal install gtk #successful until here > $ cabal install glade > > The last statement gave me the following error: > > $ [1 of 2] Compiling SetupWrapper ( > /tmp/cairo-0.12.5.3-5133/cairo-0.12.5.3/SetupWrapper.hs, > /tmp/cairo-0.12.5.3-5133/cairo-0.12.5.3/dist/dist-sandbox-acbd4b7/setup/SetupWrapper.o > ) > $ > $ /tmp/cairo-0.12.5.3-5133/cairo-0.12.5.3/SetupWrapper.hs:91:17: > $ Ambiguous occurrence ?die? > $ It could refer to either ?Distribution.Simple.Utils.die?, > $ imported from > ?Distribution.Simple.Utils? at > /tmp/cairo-0.12.5.3-5133/cairo-0.12.5.3/SetupWrapper.hs:8:1-32 > $ or ?System.Exit.die?, > $ imported from ?System.Exit? at > /tmp/cairo-0.12.5.3-5133/cairo-0.12.5.3/SetupWrapper.hs:21:1-18 > $ Failed to install cairo-0.12.5.3 > $ [1 of 2] Compiling SetupWrapper ( > /tmp/glib-0.12.5.4-5133/glib-0.12.5.4/SetupWrapper.hs, > /tmp/glib-0.12.5.4-5133/glib-0.12.5.4/dist/dist-sandbox-acbd4b7/setup/SetupWrapper.o > ) > $ > $ /tmp/glib-0.12.5.4-5133/glib-0.12.5.4/SetupWrapper.hs:91:17: > $ Ambiguous occurrence ?die? > $ It could refer to either ?Distribution.Simple.Utils.die?, > $ imported from > ?Distribution.Simple.Utils? at > /tmp/glib-0.12.5.4-5133/glib-0.12.5.4/SetupWrapper.hs:8:1-32 > $ or ?System.Exit.die?, > $ imported from ?System.Exit? 
at > /tmp/glib-0.12.5.4-5133/glib-0.12.5.4/SetupWrapper.hs:21:1-18 > $ Failed to install glib-0.12.5.4 > $ cabal: Error: some packages failed to install: > $ cairo-0.12.5.3 failed during the configure step. The exception was: > $ ExitFailure 1 > $ gio-0.12.5.3 depends on glib-0.12.5.4 which failed to install. > $ glade-0.12.5.0 depends on glib-0.12.5.4 which failed to install. > $ glib-0.12.5.4 failed during the configure step. The exception was: > $ ExitFailure 1 > $ gtk-0.12.5.7 depends on glib-0.12.5.4 which failed to install. > $ pango-0.12.5.3 depends on glib-0.12.5.4 which failed to install. > > Important: You can assume I don't know much. I'm rather new to Haskell/cabal > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe From haskell at jschneider.net Wed Apr 15 09:07:24 2015 From: haskell at jschneider.net (Jon Schneider) Date: Wed, 15 Apr 2015 10:07:24 +0100 Subject: [Haskell-cafe] Execution order in IO Message-ID: <9038cf9603619d4466c4e95aed1ccde4.squirrel@mail.jschneider.net> Good morning all, I think I've got the hang of the way state is carried and fancy operators work in monads but still have a major sticky issue. With lazy evaluation where is it written that if you write things with no dependencies with a "do" things will be done in order ? Or isn't it ? Is it a feature of the language we're supposed to accept ? Is it something in the implementation of IO ? Is the do keyword more than just a syntactic sugar for a string of binds and lambdas ? Jon From hyarion at iinet.net.au Wed Apr 15 09:30:14 2015 From: hyarion at iinet.net.au (Ben) Date: Wed, 15 Apr 2015 19:30:14 +1000 Subject: [Haskell-cafe] Execution order in IO In-Reply-To: <9038cf9603619d4466c4e95aed1ccde4.squirrel@mail.jschneider.net> References: <9038cf9603619d4466c4e95aed1ccde4.squirrel@mail.jschneider.net> Message-ID: It actually isn't written anywhere, and multiple statements in a do actually don't have to be evaluated in in the order they're written, for monads in general. For most monads though, the implementation of >>= examines the left argument sufficiently to force things to be evaluated in that order (at least to the level of the monad's constructors). IO in particular is implemented such that the effects of everything on the left of the bind (or earlier in a do block) will have been carried out before any of the effects from the right of the bind (later in the do). But even in the IO monad laziness can "delay" evaluation of pure computation mixed in with the IO actions; it just can't change the order the actions are executed in. On 15 April 2015 7:07:24 pm AEST, Jon Schneider wrote: >Good morning all, > >I think I've got the hang of the way state is carried and fancy >operators >work in monads but still have a major sticky issue. > >With lazy evaluation where is it written that if you write things with >no >dependencies with a "do" things will be done in order ? Or isn't it ? > >Is it a feature of the language we're supposed to accept ? > >Is it something in the implementation of IO ? > >Is the do keyword more than just a syntactic sugar for a string of >binds >and lambdas ? > >Jon > >_______________________________________________ >Haskell-Cafe mailing list >Haskell-Cafe at haskell.org >http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe -------------- next part -------------- An HTML attachment was scrubbed... 
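(Worked example, not from the thread: the do block below desugars to `>>=`; for IO the two getLine actions are performed in the order written, while the let-bound pure value stays unevaluated until print demands it.)

```haskell
-- The do block performs its IO actions top to bottom; the desugared version
-- shows why: each later action sits under a lambda that the earlier bind
-- must run first in order to supply.
main :: IO ()
main = do
  a <- getLine                 -- performed first
  b <- getLine                 -- performed second
  let n = length (a ++ b)      -- pure and lazy: not computed yet
  putStrLn (b ++ a)            -- performed third
  print n                      -- n is only forced here

desugared :: IO ()
desugared =
  getLine >>= \a ->
  getLine >>= \b ->
  let n = length (a ++ b)
  in  putStrLn (b ++ a) >> print n
```

The desugared form makes the ordering visible: the second getLine cannot be reached until the first bind has been performed, which is exactly the sequencing the IO Monad instance provides, while laziness only affects when pure values like n are computed.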
URL: From hsyl20 at gmail.com Wed Apr 15 09:46:40 2015 From: hsyl20 at gmail.com (Sylvain Henry) Date: Wed, 15 Apr 2015 11:46:40 +0200 Subject: [Haskell-cafe] Execution order in IO In-Reply-To: <9038cf9603619d4466c4e95aed1ccde4.squirrel@mail.jschneider.net> References: <9038cf9603619d4466c4e95aed1ccde4.squirrel@mail.jschneider.net> Message-ID: Hi, You can consider that: type IO a = World -> (World, a) Where World is the state of the impure world. So when you have: getLine :: IO String putStrLn :: String -> IO () Is is in fact: getLine :: World -> (World, String) putStrLn :: String -> World -> (World, ()) You can compose IO actions with: (>>=) :: IO a -> (a -> IO b) -> IO b (>>=) :: (World -> (World,a)) -> (a -> World -> (World,b)) -> World -> (World,b) (>>=) f g w = let (w2,a) = f w in g a w2 do-notation is just syntactic sugar for this operator. So there is an implicit dependency between both IO functions: the state of the World (which obviously doesn't appear in the compiled code). Sylvain 2015-04-15 11:07 GMT+02:00 Jon Schneider : > Good morning all, > > I think I've got the hang of the way state is carried and fancy operators > work in monads but still have a major sticky issue. > > With lazy evaluation where is it written that if you write things with no > dependencies with a "do" things will be done in order ? Or isn't it ? > > Is it a feature of the language we're supposed to accept ? > > Is it something in the implementation of IO ? > > Is the do keyword more than just a syntactic sugar for a string of binds > and lambdas ? > > Jon > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hawu.bnu at gmail.com Wed Apr 15 12:01:18 2015 From: hawu.bnu at gmail.com (Jean Lopes) Date: Wed, 15 Apr 2015 05:01:18 -0700 (PDT) Subject: [Haskell-cafe] cabal install glade In-Reply-To: References: Message-ID: <9992f880-81e3-4041-8307-3b5857687c97@googlegroups.com> I will try to use your branch before going back to GHC 7.8... But, how exactly should I do that ? Clone your branch; Build from local source code with cabal ? (I just scrolled this part while reading cabal tutorials, guess I'll have to take a look now) What about dependencies ? I should use $ cabal install glade --only-dependencies and than install glade from your branch ? Em quarta-feira, 15 de abril de 2015 05:48:42 UTC-3, Matthew Pickering escreveu: > > Hi Jean, > > You can try cloning my branch until a push gets accepted upstream. > > https://github.com/mpickering/glade > > The fixes to get it working with 7.10 were fairly minimal. > > Matt > > On Wed, Apr 15, 2015 at 4:33 AM, Jean Lopes > wrote: > > Hello, I am trying to install the Glade package from hackage, and I > > keep getting exit failure... > > > > Hope someone can help me solve it! 
> > > > What I did: > > $ mkdir ~/haskell/project > > $ cd ~/haskell/project > > $ cabal sandbox init > > $ cabal update > > $ cabal install alex > > $ cabal install happy > > $ cabal install gtk2hs-buildtools > > $ cabal install gtk #successful until here > > $ cabal install glade > > > > The last statement gave me the following error: > > > > $ [1 of 2] Compiling SetupWrapper ( > > /tmp/cairo-0.12.5.3-5133/cairo-0.12.5.3/SetupWrapper.hs, > > > /tmp/cairo-0.12.5.3-5133/cairo-0.12.5.3/dist/dist-sandbox-acbd4b7/setup/SetupWrapper.o > > ) > > $ > > $ /tmp/cairo-0.12.5.3-5133/cairo-0.12.5.3/SetupWrapper.hs:91:17: > > $ Ambiguous occurrence ?die? > > $ It could refer to either ?Distribution.Simple.Utils.die?, > > $ imported from > > ?Distribution.Simple.Utils? at > > /tmp/cairo-0.12.5.3-5133/cairo-0.12.5.3/SetupWrapper.hs:8:1-32 > > $ or ?System.Exit.die?, > > $ imported from ?System.Exit? at > > /tmp/cairo-0.12.5.3-5133/cairo-0.12.5.3/SetupWrapper.hs:21:1-18 > > $ Failed to install cairo-0.12.5.3 > > $ [1 of 2] Compiling SetupWrapper ( > > /tmp/glib-0.12.5.4-5133/glib-0.12.5.4/SetupWrapper.hs, > > > /tmp/glib-0.12.5.4-5133/glib-0.12.5.4/dist/dist-sandbox-acbd4b7/setup/SetupWrapper.o > > ) > > $ > > $ /tmp/glib-0.12.5.4-5133/glib-0.12.5.4/SetupWrapper.hs:91:17: > > $ Ambiguous occurrence ?die? > > $ It could refer to either ?Distribution.Simple.Utils.die?, > > $ imported from > > ?Distribution.Simple.Utils? at > > /tmp/glib-0.12.5.4-5133/glib-0.12.5.4/SetupWrapper.hs:8:1-32 > > $ or ?System.Exit.die?, > > $ imported from ?System.Exit? at > > /tmp/glib-0.12.5.4-5133/glib-0.12.5.4/SetupWrapper.hs:21:1-18 > > $ Failed to install glib-0.12.5.4 > > $ cabal: Error: some packages failed to install: > > $ cairo-0.12.5.3 failed during the configure step. The exception was: > > $ ExitFailure 1 > > $ gio-0.12.5.3 depends on glib-0.12.5.4 which failed to install. > > $ glade-0.12.5.0 depends on glib-0.12.5.4 which failed to install. > > $ glib-0.12.5.4 failed during the configure step. The exception was: > > $ ExitFailure 1 > > $ gtk-0.12.5.7 depends on glib-0.12.5.4 which failed to install. > > $ pango-0.12.5.3 depends on glib-0.12.5.4 which failed to install. > > > > Important: You can assume I don't know much. I'm rather new to > Haskell/cabal > > _______________________________________________ > > Haskell-Cafe mailing list > > Haskel... at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > _______________________________________________ > Haskell-Cafe mailing list > Haskel... at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Wed Apr 15 12:19:45 2015 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Wed, 15 Apr 2015 08:19:45 -0400 Subject: [Haskell-cafe] [haskell-infrastructure] Improvements to package hosting and security In-Reply-To: References: <4487776e-b862-429c-adae-477813e560f3@googlegroups.com> <33c89d4a-12b9-495b-a151-7e317177b061@googlegroups.com> Message-ID: Ok, let me counter that with a simpler idea: every Hackage edit action has an explanation field that the trustee can choose to optionally write some text in And additonally: there Is a globally visible feed / log of all Hackage edits. I believe some folks are working to add those features to hackage this spring. I am emphatically against stronger security things being tacked on top without a threat model that precisely jusrifies why. 
Recent experience has shown me that organizations which mandate processes in the the name of a nebulous security model counter intuitively become less secure and less effective. Let me repeat myself, enterprise sounding security processes should only be adopted in the context of a concrete threat model that actually specifically motivates the applicable security model. Anything else is kiss of death. Please be concrete. Additonally, specificity allows us to think of approaches that can be both secure and easy to use. On Apr 15, 2015 2:47 AM, "Michael Snoyman" wrote: > > > On Wed, Apr 15, 2015 at 9:14 AM Gershom B wrote: > >> On April 15, 2015 at 1:57:07 AM, Michael Snoyman (michael at snoyman.com) >> wrote: >> > I'm not intimately familiar with the Hackage API, so I can't give a >> > point-by-point description of what information is and is not auditable. >> >> Okay, then why did you write "There's a lot of stuff going on inside of >> Hackage which we have no insight into or control over.?? >> >> I would very much like to have a clarifying discussion, as you are >> gesturing towards some issue we should think about. But it is difficult >> when you make broad claims, and are not able to explain what they mean. >> >> Cheers, >> Gershom >> > > I think you're reading too much into my claims, and specifically on the > unimportant aspects of them. I can clarify these points, but I think > drilling down deeper is a waste of time. To answer this specific question: > > * There's no clarity on *why* change was approved. I see that person X > uploaded a revision, but why was person X allowed to do so? > * I know of no way to see the history of authorization rules. > * Was JohnDoe always a maintainer of foobar, or was that added at some > point? > * Who added this person as a maintainer? > * Who gave this other person trustee power? Who took it away? > > All of these things would come for free with an open system where > authorization rules are required to be encoded in a freely viewable file, > and signature are used to verify the data. > > And to be clear, to make sure no one thinks I'm saying otherwise: I don't > think Hackage has done anything wrong by approaching things the way it has > until now. I probably would have come up with a very similar system. I'm > talking about new functionality and requirements that weren't stated for > the original system. Don't take this as "Hackage is bad," but rather, "time > to batten down the hatches." > > Michael > -------------- next part -------------- An HTML attachment was scrubbed... URL: From michael at snoyman.com Wed Apr 15 12:34:06 2015 From: michael at snoyman.com (Michael Snoyman) Date: Wed, 15 Apr 2015 12:34:06 +0000 Subject: [Haskell-cafe] [haskell-infrastructure] Improvements to package hosting and security In-Reply-To: References: <4487776e-b862-429c-adae-477813e560f3@googlegroups.com> <33c89d4a-12b9-495b-a151-7e317177b061@googlegroups.com> Message-ID: I've given plenty of concrete attack vectors in this thread. I'm not going to repeat all of them here. But addressing your "simpler idea": how do we know that the claimed person actually performed that action? If Hackage is hacked, there's no way to verify *any* such log. With a crypto-based system, we know specifically which key is tied to which action, and can invalidate those actions in the case of a key becoming compromised. There are no nebulous claims going on here. Hackage is interacting with users in a way that is completely susceptible to MITM attacks. 
That's a fact, and an easily exploitable attack vector for someone in the right position in the network. I'm also precisely *not* recommending we tack security things on top: I'm proposing we design a secure system from the ground up. Also, if we're going to talk about nebulous, let's start with the word "enterprise sounding." That's an empty criticism, and I should hope we're above that kind of thing. On Wed, Apr 15, 2015 at 3:20 PM Carter Schonwald wrote: > Ok, let me counter that with a simpler idea: every Hackage edit action has > an explanation field that the trustee can choose to optionally write some > text in > > And additonally: there Is a globally visible feed / log of all Hackage > edits. > > I believe some folks are working to add those features to hackage this > spring. > > I am emphatically against stronger security things being tacked on top > without a threat model that precisely jusrifies why. Recent experience has > shown me that organizations which mandate processes in the the name of a > nebulous security model counter intuitively become less secure and less > effective. > > Let me repeat myself, enterprise sounding security processes should only > be adopted in the context of a concrete threat model that actually > specifically motivates the applicable security model. Anything else is > kiss of death. Please be concrete. Additonally, specificity allows us to > think of approaches that can be both secure and easy to use. > On Apr 15, 2015 2:47 AM, "Michael Snoyman" wrote: > >> >> >> On Wed, Apr 15, 2015 at 9:14 AM Gershom B wrote: >> >>> On April 15, 2015 at 1:57:07 AM, Michael Snoyman (michael at snoyman.com) >>> wrote: >>> > I'm not intimately familiar with the Hackage API, so I can't give a >>> > point-by-point description of what information is and is not auditable. >>> >>> Okay, then why did you write "There's a lot of stuff going on inside of >>> Hackage which we have no insight into or control over.?? >>> >>> I would very much like to have a clarifying discussion, as you are >>> gesturing towards some issue we should think about. But it is difficult >>> when you make broad claims, and are not able to explain what they mean. >>> >>> Cheers, >>> Gershom >>> >> >> I think you're reading too much into my claims, and specifically on the >> unimportant aspects of them. I can clarify these points, but I think >> drilling down deeper is a waste of time. To answer this specific question: >> >> * There's no clarity on *why* change was approved. I see that person X >> uploaded a revision, but why was person X allowed to do so? >> * I know of no way to see the history of authorization rules. >> * Was JohnDoe always a maintainer of foobar, or was that added at >> some point? >> * Who added this person as a maintainer? >> * Who gave this other person trustee power? Who took it away? >> >> All of these things would come for free with an open system where >> authorization rules are required to be encoded in a freely viewable file, >> and signature are used to verify the data. >> >> And to be clear, to make sure no one thinks I'm saying otherwise: I don't >> think Hackage has done anything wrong by approaching things the way it has >> until now. I probably would have come up with a very similar system. I'm >> talking about new functionality and requirements that weren't stated for >> the original system. Don't take this as "Hackage is bad," but rather, "time >> to batten down the hatches." 
>> >> Michael >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From silvio.frischi at gmail.com Wed Apr 15 13:06:00 2015 From: silvio.frischi at gmail.com (silvio) Date: Wed, 15 Apr 2015 15:06:00 +0200 Subject: [Haskell-cafe] Execution order in IO In-Reply-To: References: <9038cf9603619d4466c4e95aed1ccde4.squirrel@mail.jschneider.net> Message-ID: <552E6238.7020308@gmail.com> > With lazy evaluation where is it written that if you write things > with no dependencies with a "do" things will be done in order ? Or > isn't it ? Since Haskell is purely functional the evaluation order shouldn't matter unless you do IO or for efficiency purposes. In general you can use "seq :: a -> b -> b" or Bang patterns (!) to control execution order. "seq" will evaluate "a" to Weak Head Normal Form (WHNF) and then return "b". WHNF just means it will evaluate until it finds a Constructor. In the case of lists for instance it will see if your value is ":" or "[]". Now you can use that to implement your monads. For the state monad for instance there is a strict and a "lazy" version. newtype State s a = State { runState :: s -> (a,s) } instance Monad (State s) where act1 >>= f2 = State $ \s1 -> runState (f2 input2) s2 where (input2, s2) = runState act1 s1 instance Monad (State s) where act1 >>= f2 = State $ \s1 -> s2 `seq` runState (f2 input2) s2 where (input2, s2) = runState act1 s1 You can see in the second implementation s2 has to be evaluated to WHNF. So the runState on the 3rd line has to be evaluated before the runState on the 2nd line. > Is it something in the implementation of IO ? In the case of IO monad you are almost guaranteed that things get executed in order. The one exception is (unsafeInterleaveIO :: IO a -> IO a). This will only be evaluated when you use the result (use the result means make a case statement to see which constructor it is. e.g. ":" or "[]"). LazyIO probably also uses unsafeInterleaveIO and should in my opinion be called unsafeIO. The problem with unsafeInterleaveIO is that you have to catch the IOErrors at arbitrary places. For instance, you can get a read error when looking at a part of a string because the reading was delayed until you look at it. > Is the do keyword more than just a syntactic sugar for a string of > binds and lambdas ? No, "do" is not more than syntactic sugar. The way to control execution order is "seq" and bang patterns. Hope this helps Silvio From carter.schonwald at gmail.com Wed Apr 15 13:08:55 2015 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Wed, 15 Apr 2015 09:08:55 -0400 Subject: [Haskell-cafe] [haskell-infrastructure] Improvements to package hosting and security In-Reply-To: References: <4487776e-b862-429c-adae-477813e560f3@googlegroups.com> <33c89d4a-12b9-495b-a151-7e317177b061@googlegroups.com> Message-ID: Ok. Let's get https support into cabal. How do we best go about doing that? On Apr 15, 2015 8:34 AM, "Michael Snoyman" wrote: > I've given plenty of concrete attack vectors in this thread. I'm not going > to repeat all of them here. But addressing your "simpler idea": how do we > know that the claimed person actually performed that action? If Hackage is > hacked, there's no way to verify *any* such log. With a crypto-based > system, we know specifically which key is tied to which action, and can > invalidate those actions in the case of a key becoming compromised. > > There are no nebulous claims going on here. 
Hackage is interacting with > users in a way that is completely susceptible to MITM attacks. That's a > fact, and an easily exploitable attack vector for someone in the right > position in the network. I'm also precisely *not* recommending we tack > security things on top: I'm proposing we design a secure system from the > ground up. > > Also, if we're going to talk about nebulous, let's start with the word > "enterprise sounding." That's an empty criticism, and I should hope we're > above that kind of thing. > > On Wed, Apr 15, 2015 at 3:20 PM Carter Schonwald < > carter.schonwald at gmail.com> wrote: > >> Ok, let me counter that with a simpler idea: every Hackage edit action >> has an explanation field that the trustee can choose to optionally write >> some text in >> >> And additonally: there Is a globally visible feed / log of all Hackage >> edits. >> >> I believe some folks are working to add those features to hackage this >> spring. >> >> I am emphatically against stronger security things being tacked on top >> without a threat model that precisely jusrifies why. Recent experience has >> shown me that organizations which mandate processes in the the name of a >> nebulous security model counter intuitively become less secure and less >> effective. >> >> Let me repeat myself, enterprise sounding security processes should only >> be adopted in the context of a concrete threat model that actually >> specifically motivates the applicable security model. Anything else is >> kiss of death. Please be concrete. Additonally, specificity allows us to >> think of approaches that can be both secure and easy to use. >> On Apr 15, 2015 2:47 AM, "Michael Snoyman" wrote: >> >>> >>> >>> On Wed, Apr 15, 2015 at 9:14 AM Gershom B wrote: >>> >>>> On April 15, 2015 at 1:57:07 AM, Michael Snoyman (michael at snoyman.com) >>>> wrote: >>>> > I'm not intimately familiar with the Hackage API, so I can't give a >>>> > point-by-point description of what information is and is not >>>> auditable. >>>> >>>> Okay, then why did you write "There's a lot of stuff going on inside of >>>> Hackage which we have no insight into or control over.?? >>>> >>>> I would very much like to have a clarifying discussion, as you are >>>> gesturing towards some issue we should think about. But it is difficult >>>> when you make broad claims, and are not able to explain what they mean. >>>> >>>> Cheers, >>>> Gershom >>>> >>> >>> I think you're reading too much into my claims, and specifically on the >>> unimportant aspects of them. I can clarify these points, but I think >>> drilling down deeper is a waste of time. To answer this specific question: >>> >>> * There's no clarity on *why* change was approved. I see that person X >>> uploaded a revision, but why was person X allowed to do so? >>> * I know of no way to see the history of authorization rules. >>> * Was JohnDoe always a maintainer of foobar, or was that added at >>> some point? >>> * Who added this person as a maintainer? >>> * Who gave this other person trustee power? Who took it away? >>> >>> All of these things would come for free with an open system where >>> authorization rules are required to be encoded in a freely viewable file, >>> and signature are used to verify the data. >>> >>> And to be clear, to make sure no one thinks I'm saying otherwise: I >>> don't think Hackage has done anything wrong by approaching things the way >>> it has until now. I probably would have come up with a very similar system. 
>>> I'm talking about new functionality and requirements that weren't stated >>> for the original system. Don't take this as "Hackage is bad," but rather, >>> "time to batten down the hatches." >>> >>> Michael >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From michael at snoyman.com Wed Apr 15 13:11:20 2015 From: michael at snoyman.com (Michael Snoyman) Date: Wed, 15 Apr 2015 13:11:20 +0000 Subject: [Haskell-cafe] [haskell-infrastructure] Improvements to package hosting and security In-Reply-To: References: <4487776e-b862-429c-adae-477813e560f3@googlegroups.com> <33c89d4a-12b9-495b-a151-7e317177b061@googlegroups.com> Message-ID: I'm 100% in favor of that. Last time it was brought up, we ended up in a debate about the the Haskell Platform and the PVP, which left the relevant package authors not wanting to get involved. If someone starts the conversation up, I will fully support it. That will fix the largest problem we have. It still means we're placing all of our trust in Hackage, which sets up a single point of failure. We can, and should, do better than that. On Wed, Apr 15, 2015 at 4:09 PM Carter Schonwald wrote: > Ok. Let's get https support into cabal. > > How do we best go about doing that? > On Apr 15, 2015 8:34 AM, "Michael Snoyman" wrote: > >> I've given plenty of concrete attack vectors in this thread. I'm not >> going to repeat all of them here. But addressing your "simpler idea": how >> do we know that the claimed person actually performed that action? If >> Hackage is hacked, there's no way to verify *any* such log. With a >> crypto-based system, we know specifically which key is tied to which >> action, and can invalidate those actions in the case of a key becoming >> compromised. >> >> There are no nebulous claims going on here. Hackage is interacting with >> users in a way that is completely susceptible to MITM attacks. That's a >> fact, and an easily exploitable attack vector for someone in the right >> position in the network. I'm also precisely *not* recommending we tack >> security things on top: I'm proposing we design a secure system from the >> ground up. >> >> Also, if we're going to talk about nebulous, let's start with the word >> "enterprise sounding." That's an empty criticism, and I should hope we're >> above that kind of thing. >> >> On Wed, Apr 15, 2015 at 3:20 PM Carter Schonwald < >> carter.schonwald at gmail.com> wrote: >> >>> Ok, let me counter that with a simpler idea: every Hackage edit action >>> has an explanation field that the trustee can choose to optionally write >>> some text in >>> >>> And additonally: there Is a globally visible feed / log of all Hackage >>> edits. >>> >>> I believe some folks are working to add those features to hackage this >>> spring. >>> >>> I am emphatically against stronger security things being tacked on top >>> without a threat model that precisely jusrifies why. Recent experience has >>> shown me that organizations which mandate processes in the the name of a >>> nebulous security model counter intuitively become less secure and less >>> effective. >>> >>> Let me repeat myself, enterprise sounding security processes should only >>> be adopted in the context of a concrete threat model that actually >>> specifically motivates the applicable security model. Anything else is >>> kiss of death. Please be concrete. Additonally, specificity allows us to >>> think of approaches that can be both secure and easy to use. 
>>> On Apr 15, 2015 2:47 AM, "Michael Snoyman" wrote: >>> >>>> >>>> >>>> On Wed, Apr 15, 2015 at 9:14 AM Gershom B wrote: >>>> >>>>> On April 15, 2015 at 1:57:07 AM, Michael Snoyman (michael at snoyman.com) >>>>> wrote: >>>>> > I'm not intimately familiar with the Hackage API, so I can't give a >>>>> > point-by-point description of what information is and is not >>>>> auditable. >>>>> >>>>> Okay, then why did you write "There's a lot of stuff going on inside >>>>> of Hackage which we have no insight into or control over.?? >>>>> >>>>> I would very much like to have a clarifying discussion, as you are >>>>> gesturing towards some issue we should think about. But it is difficult >>>>> when you make broad claims, and are not able to explain what they mean. >>>>> >>>>> Cheers, >>>>> Gershom >>>>> >>>> >>>> I think you're reading too much into my claims, and specifically on the >>>> unimportant aspects of them. I can clarify these points, but I think >>>> drilling down deeper is a waste of time. To answer this specific question: >>>> >>>> * There's no clarity on *why* change was approved. I see that person X >>>> uploaded a revision, but why was person X allowed to do so? >>>> * I know of no way to see the history of authorization rules. >>>> * Was JohnDoe always a maintainer of foobar, or was that added at >>>> some point? >>>> * Who added this person as a maintainer? >>>> * Who gave this other person trustee power? Who took it away? >>>> >>>> All of these things would come for free with an open system where >>>> authorization rules are required to be encoded in a freely viewable file, >>>> and signature are used to verify the data. >>>> >>>> And to be clear, to make sure no one thinks I'm saying otherwise: I >>>> don't think Hackage has done anything wrong by approaching things the way >>>> it has until now. I probably would have come up with a very similar system. >>>> I'm talking about new functionality and requirements that weren't stated >>>> for the original system. Don't take this as "Hackage is bad," but rather, >>>> "time to batten down the hatches." >>>> >>>> Michael >>>> >>> -- > You received this message because you are subscribed to the Google Groups > "Commercial Haskell" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to commercialhaskell+unsubscribe at googlegroups.com. > To post to this group, send email to commercialhaskell at googlegroups.com. > To view this discussion on the web visit > https://groups.google.com/d/msgid/commercialhaskell/CAHYVw0xbNQPZ%2Bockbn1Zve69eQoZ4OOeUKt-bqa72vn-N_FQPg%40mail.gmail.com > > . > For more options, visit https://groups.google.com/d/optout. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Wed Apr 15 13:11:59 2015 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Wed, 15 Apr 2015 09:11:59 -0400 Subject: [Haskell-cafe] [haskell-infrastructure] Improvements to package hosting and security In-Reply-To: References: <4487776e-b862-429c-adae-477813e560f3@googlegroups.com> <33c89d4a-12b9-495b-a151-7e317177b061@googlegroups.com> Message-ID: A cryptographcially unforgable Hackage log is an interesting idea. I'll have to think about what that means though. On Apr 15, 2015 9:08 AM, "Carter Schonwald" wrote: > Ok. Let's get https support into cabal. > > How do we best go about doing that? > On Apr 15, 2015 8:34 AM, "Michael Snoyman" wrote: > >> I've given plenty of concrete attack vectors in this thread. 
I'm not >> going to repeat all of them here. But addressing your "simpler idea": how >> do we know that the claimed person actually performed that action? If >> Hackage is hacked, there's no way to verify *any* such log. With a >> crypto-based system, we know specifically which key is tied to which >> action, and can invalidate those actions in the case of a key becoming >> compromised. >> >> There are no nebulous claims going on here. Hackage is interacting with >> users in a way that is completely susceptible to MITM attacks. That's a >> fact, and an easily exploitable attack vector for someone in the right >> position in the network. I'm also precisely *not* recommending we tack >> security things on top: I'm proposing we design a secure system from the >> ground up. >> >> Also, if we're going to talk about nebulous, let's start with the word >> "enterprise sounding." That's an empty criticism, and I should hope we're >> above that kind of thing. >> >> On Wed, Apr 15, 2015 at 3:20 PM Carter Schonwald < >> carter.schonwald at gmail.com> wrote: >> >>> Ok, let me counter that with a simpler idea: every Hackage edit action >>> has an explanation field that the trustee can choose to optionally write >>> some text in >>> >>> And additonally: there Is a globally visible feed / log of all Hackage >>> edits. >>> >>> I believe some folks are working to add those features to hackage this >>> spring. >>> >>> I am emphatically against stronger security things being tacked on top >>> without a threat model that precisely jusrifies why. Recent experience has >>> shown me that organizations which mandate processes in the the name of a >>> nebulous security model counter intuitively become less secure and less >>> effective. >>> >>> Let me repeat myself, enterprise sounding security processes should only >>> be adopted in the context of a concrete threat model that actually >>> specifically motivates the applicable security model. Anything else is >>> kiss of death. Please be concrete. Additonally, specificity allows us to >>> think of approaches that can be both secure and easy to use. >>> On Apr 15, 2015 2:47 AM, "Michael Snoyman" wrote: >>> >>>> >>>> >>>> On Wed, Apr 15, 2015 at 9:14 AM Gershom B wrote: >>>> >>>>> On April 15, 2015 at 1:57:07 AM, Michael Snoyman (michael at snoyman.com) >>>>> wrote: >>>>> > I'm not intimately familiar with the Hackage API, so I can't give a >>>>> > point-by-point description of what information is and is not >>>>> auditable. >>>>> >>>>> Okay, then why did you write "There's a lot of stuff going on inside >>>>> of Hackage which we have no insight into or control over.?? >>>>> >>>>> I would very much like to have a clarifying discussion, as you are >>>>> gesturing towards some issue we should think about. But it is difficult >>>>> when you make broad claims, and are not able to explain what they mean. >>>>> >>>>> Cheers, >>>>> Gershom >>>>> >>>> >>>> I think you're reading too much into my claims, and specifically on the >>>> unimportant aspects of them. I can clarify these points, but I think >>>> drilling down deeper is a waste of time. To answer this specific question: >>>> >>>> * There's no clarity on *why* change was approved. I see that person X >>>> uploaded a revision, but why was person X allowed to do so? >>>> * I know of no way to see the history of authorization rules. >>>> * Was JohnDoe always a maintainer of foobar, or was that added at >>>> some point? >>>> * Who added this person as a maintainer? >>>> * Who gave this other person trustee power? 
Who took it away? >>>> >>>> All of these things would come for free with an open system where >>>> authorization rules are required to be encoded in a freely viewable file, >>>> and signature are used to verify the data. >>>> >>>> And to be clear, to make sure no one thinks I'm saying otherwise: I >>>> don't think Hackage has done anything wrong by approaching things the way >>>> it has until now. I probably would have come up with a very similar system. >>>> I'm talking about new functionality and requirements that weren't stated >>>> for the original system. Don't take this as "Hackage is bad," but rather, >>>> "time to batten down the hatches." >>>> >>>> Michael >>>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From gershomb at gmail.com Wed Apr 15 13:12:49 2015 From: gershomb at gmail.com (Gershom B) Date: Wed, 15 Apr 2015 09:12:49 -0400 Subject: [Haskell-cafe] [haskell-infrastructure] Improvements to package hosting and security In-Reply-To: References: <4487776e-b862-429c-adae-477813e560f3@googlegroups.com> <33c89d4a-12b9-495b-a151-7e317177b061@googlegroups.com> Message-ID: On April 15, 2015 at 8:34:07 AM, Michael Snoyman (michael at snoyman.com) wrote: > I've given plenty of concrete attack vectors in this thread. I'm not going > to repeat all of them here. But addressing your "simpler idea": how do we > know that the claimed person actually performed that action? If Hackage is > hacked, there's no way to verify *any* such log. With a crypto-based > system, we know specifically which key is tied to which action, and can > invalidate those actions in the case of a key becoming compromised. So amend Carter?s proposal with the requirement that admin/trustee actions be signed as well. Now we can audit the verification trail. Done. But let me pose a more basic question: Assume somebody falsified the log, but could _not_ falsify any package contents (because the latter were verified at the use site). And further, assume we had a signing trail for revisions as well. Now what is the worst that this bad actor could accomplish?? This is why it helps to have a ?threat model?. I think there is a misunderstanding here on what Carter is asking for. A ?threat model? is not a list of potential vulnerabilities. Rather, it is a statement of what types of things are important to mitigate against, and from whom. There is no such thing as a completely secure system, except, perhaps an unplugged one. So when you say you want something ?safe? and then tell us ways the current system is ?unsafe? then that?s not enough. We need to have a criterion by which we _could_ judge a future system at least ?reasonably safe enough?. My sense of a threat model prioritizes package signing (and I guess revision signing now too) but e.g. doesn?t consider a signed verifiable audit trail a big deal, because falsifying those logs doesn?t easily translate into an attack vector. You are proposing large, drastic changes. Such changes are likely to get bogged down and fail, especially to the degree they involve designing systems in ways that are not in widespread use already. And even if such changes were feasible, and even if they were a sound approach, it would take a long time to put the pieces together to carry them out smoothly across the ecosystem. Meanwhile, if we can say ?in fact this problem decomposes into six nearly unrelated problems? 
and then prioritize those problems, it is likely that all can be addressed incrementally, which means less development work, greater chance of success, and easier rollout. I remain convinced that you raise some genuine issues, but they decompose into nearly unrelated problems that can and should be tackled individually. Cheers, Gershom From fryguybob at gmail.com Wed Apr 15 13:11:38 2015 From: fryguybob at gmail.com (Ryan Yates) Date: Wed, 15 Apr 2015 09:11:38 -0400 Subject: [Haskell-cafe] Execution order in IO In-Reply-To: <552E6238.7020308@gmail.com> References: <9038cf9603619d4466c4e95aed1ccde4.squirrel@mail.jschneider.net> <552E6238.7020308@gmail.com> Message-ID: > > > Is the do keyword more than just a syntactic sugar for a string of > > binds and lambdas ? > > No, "do" is not more than syntactic sugar. The way to control execution > order is "seq" and bang patterns. > I think better wording is to say *evaluation* order, not execution order. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dennis at deathbytape.com Wed Apr 15 13:24:47 2015 From: dennis at deathbytape.com (Dennis J. McWherter, Jr.) Date: Wed, 15 Apr 2015 06:24:47 -0700 (PDT) Subject: [Haskell-cafe] [haskell-infrastructure] Improvements to package hosting and security In-Reply-To: References: <4487776e-b862-429c-adae-477813e560f3@googlegroups.com> <33c89d4a-12b9-495b-a151-7e317177b061@googlegroups.com> Message-ID: <2df9a692-4e41-45c5-ac60-fd26ce81e3da@googlegroups.com> As far as the threat model is concerned, I believe the major concern is using "untrusted" code (for the definition of untrusted such that the source is not the author you expected). Supposing this group succeeds in facilitating greater commercial adoption of Haskell, the one of the easiest vectors (at this moment) to break someone's Haskell-based system is to simply swap a modified version of a library containing an exploit. That said, we should also recognize this as a general problem. Some ideas on package manager attacks are at [1]. Further, I see what Gershom is saying about gaining adoption within the current community. However, I wonder (going off of his thought about decomposing the problem) if the system for trust could be generic enough to integrate into an existing solution to help mitigate this risk. [1] http://www.cs.arizona.edu/stork/packagemanagersecurity/attacks-on-package-managers.html On Wednesday, April 15, 2015 at 8:13:28 AM UTC-5, Gershom B wrote: > > On April 15, 2015 at 8:34:07 AM, Michael Snoyman (mic... at snoyman.com > ) wrote: > > I've given plenty of concrete attack vectors in this thread. I'm not > going > > to repeat all of them here. But addressing your "simpler idea": how do > we > > know that the claimed person actually performed that action? If Hackage > is > > hacked, there's no way to verify *any* such log. With a crypto-based > > system, we know specifically which key is tied to which action, and can > > invalidate those actions in the case of a key becoming compromised. > > So amend Carter?s proposal with the requirement that admin/trustee actions > be signed as well. Now we can audit the verification trail. Done. > > But let me pose a more basic question: Assume somebody falsified the log, > but could _not_ falsify any package contents (because the latter were > verified at the use site). And further, assume we had a signing trail for > revisions as well. Now what is the worst that this bad actor could > accomplish? > > This is why it helps to have a ?threat model?. 
I think there is a > misunderstanding here on what Carter is asking for. A ?threat model? is not > a list of potential vulnerabilities. Rather, it is a statement of what > types of things are important to mitigate against, and from whom. There is > no such thing as a completely secure system, except, perhaps an unplugged > one. So when you say you want something ?safe? and then tell us ways the > current system is ?unsafe? then that?s not enough. We need to have a > criterion by which we _could_ judge a future system at least ?reasonably > safe enough?. > > My sense of a threat model prioritizes package signing (and I guess > revision signing now too) but e.g. doesn?t consider a signed verifiable > audit trail a big deal, because falsifying those logs doesn?t easily > translate into an attack vector. > > You are proposing large, drastic changes. Such changes are likely to get > bogged down and fail, especially to the degree they involve designing > systems in ways that are not in widespread use already. And even if such > changes were feasible, and even if they were a sound approach, it would > take a long time to put the pieces together to carry them out smoothly > across the ecosystem. > > Meanwhile, if we can say ?in fact this problem decomposes into six nearly > unrelated problems? and then prioritize those problems, it is likely that > all can be addressed incrementally, which means less development work, > greater chance of success, and easier rollout. I remain convinced that you > raise some genuine issues, but they decompose into nearly unrelated > problems that can and should be tackled individually. > > Cheers, > Gershom > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From haskell at jschneider.net Wed Apr 15 13:26:30 2015 From: haskell at jschneider.net (Jon Schneider) Date: Wed, 15 Apr 2015 14:26:30 +0100 Subject: [Haskell-cafe] Execution order in IO In-Reply-To: References: <9038cf9603619d4466c4e95aed1ccde4.squirrel@mail.jschneider.net> <552E6238.7020308@gmail.com> Message-ID: <3d77ffb3686ac509e60b0c415bb835b0.squirrel@mail.jschneider.net> Perhaps I need to be more specific. main = do a <- getLine b <- getLine Can we say "a" absolutely always receives the first line of input and if so what makes this the case rather than "b" receiving it ? Or do things need to be slightly more complicated to achieve this ? Sorry it's just the engineer in me. I think once I've got this clear I'll be happy to move on. Jon From michael at snoyman.com Wed Apr 15 13:27:54 2015 From: michael at snoyman.com (Michael Snoyman) Date: Wed, 15 Apr 2015 13:27:54 +0000 Subject: [Haskell-cafe] [haskell-infrastructure] Improvements to package hosting and security In-Reply-To: References: <4487776e-b862-429c-adae-477813e560f3@googlegroups.com> <33c89d4a-12b9-495b-a151-7e317177b061@googlegroups.com> Message-ID: On Wed, Apr 15, 2015 at 4:13 PM Gershom B wrote: > On April 15, 2015 at 8:34:07 AM, Michael Snoyman (michael at snoyman.com) > wrote: > > I've given plenty of concrete attack vectors in this thread. I'm not > going > > to repeat all of them here. But addressing your "simpler idea": how do we > > know that the claimed person actually performed that action? If Hackage > is > > hacked, there's no way to verify *any* such log. With a crypto-based > > system, we know specifically which key is tied to which action, and can > > invalidate those actions in the case of a key becoming compromised. 
> > So amend Carter?s proposal with the requirement that admin/trustee actions > be signed as well. Now we can audit the verification trail. Done. > > But let me pose a more basic question: Assume somebody falsified the log, > but could _not_ falsify any package contents (because the latter were > verified at the use site). And further, assume we had a signing trail for > revisions as well. Now what is the worst that this bad actor could > accomplish? > > This is why it helps to have a ?threat model?. I think there is a > misunderstanding here on what Carter is asking for. A ?threat model? is not > a list of potential vulnerabilities. Rather, it is a statement of what > types of things are important to mitigate against, and from whom. There is > no such thing as a completely secure system, except, perhaps an unplugged > one. So when you say you want something ?safe? and then tell us ways the > current system is ?unsafe? then that?s not enough. We need to have a > criterion by which we _could_ judge a future system at least ?reasonably > safe enough?. > > My sense of a threat model prioritizes package signing (and I guess > revision signing now too) but e.g. doesn?t consider a signed verifiable > audit trail a big deal, because falsifying those logs doesn?t easily > translate into an attack vector. > > You are proposing large, drastic changes. Such changes are likely to get > bogged down and fail, especially to the degree they involve designing > systems in ways that are not in widespread use already. And even if such > changes were feasible, and even if they were a sound approach, it would > take a long time to put the pieces together to carry them out smoothly > across the ecosystem. > > Meanwhile, if we can say ?in fact this problem decomposes into six nearly > unrelated problems? and then prioritize those problems, it is likely that > all can be addressed incrementally, which means less development work, > greater chance of success, and easier rollout. I remain convinced that you > raise some genuine issues, but they decompose into nearly unrelated > problems that can and should be tackled individually. > > > I think you've missed what I've said, so I'll try to say it more clearly: we have no insight right now into how Hackage makes decisions about who's allowed to upload and revise packages. We have no idea how to make a correspondence between a Hackage username and some externally-verifiable identity (like a GPG public key). In that world: how can we externally verify signatures of packages on Hackage? I'm pretty familiar with Chris's package signing work. It's a huge step forward. But by necessity of the weaknesses in what Hackage is exposing, we have no way of fully verifying all signatures. If you see the world differently, please explain. Both you and Carter seem to assume I'm talking about some other problem that's not yet been described. I'm just trying to solve the problem already identified. I think you've missed a few steps necessary to have a proper package signing system in place. You may think that the proposal I've put together is large and a massive shift. It's honestly the minimal number of changes I can see towards having a method to fully verify all signatures of packages that Hackage is publishing. If you see a better way to do it, I'd rather do that, so tell me what it is. Michael * * * I think the above was clear enough, but in case it's not, here's an example. Take the yesod-core package, for which MichaelSnoyman and GregWeber are listed as maintainers. 
Suppose that we have information from Hackage saying: yesod-core-1.4.0 released by MichaelSnoyman yesod-core-1.4.1 released by FelipeLessa yesod-core-1.4.2 released by GregWeber yesod-core-1.4.2 cabal file revision by HerbertValerioRiedel How do I know: * Which signatures on yesod-core-1.4.0 to trust? Should I trust MichaelSnoyman's and GregWeber's only? What if GregWeber wasn't a maintainer when 1.4.0 was released? * How can 1.4.1 be trusted? It was released by a non-maintainer. In reality, we can guess that FelipeLessa used to be a maintainer but was then removed, but how do we know this? * Similarly, we can guess that HerbertValerioRiedel is granted as a trustee the right to revise a cabal file. * But in any event: how do we get the GPG keys for any of these users? * And since Hackage isn't enforcing any GPG signatures, what should we do when the signatures for a package don't exist? This is just one example of the impediments to adding package signing to the current Hackage system. -------------- next part -------------- An HTML attachment was scrubbed... URL: From hesselink at gmail.com Wed Apr 15 13:29:28 2015 From: hesselink at gmail.com (Erik Hesselink) Date: Wed, 15 Apr 2015 15:29:28 +0200 Subject: [Haskell-cafe] Execution order in IO In-Reply-To: <552E6238.7020308@gmail.com> References: <9038cf9603619d4466c4e95aed1ccde4.squirrel@mail.jschneider.net> <552E6238.7020308@gmail.com> Message-ID: On Wed, Apr 15, 2015 at 3:06 PM, silvio wrote: > Now you can use that to implement your monads. For the state monad for > instance there is a strict and a "lazy" version. > > newtype State s a = State { runState :: s -> (a,s) } > > instance Monad (State s) where > act1 >>= f2 = State $ \s1 -> runState (f2 input2) s2 where > (input2, s2) = runState act1 s1 > > instance Monad (State s) where > act1 >>= f2 = State $ \s1 -> s2 `seq` runState (f2 input2) s2 where > (input2, s2) = runState act1 s1 Note that these do not correspond to the Strict and Lazy State in transformers. The former (which you call lazy) corresponds to Strict from transformers. The lazier version uses lazy pattern matching in bind. Erik From fryguybob at gmail.com Wed Apr 15 13:31:38 2015 From: fryguybob at gmail.com (Ryan Yates) Date: Wed, 15 Apr 2015 09:31:38 -0400 Subject: [Haskell-cafe] Execution order in IO In-Reply-To: <3d77ffb3686ac509e60b0c415bb835b0.squirrel@mail.jschneider.net> References: <9038cf9603619d4466c4e95aed1ccde4.squirrel@mail.jschneider.net> <552E6238.7020308@gmail.com> <3d77ffb3686ac509e60b0c415bb835b0.squirrel@mail.jschneider.net> Message-ID: In your example a and b will be ordered as you would expect. On Wed, Apr 15, 2015 at 9:26 AM, Jon Schneider wrote: > Perhaps I need to be more specific. > > main = do > a <- getLine > b <- getLine > > Can we say "a" absolutely always receives the first line of input and if > so what makes this the case rather than "b" receiving it ? Or do things > need to be slightly more complicated to achieve this ? > > Sorry it's just the engineer in me. I think once I've got this clear I'll > be happy to move on. > > Jon > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From hyarion at iinet.net.au Wed Apr 15 13:35:31 2015 From: hyarion at iinet.net.au (Ben) Date: Wed, 15 Apr 2015 23:35:31 +1000 Subject: [Haskell-cafe] Execution order in IO In-Reply-To: References: <9038cf9603619d4466c4e95aed1ccde4.squirrel@mail.jschneider.net> <552E6238.7020308@gmail.com> <3d77ffb3686ac509e60b0c415bb835b0.squirrel@mail.jschneider.net> Message-ID: <14d7321b-532a-4888-bbfd-5149cbef8a50@email.android.com> And this is because the implementation of IO is *specifically* crafted to guarantee this ordering. It is not a property of monads in general, or do syntax in general. On 15 April 2015 11:31:38 pm AEST, Ryan Yates wrote: >In your example a and b will be ordered as you would expect. > >On Wed, Apr 15, 2015 at 9:26 AM, Jon Schneider >wrote: > >> Perhaps I need to be more specific. >> >> main = do >> a <- getLine >> b <- getLine >> >> Can we say "a" absolutely always receives the first line of input and >if >> so what makes this the case rather than "b" receiving it ? Or do >things >> need to be slightly more complicated to achieve this ? >> >> Sorry it's just the engineer in me. I think once I've got this clear >I'll >> be happy to move on. >> >> Jon >> >> _______________________________________________ >> Haskell-Cafe mailing list >> Haskell-Cafe at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> > > >------------------------------------------------------------------------ > >_______________________________________________ >Haskell-Cafe mailing list >Haskell-Cafe at haskell.org >http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe -------------- next part -------------- An HTML attachment was scrubbed... URL: From fryguybob at gmail.com Wed Apr 15 13:39:04 2015 From: fryguybob at gmail.com (Ryan Yates) Date: Wed, 15 Apr 2015 09:39:04 -0400 Subject: [Haskell-cafe] Execution order in IO In-Reply-To: <14d7321b-532a-4888-bbfd-5149cbef8a50@email.android.com> References: <9038cf9603619d4466c4e95aed1ccde4.squirrel@mail.jschneider.net> <552E6238.7020308@gmail.com> <3d77ffb3686ac509e60b0c415bb835b0.squirrel@mail.jschneider.net> <14d7321b-532a-4888-bbfd-5149cbef8a50@email.android.com> Message-ID: > > And this is because the implementation of IO is *specifically* crafted to > guarantee this ordering. It is not a property of monads in general, or do > syntax in general. > Yes, this is an important point. My "as you would expect" was pointed at the IO part of Jon's example, not the monad part :D. Thanks for clarifying. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gershomb at gmail.com Wed Apr 15 13:45:52 2015 From: gershomb at gmail.com (Gershom B) Date: Wed, 15 Apr 2015 09:45:52 -0400 Subject: [Haskell-cafe] [haskell-infrastructure] Improvements to package hosting and security In-Reply-To: References: <4487776e-b862-429c-adae-477813e560f3@googlegroups.com> <33c89d4a-12b9-495b-a151-7e317177b061@googlegroups.com> Message-ID: On April 15, 2015 at 9:27:55 AM, Michael Snoyman (michael at snoyman.com) wrote: > I think the above was clear enough, but in case it's not, here's an > example. Take the yesod-core package, for which MichaelSnoyman and > GregWeber are listed as maintainers. Suppose that we have information from > Hackage saying: > > yesod-core-1.4.0 released by MichaelSnoyman > yesod-core-1.4.1 released by FelipeLessa > yesod-core-1.4.2 released by GregWeber > yesod-core-1.4.2 cabal file revision by HerbertValerioRiedel > > How do I know: > > * Which signatures on yesod-core-1.4.0 to trust? 
Should I trust > MichaelSnoyman's and GregWeber's only? What if GregWeber wasn't a > maintainer when 1.4.0 was released? > * How can 1.4.1 be trusted? It was released by a non-maintainer. In > reality, we can guess that FelipeLessa used to be a maintainer but was then > removed, but how do we know this? > * Similarly, we can guess that HerbertValerioRiedel is granted as a trustee > the right to revise a cabal file. > * But in any event: how do we get the GPG keys for any of these users? > * And since Hackage isn't enforcing any GPG signatures, what should we do > when the signatures for a package don't exist? > > This is just one example of the impediments to adding package signing to > the current Hackage system. None of this makes sense to me. You should trust whoever?s keys you choose to trust. That is your problem. I can?t tell you who to trust. How do you get the GPG key? Well that is also your problem. We can?t implement our own service for distributing GPG keys. That?s nuts. Why should your trust in a package be based on if a ?maintainer? or a ?non-maintainer? released it? That?s a bad criteria. How can I trust 1.4.0? Perhaps somebody paid you a lot of money to insert a hole in it. I trust it if I trust _you_, not if I trust that you were listed as ?maintainer? for a fragment of time. I think you are confusing the maintainer field to mean something other than it does ? which is simply the list of people authorized at some point in time to upload a package.? In the future, we can at first optionally, and then later on a stricter basis encourage and then enforce signing. I think this is a good idea. But, and here we apparently disagree completely, it seems to me that everything else is not and should not be the job of a centralized server. Now, on this count: > we have no insight right now into how Hackage makes decisions about who's allowed to upload and revise packages. This is weird. ?Hackage? doesn?t make decisions. People do. Hackage is just a program, run on a machine. It enforces permissioning. Those permissions can be viewed. So I can tell you who the trustees are, who the admins are, and who the maintainers are for any given package. If any of that information about how these permissions are granted, by whom, and when, is not logged (and some that should be currently isn?t I?ll grant) then we can amend the codebase to log it. If we wish, we can make the log verifiable via an audit trail. We can also make admin actions verifiable. This is precisely what carter proposed (with my amendment). On > We have no idea how to make a correspondence between a Hackage username and some externally-verifiable identity (like a GPG public key). In that world: how can we externally verify signatures of packages on Hackage? My proposal again is simple ? treat the hackage username as a convenience, not as anything fundamental to the verification model. Treat verification entirely independently. Assume I have a GPG key for Michael Snoyman. How do I know a certain version of yesod is due to him? I don?t need to ask Hackage at all. I just check that he signed it with his key. That?s all that matters, right? 
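To make the shape of that argument concrete, here is a minimal sketch of such a check (every name in it, KeyId, Signature, Tarball, Verify, trusted, is invented purely for illustration; none of this is Hackage's API or any existing library's). The decision is a function of keys you already trust and signatures you can verify, and a Hackage username never appears:

import qualified Data.ByteString as BS
import qualified Data.Set as Set

-- Hypothetical placeholder types, not a real API.
newtype KeyId  = KeyId BS.ByteString deriving (Eq, Ord, Show)
data Signature = Signature { sigKey :: KeyId, sigBytes :: BS.ByteString }
data Tarball   = Tarball   { tarName :: String, tarBytes :: BS.ByteString }

-- The caller plugs in a real verifier (say, a wrapper around an OpenPGP
-- library); the policy itself never consults the server.
type Verify = KeyId -> BS.ByteString -> BS.ByteString -> Bool

trusted :: Verify -> Set.Set KeyId -> Tarball -> [Signature] -> Bool
trusted verify myKeys pkg sigs =
  any (\s -> sigKey s `Set.member` myKeys
          && verify (sigKey s) (sigBytes s) (tarBytes pkg)) sigs

Whether the set of trusted keys comes from a web of trust, a company keyring, or a file checked into your own project is then entirely the client's business, not the server's.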
-- Gershom From Andrew.Butterfield at scss.tcd.ie Wed Apr 15 13:51:21 2015 From: Andrew.Butterfield at scss.tcd.ie (Andrew Butterfield) Date: Wed, 15 Apr 2015 14:51:21 +0100 Subject: [Haskell-cafe] Execution order in IO In-Reply-To: <3d77ffb3686ac509e60b0c415bb835b0.squirrel@mail.jschneider.net> References: <9038cf9603619d4466c4e95aed1ccde4.squirrel@mail.jschneider.net> <552E6238.7020308@gmail.com> <3d77ffb3686ac509e60b0c415bb835b0.squirrel@mail.jschneider.net> Message-ID: Hi Jon, > On 15 Apr 2015, at 14:26, Jon Schneider wrote: > > Perhaps I need to be more specific. > > main = do > a <- getLine > b <- getLine > > Can we say "a" absolutely always receives the first line of input and if > so what makes this the case rather than "b" receiving it ? Or do things > need to be slightly more complicated to achieve this ? Yes, that is the case; nothing more complicated is needed. The IO monad acts like a state monad under the hood, and getLine changes that "IO state" to one in which the next line has just been read in. So the contents of 'b' will be the line that was read immediately after the line whose contents ended up in 'a'. > > Sorry it's just the engineer in me. I think once I've got this clear I'll > be happy to move on. I'm an engineer too - know the feeling. But imagine a pure function f that does something with strings, and which can fail badly (unhandled runtime exception) if the string is ill-formed. Now consider main = do c <- fmap f $ getLine d <- getLine .... something involving c .... Now, if the first getLine returns an ill-formed string, laziness may mean that the second getLine still occurs, and we only see the program crash with a runtime error in the evaluation of c after both getLines have run. In effect the evaluation of f is deferred until its result is needed in the third line... Hope this helps! > > Jon > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe Andrew Butterfield School of Computer Science & Statistics Trinity College Dublin 2, Ireland From mathieu at fpcomplete.com Wed Apr 15 14:17:31 2015 From: mathieu at fpcomplete.com (Mathieu Boespflug) Date: Wed, 15 Apr 2015 16:17:31 +0200 Subject: [Haskell-cafe] [haskell-infrastructure] Improvements to package hosting and security In-Reply-To: References: <4487776e-b862-429c-adae-477813e560f3@googlegroups.com> <33c89d4a-12b9-495b-a151-7e317177b061@googlegroups.com> Message-ID: > In the future, we can at first optionally, and then later on a stricter basis encourage and then enforce signing. I think this is a good idea. > > But, and here we apparently disagree completely, it seems to me that everything else is not and should not be the job of a centralized server. Actually, I think you and Michael are in violent *agreement* on this particular point. At the core of the gist that was pointed to earlier in this thread [1] is the idea that we should have some kind of central notepad, where anyone is allowed to scribble anything they like, even add pointers to packages that are completely broken, don't build, or are malicious Trojan horses. Then, it's up to end users to filter out the wheat from the chaff. In particular, it's up to the user to pretend those scribbles that were added by untrusted sources were just never there, *according to the user's own trust model*. The central notepad does not enforce any particular trust model.
It just provides sufficient mechanism so that the information necessary to support common trust models, such as WoT of GPG keys, can be uploaded and/or pointed to and found. In this way, any trust model can be supported. We could refactor Hackage on top of this notepad, and have Hackage upload metadata about those scribbles that *it* thinks are legit, say because Hackage performed the scribble itself on behalf of some user, but only did so after authenticating said user, according to its own notion of authentication. Users are free to say "I trust any scribble to the notepad about any package that was added by an authenticated Hackage user". Or "I only trust scribbles from my Haskell friends whom I have met at ICFP and on that occasion exchanged keys". Or a union of both. Or anything else really. [1] https://gist.github.com/snoyberg/732aa47a5dd3864051b9 From marcin.jan.mrotek at gmail.com Wed Apr 15 15:34:50 2015 From: marcin.jan.mrotek at gmail.com (Marcin Mrotek) Date: Wed, 15 Apr 2015 17:34:50 +0200 Subject: [Haskell-cafe] Execution order in IO In-Reply-To: <14d7321b-532a-4888-bbfd-5149cbef8a50@email.android.com> References: <9038cf9603619d4466c4e95aed1ccde4.squirrel@mail.jschneider.net> <552E6238.7020308@gmail.com> <3d77ffb3686ac509e60b0c415bb835b0.squirrel@mail.jschneider.net> <14d7321b-532a-4888-bbfd-5149cbef8a50@email.android.com> Message-ID: > And this is because the implementation of IO is *specifically* crafted to > guarantee this ordering. It is not a property of monads in general, or do > syntax in general. Is it? The example main = do a <- getLine b <- getLine let c = foo a b -- I guess you'd want to do something about a and b eventually isn't really representative of monads, as it could be done just as well with the applicative functor interface (and apparently there are plans for GHC 7.12 to figure it out on its own). Applicative instance for IO indeed does order effects left to right, so this could be used as an example of "specific crafting". Changing it to, let's say main = do a <- getLine b <- getLine c <- foo a b makes it obvious there's no way to evaluate c before a and b, whatever monad that would be, as foo may c can change the shape of the monad anyway it pleases. For example if the monad in question was Maybe, there would be no way to tell whether foo returns Just or Nothing without actually evaluating it. (a and b could still be reordered, but again, this is a feature of an applicative functor, which all monads must derive from, but not of the monadic interface as of itself) Best regards, Marcin Mrotek From haskell at nand.wakku.to Wed Apr 15 15:41:38 2015 From: haskell at nand.wakku.to (Niklas Haas) Date: Wed, 15 Apr 2015 17:41:38 +0200 Subject: [Haskell-cafe] Execution order in IO In-Reply-To: References: <9038cf9603619d4466c4e95aed1ccde4.squirrel@mail.jschneider.net> <552E6238.7020308@gmail.com> <3d77ffb3686ac509e60b0c415bb835b0.squirrel@mail.jschneider.net> <14d7321b-532a-4888-bbfd-5149cbef8a50@email.android.com> Message-ID: <20150415174138.GB8773@nanodesu.localdomain> On Wed, 15 Apr 2015 17:34:50 +0200, Marcin Mrotek wrote: > Changing it to, let's say > > main = do > a <- getLine > b <- getLine > c <- foo a b > > makes it obvious there's no way to evaluate c before a and b, whatever > monad that would be, as foo may c can change the shape of the monad > anyway it pleases. 
import Debug.Trace foo _ _ = return () main = do a <- trace "a" <$> getLine b <- trace "b" <$> getLine c <- trace "c" <$> foo a b print c Running this program: first input second input c () So as you can see, ?a? and ?b? never get evaluated. From marcin.jan.mrotek at gmail.com Wed Apr 15 15:50:55 2015 From: marcin.jan.mrotek at gmail.com (Marcin Mrotek) Date: Wed, 15 Apr 2015 17:50:55 +0200 Subject: [Haskell-cafe] Execution order in IO In-Reply-To: <20150415174138.GB8773@nanodesu.localdomain> References: <9038cf9603619d4466c4e95aed1ccde4.squirrel@mail.jschneider.net> <552E6238.7020308@gmail.com> <3d77ffb3686ac509e60b0c415bb835b0.squirrel@mail.jschneider.net> <14d7321b-532a-4888-bbfd-5149cbef8a50@email.android.com> <20150415174138.GB8773@nanodesu.localdomain> Message-ID: Sorry, I got it wrong. I should have said "there's no way to evaluate c without evaluating getline twice (<=> evaluating the constructor of the monad type, but not necessarily any further) ", but I guess this is what Jon Schneider meant by: > Can we say "a" absolutely always receives the first line of input and if so what makes this the case rather than "b" receiving it ? Best regards, Marcin Mrotek From silvio.frischi at gmail.com Wed Apr 15 16:44:47 2015 From: silvio.frischi at gmail.com (silvio) Date: Wed, 15 Apr 2015 18:44:47 +0200 Subject: [Haskell-cafe] Execution order in IO In-Reply-To: References: <9038cf9603619d4466c4e95aed1ccde4.squirrel@mail.jschneider.net> <552E6238.7020308@gmail.com> Message-ID: <552E957F.9050108@gmail.com> > Note that these do not correspond to the Strict and Lazy State in > transformers. The former (which you call lazy) corresponds to Strict > from transformers. The lazier version uses lazy pattern matching in > bind. I knew I was going to make a mistake somewhere :). The problem is they are transformers and i wanted an example without transformers to confuse the issue So what would be a correct lazy version?? instance Monad (State s) where act1 >>= f2 = State $ \s1 -> runState (f2 (fst res)) (snd res) where res = runState act1 s1 instance Monad (State s) where act1 >>= f2 = State $ \s1 -> runState (f2 input2) s2 where ~(input2, s2) = runState act1 s1 These should do the same right? Silvio From silvio.frischi at gmail.com Wed Apr 15 17:47:58 2015 From: silvio.frischi at gmail.com (silvio) Date: Wed, 15 Apr 2015 19:47:58 +0200 Subject: [Haskell-cafe] Execution order in IO In-Reply-To: <3d77ffb3686ac509e60b0c415bb835b0.squirrel@mail.jschneider.net> References: <9038cf9603619d4466c4e95aed1ccde4.squirrel@mail.jschneider.net> <552E6238.7020308@gmail.com> <3d77ffb3686ac509e60b0c415bb835b0.squirrel@mail.jschneider.net> Message-ID: <552EA44E.7070504@gmail.com> Let's just have a look at the monad instance of IO which is defined in the files ghc-prim/GHC/Types.hs and base/GHC/Base.hs newtype IO a = IO (State# RealWorld -> (# State# RealWorld, a #)) instance Monad IO where ... (>>=) = bindIO ... bindIO :: IO a -> (a -> IO b) -> IO b bindIO (IO m) k = IO $ \ s -> case m s of (# new_s, a #) -> unIO (k a) If you can forget for a minute about all the # you will end up with this. newtype IO a = IO (RealWorld -> (RealWorld, a)) bindIO (IO m) k = IO $ \ s -> case m s of (new_s, a) -> unIO (k a) when the following part is evaluated: case m s of (new_s, a) -> unIO (k a) (m s) has to be evaluated first in order to ensure that the result matches the pattern (new_s, a) and is not bottom/some infinite calculation/an error. This is why IO statements are evaluated in order. 
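As an aside, here is a self-contained toy that makes the same point without any #s at all. The World type and the two binds below are made up for illustration, with an Int standing in for the RealWorld token; this is not GHC's actual implementation. Only the strict case match forces the first action before the second, which you can see from the order of the trace output.

import Debug.Trace (trace)

type World = Int

-- Like the simplified bindIO above: the case forces (m s) before the
-- continuation runs.
strictBind :: (World -> (World, a)) -> (a -> World -> (World, b))
           -> World -> (World, b)
strictBind m k = \s -> case m s of (s', a) -> k a s'

-- The same shape with an irrefutable (lazy) binding: (m s) is only forced
-- when somebody later demands the threaded state.
lazyBind :: (World -> (World, a)) -> (a -> World -> (World, b))
         -> World -> (World, b)
lazyBind m k = \s -> let (s', a) = m s in k a s'

step :: String -> World -> (World, ())
step name = \s -> trace name (s + 1, ())

main :: IO ()
main = do
  -- traces "first" then "second"
  print (fst ((step "first" `strictBind` (\_ -> step "second")) 0))
  -- traces "second" before "first": nothing forced the first step early
  print (fst ((step "first" `lazyBind` (\_ -> step "second")) 0))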
Silvio From blaze at ruddy.ru Wed Apr 15 18:54:12 2015 From: blaze at ruddy.ru (Andrey Sverdlichenko) Date: Wed, 15 Apr 2015 11:54:12 -0700 Subject: [Haskell-cafe] [haskell-infrastructure] Improvements to package hosting and security In-Reply-To: References: <4487776e-b862-429c-adae-477813e560f3@googlegroups.com> <33c89d4a-12b9-495b-a151-7e317177b061@googlegroups.com> Message-ID: I think this "public notepad" approach will make security situation worse for most users. It is probably safe to assume that typical developer do not have haskell friends who publish packages, especially popular ones, never met anyone at or even attended ICFP and may have very vague idea about GPG keys at all. In short, they just do not have any basis to build trust model on. Currently, at least some level of security is provided by restrictions on who can upload new packages: if this package is there for a few years, and new version is uploaded by the same maintainer, it gives some assurance that new version does not upload all your files to botnet and wipe out hard drive. There are several assumptions like that maintainer password is not stolen, there is no MITM attack, etc, but however weak this security is, these attacks are usually targeted and at least they require some additional effort from villain. Now, if I get it right, you want to allow anyone to upload foo-1.0.1 to hackage and let user sort it out if he trusts this update. It will never work: for all we know about security, when asked "do you trust this package's signature?" user will either get annoyed, shrug and click "Yes", or if paranoid, get annoyed and go away. He is just does not know enough to make decisions you are asking him to make. And adding vector implementation with something malicious in it's build script just became a matter of "cabal upload". If you build such a system, you have to provide it with reasonable set of defaults, and it is where "we are in business of key distribution" thing raises its head again. On Wed, Apr 15, 2015 at 7:17 AM, Mathieu Boespflug wrote: >> In the future, we can at first optionally, and then later on a stricter basis encourage and then enforce signing. I think this is a good idea. >> >> But, and here we apparently disagree completely, it seems to me that everything else is not and should not be the job of a centralized server. > > Actually, I think you and Michael are in violent *agreement* on this > particular point. At the core of the gist that was pointed to earlier > in this thread [1], is the idea that we should have some kind of > central notepad, where anyone is allowed to scribble anything they > like, even add pointers to packages that are completely broken, don't > build, or are malicious trojan horses. Then, it's up to end users to > filter out the wheat from the chaff. In particular, it's up to the > user to pretend those scribbles that were added by untrusted sources > were just never there, *according to the users own trust model*. The > central notepad does not enforce any particular trust model. It just > provides sufficient mechanism so that the information necessary to > support common trust models, such as WoT of GPG keys, can be uploaded > and/or pointed to and found. > > In this way, any trust model can be supported. 
We could refactor > Hackage on top of this notepad, and have Hackage upload metadata about > those scribbles that *it* thinks are legit, say because Hackage > performed the scribble itself on behalf of some user, but only did so > after authenticating said user, according to its own notion of > authentication. > > Users are free to say "I trust any scribble to the notepad about any > package that was added by an authenticated Hackage user". Or "I only > trust scribbles from my Haskell friends whom I have met at ICFP and on > that occasion exchanged keys". Or a union of both. Or anything else > really. > > [1] https://gist.github.com/snoyberg/732aa47a5dd3864051b9 > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe From mwm at mired.org Wed Apr 15 19:15:30 2015 From: mwm at mired.org (Mike Meyer) Date: Wed, 15 Apr 2015 14:15:30 -0500 Subject: [Haskell-cafe] Coding katas/dojos and functional programming introduction In-Reply-To: References: Message-ID: On Wed, Apr 15, 2015 at 1:34 AM, Gautier DI FOLCO wrote: > Some days ago I have participated to a coding dojo* which aimed to be an introduction to functional programming. I have also facilitate 3 of events like this and do several talks on this subject. But I'm kinda disappointed because each time there is a common pattern: > 1. Unveil the problem which will be treated > 2. Let the attendees solve it > 3. Show the FP-ish solution (generally a bunch of map/fold) > I think it's frustrating for the attendees (because they don't try to solve it) and gives a false illusion of knowledge. > I don't consider myself as a "FP guru" or anything but for me FP is a matter of types and expressions, so when someone illustrate FP via map/fold, it's kind of irritating. > Ironically, the best workshop I have done was on functional generalization (you begin by two hard coded functions, sum and product, and you extract Foldable and Monoid), but again, it's not satisfying. > We can do better, we should do better. > Have you got any feedback/subjet ideas/examples on how to introduce "real" FP to beginners in a short amount of time (like 1-3 hours)? Well, functional programming is very much like an elephant. I started with it doing Scheme, so while I appreciate what the HM type system brings to the table, to me it's not what functional programming is about. To me, functional programming is about thinking about functions as data, and designing them so you can compose and combine them like you would data, as opposed to simply being tools for encapsulating code. As such, maps and folds are simple, easy-to-understand higher order functions that you can plug other functions into. But such discussions tend to highlight map/fold, not how you build the functions that get plugged into them. To me, a better example would be one of the combinator-based parsing libraries. Say Text.XML.Cursor, as in the School of Haskell tutorial at https://www.fpcomplete.com/school/starting-with-haskell/libraries-and-frameworks/text-manipulation/tagsoup . From alex.solla at gmail.com Wed Apr 15 19:23:24 2015 From: alex.solla at gmail.com (Alexander Solla) Date: Wed, 15 Apr 2015 12:23:24 -0700 Subject: [Haskell-cafe] Coding katas/dojos and functional programming introduction In-Reply-To: References: Message-ID: The first exercise I did when I learned Haskell some 8 years ago was re-implement all of the list functions in the Prelude, based on the types and documentation. 
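For example, the first few of those typically come out looking something like this (one possible rendering of the exercise, written from the types alone, not the Prelude's actual source):

-- Primed names so the file loads alongside the Prelude without clashes.
map' :: (a -> b) -> [a] -> [b]
map' _ []     = []
map' f (x:xs) = f x : map' f xs

foldr' :: (a -> b -> b) -> b -> [a] -> b
foldr' _ z []     = z
foldr' f z (x:xs) = f x (foldr' f z xs)

takeWhile' :: (a -> Bool) -> [a] -> [a]
takeWhile' p (x:xs) | p x = x : takeWhile' p xs
takeWhile' _ _            = []

Working through the rest of the list functions this way is a cheap but effective drill in recursion, pattern matching, and reading type signatures.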
On Tue, Apr 14, 2015 at 11:34 PM, Gautier DI FOLCO < gautier.difolco at gmail.com> wrote: > Hello, > > Some days ago I have participated to a coding dojo* which aimed to be an > introduction to functional programming. I have also facilitate 3 of events > like this and do several talks on this subject. But I'm kinda disappointed > because each time there is a common pattern: > 1. Unveil the problem which will be treated > 2. Let the attendees solve it > 3. Show the FP-ish solution (generally a bunch of map/fold) > I think it's frustrating for the attendees (because they don't try to > solve it) and gives a false illusion of knowledge. > I don't consider myself as a "FP guru" or anything but for me FP is a > matter of types and expressions, so when someone illustrate FP via > map/fold, it's kind of irritating. > Ironically, the best workshop I have done was on functional generalization > (you begin by two hard coded functions, sum and product, and you extract > Foldable and Monoid), but again, it's not satisfying. > We can do better, we should do better. > Have you got any feedback/subjet ideas/examples on how to introduce "real" > FP to beginners in a short amount of time (like 1-3 hours)? > > Thanks in advance for your help. > Regards. > > * Basically you have 2 to 3 hours, a problem and you try to solve with in > iteration with different constraints > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mathieu at fpcomplete.com Wed Apr 15 20:19:12 2015 From: mathieu at fpcomplete.com (Mathieu Boespflug) Date: Wed, 15 Apr 2015 22:19:12 +0200 Subject: [Haskell-cafe] [haskell-infrastructure] Improvements to package hosting and security In-Reply-To: References: <4487776e-b862-429c-adae-477813e560f3@googlegroups.com> <33c89d4a-12b9-495b-a151-7e317177b061@googlegroups.com> Message-ID: > Now, if I get it right, you want to allow anyone to upload foo-1.0.1 > to hackage and let user sort it out if he trusts this update. It will > never work: for all we know about security, when asked "do you trust > this package's signature?" user will either get annoyed, shrug and > click "Yes", or if paranoid, get annoyed and go away. He is just does > not know enough to make decisions you are asking him to make. And > adding vector implementation with something malicious in it's build > script just became a matter of "cabal upload". > If you build such a system, you have to provide it with reasonable set > of defaults, and it is where "we are in business of key distribution" > thing raises its head again. The above all sounds reasonable to me. Note however that, not that I'm convinced that this is a good default, but the default here could well be: "trust whatever was added to the notepad by Hackage on behalf of some authenticated user". That gets us back to today's status quo. We can do better of course, but just to say that this "notepad" can be completely transparent to end users, depending only on tooling. One way to know what is added by Hackage is to have Hackage sign whatever it writes in the notepad, using some key or certificate that the tooling trusts by default. That's really the baseline. As I said, we can do much better. 
At least with the central notepad approach, we're pushing policy about what packages to trust down to end user tooling, which can be swapped in and out at will, without having some central entity dictate a weaker and redundant policy. I agree with Gershom's sentiment that the policy for what to trust should be left open. It should be left up to the user. One family of policies, already discussed in this thread, is a GPG WoT. That family of policies may or may not fly for all users, I don't know, but at least the tooling for that already exists, and it's easy to put in place. Another family is implementations of The Update Framework suggested by Duncan here: https://groups.google.com/d/msg/commercialhaskell/qEEJT2LDTMU/_uj0v5PbIA8J I'm told others are working along similar lines. It'd be great if those people came out of the woodwork to talk about what they're doing in the open. And since clearly there's interest in safer package distribution, formulate proposals with a comment about how it fares to address a specific threat model, such as the 9 attacks listed in this paper (same authors as links posted previously in this thread twice): ftp://ftp.cs.arizona.edu/reports/2008/TR08-02.pdf That list of attacks is a superset of the attacks listed by Michael previously. Not all policies will address all attacks in a way that's entirely satisfactory, but at least with the central notepad we can easily evolve them over time. From gautier.difolco at gmail.com Wed Apr 15 22:28:21 2015 From: gautier.difolco at gmail.com (Gautier DI FOLCO) Date: Wed, 15 Apr 2015 22:28:21 +0000 Subject: [Haskell-cafe] Coding katas/dojos and functional programming introduction In-Reply-To: References: Message-ID: 2015-04-15 19:15 GMT+00:00 Mike Meyer : > Well, functional programming is very much like an elephant. > I have the same thought about OOP some years ago, them I discovered then first meaning of it and all was so clear and simple. My goal isn't to teach the full power of FP, my goal is to give them inspiration, to suggest that there is a wider world to explore. > As such, maps and folds are simple, easy-to-understand higher order > functions that you can plug other functions into. But such discussions > tend to highlight map/fold, not how you build the functions that get > plugged into them. > That my main concern. > To me, a better example would be one of the combinator-based parsing > libraries. Say Text.XML.Cursor, as in the School of Haskell tutorial > at > https://www.fpcomplete.com/school/starting-with-haskell/libraries-and-frameworks/text-manipulation/tagsoup > I'll have a look but parsing is a bit too "magic" I think. 2015-04-15 19:23 GMT+00:00 Alexander Solla : > The first exercise I did when I learned Haskell some 8 years ago was > re-implement all of the list functions in the Prelude, based on the types > and documentation. > Good idea, I have also done the NICTA repository and if was a nice training. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mwm at mired.org Wed Apr 15 22:40:40 2015 From: mwm at mired.org (Mike Meyer) Date: Wed, 15 Apr 2015 17:40:40 -0500 Subject: [Haskell-cafe] Coding katas/dojos and functional programming introduction In-Reply-To: References: Message-ID: On Wed, Apr 15, 2015 at 5:28 PM, Gautier DI FOLCO wrote: > 2015-04-15 19:15 GMT+00:00 Mike Meyer : > >> Well, functional programming is very much like an elephant. 
>> > > I have the same thought about OOP some years ago, them I discovered then > first meaning of it and all was so clear and simple. My goal isn't to teach > the full power of FP, my goal is to give them inspiration, to suggest that > there is a wider world to explore. > Just clarify, this is a reference to the fable of the blind men and the elephant. What you think it is like will depend on how you approach it. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jeff at datalinktech.com.au Thu Apr 16 00:57:41 2015 From: jeff at datalinktech.com.au (Jeff) Date: Thu, 16 Apr 2015 10:57:41 +1000 Subject: [Haskell-cafe] Advice needed on how to improve some code Message-ID: <5742D054-1833-4C53-909B-2016481C768F@datalinktech.com.au> Hello, I am seeking some advice on how I might improve a bit of code. The function in question reads and parses part of a binary protocol, storing the parsed info as it proceeds. parseDeviceData is called by parseDevice (shown further down). It looks to me like there should be a more concise, less repetitive way to do what parseDeviceData does. Any advice on this would be greatly appreciated. parseDeviceData :: P.Payload -> Parser P.Payload parseDeviceData pl = let mdm = P.dataMask ( P.payloadData pl ) in ( let pld = P.payloadData pl in if testBit mdm ( fromEnum D.Sys ) then parseDeviceSysData >>= ( \s -> return ( pl { P.payloadData = pld { P.sysData = Just s } } ) ) else return pl ) >>= ( \pl' -> let pld = P.payloadData pl' in if testBit mdm ( fromEnum D.GPS ) then parseDeviceGPSData >>= ( \s -> return ( pl' { P.payloadData = pld { P.gpsData = Just s } } ) ) else return pl' ) >>= ( \pl' -> let pld = P.payloadData pl' in if testBit mdm ( fromEnum D.GSM ) then parseDeviceGSMData >>= ( \s -> return ( pl' { P.payloadData = pld { P.gsmData = Just s } } ) ) else return pl' ) >>= ( \pl' -> let pld = P.payloadData pl' in if testBit mdm ( fromEnum D.COT ) then parseDeviceCOTData >>= ( \s -> return ( pl' { P.payloadData = pld { P.cotData = Just s } } ) ) else return pl' ) >>= ( \pl' -> let pld = P.payloadData pl' in if testBit mdm ( fromEnum D.ADC ) then parseDeviceADCData >>= ( \s -> return ( pl' { P.payloadData = pld { P.adcData = Just s } } ) ) else return pl' ) >>= ( \pl' -> let pld = P.payloadData pl' in if testBit mdm ( fromEnum D.DTT ) then parseDeviceDTTData >>= ( \s -> return ( pl' { P.payloadData = pld { P.dttData = Just s } } ) ) else return pl' ) >>= ( \pl' -> let pld = P.payloadData pl' in if testBit mdm ( fromEnum D.OneWire ) then parseDeviceOneWireData >>= ( \s -> return ( pl' { P.payloadData = pld { P.iwdData = Just s } } ) ) else return pl' ) >>= ( \pl' -> if testBit mdm ( fromEnum D.ETD ) then parseDeviceEventData pl' else return pl' ) The Parser above is a Data.Binary.Strict.Get wrapped in a StateT, where the state is a top-level structure for holding the parsed packet. parseDevice :: Bool -> Parser () parseDevice _hasEvent = parseTimestamp >>= ( \ts -> if _hasEvent then lift getWord8 >>= ( \e -> lift getWord16be >>= ( \mdm -> return ( P.Payload "" ( Just ts ) $ P.blankDevicePayloadData { P.dataMask = mdm , P.eventID = toEnum ( fromIntegral e .&. 
0x7f ) , P.deviceStatusFlag = testBit e 7 , P.hasEvent = True } ) ) ) else lift getWord16be >>= ( \mdm -> return ( P.Payload "" ( Just ts ) $ P.blankDevicePayloadData { P.dataMask = mdm } ) ) ) >>= parseDeviceData >>= ( \dpl -> get >>= ( \p -> put ( p { P.payloads = dpl : P.payloads p } ) ) ) Here are the data types for the Packet and Payload: data Payload = Payload { imei :: !BS.ByteString , timestamp :: Maybe Word64 , payloadData :: PayloadData } data PayloadData = HeartBeatPL | SMSFwdPL { smsMesg :: !BS.ByteString } | SerialPL { auxData :: !Word8 , fixFlag :: !Word8 , gpsCoord :: !GPSCoord , serialData :: !BS.ByteString } | DevicePL { hasEvent :: !Bool , deviceStatusFlag :: !Bool , eventID :: !E.EventID , dataMask :: !Word16 , sysData :: Maybe DS.SysData , gpsData :: Maybe DGP.GPSData , gsmData :: Maybe DGS.GSMData , cotData :: Maybe DC.COTData , adcData :: Maybe DA.ADCData , dttData :: Maybe DD.DTTData , iwdData :: Maybe DO.OneWireData , etdSpd :: Maybe ES.SpeedEvent , etdGeo :: Maybe EG.GeoEvent , etdHealth :: Maybe EH.HealthEvent , etdHarsh :: Maybe EHD.HarshEvent , etdOneWire :: Maybe EO.OneWireEvent , etdADC :: Maybe EA.ADCEvent } deriving ( Show ) data Packet = Packet { protocolVersion :: !Word8 , packetType :: !PT.PacketType , deviceID :: Maybe BS.ByteString , payloads :: ![ Payload ] , crc :: !Word16 } deriving ( Show ) Lastly, here is the Parser monad transformer: module G6S.Parser where import Control.Monad.State.Strict import Data.Binary.Strict.Get import qualified Data.ByteString as BS import qualified G6S.Packet as GP type Parser = StateT GP.Packet Get runParser :: Parser a -> BS.ByteString -> Maybe a runParser p bs = let ( result, _ ) = runGet ( runStateT p GP.initPacket ) bs in case result of Right tup -> Just $ fst tup Left _ -> Nothing I hope there is enough info here. Thanks, Jeff From david.feuer at gmail.com Thu Apr 16 03:19:30 2015 From: david.feuer at gmail.com (David Feuer) Date: Wed, 15 Apr 2015 23:19:30 -0400 Subject: [Haskell-cafe] Advice needed on how to improve some code In-Reply-To: <5742D054-1833-4C53-909B-2016481C768F@datalinktech.com.au> References: <5742D054-1833-4C53-909B-2016481C768F@datalinktech.com.au> Message-ID: I haven't dug into the guts of this *at all*, but why don't you start by using `do` notation instead of a million >>= invocations? It also looks like you may have some common patterns you can exploit by defining some more functions. On Wed, Apr 15, 2015 at 8:57 PM, Jeff wrote: > Hello, > > I am seeking some advice on how I might improve a bit of code. > The function in question reads and parses part of a binary protocol, > storing the parsed info as it proceeds. > > parseDeviceData is called by parseDevice (shown further down). > > It looks to me like there should be a more concise, less repetitive way to > do what > parseDeviceData does. Any advice on this would be greatly appreciated. 
> > parseDeviceData :: P.Payload -> Parser P.Payload > parseDeviceData pl = > let > mdm = P.dataMask ( P.payloadData pl ) > in > ( let pld = P.payloadData pl in > if testBit mdm ( fromEnum D.Sys ) > then > parseDeviceSysData >>= > ( \s -> return ( pl { P.payloadData = pld { P.sysData = Just s > } } ) ) > else > return pl ) >>= > ( \pl' -> let pld = P.payloadData pl' in > if testBit mdm ( fromEnum D.GPS ) > then > parseDeviceGPSData >>= > ( \s -> return ( pl' { P.payloadData = pld { > P.gpsData = Just s } } ) ) > else > return pl' ) >>= > ( \pl' -> let pld = P.payloadData pl' in > if testBit mdm ( fromEnum D.GSM ) > then > parseDeviceGSMData >>= > ( \s -> return ( pl' { P.payloadData = pld { > P.gsmData = Just s } } ) ) > else > return pl' ) >>= > ( \pl' -> let pld = P.payloadData pl' in > if testBit mdm ( fromEnum D.COT ) > then > parseDeviceCOTData >>= > ( \s -> return ( pl' { P.payloadData = pld { > P.cotData = Just s } } ) ) > else > return pl' ) >>= > ( \pl' -> let pld = P.payloadData pl' in > if testBit mdm ( fromEnum D.ADC ) > then > parseDeviceADCData >>= > ( \s -> return ( pl' { P.payloadData = pld { > P.adcData = Just s } } ) ) > else > return pl' ) >>= > ( \pl' -> let pld = P.payloadData pl' in > if testBit mdm ( fromEnum D.DTT ) > then > parseDeviceDTTData >>= > ( \s -> return ( pl' { P.payloadData = pld { > P.dttData = Just s } } ) ) > else > return pl' ) >>= > ( \pl' -> let pld = P.payloadData pl' in > if testBit mdm ( fromEnum D.OneWire ) > then > parseDeviceOneWireData >>= > ( \s -> return ( pl' { P.payloadData = pld { > P.iwdData = Just s } } ) ) > else > return pl' ) >>= > ( \pl' -> if testBit mdm ( fromEnum D.ETD ) > then > parseDeviceEventData pl' > else > return pl' ) > > The Parser above is a Data.Binary.Strict.Get wrapped in a StateT, where > the state is a top-level > structure for holding the parsed packet. > > parseDevice :: Bool -> Parser () > parseDevice _hasEvent = > parseTimestamp >>= > ( \ts -> > if _hasEvent > then > lift getWord8 >>= > ( \e -> lift getWord16be >>= > ( \mdm -> > return ( P.Payload "" ( Just ts ) $ > P.blankDevicePayloadData { P.dataMask = mdm > , P.eventID = toEnum ( > fromIntegral e .&. 
0x7f ) > , P.deviceStatusFlag = > testBit e 7 > , P.hasEvent = True > } ) ) ) > else > lift getWord16be >>= > ( \mdm -> > return ( P.Payload "" ( Just ts ) $ > P.blankDevicePayloadData { P.dataMask = mdm } ) ) > ) >>= > parseDeviceData >>= > ( \dpl -> get >>= ( \p -> put ( p { P.payloads = dpl : P.payloads p } > ) ) ) > > > Here are the data types for the Packet and Payload: > > > data Payload = Payload { imei :: !BS.ByteString > , timestamp :: Maybe Word64 > , payloadData :: PayloadData > } > > data PayloadData = HeartBeatPL > | SMSFwdPL { smsMesg :: !BS.ByteString } > | SerialPL { auxData :: !Word8 > , fixFlag :: !Word8 > , gpsCoord :: !GPSCoord > , serialData :: !BS.ByteString > } > | DevicePL { hasEvent :: !Bool > , deviceStatusFlag :: !Bool > , eventID :: !E.EventID > , dataMask :: !Word16 > , sysData :: Maybe DS.SysData > , gpsData :: Maybe DGP.GPSData > , gsmData :: Maybe DGS.GSMData > , cotData :: Maybe DC.COTData > , adcData :: Maybe DA.ADCData > , dttData :: Maybe DD.DTTData > , iwdData :: Maybe DO.OneWireData > , etdSpd :: Maybe ES.SpeedEvent > , etdGeo :: Maybe EG.GeoEvent > , etdHealth :: Maybe EH.HealthEvent > , etdHarsh :: Maybe EHD.HarshEvent > , etdOneWire :: Maybe EO.OneWireEvent > , etdADC :: Maybe EA.ADCEvent > } > deriving ( Show ) > > data Packet = Packet { protocolVersion :: !Word8 > , packetType :: !PT.PacketType > , deviceID :: Maybe BS.ByteString > , payloads :: ![ Payload ] > , crc :: !Word16 > } > deriving ( Show ) > > Lastly, here is the Parser monad transformer: > > module G6S.Parser where > > import Control.Monad.State.Strict > import Data.Binary.Strict.Get > import qualified Data.ByteString as BS > > import qualified G6S.Packet as GP > > type Parser = StateT GP.Packet Get > > runParser :: Parser a -> BS.ByteString -> Maybe a > runParser p bs = > let > ( result, _ ) = runGet ( runStateT p GP.initPacket ) bs > in > case result of > Right tup -> Just $ fst tup > Left _ -> Nothing > > > I hope there is enough info here. > > Thanks, > Jeff > > > > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > -------------- next part -------------- An HTML attachment was scrubbed... URL: From claude at mathr.co.uk Thu Apr 16 04:18:48 2015 From: claude at mathr.co.uk (Claude Heiland-Allen) Date: Thu, 16 Apr 2015 05:18:48 +0100 Subject: [Haskell-cafe] Advice needed on how to improve some code In-Reply-To: References: <5742D054-1833-4C53-909B-2016481C768F@datalinktech.com.au> Message-ID: <552F3828.2040001@mathr.co.uk> On 16/04/15 04:19, David Feuer wrote: > I haven't dug into the guts of this *at all*, but why don't you start by > using `do` notation instead of a million >>= invocations? It also looks > like you may have some common patterns you can exploit by defining some > more functions. > > On Wed, Apr 15, 2015 at 8:57 PM, Jeff wrote: > >> Hello, >> >> I am seeking some advice on how I might improve a bit of code. >> The function in question reads and parses part of a binary protocol, >> storing the parsed info as it proceeds. >> >> parseDeviceData is called by parseDevice (shown further down). >> >> It looks to me like there should be a more concise, less repetitive way to >> do what >> parseDeviceData does. Any advice on this would be greatly appreciated. Lens[0] might help abstract the common pattern of nested record updates. 
You should be able to get it into something that looks more like this: whenBit flag parser setter pld | view dataMask pld `testBit` fromEnum flag = do s <- parser return $ set setter (Just s) pld | otherwise = return pld parseDevicePayloadData = foldr (>=>) return [ whenBit Sys parseDeviceSysData sysData , whenBit GPS parseDeviceGPSData gpsData ... ] [0] http://hackage.haskell.org/package/lens Claude >> >> parseDeviceData :: P.Payload -> Parser P.Payload >> parseDeviceData pl = >> let >> mdm = P.dataMask ( P.payloadData pl ) >> in >> ( let pld = P.payloadData pl in >> if testBit mdm ( fromEnum D.Sys ) >> then >> parseDeviceSysData >>= >> ( \s -> return ( pl { P.payloadData = pld { P.sysData = Just s >> } } ) ) >> else >> return pl ) >>= >> ( \pl' -> let pld = P.payloadData pl' in >> if testBit mdm ( fromEnum D.GPS ) >> then >> parseDeviceGPSData >>= >> ( \s -> return ( pl' { P.payloadData = pld { >> P.gpsData = Just s } } ) ) >> else >> return pl' ) >>= >> ( \pl' -> let pld = P.payloadData pl' in >> if testBit mdm ( fromEnum D.GSM ) >> then >> parseDeviceGSMData >>= >> ( \s -> return ( pl' { P.payloadData = pld { >> P.gsmData = Just s } } ) ) >> else >> return pl' ) >>= >> ( \pl' -> let pld = P.payloadData pl' in >> if testBit mdm ( fromEnum D.COT ) >> then >> parseDeviceCOTData >>= >> ( \s -> return ( pl' { P.payloadData = pld { >> P.cotData = Just s } } ) ) >> else >> return pl' ) >>= >> ( \pl' -> let pld = P.payloadData pl' in >> if testBit mdm ( fromEnum D.ADC ) >> then >> parseDeviceADCData >>= >> ( \s -> return ( pl' { P.payloadData = pld { >> P.adcData = Just s } } ) ) >> else >> return pl' ) >>= >> ( \pl' -> let pld = P.payloadData pl' in >> if testBit mdm ( fromEnum D.DTT ) >> then >> parseDeviceDTTData >>= >> ( \s -> return ( pl' { P.payloadData = pld { >> P.dttData = Just s } } ) ) >> else >> return pl' ) >>= >> ( \pl' -> let pld = P.payloadData pl' in >> if testBit mdm ( fromEnum D.OneWire ) >> then >> parseDeviceOneWireData >>= >> ( \s -> return ( pl' { P.payloadData = pld { >> P.iwdData = Just s } } ) ) >> else >> return pl' ) >>= >> ( \pl' -> if testBit mdm ( fromEnum D.ETD ) >> then >> parseDeviceEventData pl' >> else >> return pl' ) >> >> The Parser above is a Data.Binary.Strict.Get wrapped in a StateT, where >> the state is a top-level >> structure for holding the parsed packet. >> >> parseDevice :: Bool -> Parser () >> parseDevice _hasEvent = >> parseTimestamp >>= >> ( \ts -> >> if _hasEvent >> then >> lift getWord8 >>= >> ( \e -> lift getWord16be >>= >> ( \mdm -> >> return ( P.Payload "" ( Just ts ) $ >> P.blankDevicePayloadData { P.dataMask = mdm >> , P.eventID = toEnum ( >> fromIntegral e .&. 
0x7f ) >> , P.deviceStatusFlag = >> testBit e 7 >> , P.hasEvent = True >> } ) ) ) >> else >> lift getWord16be >>= >> ( \mdm -> >> return ( P.Payload "" ( Just ts ) $ >> P.blankDevicePayloadData { P.dataMask = mdm } ) ) >> ) >>= >> parseDeviceData >>= >> ( \dpl -> get >>= ( \p -> put ( p { P.payloads = dpl : P.payloads p } >> ) ) ) >> >> >> Here are the data types for the Packet and Payload: >> >> >> data Payload = Payload { imei :: !BS.ByteString >> , timestamp :: Maybe Word64 >> , payloadData :: PayloadData >> } >> >> data PayloadData = HeartBeatPL >> | SMSFwdPL { smsMesg :: !BS.ByteString } >> | SerialPL { auxData :: !Word8 >> , fixFlag :: !Word8 >> , gpsCoord :: !GPSCoord >> , serialData :: !BS.ByteString >> } >> | DevicePL { hasEvent :: !Bool >> , deviceStatusFlag :: !Bool >> , eventID :: !E.EventID >> , dataMask :: !Word16 >> , sysData :: Maybe DS.SysData >> , gpsData :: Maybe DGP.GPSData >> , gsmData :: Maybe DGS.GSMData >> , cotData :: Maybe DC.COTData >> , adcData :: Maybe DA.ADCData >> , dttData :: Maybe DD.DTTData >> , iwdData :: Maybe DO.OneWireData >> , etdSpd :: Maybe ES.SpeedEvent >> , etdGeo :: Maybe EG.GeoEvent >> , etdHealth :: Maybe EH.HealthEvent >> , etdHarsh :: Maybe EHD.HarshEvent >> , etdOneWire :: Maybe EO.OneWireEvent >> , etdADC :: Maybe EA.ADCEvent >> } >> deriving ( Show ) >> >> data Packet = Packet { protocolVersion :: !Word8 >> , packetType :: !PT.PacketType >> , deviceID :: Maybe BS.ByteString >> , payloads :: ![ Payload ] >> , crc :: !Word16 >> } >> deriving ( Show ) >> >> Lastly, here is the Parser monad transformer: >> >> module G6S.Parser where >> >> import Control.Monad.State.Strict >> import Data.Binary.Strict.Get >> import qualified Data.ByteString as BS >> >> import qualified G6S.Packet as GP >> >> type Parser = StateT GP.Packet Get >> >> runParser :: Parser a -> BS.ByteString -> Maybe a >> runParser p bs = >> let >> ( result, _ ) = runGet ( runStateT p GP.initPacket ) bs >> in >> case result of >> Right tup -> Just $ fst tup >> Left _ -> Nothing >> >> >> I hope there is enough info here. >> >> Thanks, >> Jeff -- http://mathr.co.uk From dasuraga at gmail.com Thu Apr 16 05:24:59 2015 From: dasuraga at gmail.com (Raphael Gaschignard) Date: Thu, 16 Apr 2015 05:24:59 +0000 Subject: [Haskell-cafe] Coding katas/dojos and functional programming introduction In-Reply-To: References: Message-ID: Is this aimed for FP beginners who already know something like Java? I think the thing to do here would be to come up with some tasks that are genuinely tedious to write in a Java-esque (or Pascal-like) language, and then present how FP solutions are simpler. I'm of the opinion that FP succeeds not just because of the tenants of FP, but because most of the languages are terse and have code that is "pretty". Showing some quick things involving quick manipulation of tuples (basically a bunch of list processing) could show that things don't have to be complicated with a bunch of anonymous classes. Anyways, I think the essential thing is to present a problem that they, as programmers, have already experienced. The big one being "well these two functions are *almost* the same but the inner-part of the function has different logic" (basically, looking at things like map). Open up the world of possibilities. It's not things that are only possible in Haskell/Scheme (after all, all of these languages are turing complete so..), but they're so much easier to write in these languages. 
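(A throwaway Haskell sketch of the "two almost-identical functions collapse into one higher-order function" point I mean:)

-- Two near-duplicate loops...
doubleAll :: [Int] -> [Int]
doubleAll []     = []
doubleAll (x:xs) = 2 * x : doubleAll xs

squareAll :: [Int] -> [Int]
squareAll []     = []
squareAll (x:xs) = x * x : squareAll xs

-- ...become one-liners once the differing inner part is a parameter:
doubleAll', squareAll' :: [Int] -> [Int]
doubleAll' = map (* 2)
squareAll' = map (\x -> x * x)
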
On Thu, Apr 16, 2015 at 7:41 AM Mike Meyer wrote: > On Wed, Apr 15, 2015 at 5:28 PM, Gautier DI FOLCO < > gautier.difolco at gmail.com> wrote: > >> 2015-04-15 19:15 GMT+00:00 Mike Meyer : >> >>> Well, functional programming is very much like an elephant. >>> >> >> I have the same thought about OOP some years ago, them I discovered then >> first meaning of it and all was so clear and simple. My goal isn't to teach >> the full power of FP, my goal is to give them inspiration, to suggest that >> there is a wider world to explore. >> > > Just clarify, this is a reference to the fable of the blind men and the > elephant. What you think it is like will depend on how you approach it. > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk Thu Apr 16 06:03:14 2015 From: tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk (Tom Ellis) Date: Thu, 16 Apr 2015 07:03:14 +0100 Subject: [Haskell-cafe] Advice needed on how to improve some code In-Reply-To: <5742D054-1833-4C53-909B-2016481C768F@datalinktech.com.au> References: <5742D054-1833-4C53-909B-2016481C768F@datalinktech.com.au> Message-ID: <20150416060314.GR31520@weber> On Thu, Apr 16, 2015 at 10:57:41AM +1000, Jeff wrote: > ( \pl' -> let pld = P.payloadData pl' in > if testBit mdm ( fromEnum D.GPS ) > then > parseDeviceGPSData >>= > ( \s -> return ( pl' { P.payloadData = pld { P.gpsData = Just s } } ) ) > else > return pl' ) >>= > ( \pl' -> let pld = P.payloadData pl' in > if testBit mdm ( fromEnum D.GSM ) > then > parseDeviceGSMData >>= > ( \s -> return ( pl' { P.payloadData = pld { P.gsmData = Just s } } ) ) > else > return pl' ) >>= > ( \pl' -> let pld = P.payloadData pl' in > if testBit mdm ( fromEnum D.COT ) > then > parseDeviceCOTData >>= > ( \s -> return ( pl' { P.payloadData = pld { P.cotData = Just s } } ) ) The first thing you should do is define parseDeviceGPSDataOf constructor parser setField = ( \pl' -> let pld = P.payloadData pl' in if testBit mdm ( fromEnum constructor ) then parser >>= ( \s -> return ( pl' { P.payloadData = setField pld (Just s) } } ) ) else return pl' ) and your chain of binds will become setgpsData pld = pld { P.gpsData = Just s } ... parseDeviceDataOf D.GPS parseDeviceGPSData setgpsData >>= parseDeviceDataOf D.GSM parseDeviceDSMData setgsmData >>= parseDeviceDataOf D.COT parseDeviceCOTData setcotData >>= ... Then I would probably write deviceSpecs = [ (D.GPS, parseDeviceGPSData, setgpsData) , (D.GSM, parseDeviceDSMData, setgsmData) , (D.COT, parseDeviceCOTData, setcotData) ] and turn the chain of binds into a fold. Tom From jeff at datalinktech.com.au Thu Apr 16 06:37:03 2015 From: jeff at datalinktech.com.au (Jeff) Date: Thu, 16 Apr 2015 16:37:03 +1000 Subject: [Haskell-cafe] Advice needed on how to improve some code In-Reply-To: <20150416060314.GR31520@weber> References: <5742D054-1833-4C53-909B-2016481C768F@datalinktech.com.au> <20150416060314.GR31520@weber> Message-ID: <0D59FF2A-55DA-44EA-9718-F131E0DFDA30@datalinktech.com.au> Thanks Tom, David and Claude for your replies. 
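For completeness, here is roughly the shape that fold could take, as a simplified, untested sketch (stand-in types and dummy parsers, not the real G6S modules):

import Control.Monad (foldM)
import Data.Bits (testBit)
import Data.Word (Word16)

data Flag = Sys | GPS | GSM deriving Enum

-- Stand-ins for the real payload and parser types.
data PL = PL { mask :: Word16, sysD, gpsD, gsmD :: Maybe String } deriving Show
type Parser = IO

deviceSpecs :: [(Flag, Parser String, PL -> String -> PL)]
deviceSpecs =
  [ (Sys, return "sys", \p s -> p { sysD = Just s })
  , (GPS, return "gps", \p s -> p { gpsD = Just s })
  , (GSM, return "gsm", \p s -> p { gsmD = Just s })
  ]

parseDeviceData :: PL -> Parser PL
parseDeviceData pl0 = foldM step pl0 deviceSpecs
  where
    step pl (flag, parser, setField)
      | testBit (mask pl) (fromEnum flag) = fmap (setField pl) parser
      | otherwise                         = return pl
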
> On 16 Apr 2015, at 4:03 pm, Tom Ellis wrote: > > The first thing you should do is define > > parseDeviceGPSDataOf constructor parser setField = > ( \pl' -> let pld = P.payloadData pl' in > if testBit mdm ( fromEnum constructor ) > then > parser >>= > ( \s -> return ( pl' { P.payloadData = setField pld (Just s) } } ) ) > else > return pl' ) > > and your chain of binds will become > > setgpsData pld = pld { P.gpsData = Just s } > ... > > parseDeviceDataOf D.GPS parseDeviceGPSData setgpsData >>= > parseDeviceDataOf D.GSM parseDeviceDSMData setgsmData >>= > parseDeviceDataOf D.COT parseDeviceCOTData setcotData >>= > ... > > Then I would probably write > > deviceSpecs = [ (D.GPS, parseDeviceGPSData, setgpsData) > , (D.GSM, parseDeviceDSMData, setgsmData) > , (D.COT, parseDeviceCOTData, setcotData) ] > > and turn the chain of binds into a fold. > I?ll do as you have suggested Tom. Thanks. Jeff From zilinc.dev at gmail.com Thu Apr 16 07:33:07 2015 From: zilinc.dev at gmail.com (Zilin Chen) Date: Thu, 16 Apr 2015 17:33:07 +1000 Subject: [Haskell-cafe] cabal install glade In-Reply-To: <9992f880-81e3-4041-8307-3b5857687c97@googlegroups.com> References: <9992f880-81e3-4041-8307-3b5857687c97@googlegroups.com> Message-ID: <552F65B3.6010004@gmail.com> Hi Jean, Simply do `$ cabal sandbox add-source ' and then `$ cabal install --only-dependencies' as normal. I think it should work. Cheers, Zilin On 15/04/15 22:01, Jean Lopes wrote: > I will try to use your branch before going back to GHC 7.8... > > But, how exactly should I do that ? > Clone your branch; > Build from local source code with cabal ? (I just scrolled this part > while reading cabal tutorials, guess I'll have to take a look now) > What about dependencies ? I should use $ cabal install glade > --only-dependencies and than install glade from your branch ? > > Em quarta-feira, 15 de abril de 2015 05:48:42 UTC-3, Matthew Pickering > escreveu: > > Hi Jean, > > You can try cloning my branch until a push gets accepted upstream. > > https://github.com/mpickering/glade > > > The fixes to get it working with 7.10 were fairly minimal. > > Matt > > On Wed, Apr 15, 2015 at 4:33 AM, Jean Lopes > wrote: > > Hello, I am trying to install the Glade package from hackage, and I > > keep getting exit failure... > > > > Hope someone can help me solve it! > > > > What I did: > > $ mkdir ~/haskell/project > > $ cd ~/haskell/project > > $ cabal sandbox init > > $ cabal update > > $ cabal install alex > > $ cabal install happy > > $ cabal install gtk2hs-buildtools > > $ cabal install gtk #successful until here > > $ cabal install glade > > > > The last statement gave me the following error: > > > > $ [1 of 2] Compiling SetupWrapper ( > > /tmp/cairo-0.12.5.3-5133/cairo-0.12.5.3/SetupWrapper.hs, > > > /tmp/cairo-0.12.5.3-5133/cairo-0.12.5.3/dist/dist-sandbox-acbd4b7/setup/SetupWrapper.o > > ) > > $ > > $ /tmp/cairo-0.12.5.3-5133/cairo-0.12.5.3/SetupWrapper.hs:91:17: > > $ Ambiguous occurrence ?die? > > $ It could refer to either ?Distribution.Simple.Utils.die?, > > $ imported from > > ?Distribution.Simple.Utils? at > > /tmp/cairo-0.12.5.3-5133/cairo-0.12.5.3/SetupWrapper.hs:8:1-32 > > $ or ?System.Exit.die?, > > $ imported from ?System.Exit? 
at > > /tmp/cairo-0.12.5.3-5133/cairo-0.12.5.3/SetupWrapper.hs:21:1-18 > > $ Failed to install cairo-0.12.5.3 > > $ [1 of 2] Compiling SetupWrapper ( > > /tmp/glib-0.12.5.4-5133/glib-0.12.5.4/SetupWrapper.hs, > > > /tmp/glib-0.12.5.4-5133/glib-0.12.5.4/dist/dist-sandbox-acbd4b7/setup/SetupWrapper.o > > ) > > $ > > $ /tmp/glib-0.12.5.4-5133/glib-0.12.5.4/SetupWrapper.hs:91:17: > > $ Ambiguous occurrence ?die? > > $ It could refer to either ?Distribution.Simple.Utils.die?, > > $ imported from > > ?Distribution.Simple.Utils? at > > /tmp/glib-0.12.5.4-5133/glib-0.12.5.4/SetupWrapper.hs:8:1-32 > > $ or ?System.Exit.die?, > > $ imported from ?System.Exit? at > > /tmp/glib-0.12.5.4-5133/glib-0.12.5.4/SetupWrapper.hs:21:1-18 > > $ Failed to install glib-0.12.5.4 > > $ cabal: Error: some packages failed to install: > > $ cairo-0.12.5.3 failed during the configure step. The exception > was: > > $ ExitFailure 1 > > $ gio-0.12.5.3 depends on glib-0.12.5.4 which failed to install. > > $ glade-0.12.5.0 depends on glib-0.12.5.4 which failed to install. > > $ glib-0.12.5.4 failed during the configure step. The exception was: > > $ ExitFailure 1 > > $ gtk-0.12.5.7 depends on glib-0.12.5.4 which failed to install. > > $ pango-0.12.5.3 depends on glib-0.12.5.4 which failed to install. > > > > Important: You can assume I don't know much. I'm rather new to > Haskell/cabal > > _______________________________________________ > > Haskell-Cafe mailing list > > Haskel... at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > _______________________________________________ > Haskell-Cafe mailing list > Haskel... at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > > > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe -------------- next part -------------- An HTML attachment was scrubbed... URL: From tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk Thu Apr 16 08:22:34 2015 From: tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk (Tom Ellis) Date: Thu, 16 Apr 2015 09:22:34 +0100 Subject: [Haskell-cafe] Advice needed on how to improve some code In-Reply-To: References: <5742D054-1833-4C53-909B-2016481C768F@datalinktech.com.au> Message-ID: <20150416082234.GS31520@weber> I rather like the >>= invocations. `do` notation would require naming intermediate variables. On Wed, Apr 15, 2015 at 11:19:30PM -0400, David Feuer wrote: > I haven't dug into the guts of this *at all*, but why don't you start by > using `do` notation instead of a million >>= invocations? It also looks > like you may have some common patterns you can exploit by defining some > more functions. > > On Wed, Apr 15, 2015 at 8:57 PM, Jeff wrote: > > > Hello, > > > > I am seeking some advice on how I might improve a bit of code. > > The function in question reads and parses part of a binary protocol, > > storing the parsed info as it proceeds. > > > > parseDeviceData is called by parseDevice (shown further down). > > > > It looks to me like there should be a more concise, less repetitive way to > > do what > > parseDeviceData does. Any advice on this would be greatly appreciated. 
> > > > parseDeviceData :: P.Payload -> Parser P.Payload > > parseDeviceData pl = > > let > > mdm = P.dataMask ( P.payloadData pl ) > > in > > ( let pld = P.payloadData pl in > > if testBit mdm ( fromEnum D.Sys ) > > then > > parseDeviceSysData >>= > > ( \s -> return ( pl { P.payloadData = pld { P.sysData = Just s > > } } ) ) > > else > > return pl ) >>= > > ( \pl' -> let pld = P.payloadData pl' in > > if testBit mdm ( fromEnum D.GPS ) > > then > > parseDeviceGPSData >>= > > ( \s -> return ( pl' { P.payloadData = pld { > > P.gpsData = Just s } } ) ) > > else > > return pl' ) >>= > > ( \pl' -> let pld = P.payloadData pl' in > > if testBit mdm ( fromEnum D.GSM ) > > then > > parseDeviceGSMData >>= > > ( \s -> return ( pl' { P.payloadData = pld { > > P.gsmData = Just s } } ) ) > > else > > return pl' ) >>= > > ( \pl' -> let pld = P.payloadData pl' in > > if testBit mdm ( fromEnum D.COT ) > > then > > parseDeviceCOTData >>= > > ( \s -> return ( pl' { P.payloadData = pld { > > P.cotData = Just s } } ) ) > > else > > return pl' ) >>= > > ( \pl' -> let pld = P.payloadData pl' in > > if testBit mdm ( fromEnum D.ADC ) > > then > > parseDeviceADCData >>= > > ( \s -> return ( pl' { P.payloadData = pld { > > P.adcData = Just s } } ) ) > > else > > return pl' ) >>= > > ( \pl' -> let pld = P.payloadData pl' in > > if testBit mdm ( fromEnum D.DTT ) > > then > > parseDeviceDTTData >>= > > ( \s -> return ( pl' { P.payloadData = pld { > > P.dttData = Just s } } ) ) > > else > > return pl' ) >>= > > ( \pl' -> let pld = P.payloadData pl' in > > if testBit mdm ( fromEnum D.OneWire ) > > then > > parseDeviceOneWireData >>= > > ( \s -> return ( pl' { P.payloadData = pld { > > P.iwdData = Just s } } ) ) > > else > > return pl' ) >>= > > ( \pl' -> if testBit mdm ( fromEnum D.ETD ) > > then > > parseDeviceEventData pl' > > else > > return pl' ) > > > > The Parser above is a Data.Binary.Strict.Get wrapped in a StateT, where > > the state is a top-level > > structure for holding the parsed packet. > > > > parseDevice :: Bool -> Parser () > > parseDevice _hasEvent = > > parseTimestamp >>= > > ( \ts -> > > if _hasEvent > > then > > lift getWord8 >>= > > ( \e -> lift getWord16be >>= > > ( \mdm -> > > return ( P.Payload "" ( Just ts ) $ > > P.blankDevicePayloadData { P.dataMask = mdm > > , P.eventID = toEnum ( > > fromIntegral e .&. 
0x7f ) > > , P.deviceStatusFlag = > > testBit e 7 > > , P.hasEvent = True > > } ) ) ) > > else > > lift getWord16be >>= > > ( \mdm -> > > return ( P.Payload "" ( Just ts ) $ > > P.blankDevicePayloadData { P.dataMask = mdm } ) ) > > ) >>= > > parseDeviceData >>= > > ( \dpl -> get >>= ( \p -> put ( p { P.payloads = dpl : P.payloads p } > > ) ) ) > > > > > > Here are the data types for the Packet and Payload: > > > > > > data Payload = Payload { imei :: !BS.ByteString > > , timestamp :: Maybe Word64 > > , payloadData :: PayloadData > > } > > > > data PayloadData = HeartBeatPL > > | SMSFwdPL { smsMesg :: !BS.ByteString } > > | SerialPL { auxData :: !Word8 > > , fixFlag :: !Word8 > > , gpsCoord :: !GPSCoord > > , serialData :: !BS.ByteString > > } > > | DevicePL { hasEvent :: !Bool > > , deviceStatusFlag :: !Bool > > , eventID :: !E.EventID > > , dataMask :: !Word16 > > , sysData :: Maybe DS.SysData > > , gpsData :: Maybe DGP.GPSData > > , gsmData :: Maybe DGS.GSMData > > , cotData :: Maybe DC.COTData > > , adcData :: Maybe DA.ADCData > > , dttData :: Maybe DD.DTTData > > , iwdData :: Maybe DO.OneWireData > > , etdSpd :: Maybe ES.SpeedEvent > > , etdGeo :: Maybe EG.GeoEvent > > , etdHealth :: Maybe EH.HealthEvent > > , etdHarsh :: Maybe EHD.HarshEvent > > , etdOneWire :: Maybe EO.OneWireEvent > > , etdADC :: Maybe EA.ADCEvent > > } > > deriving ( Show ) > > > > data Packet = Packet { protocolVersion :: !Word8 > > , packetType :: !PT.PacketType > > , deviceID :: Maybe BS.ByteString > > , payloads :: ![ Payload ] > > , crc :: !Word16 > > } > > deriving ( Show ) > > > > Lastly, here is the Parser monad transformer: > > > > module G6S.Parser where > > > > import Control.Monad.State.Strict > > import Data.Binary.Strict.Get > > import qualified Data.ByteString as BS > > > > import qualified G6S.Packet as GP > > > > type Parser = StateT GP.Packet Get > > > > runParser :: Parser a -> BS.ByteString -> Maybe a > > runParser p bs = > > let > > ( result, _ ) = runGet ( runStateT p GP.initPacket ) bs > > in > > case result of > > Right tup -> Just $ fst tup > > Left _ -> Nothing > > > > > > I hope there is enough info here. > > > > Thanks, > > Jeff > > > > > > > > > > _______________________________________________ > > Haskell-Cafe mailing list > > Haskell-Cafe at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe From hsyl20 at gmail.com Thu Apr 16 08:24:45 2015 From: hsyl20 at gmail.com (Sylvain Henry) Date: Thu, 16 Apr 2015 10:24:45 +0200 Subject: [Haskell-cafe] Advice needed on how to improve some code In-Reply-To: <0D59FF2A-55DA-44EA-9718-F131E0DFDA30@datalinktech.com.au> References: <5742D054-1833-4C53-909B-2016481C768F@datalinktech.com.au> <20150416060314.GR31520@weber> <0D59FF2A-55DA-44EA-9718-F131E0DFDA30@datalinktech.com.au> Message-ID: I don't think you need to update record fields of a blank record: you can create the record using applicative operators instead. Something like: parseDevicePLData :: Bool -> Get PayloadData parseDevicePLData hasEv = do rawEvId <- if hasEv then getWord8 else return 0 -- I guessed the 0 value let evId = toEnum (fromIntegral rawEvId .&. 
0x7f) let statusFlag = testBit rawEvId 7 mask <- getWord16be let parseMaybe e p = if testBit mask (fromEnum e) then Just <$> p else return Nothing DevicePL hasEv statusFlag evId mask <$> parseMaybe D.GPS parseDeviceGPSData <*> parseMaybe D.GSM parseDeviceGSMData <*> parseMaybe D.COT parseDeviceCotData <*> ... parseDevicePL :: Bool -> Get Payload parseDevicePL hasEv = do ts <- parseTimestamp P.Payload "" (Just ts) <$> parseDevicePLData hasEv Then you can lift these "parsers" into you Parser monad only when you need it. -- Sylvain 2015-04-16 8:37 GMT+02:00 Jeff : > Thanks Tom, David and Claude for your replies. > > > >> On 16 Apr 2015, at 4:03 pm, Tom Ellis wrote: >> >> The first thing you should do is define >> >> parseDeviceGPSDataOf constructor parser setField = >> ( \pl' -> let pld = P.payloadData pl' in >> if testBit mdm ( fromEnum constructor ) >> then >> parser >>= >> ( \s -> return ( pl' { P.payloadData = setField pld (Just s) } } ) ) >> else >> return pl' ) >> >> and your chain of binds will become >> >> setgpsData pld = pld { P.gpsData = Just s } >> ... >> >> parseDeviceDataOf D.GPS parseDeviceGPSData setgpsData >>= >> parseDeviceDataOf D.GSM parseDeviceDSMData setgsmData >>= >> parseDeviceDataOf D.COT parseDeviceCOTData setcotData >>= >> ... >> >> Then I would probably write >> >> deviceSpecs = [ (D.GPS, parseDeviceGPSData, setgpsData) >> , (D.GSM, parseDeviceDSMData, setgsmData) >> , (D.COT, parseDeviceCOTData, setcotData) ] >> >> and turn the chain of binds into a fold. >> > > > I?ll do as you have suggested Tom. Thanks. > > Jeff > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe From Andrew.Butterfield at scss.tcd.ie Thu Apr 16 08:32:45 2015 From: Andrew.Butterfield at scss.tcd.ie (Andrew Butterfield) Date: Thu, 16 Apr 2015 09:32:45 +0100 Subject: [Haskell-cafe] Advice needed on how to improve some code In-Reply-To: <20150416082234.GS31520@weber> References: <5742D054-1833-4C53-909B-2016481C768F@datalinktech.com.au> <20150416082234.GS31520@weber> Message-ID: <0E3E78EA-C3DD-440D-AE7B-7714CDF6E423@scss.tcd.ie> > On 16 Apr 2015, at 09:22, Tom Ellis wrote: > > I rather like the >>= invocations. `do` notation would require naming > intermediate variables. But the >>= requires such intermediate variables anyway: all the pl' after the \ > >> On Wed, Apr 15, 2015 at 8:57 PM, Jeff wrote: >> >>> return pl ) >>= >>> ( \pl' -> let pld = P.payloadData pl' in >>> if testBit mdm ( fromEnum D.GPS ) >>> then >>> parseDeviceGPSData >>= >>> ( \s -> return ( pl' { P.payloadData = pld { >>> P.gpsData = Just s } } ) ) >>> else >>> return pl' ) >>= >>> ( \pl' -> let pld = P.payloadData pl' in >>> if testBit mdm ( fromEnum D.GSM ) >>> then >>> parseDeviceGSMData >>= >>> ( \s -> return ( pl' { P.payloadData = pld { >>> P.gsmData = Just s } } ) ) >>> else >>> return pl' ) >>= >>> ( \pl' -> let pld = P.payloadData pl' in >>> if testBit mdm ( fromEnum D.COT ) >>> then >>> parseDeviceCOTData >>= >>> ( \s -> return ( pl' { P.payloadData = pld { >>> P.cotData = Just s } } ) ) >>> else >>> return pl' ) >>= Do notation is just syntax sugar for the above do pl' <- ..... pl' <- .... pl' <- ... No additional variable invention required ! 
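(For a tiny standalone example of that desugaring, nothing specific to Jeff's parser:)

example :: IO ()
example = do
  line <- getLine
  n    <- readLn
  putStrLn (line ++ show (n :: Int))

-- means exactly the same as this chain of binds and lambdas:

example' :: IO ()
example' =
  getLine >>= \line ->
  readLn  >>= \n ->
  putStrLn (line ++ show (n :: Int))
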
Andrew Butterfield School of Computer Science & Statistics Trinity College Dublin 2, Ireland From tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk Thu Apr 16 08:33:05 2015 From: tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk (Tom Ellis) Date: Thu, 16 Apr 2015 09:33:05 +0100 Subject: [Haskell-cafe] Advice needed on how to improve some code In-Reply-To: <20150416082234.GS31520@weber> References: <5742D054-1833-4C53-909B-2016481C768F@datalinktech.com.au> <20150416082234.GS31520@weber> Message-ID: <20150416083304.GT31520@weber> Oh sorry, I now see you were talking (at least) about other uses of >>= where do notation would be very helpful. On Thu, Apr 16, 2015 at 09:22:34AM +0100, Tom Ellis wrote: > I rather like the >>= invocations. `do` notation would require naming > intermediate variables. > > On Wed, Apr 15, 2015 at 11:19:30PM -0400, David Feuer wrote: > > I haven't dug into the guts of this *at all*, but why don't you start by > > using `do` notation instead of a million >>= invocations? It also looks > > like you may have some common patterns you can exploit by defining some > > more functions. > > > > On Wed, Apr 15, 2015 at 8:57 PM, Jeff wrote: > > > > > Hello, > > > > > > I am seeking some advice on how I might improve a bit of code. > > > The function in question reads and parses part of a binary protocol, > > > storing the parsed info as it proceeds. > > > > > > parseDeviceData is called by parseDevice (shown further down). > > > > > > It looks to me like there should be a more concise, less repetitive way to > > > do what > > > parseDeviceData does. Any advice on this would be greatly appreciated. > > > > > > parseDeviceData :: P.Payload -> Parser P.Payload > > > parseDeviceData pl = > > > let > > > mdm = P.dataMask ( P.payloadData pl ) > > > in > > > ( let pld = P.payloadData pl in > > > if testBit mdm ( fromEnum D.Sys ) > > > then > > > parseDeviceSysData >>= > > > ( \s -> return ( pl { P.payloadData = pld { P.sysData = Just s > > > } } ) ) > > > else > > > return pl ) >>= > > > ( \pl' -> let pld = P.payloadData pl' in > > > if testBit mdm ( fromEnum D.GPS ) > > > then > > > parseDeviceGPSData >>= > > > ( \s -> return ( pl' { P.payloadData = pld { > > > P.gpsData = Just s } } ) ) > > > else > > > return pl' ) >>= > > > ( \pl' -> let pld = P.payloadData pl' in > > > if testBit mdm ( fromEnum D.GSM ) > > > then > > > parseDeviceGSMData >>= > > > ( \s -> return ( pl' { P.payloadData = pld { > > > P.gsmData = Just s } } ) ) > > > else > > > return pl' ) >>= > > > ( \pl' -> let pld = P.payloadData pl' in > > > if testBit mdm ( fromEnum D.COT ) > > > then > > > parseDeviceCOTData >>= > > > ( \s -> return ( pl' { P.payloadData = pld { > > > P.cotData = Just s } } ) ) > > > else > > > return pl' ) >>= > > > ( \pl' -> let pld = P.payloadData pl' in > > > if testBit mdm ( fromEnum D.ADC ) > > > then > > > parseDeviceADCData >>= > > > ( \s -> return ( pl' { P.payloadData = pld { > > > P.adcData = Just s } } ) ) > > > else > > > return pl' ) >>= > > > ( \pl' -> let pld = P.payloadData pl' in > > > if testBit mdm ( fromEnum D.DTT ) > > > then > > > parseDeviceDTTData >>= > > > ( \s -> return ( pl' { P.payloadData = pld { > > > P.dttData = Just s } } ) ) > > > else > > > return pl' ) >>= > > > ( \pl' -> let pld = P.payloadData pl' in > > > if testBit mdm ( fromEnum D.OneWire ) > > > then > > > parseDeviceOneWireData >>= > > > ( \s -> return ( pl' { P.payloadData = pld { > > > P.iwdData = Just s } } ) ) > > > else > > > return pl' ) >>= > > > ( \pl' -> if testBit mdm ( fromEnum D.ETD ) > > > then 
> > > parseDeviceEventData pl' > > > else > > > return pl' ) > > > > > > The Parser above is a Data.Binary.Strict.Get wrapped in a StateT, where > > > the state is a top-level > > > structure for holding the parsed packet. > > > > > > parseDevice :: Bool -> Parser () > > > parseDevice _hasEvent = > > > parseTimestamp >>= > > > ( \ts -> > > > if _hasEvent > > > then > > > lift getWord8 >>= > > > ( \e -> lift getWord16be >>= > > > ( \mdm -> > > > return ( P.Payload "" ( Just ts ) $ > > > P.blankDevicePayloadData { P.dataMask = mdm > > > , P.eventID = toEnum ( > > > fromIntegral e .&. 0x7f ) > > > , P.deviceStatusFlag = > > > testBit e 7 > > > , P.hasEvent = True > > > } ) ) ) > > > else > > > lift getWord16be >>= > > > ( \mdm -> > > > return ( P.Payload "" ( Just ts ) $ > > > P.blankDevicePayloadData { P.dataMask = mdm } ) ) > > > ) >>= > > > parseDeviceData >>= > > > ( \dpl -> get >>= ( \p -> put ( p { P.payloads = dpl : P.payloads p } > > > ) ) ) > > > > > > > > > Here are the data types for the Packet and Payload: > > > > > > > > > data Payload = Payload { imei :: !BS.ByteString > > > , timestamp :: Maybe Word64 > > > , payloadData :: PayloadData > > > } > > > > > > data PayloadData = HeartBeatPL > > > | SMSFwdPL { smsMesg :: !BS.ByteString } > > > | SerialPL { auxData :: !Word8 > > > , fixFlag :: !Word8 > > > , gpsCoord :: !GPSCoord > > > , serialData :: !BS.ByteString > > > } > > > | DevicePL { hasEvent :: !Bool > > > , deviceStatusFlag :: !Bool > > > , eventID :: !E.EventID > > > , dataMask :: !Word16 > > > , sysData :: Maybe DS.SysData > > > , gpsData :: Maybe DGP.GPSData > > > , gsmData :: Maybe DGS.GSMData > > > , cotData :: Maybe DC.COTData > > > , adcData :: Maybe DA.ADCData > > > , dttData :: Maybe DD.DTTData > > > , iwdData :: Maybe DO.OneWireData > > > , etdSpd :: Maybe ES.SpeedEvent > > > , etdGeo :: Maybe EG.GeoEvent > > > , etdHealth :: Maybe EH.HealthEvent > > > , etdHarsh :: Maybe EHD.HarshEvent > > > , etdOneWire :: Maybe EO.OneWireEvent > > > , etdADC :: Maybe EA.ADCEvent > > > } > > > deriving ( Show ) > > > > > > data Packet = Packet { protocolVersion :: !Word8 > > > , packetType :: !PT.PacketType > > > , deviceID :: Maybe BS.ByteString > > > , payloads :: ![ Payload ] > > > , crc :: !Word16 > > > } > > > deriving ( Show ) > > > > > > Lastly, here is the Parser monad transformer: > > > > > > module G6S.Parser where > > > > > > import Control.Monad.State.Strict > > > import Data.Binary.Strict.Get > > > import qualified Data.ByteString as BS > > > > > > import qualified G6S.Packet as GP > > > > > > type Parser = StateT GP.Packet Get > > > > > > runParser :: Parser a -> BS.ByteString -> Maybe a > > > runParser p bs = > > > let > > > ( result, _ ) = runGet ( runStateT p GP.initPacket ) bs > > > in > > > case result of > > > Right tup -> Just $ fst tup > > > Left _ -> Nothing > > > > > > > > > I hope there is enough info here. 
> > > > > > Thanks, > > > Jeff > > > > > > > > > > > > > > > _______________________________________________ > > > Haskell-Cafe mailing list > > > Haskell-Cafe at haskell.org > > > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > > > > > _______________________________________________ > > Haskell-Cafe mailing list > > Haskell-Cafe at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe From tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk Thu Apr 16 08:34:43 2015 From: tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk (Tom Ellis) Date: Thu, 16 Apr 2015 09:34:43 +0100 Subject: [Haskell-cafe] Advice needed on how to improve some code In-Reply-To: <0E3E78EA-C3DD-440D-AE7B-7714CDF6E423@scss.tcd.ie> References: <5742D054-1833-4C53-909B-2016481C768F@datalinktech.com.au> <20150416082234.GS31520@weber> <0E3E78EA-C3DD-440D-AE7B-7714CDF6E423@scss.tcd.ie> Message-ID: <20150416083443.GU31520@weber> On Thu, Apr 16, 2015 at 09:32:45AM +0100, Andrew Butterfield wrote: > > > On 16 Apr 2015, at 09:22, Tom Ellis wrote: > > > > I rather like the >>= invocations. `do` notation would require naming > > intermediate variables. > > But the >>= requires such intermediate variables anyway: all the pl' after the \ Not if the pl' doesn't exist because it became part of the body of an abstracted function. Anyway, I now suspect David Feuer was speaking about use of >>= elsewhere in Jeff's code. > >> On Wed, Apr 15, 2015 at 8:57 PM, Jeff wrote: > >> > >>> return pl ) >>= > >>> ( \pl' -> let pld = P.payloadData pl' in > >>> if testBit mdm ( fromEnum D.GPS ) > >>> then > >>> parseDeviceGPSData >>= > >>> ( \s -> return ( pl' { P.payloadData = pld { > >>> P.gpsData = Just s } } ) ) > >>> else > >>> return pl' ) >>= > >>> ( \pl' -> let pld = P.payloadData pl' in > >>> if testBit mdm ( fromEnum D.GSM ) > >>> then > >>> parseDeviceGSMData >>= > >>> ( \s -> return ( pl' { P.payloadData = pld { > >>> P.gsmData = Just s } } ) ) > >>> else > >>> return pl' ) >>= > >>> ( \pl' -> let pld = P.payloadData pl' in > >>> if testBit mdm ( fromEnum D.COT ) > >>> then > >>> parseDeviceCOTData >>= > >>> ( \s -> return ( pl' { P.payloadData = pld { > >>> P.cotData = Just s } } ) ) > >>> else > >>> return pl' ) >>= From haskell at jschneider.net Thu Apr 16 09:00:31 2015 From: haskell at jschneider.net (Jon Schneider) Date: Thu, 16 Apr 2015 10:00:31 +0100 Subject: [Haskell-cafe] Execution order in IO In-Reply-To: <552EA44E.7070504@gmail.com> References: <9038cf9603619d4466c4e95aed1ccde4.squirrel@mail.jschneider.net> <552E6238.7020308@gmail.com> <3d77ffb3686ac509e60b0c415bb835b0.squirrel@mail.jschneider.net> <552EA44E.7070504@gmail.com> Message-ID: <502a5b27fa54187432681ad316434d49.squirrel@mail.jschneider.net> I think this is the thing under the bonnet I was after though to be perfectly honest is slightly beyond me at the time of writing. Thank you all. Jon > Let's just have a look at the monad instance of IO which is defined in > the files ghc-prim/GHC/Types.hs and base/GHC/Base.hs > > newtype IO a = IO (State# RealWorld -> (# State# RealWorld, a #)) > > instance Monad IO where > ... > (>>=) = bindIO > ... > > bindIO :: IO a -> (a -> IO b) -> IO b > bindIO (IO m) k = IO $ \ s -> case m s of (# new_s, a #) -> unIO (k a) > > If you can forget for a minute about all the # you will end up with this. 
> > newtype IO a = IO (RealWorld -> (RealWorld, a)) > > bindIO (IO m) k = IO $ \ s -> case m s of (new_s, a) -> unIO (k a) > > > when the following part is evaluated: > > case m s of (new_s, a) -> unIO (k a) > > (m s) has to be evaluated first in order to ensure that the result > matches the pattern (new_s, a) and is not bottom/some infinite > calculation/an error. > > This is why IO statements are evaluated in order. > > Silvio > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > From tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk Thu Apr 16 09:27:47 2015 From: tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk (Tom Ellis) Date: Thu, 16 Apr 2015 10:27:47 +0100 Subject: [Haskell-cafe] Execution order in IO In-Reply-To: <9038cf9603619d4466c4e95aed1ccde4.squirrel@mail.jschneider.net> References: <9038cf9603619d4466c4e95aed1ccde4.squirrel@mail.jschneider.net> Message-ID: <20150416092747.GV31520@weber> Hi Jon, On Wed, Apr 15, 2015 at 10:07:24AM +0100, Jon Schneider wrote: > With lazy evaluation where is it written that if you write things with no > dependencies with a "do" things will be done in order ? Or isn't it ? I'm not sure where this is written, but it's certainly a property of the IO type. In the expression do x1 <- action1 x2 <- action2 ... then the IO action of the expression `action1` will occur before that of `action2`. (As a caveat, one has to be careful about the concept of "when an action occurs". If `action1` involved reading a file lazily with `System.IO.readFile`[1], say, then the actual read may not take place until `action2` has already finished. However, from the point of view behaviour we consider "observable", a lazy read is indistinguishable from a strict read. Lazy IO is rather counterintuitive. I suggest you stay away from it!) As a side point, appeals to the "real world" in attempts to explain this are probably unhelpful at best. GHC may well implement IO using a fake value of type `RealWorld` but that's beside the point. A conforming Haskell implementation is free to implement IO however it sees fit. > Is it a feature of the language we're supposed to accept ? Sort of. It's a property of the IO type. > Is it something in the implementation of IO ? Yes. > Is the do keyword more than just a syntactic sugar for a string of binds > and lambdas ? No. Tom [1] http://hackage.haskell.org/package/base-4.8.0.0/docs/System-IO.html#v:readFile From duncan at well-typed.com Thu Apr 16 09:33:55 2015 From: duncan at well-typed.com (Duncan Coutts) Date: Thu, 16 Apr 2015 10:33:55 +0100 Subject: [Haskell-cafe] Ongoing IHG work to improve Hackage security Message-ID: <1429176835.25663.30.camel@dunky.localdomain> All, The IHG members identified Hackage security as an important issue some time ago and myself and my colleague Austin have been working on a design and implementation. The details are in this blog post: http://www.well-typed.com/blog/2015/04/improving-hackage-security We should have made more noise earlier about the fact that we're working on this. We saw that it was important to finally write this up now because other similar ideas are under active discussion and we don't want to cause too much unnecessary duplication. The summary is this: We're implementing a system to significantly improve Hackage security. It's based on a sensible design (The Update Framework) by proper crypto experts. 
The basic system is fully automatic and covers all packages on Hackage. A proposed extension would give further security improvements for individual packages at the cost of a modest effort from package authors. http://theupdateframework.com/ It will also allow the secure use of untrusted public Hackage mirrors, which is the simplest route to better Hackage reliability. As a bonus we're including incremental index downloads to reduce `cabal update` wait times. And it's all fully backwards compatible. I should also note that our IHG funding covers the first phase of the design, and for the second phase we would very much welcome others to get involved with the detailed design and implementation (or join the IHG and contribute further funding). -- Duncan Coutts, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From duncan at well-typed.com Thu Apr 16 09:34:03 2015 From: duncan at well-typed.com (Duncan Coutts) Date: Thu, 16 Apr 2015 10:34:03 +0100 Subject: [Haskell-cafe] Improvements to package hosting and security In-Reply-To: References: Message-ID: <1429176843.25663.31.camel@dunky.localdomain> Hi folks, As I mentioned previously on the commercialhaskell list, we're working on Hackage security for the IHG at the moment. We've finally written up the design for that as a blog post: http://www.well-typed.com/blog/2015/04/improving-hackage-security It includes a section at the end comparing in general terms to this proposal (specifically Chris's part on package signing). The design is basically "The Update Framework" for Hackage. Our current implementation effort for the IHG covers the first part of that design. http://theupdateframework.com/ I think TUF addresses many of the concerns that have been raised in this thread, e.g. about threat models, what signatures actually mean etc. It also covers the question of making the "who's allowed to upload what" information transparent, with proper cryptographic evidence (albeit that's in the second part of the design). So if collectively we can also implement the second part of TUF for Hackage then I think we can address these issues properly. Other things worth noting: * This will finally allow us to have untrusted public mirrors, which is the traditional approach to improving repository reliability. * We're incorporating an existing design for incremental updates of the package index to significantly improve "cabal update" times. I'll chip in elsewhere in this thread with more details about how TUF (or our adaptation of it for hackage) solves some of the problems raised here. Duncan On Mon, 2015-04-13 at 10:02 +0000, Michael Snoyman wrote: > Many of you saw the blog post Mathieu wrote[1] about having more composable > community infrastructure, which in particular focused on improvements to > Hackage. I've been discussing some of these ideas with both Mathieu and > others in the community working on some similar thoughts. I've also > separately spent some time speaking with Chris about package signing[2]. > Through those discussions, it's become apparent to me that there are in > fact two core pieces of functionality we're relying on Hackage for today: > > * A centralized location for accessing package metadata (i.e., the cabal > files) and the package contents themselves (i.e., the sdist tarballs) > * A central authority for deciding who is allowed to make releases of > packages, and make revisions to cabal files > > In my opinion, fixing the first problem is in fact very straightforward to > do today using existing tools. 
FP Complete already hosts a full Hackage > mirror[3] backed by S3, for instance, and having the metadata mirrored to a > Git repository as well is not a difficult technical challenge. This is the > core of what Mathieu was proposing as far as composable infrastructure, > corresponding to next actions 1 and 3 at the end of his blog post (step 2, > modifying Hackage, is not a prerequesite). In my opinion, such a system > would far surpass in usability, reliability, and extensibility our current > infrastructure, and could be rolled out in a few days at most. > > However, that second point- the central authority- is the more interesting > one. As it stands, our entire package ecosystem is placing a huge level of > trust in Hackage, without any serious way to vet what's going on there. > Attack vectors abound, e.g.: > > * Man in the middle attacks: as we are all painfully aware, cabal-install > does not support HTTPS, so a MITM attack on downloads from Hackage is > trivial > * A breach of the Hackage Server codebase would allow anyone to upload > nefarious code[4] > * Any kind of system level vulnerability could allow an attacker to > compromise the server in the same way > > Chris's package signing work addresses most of these vulnerabilities, by > adding a layer of cryptographic signatures on top of Hackage as the central > authority. I'd like to propose taking this a step further: removing Hackage > as the central authority, and instead relying entirely on cryptographic > signatures to release new packages. > > I wrote up a strawman proposal last week[5] which clearly needs work to be > a realistic option. My question is: are people interested in moving forward > on this? If there's no interest, and everyone is satisfied with continuing > with the current Hackage-central-authority, then we can proceed with having > reliable and secure services built around Hackage. But if others- like me- > would like to see a more secure system built from the ground up, please say > so and let's continue that conversation. > > [1] > https://www.fpcomplete.com/blog/2015/03/composable-community-infrastructure > > [2] > https://github.com/commercialhaskell/commercialhaskell/wiki/Package-signing-detailed-propsal > > [3] https://www.fpcomplete.com/blog/2015/03/hackage-mirror > [4] I don't think this is just a theoretical possibility for some point in > the future. I have reported an easily trigerrable DoS attack on the current > Hackage Server codebase, which has been unresolved for 1.5 months now > [5] https://gist.github.com/snoyberg/732aa47a5dd3864051b9 > -- Duncan Coutts, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From michael at snoyman.com Thu Apr 16 09:52:58 2015 From: michael at snoyman.com (Michael Snoyman) Date: Thu, 16 Apr 2015 09:52:58 +0000 Subject: [Haskell-cafe] Improvements to package hosting and security In-Reply-To: <1429176843.25663.31.camel@dunky.localdomain> References: <1429176843.25663.31.camel@dunky.localdomain> Message-ID: Thanks for responding, I intend to go read up on TUF and your blog post now. One question: * We're incorporating an existing design for incremental updates of the package index to significantly improve "cabal update" times. Can you give any details about what you're planning here? 
I put together a Git repo already that has all of the cabal files from Hackage and which updates every 30 minutes, and it seems that, instead of reinventing anything, simply using `git pull` would be the right solution here: https://github.com/commercialhaskell/all-cabal-files On Thu, Apr 16, 2015 at 12:34 PM Duncan Coutts wrote: > Hi folks, > > As I mentioned previously on the commercialhaskell list, we're working > on Hackage security for the IHG at the moment. > > We've finally written up the design for that as a blog post: > > http://www.well-typed.com/blog/2015/04/improving-hackage-security > > It includes a section at the end comparing in general terms to this > proposal (specifically Chris's part on package signing). > > The design is basically "The Update Framework" for Hackage. Our current > implementation effort for the IHG covers the first part of that design. > > http://theupdateframework.com/ > > I think TUF addresses many of the concerns that have been raised in this > thread, e.g. about threat models, what signatures actually mean etc. > > It also covers the question of making the "who's allowed to upload what" > information transparent, with proper cryptographic evidence (albeit > that's in the second part of the design). > > So if collectively we can also implement the second part of TUF for > Hackage then I think we can address these issues properly. > > Other things worth noting: > * This will finally allow us to have untrusted public mirrors, > which is the traditional approach to improving repository > reliability. > * We're incorporating an existing design for incremental updates > of the package index to significantly improve "cabal update" > times. > > I'll chip in elsewhere in this thread with more details about how TUF > (or our adaptation of it for hackage) solves some of the problems raised > here. > > Duncan > > On Mon, 2015-04-13 at 10:02 +0000, Michael Snoyman wrote: > > Many of you saw the blog post Mathieu wrote[1] about having more > composable > > community infrastructure, which in particular focused on improvements to > > Hackage. I've been discussing some of these ideas with both Mathieu and > > others in the community working on some similar thoughts. I've also > > separately spent some time speaking with Chris about package signing[2]. > > Through those discussions, it's become apparent to me that there are in > > fact two core pieces of functionality we're relying on Hackage for today: > > > > * A centralized location for accessing package metadata (i.e., the cabal > > files) and the package contents themselves (i.e., the sdist tarballs) > > * A central authority for deciding who is allowed to make releases of > > packages, and make revisions to cabal files > > > > In my opinion, fixing the first problem is in fact very straightforward > to > > do today using existing tools. FP Complete already hosts a full Hackage > > mirror[3] backed by S3, for instance, and having the metadata mirrored > to a > > Git repository as well is not a difficult technical challenge. This is > the > > core of what Mathieu was proposing as far as composable infrastructure, > > corresponding to next actions 1 and 3 at the end of his blog post (step > 2, > > modifying Hackage, is not a prerequesite). In my opinion, such a system > > would far surpass in usability, reliability, and extensibility our > current > > infrastructure, and could be rolled out in a few days at most. > > > > However, that second point- the central authority- is the more > interesting > > one. 
As it stands, our entire package ecosystem is placing a huge level > of > > trust in Hackage, without any serious way to vet what's going on there. > > Attack vectors abound, e.g.: > > > > * Man in the middle attacks: as we are all painfully aware, cabal-install > > does not support HTTPS, so a MITM attack on downloads from Hackage is > > trivial > > * A breach of the Hackage Server codebase would allow anyone to upload > > nefarious code[4] > > * Any kind of system level vulnerability could allow an attacker to > > compromise the server in the same way > > > > Chris's package signing work addresses most of these vulnerabilities, by > > adding a layer of cryptographic signatures on top of Hackage as the > central > > authority. I'd like to propose taking this a step further: removing > Hackage > > as the central authority, and instead relying entirely on cryptographic > > signatures to release new packages. > > > > I wrote up a strawman proposal last week[5] which clearly needs work to > be > > a realistic option. My question is: are people interested in moving > forward > > on this? If there's no interest, and everyone is satisfied with > continuing > > with the current Hackage-central-authority, then we can proceed with > having > > reliable and secure services built around Hackage. But if others- like > me- > > would like to see a more secure system built from the ground up, please > say > > so and let's continue that conversation. > > > > [1] > > > https://www.fpcomplete.com/blog/2015/03/composable-community-infrastructure > > > > [2] > > > https://github.com/commercialhaskell/commercialhaskell/wiki/Package-signing-detailed-propsal > > > > [3] https://www.fpcomplete.com/blog/2015/03/hackage-mirror > > [4] I don't think this is just a theoretical possibility for some point > in > > the future. I have reported an easily trigerrable DoS attack on the > current > > Hackage Server codebase, which has been unresolved for 1.5 months now > > [5] https://gist.github.com/snoyberg/732aa47a5dd3864051b9 > > > > > -- > Duncan Coutts, Haskell Consultant > Well-Typed LLP, http://www.well-typed.com/ > > -- > You received this message because you are subscribed to the Google Groups > "Commercial Haskell" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to commercialhaskell+unsubscribe at googlegroups.com. > To post to this group, send email to commercialhaskell at googlegroups.com. > To view this discussion on the web visit > https://groups.google.com/d/msgid/commercialhaskell/1429176843.25663.31.camel%40dunky.localdomain > . > For more options, visit https://groups.google.com/d/optout. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From duncan at well-typed.com Thu Apr 16 10:12:49 2015 From: duncan at well-typed.com (Duncan Coutts) Date: Thu, 16 Apr 2015 11:12:49 +0100 Subject: [Haskell-cafe] Improvements to package hosting and security In-Reply-To: References: <1429176843.25663.31.camel@dunky.localdomain> Message-ID: <1429179169.25663.59.camel@dunky.localdomain> On Thu, 2015-04-16 at 09:52 +0000, Michael Snoyman wrote: > Thanks for responding, I intend to go read up on TUF and your blog post > now. One question: > > * We're incorporating an existing design for incremental updates > of the package index to significantly improve "cabal update" > times. > > Can you give any details about what you're planning here? Sure, it's partially explained in the blog post. 
> I put together a > Git repo already that has all of the cabal files from Hackage and which > updates every 30 minutes, and it seems that, instead of reinventing > anything, simply using `git pull` would be the right solution here: > > https://github.com/commercialhaskell/all-cabal-files It's great that we can mirror to lots of different formats so easily :-). I see that we now have two hackage mirror tools, one for mirroring to a hackage-server instance and one for S3. The bit I think is missing is mirroring to a simple directory based archive, e.g. to be served by a normal http server. >From the blog post: The trick is that the tar format was originally designed to be append only (for tape drives) and so if the server simply updates the index in an append only way then the clients only need to download the tail (with appropriate checks and fallback to a full update). Effectively the index becomes an append only transaction log of all the package metadata changes. This is also fully backwards compatible. The extra detail is that we can use HTTP range requests. These are supported on pretty much all dumb/passive http servers, so it's still possible to host a hackage archive on a filesystem or ordinary web server (this has always been a design goal of the repository format). We use a HTTP range request to get the tail of the tarball, so we only have to download the data that has been added since the client last fetched the index. This is obviously much much smaller than the whole index. For safety (and indeed security) the final tarball content is checked to make sure it matches up with what is expected. Resetting and changing files earlier in the tarball is still possible: if the content check fails then we have to revert to downloading the whole index from scratch. In practice we would not expect this to happen except when completely blowing away a repository and starting again. The advantage of this approach compared to others like rsync or git is that it's fully compatible with the existing format and existing clients. It's also in the typical case a smaller download than rsync and probably similar or smaller than git. It also doesn't need much new from the clients, they just need the same tar, zlib and HTTP features as they have now (e.g. in cabal-install) and don't have to distribute rsync/git/etc binaries on other platforms (e.g. windows). That said, I have no problem whatsoever with there being git or rsync based mirrors. Indeed the central hackage server could provide an rsync point for easy setup for public mirrors (including the package files). -- Duncan Coutts, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From marcin.jan.mrotek at gmail.com Thu Apr 16 10:31:52 2015 From: marcin.jan.mrotek at gmail.com (Marcin Mrotek) Date: Thu, 16 Apr 2015 12:31:52 +0200 Subject: [Haskell-cafe] ODP: Execution order in IO In-Reply-To: <20150416092747.GV31520@weber> References: <9038cf9603619d4466c4e95aed1ccde4.squirrel@mail.jschneider.net> <20150416092747.GV31520@weber> Message-ID: <552f8f9c.e853700a.1e8d.ffff92dd@mx.google.com> Just for the record, lazy IO uses black magic called unsafeInterleaveIO under the hood, which, as the name suggests, deliberately interferes with the execution order imposed by binds. Best regards, Marcin Mrotek -----Wiadomo?? 
oryginalna----- Od: "Tom Ellis" Wys?ano: ?2015-?04-?16 11:28 Do: "haskell-cafe at haskell.org" Temat: Re: [Haskell-cafe] Execution order in IO Hi Jon, On Wed, Apr 15, 2015 at 10:07:24AM +0100, Jon Schneider wrote: > With lazy evaluation where is it written that if you write things with no > dependencies with a "do" things will be done in order ? Or isn't it ? I'm not sure where this is written, but it's certainly a property of the IO type. In the expression do x1 <- action1 x2 <- action2 ... then the IO action of the expression `action1` will occur before that of `action2`. (As a caveat, one has to be careful about the concept of "when an action occurs". If `action1` involved reading a file lazily with `System.IO.readFile`[1], say, then the actual read may not take place until `action2` has already finished. However, from the point of view behaviour we consider "observable", a lazy read is indistinguishable from a strict read. Lazy IO is rather counterintuitive. I suggest you stay away from it!) As a side point, appeals to the "real world" in attempts to explain this are probably unhelpful at best. GHC may well implement IO using a fake value of type `RealWorld` but that's beside the point. A conforming Haskell implementation is free to implement IO however it sees fit. > Is it a feature of the language we're supposed to accept ? Sort of. It's a property of the IO type. > Is it something in the implementation of IO ? Yes. > Is the do keyword more than just a syntactic sugar for a string of binds > and lambdas ? No. Tom [1] http://hackage.haskell.org/package/base-4.8.0.0/docs/System-IO.html#v:readFile _______________________________________________ Haskell-Cafe mailing list Haskell-Cafe at haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe -------------- next part -------------- An HTML attachment was scrubbed... URL: From michael at snoyman.com Thu Apr 16 10:32:06 2015 From: michael at snoyman.com (Michael Snoyman) Date: Thu, 16 Apr 2015 10:32:06 +0000 Subject: [Haskell-cafe] Improvements to package hosting and security In-Reply-To: <1429179169.25663.59.camel@dunky.localdomain> References: <1429176843.25663.31.camel@dunky.localdomain> <1429179169.25663.59.camel@dunky.localdomain> Message-ID: On Thu, Apr 16, 2015 at 1:12 PM Duncan Coutts wrote: > On Thu, 2015-04-16 at 09:52 +0000, Michael Snoyman wrote: > > Thanks for responding, I intend to go read up on TUF and your blog post > > now. One question: > > > > * We're incorporating an existing design for incremental updates > > of the package index to significantly improve "cabal update" > > times. > > > > Can you give any details about what you're planning here? > > Sure, it's partially explained in the blog post. > > > I put together a > > Git repo already that has all of the cabal files from Hackage and which > > updates every 30 minutes, and it seems that, instead of reinventing > > anything, simply using `git pull` would be the right solution here: > > > > https://github.com/commercialhaskell/all-cabal-files > > It's great that we can mirror to lots of different formats so > easily :-). > > I see that we now have two hackage mirror tools, one for mirroring to a > hackage-server instance and one for S3. The bit I think is missing is > mirroring to a simple directory based archive, e.g. to be served by a > normal http server. 
> > From the blog post: > > The trick is that the tar format was originally designed to be > append only (for tape drives) and so if the server simply > updates the index in an append only way then the clients only > need to download the tail (with appropriate checks and fallback > to a full update). Effectively the index becomes an append only > transaction log of all the package metadata changes. This is > also fully backwards compatible. > > The extra detail is that we can use HTTP range requests. These are > supported on pretty much all dumb/passive http servers, so it's still > possible to host a hackage archive on a filesystem or ordinary web > server (this has always been a design goal of the repository format). > > We use a HTTP range request to get the tail of the tarball, so we only > have to download the data that has been added since the client last > fetched the index. This is obviously much much smaller than the whole > index. For safety (and indeed security) the final tarball content is > checked to make sure it matches up with what is expected. Resetting and > changing files earlier in the tarball is still possible: if the content > check fails then we have to revert to downloading the whole index from > scratch. In practice we would not expect this to happen except when > completely blowing away a repository and starting again. > > The advantage of this approach compared to others like rsync or git is > that it's fully compatible with the existing format and existing > clients. It's also in the typical case a smaller download than rsync and > probably similar or smaller than git. It also doesn't need much new from > the clients, they just need the same tar, zlib and HTTP features as they > have now (e.g. in cabal-install) and don't have to distribute > rsync/git/etc binaries on other platforms (e.g. windows). > > That said, I have no problem whatsoever with there being git or rsync > based mirrors. Indeed the central hackage server could provide an rsync > point for easy setup for public mirrors (including the package files). > > > I don't like this approach at all. There are many tools out there that do a good job of dealing with incremental updates. Instead of using any of those, the idea is to create a brand new approach, implement it in both Hackage Server and cabal-install (two projects that already have a massive bug deficit), and roll it out hoping for the best. There's no explanation here as to how you'll deal with things like cabal file revisions, which are very common these days and seem to necessitate redownloading the entire database in your proposal. Here's my proposal: use Git. If Git isn't available on the host, then revert to the current codepath and download the index. We can roll that out in an hour of work and everyone gets the benefits, without the detriments of creating a new incremental update framework. Also: it seems like your biggest complaint about Git is "distributing Git." Making Git an optional upgrade is one way of solving that. Another approach is: don't use the official Git command line tool, but one of the many other implementations out there that implement the necessary subset of functionality. I'd guess writing that functionality from scratch in Cabal would be a comparable amount of code to what you're proposing. Comments on package signing to be continued later, I haven't finished reading it yet. Michael -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From apfelmus at quantentunnel.de Thu Apr 16 10:54:10 2015 From: apfelmus at quantentunnel.de (Heinrich Apfelmus) Date: Thu, 16 Apr 2015 12:54:10 +0200 Subject: [Haskell-cafe] Execution order in IO In-Reply-To: <9038cf9603619d4466c4e95aed1ccde4.squirrel@mail.jschneider.net> References: <9038cf9603619d4466c4e95aed1ccde4.squirrel@mail.jschneider.net> Message-ID: Jon Schneider wrote: > Good morning all, > > I think I've got the hang of the way state is carried and fancy operators > work in monads but still have a major sticky issue. > > With lazy evaluation where is it written that if you write things with no > dependencies with a "do" things will be done in order ? Or isn't it ? > > Is it a feature of the language we're supposed to accept ? > > Is it something in the implementation of IO ? > > Is the do keyword more than just a syntactic sugar for a string of binds > and lambdas ? You have to distinguish between *evaluation order*, which dictates how a Haskell expression is evaluated, and something I'd like to call *execution order*, which specifies how the IO monad works. The point is that the latter is very much independent of the former. Evaluating the expression `getLine :: IO String` and "executing" the expression `getLine :: IO String` are two entirely different things. I recommend the tutorial Simon Peyton Jones. "Tackling the awkward squad: monadic input/output, concurrency, exceptions, and foreign-language calls in Haskell" http://research.microsoft.com/en-us/um/people/simonpj/papers/marktoberdorf/ for more on this. Best regards, Heinrich Apfelmus -- http://apfelmus.nfshost.com From duncan at well-typed.com Thu Apr 16 10:57:54 2015 From: duncan at well-typed.com (Duncan Coutts) Date: Thu, 16 Apr 2015 11:57:54 +0100 Subject: [Haskell-cafe] Improvements to package hosting and security In-Reply-To: References: <1429176843.25663.31.camel@dunky.localdomain> <1429179169.25663.59.camel@dunky.localdomain> Message-ID: <1429181874.25663.80.camel@dunky.localdomain> On Thu, 2015-04-16 at 10:32 +0000, Michael Snoyman wrote: > On Thu, Apr 16, 2015 at 1:12 PM Duncan Coutts wrote: > > > On Thu, 2015-04-16 at 09:52 +0000, Michael Snoyman wrote: > > > Thanks for responding, I intend to go read up on TUF and your blog post > > > now. One question: > > > > > > * We're incorporating an existing design for incremental updates > > > of the package index to significantly improve "cabal update" > > > times. > > > > > > Can you give any details about what you're planning here? > > > > Sure, it's partially explained in the blog post. > > > > > I put together a > > > Git repo already that has all of the cabal files from Hackage and which > > > updates every 30 minutes, and it seems that, instead of reinventing > > > anything, simply using `git pull` would be the right solution here: > > > > > > https://github.com/commercialhaskell/all-cabal-files > > > > It's great that we can mirror to lots of different formats so > > easily :-). > > > > I see that we now have two hackage mirror tools, one for mirroring to a > > hackage-server instance and one for S3. The bit I think is missing is > > mirroring to a simple directory based archive, e.g. to be served by a > > normal http server. > > > > From the blog post: > > > > The trick is that the tar format was originally designed to be > > append only (for tape drives) and so if the server simply > > updates the index in an append only way then the clients only > > need to download the tail (with appropriate checks and fallback > > to a full update). 
Effectively the index becomes an append only > > transaction log of all the package metadata changes. This is > > also fully backwards compatible. > > > > The extra detail is that we can use HTTP range requests. These are > > supported on pretty much all dumb/passive http servers, so it's still > > possible to host a hackage archive on a filesystem or ordinary web > > server (this has always been a design goal of the repository format). > > > > We use a HTTP range request to get the tail of the tarball, so we only > > have to download the data that has been added since the client last > > fetched the index. This is obviously much much smaller than the whole > > index. For safety (and indeed security) the final tarball content is > > checked to make sure it matches up with what is expected. Resetting and > > changing files earlier in the tarball is still possible: if the content > > check fails then we have to revert to downloading the whole index from > > scratch. In practice we would not expect this to happen except when > > completely blowing away a repository and starting again. > > > > The advantage of this approach compared to others like rsync or git is > > that it's fully compatible with the existing format and existing > > clients. It's also in the typical case a smaller download than rsync and > > probably similar or smaller than git. It also doesn't need much new from > > the clients, they just need the same tar, zlib and HTTP features as they > > have now (e.g. in cabal-install) and don't have to distribute > > rsync/git/etc binaries on other platforms (e.g. windows). > > > > That said, I have no problem whatsoever with there being git or rsync > > based mirrors. Indeed the central hackage server could provide an rsync > > point for easy setup for public mirrors (including the package files). > > > > > > > I don't like this approach at all. There are many tools out there that do a > good job of dealing with incremental updates. Instead of using any of > those, the idea is to create a brand new approach, implement it in both > Hackage Server and cabal-install (two projects that already have a massive > bug deficit), and roll it out hoping for the best. I looked at other incremental HTTP update approaches that would be compatible with the existing format and work with passive http servers. There's one rsync-like thing over http but the update sizes for our case would be considerably larger than this very simple "get the tail, check the secure hash is still right". This approach is minimally disruptive, compatible with the existing format and clients. > There's no explanation here as to how you'll deal with things like > cabal file revisions, which are very common these days and seem to > necessitate redownloading the entire database in your proposal. The tarball becomes append only. The tar format works in this way; updated files are simply appended. (This is how incremental backups to tape drives worked in the old days, using the tar format). So no, cabal file revisions will be handled just fine, as will other updates to other metadata. Indeed we get the full transaction history. > Here's my proposal: use Git. If Git isn't available on the host, then > revert to the current codepath and download the index. We can roll that out > in an hour of work and everyone gets the benefits, without the detriments > of creating a new incremental update framework. I was not proposing to change the repository format significantly (and only in a backwards compatible way). 
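For the reading side of that append-only convention, here is a small illustration (not cabal-install's actual code) using the tar and containers packages: later entries for a path simply shadow earlier ones, which is how an appended .cabal revision supersedes the original without rewriting history.

```haskell
import qualified Codec.Archive.Tar    as Tar
import qualified Data.ByteString.Lazy as BL
import qualified Data.Map.Strict      as Map

-- Read an append-only index tarball and keep, for each path, only the
-- most recently appended entry.  Map.fromList retains the last value
-- seen for a duplicate key, so an updated foo.cabal appended at the end
-- of the archive transparently replaces the earlier entry.
latestEntries :: FilePath -> IO (Map.Map FilePath Tar.Entry)
latestEntries indexFile = do
  bytes <- BL.readFile indexFile
  let entries = Tar.foldEntries (:) [] (error . show) (Tar.read bytes)
  return (Map.fromList [ (Tar.entryPath e, e) | e <- entries ])
```

This "last entry wins" reading is the same behaviour Mikhail notes later in the thread that cabal-install already gets for free from Map.fromList.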
The existing format is pretty simple, using standard old well understood formats and protocols with wide tool support. The incremental update is fairly unobtrusive. Passive http servers don't need to know about it, and clients that don't know about it can just download the whole index as they do now. The security extensions for TUF are also compatible with the existing format and clients. -- Duncan Coutts, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From michael at snoyman.com Thu Apr 16 11:18:29 2015 From: michael at snoyman.com (Michael Snoyman) Date: Thu, 16 Apr 2015 11:18:29 +0000 Subject: [Haskell-cafe] Improvements to package hosting and security In-Reply-To: <1429181874.25663.80.camel@dunky.localdomain> References: <1429176843.25663.31.camel@dunky.localdomain> <1429179169.25663.59.camel@dunky.localdomain> <1429181874.25663.80.camel@dunky.localdomain> Message-ID: On Thu, Apr 16, 2015 at 1:57 PM Duncan Coutts wrote: > On Thu, 2015-04-16 at 10:32 +0000, Michael Snoyman wrote: > > On Thu, Apr 16, 2015 at 1:12 PM Duncan Coutts > wrote: > > > > > On Thu, 2015-04-16 at 09:52 +0000, Michael Snoyman wrote: > > > > Thanks for responding, I intend to go read up on TUF and your blog > post > > > > now. One question: > > > > > > > > * We're incorporating an existing design for incremental > updates > > > > of the package index to significantly improve "cabal update" > > > > times. > > > > > > > > Can you give any details about what you're planning here? > > > > > > Sure, it's partially explained in the blog post. > > > > > > > I put together a > > > > Git repo already that has all of the cabal files from Hackage and > which > > > > updates every 30 minutes, and it seems that, instead of reinventing > > > > anything, simply using `git pull` would be the right solution here: > > > > > > > > https://github.com/commercialhaskell/all-cabal-files > > > > > > It's great that we can mirror to lots of different formats so > > > easily :-). > > > > > > I see that we now have two hackage mirror tools, one for mirroring to a > > > hackage-server instance and one for S3. The bit I think is missing is > > > mirroring to a simple directory based archive, e.g. to be served by a > > > normal http server. > > > > > > From the blog post: > > > > > > The trick is that the tar format was originally designed to be > > > append only (for tape drives) and so if the server simply > > > updates the index in an append only way then the clients only > > > need to download the tail (with appropriate checks and fallback > > > to a full update). Effectively the index becomes an append only > > > transaction log of all the package metadata changes. This is > > > also fully backwards compatible. > > > > > > The extra detail is that we can use HTTP range requests. These are > > > supported on pretty much all dumb/passive http servers, so it's still > > > possible to host a hackage archive on a filesystem or ordinary web > > > server (this has always been a design goal of the repository format). > > > > > > We use a HTTP range request to get the tail of the tarball, so we only > > > have to download the data that has been added since the client last > > > fetched the index. This is obviously much much smaller than the whole > > > index. For safety (and indeed security) the final tarball content is > > > checked to make sure it matches up with what is expected. 
Resetting and > > > changing files earlier in the tarball is still possible: if the content > > > check fails then we have to revert to downloading the whole index from > > > scratch. In practice we would not expect this to happen except when > > > completely blowing away a repository and starting again. > > > > > > The advantage of this approach compared to others like rsync or git is > > > that it's fully compatible with the existing format and existing > > > clients. It's also in the typical case a smaller download than rsync > and > > > probably similar or smaller than git. It also doesn't need much new > from > > > the clients, they just need the same tar, zlib and HTTP features as > they > > > have now (e.g. in cabal-install) and don't have to distribute > > > rsync/git/etc binaries on other platforms (e.g. windows). > > > > > > That said, I have no problem whatsoever with there being git or rsync > > > based mirrors. Indeed the central hackage server could provide an rsync > > > point for easy setup for public mirrors (including the package files). > > > > > > > > > > > I don't like this approach at all. There are many tools out there that > do a > > good job of dealing with incremental updates. Instead of using any of > > those, the idea is to create a brand new approach, implement it in both > > Hackage Server and cabal-install (two projects that already have a > massive > > bug deficit), and roll it out hoping for the best. > > I looked at other incremental HTTP update approaches that would be > compatible with the existing format and work with passive http servers. > There's one rsync-like thing over http but the update sizes for our case > would be considerably larger than this very simple "get the tail, check > the secure hash is still right". This approach is minimally disruptive, > compatible with the existing format and clients. > > > There's no explanation here as to how you'll deal with things like > > cabal file revisions, which are very common these days and seem to > > necessitate redownloading the entire database in your proposal. > > The tarball becomes append only. The tar format works in this way; > updated files are simply appended. (This is how incremental backups to > tape drives worked in the old days, using the tar format). So no, cabal > file revisions will be handled just fine, as will other updates to other > metadata. Indeed we get the full transaction history. > > > Here's my proposal: use Git. If Git isn't available on the host, then > > revert to the current codepath and download the index. We can roll that > out > > in an hour of work and everyone gets the benefits, without the detriments > > of creating a new incremental update framework. > > I was not proposing to change the repository format significantly (and > only in a backwards compatible way). The existing format is pretty > simple, using standard old well understood formats and protocols with > wide tool support. > > The incremental update is fairly unobtrusive. Passive http servers don't > need to know about it, and clients that don't know about it can just > download the whole index as they do now. > > The security extensions for TUF are also compatible with the existing > format and clients. > > > The theme you seem to be creating here is "compatible with current format." You didn't say it directly, but you've strongly implied that, somehow, Git isn't compatible with existing tooling. 
Let me make clear that that is, in fact, false[1]: ``` #!/bin/bash set -e set -x DIR=$HOME/.cabal/packages/hackage.haskell.org TAR=$DIR/00-index.tar TARGZ=$TAR.gz git pull mkdir -p "$DIR" rm -f $TAR $TARGZ git archive --format=tar -o "$TAR" master gzip -k "$TAR" ``` I wrote this in 5 minutes. My official proposal is to add code to `cabal` which does the following: 1. Check for the presence of the `git` executable. If not present, download the current tarball 2. Check for existence of ~/.cabal/all-cabal-files (or similar). If present, run `git pull` inside of it. If absent, clone it 3. Run the equivalent of the above shell script to produce the 00-index.tar file (not sure if the .gz is also used by cabal) This seems like such a drastically simpler solution than using byte ranges, modifying Hackage to produce tarballs in an append-only manner, and setting up cabal-install to stitch together and check various pieces of a downloaded file. I was actually planning on proposing this some time next week. Can you tell me the downsides of using Git here, which seems to fit all the benefits you touted of: > pretty simple, using standard old well understood formats and protocols with wide tool support. Unless Git at 10 years old isn't old enough yet. Michael [1] https://github.com/commercialhaskell/all-cabal-files/commit/133cd026f8a1f99d719d97fcf884372ded173655 -------------- next part -------------- An HTML attachment was scrubbed... URL: From duncan at well-typed.com Thu Apr 16 11:58:41 2015 From: duncan at well-typed.com (Duncan Coutts) Date: Thu, 16 Apr 2015 12:58:41 +0100 Subject: [Haskell-cafe] Improvements to package hosting and security In-Reply-To: References: <1429176843.25663.31.camel@dunky.localdomain> <1429179169.25663.59.camel@dunky.localdomain> <1429181874.25663.80.camel@dunky.localdomain> Message-ID: <1429185521.25663.103.camel@dunky.localdomain> On Thu, 2015-04-16 at 11:18 +0000, Michael Snoyman wrote: > On Thu, Apr 16, 2015 at 1:57 PM Duncan Coutts wrote: > > I was not proposing to change the repository format significantly (and > > only in a backwards compatible way). The existing format is pretty > > simple, using standard old well understood formats and protocols with > > wide tool support. > > > > The incremental update is fairly unobtrusive. Passive http servers don't > > need to know about it, and clients that don't know about it can just > > download the whole index as they do now. > > > > The security extensions for TUF are also compatible with the existing > > format and clients. > > > The theme you seem to be creating here is "compatible with current format." > You didn't say it directly, but you've strongly implied that, somehow, Git > isn't compatible with existing tooling. Let me make clear that that is, in > fact, false[1]: Sure, one can use git or rsync or other methods to transfer the set of files that makes up a repository or repository index. The point is, existing clients expect both this format and this (http) protocol. There's a number of other minor arguments to be made here about what's simpler and more backwards compatible, but here are two more significant and positive arguments: 1. This incremental update approach works well with the TUF security design 2. This approach to transferring the repository index and files has a much lower security attack surface For 1, the basic TUF approach is based on a simple http server serving a set of files. Because we are implementing TUF for Hackage we picked this update method to go with it. 
It's really not exotic, the HTTP spec says about byte range requests: "Range supports efficient recovery from partially failed transfers, and supports efficient partial retrieval of large entities." We're doing an efficient partial retrieval of a large entity. For 2, Mathieu elsewhere in this thread pointed to an academic paper about attacks on package repositories and update systems. A surprising number of these are attacks on the download mechanism itself, before you even get to trying to verify individual package signatures. If you read the TUF papers you see that they also list these attacks and address them in various ways. One of them is that the download mechanism needs to know in advance the size (and content hash) of entities it is going to download. Also, we should strive to minimise the amount of complex unaudited code that has to run before we get to checking the signature of the package index (or individual package tarballs). In the TUF design, the only code that runs before verification is downloading two files over HTTP (one that's known to be very small, and the other we already know the length and signed content hash). If we're being paranoid we shouldn't even run any decompression before signature verification. With our implementation the C code that runs before signature verification is either none, or just zlib decompression if we want to do on-the-fly http transport compression, but that's optional if we don't want to trust zlib's security record (though it's extremely widely used). By contrast, if we use rsync or git then there's a massive amount of unaudited C code that is running with your user credentials prior to signature verification. In addition it is likely vulnerable to endless data and slow download attacks (see the papers). -- Duncan Coutts, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From duncan at well-typed.com Thu Apr 16 12:02:31 2015 From: duncan at well-typed.com (Duncan Coutts) Date: Thu, 16 Apr 2015 13:02:31 +0100 Subject: [Haskell-cafe] [haskell-infrastructure] Improvements to package hosting and security In-Reply-To: References: <4487776e-b862-429c-adae-477813e560f3@googlegroups.com> <33c89d4a-12b9-495b-a151-7e317177b061@googlegroups.com> Message-ID: <1429185751.25663.106.camel@dunky.localdomain> On Wed, 2015-04-15 at 00:07 -0400, Gershom B wrote: > So I want to focus just on the idea of a ?trust model? to hackage > packages. Good. I think TUF has a good answer here. > Now, how does security fit into this? Well, at the moment we can > prevent packages from being uploaded by people who are not authorized. > And whoever is authorized is the first person who uploaded the > package, or people they delegate to, or people otherwise added by > hackage admins via e.g. the orphaned package takeover process. As Michael rightly points out, though the hackage server does this, it doesn't generate any cryptographic evidence for it. TUF solves that part with its "target key delegation" information. It's the formal metadata for who is allowed to upload what. So if we implement this part of TUF then we no longer have to rely on the hackage server not getting hacked to ensure this bit. [...] > that attempts a _much simpler_ guarantee ? that e.g. the person who > signed a package as being ?theirs? is either the same person that > signed the prior version of the package, or was delegated by them (or > hackage admins). That's what TUF's target key system provides. 
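In rough Haskell terms (illustrative types only, not the real hackage-security metadata, which also carries expiry times, key thresholds and the signatures themselves), the delegation evidence and the check it enables look something like this:

```haskell
import           Data.Maybe      (fromMaybe)
import qualified Data.Map.Strict as Map
import qualified Data.Set        as Set

newtype KeyId       = KeyId String       deriving (Eq, Ord, Show)
newtype PackageName = PackageName String deriving (Eq, Ord, Show)

-- Target-key delegations: which author keys may sign which packages.
-- This map is itself signed by the admins' target key, which is in turn
-- signed by the root keys, so clients can verify it mechanically.
newtype Delegations = Delegations (Map.Map PackageName (Set.Set KeyId))

-- The client-side question: is there signed evidence that this key is
-- allowed to sign this package?
maySign :: Delegations -> PackageName -> KeyId -> Bool
maySign (Delegations dels) pkg key =
    key `Set.member` fromMaybe Set.empty (Map.lookup pkg dels)
```

Whether a package has an entry here at all is also what makes the author-key part opt-in without opening a downgrade attack: the signed delegation information itself tells clients whether to expect an author signature, as described below.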
There's a target key held by the hackage admins (and signed by the root keys) that is used to sign individual author keys and delegation information to say that this key is allowed to sign this package. So it's not a guarantee that the package is good, or that the author is a sensible person, but it is formal evidence that that person should be in the maintainer group for that package. Then because TUF makes it this relatively lightweight it's fully automatic for end users because the chain (not web) of trust is trivial. > In my mind, the key elements of such a system are that it is > orthogonal to how code is distributed and that it is opt-in/out. Yes, our TUF adaptation for Hackage includes the author keys being optional (and TUF is designed to be adapted in this way). Once you opt-in for a package then the delegation information makes clear to clients that they must expect to see an individual package signature. So you can have a mixture of author-signed packages and not, without downgrade attacks. The target key delegation information makes it clear. -- Duncan Coutts, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From michael at snoyman.com Thu Apr 16 12:18:38 2015 From: michael at snoyman.com (Michael Snoyman) Date: Thu, 16 Apr 2015 12:18:38 +0000 Subject: [Haskell-cafe] Improvements to package hosting and security In-Reply-To: <1429185521.25663.103.camel@dunky.localdomain> References: <1429176843.25663.31.camel@dunky.localdomain> <1429179169.25663.59.camel@dunky.localdomain> <1429181874.25663.80.camel@dunky.localdomain> <1429185521.25663.103.camel@dunky.localdomain> Message-ID: On Thu, Apr 16, 2015 at 2:58 PM Duncan Coutts wrote: > On Thu, 2015-04-16 at 11:18 +0000, Michael Snoyman wrote: > > On Thu, Apr 16, 2015 at 1:57 PM Duncan Coutts > wrote: > > > > I was not proposing to change the repository format significantly (and > > > only in a backwards compatible way). The existing format is pretty > > > simple, using standard old well understood formats and protocols with > > > wide tool support. > > > > > > The incremental update is fairly unobtrusive. Passive http servers > don't > > > need to know about it, and clients that don't know about it can just > > > download the whole index as they do now. > > > > > > The security extensions for TUF are also compatible with the existing > > > format and clients. > > > > > The theme you seem to be creating here is "compatible with current > format." > > You didn't say it directly, but you've strongly implied that, somehow, > Git > > isn't compatible with existing tooling. Let me make clear that that is, > in > > fact, false[1]: > > Sure, one can use git or rsync or other methods to transfer the set of > files that makes up a repository or repository index. The point is, > existing clients expect both this format and this (http) protocol. > > There's a number of other minor arguments to be made here about what's > simpler and more backwards compatible, but here are two more significant > and positive arguments: > > 1. This incremental update approach works well with the TUF > security design > 2. This approach to transferring the repository index and files has > a much lower security attack surface > > For 1, the basic TUF approach is based on a simple http server serving a > set of files. Because we are implementing TUF for Hackage we picked this > update method to go with it. 
It's really not exotic, the HTTP spec says > about byte range requests: "Range supports efficient recovery from > partially failed transfers, and supports efficient partial retrieval of > large entities." We're doing an efficient partial retrieval of a large > entity. > > For 2, Mathieu elsewhere in this thread pointed to an academic paper > about attacks on package repositories and update systems. A surprising > number of these are attacks on the download mechanism itself, before you > even get to trying to verify individual package signatures. If you read > the TUF papers you see that they also list these attacks and address > them in various ways. One of them is that the download mechanism needs > to know in advance the size (and content hash) of entities it is going > to download. Also, we should strive to minimise the amount of complex > unaudited code that has to run before we get to checking the signature > of the package index (or individual package tarballs). In the TUF > design, the only code that runs before verification is downloading two > files over HTTP (one that's known to be very small, and the other we > already know the length and signed content hash). If we're being > paranoid we shouldn't even run any decompression before signature > verification. With our implementation the C code that runs before > signature verification is either none, or just zlib decompression if we > want to do on-the-fly http transport compression, but that's optional if > we don't want to trust zlib's security record (though it's extremely > widely used). By contrast, if we use rsync or git then there's a massive > amount of unaudited C code that is running with your user credentials > prior to signature verification. In addition it is likely vulnerable to > endless data and slow download attacks (see the papers). > > > I never claimed nor intended to imply that range requests are non-standard. In fact, I'm quite familiar with them, given that I implemented that feature of Warp myself! What I *am* claiming as non-standard is using range requests to implement an incremental update protocol of a tar file. Is there any prior art to this working correctly? Do you know that web servers will do what you need and serve the byte offsets from the uncompressed tar file instead of the compressed tar.gz? Where are you getting the signatures for, and how does this interact with 00-index.tar.gz files served by non-Hackage systems? On the security front: it seems that we have two options here: 1. Use a widely used piece of software (Git), likely already in use by the vast majority of people reading this mailing list, relied on by countless companies and individuals, holding source code for the kernel of likely every mail server between my fingertips and the people reading this email, to distribute incremental updates. And as an aside: that software has built in support for securely signing commits and verifying those signatures. 2. Write brand new code deep inside two Haskell codebases with little scrutiny to implement a download/update protocol that (to my knowledge) has never been tested anywhere else in the world. Have I misrepresented the two options at all? I get that you've been working on this TUF-based system in private for a while, and are probably heavily invested already in the solutions you came up with in private. But I'm finding it very difficult to see the reasoning for reinventing wheels that don't need reinventing. 
MIchael -------------- next part -------------- An HTML attachment was scrubbed... URL: From mboes at tweag.net Thu Apr 16 12:39:30 2015 From: mboes at tweag.net (Mathieu Boespflug) Date: Thu, 16 Apr 2015 14:39:30 +0200 Subject: [Haskell-cafe] Improvements to package hosting and security In-Reply-To: <1429176843.25663.31.camel@dunky.localdomain> References: <1429176843.25663.31.camel@dunky.localdomain> Message-ID: I'd like to step back from the technical discussion here for a moment and expand a bit on a point at the end of my previous email, which is really about process. After I first uploaded a blog post about service architectures and package distribution that was a recent interest of mine, I was very surprised and happy to hear that actually several parties had not only been already thinking about these very topics but moreover already have various small prototypes lying around. This was also the case for *secure* package distribution. What puzzled me, however, is that this came in the form of multiple private messages from mutiple sources sometimes referring to multiple said parties only vaguely and without identifying them. A similar story occurred when folks first started evoking package signing some years ago. Be it on robust identification of the provenance of packages, distribution packages and their metadata, more robust sandboxes or any other topic that touches upon our core infrastructure and tooling, it would be really great if people made themselves known and came forth with a) the requirements they seek to work against, b) their ideas to solve them and c) the resources they need or are themselves willing to bring to bear. It ultimately hurts the community when people repeatedly say things to the effect of, "yep, I hear you, interesting topic, I have a really cool solution to all of what you're saying - will be done Real Soon Now(tm)", or are happy to share details but only within a limited circle of cognoscenti. Because the net result is that other interested parties either unknowingly duplicate effort, or stall thinking that others are tackling the issue, sometimes for years. I know that the IHG has been interested in more secure package distribution for a very long time now, so it's really great that Duncan and Austin have now ("finally") taken the time to write up their current plan, moreover with a discussion of how it addresses a specific threat model, and make it known to the rest of the community that they have secured partial funding from the IHG. I know there other efforts out there, it would be great if they all came out of the woodwork. And in the future, if we could all be mindful to *publish* proposals and intents *upfront* when it comes to our shared community infrastructure and community tooling (rather than months or years later). I believe that's what is at the core of an *open* process for community developments. Ok, end of meta point, I for one am keen to dive back into the technical points that have been brought up in this thread already. :) From michael at snoyman.com Thu Apr 16 12:53:31 2015 From: michael at snoyman.com (Michael Snoyman) Date: Thu, 16 Apr 2015 12:53:31 +0000 Subject: [Haskell-cafe] Ongoing IHG work to improve Hackage security In-Reply-To: <1429176835.25663.30.camel@dunky.localdomain> References: <1429176835.25663.30.camel@dunky.localdomain> Message-ID: I've read the blog post, and am still trying to understand the implications of TUF. 
However, it's incredibly difficult to give solid review of a system which is stated to be "based on TUF," without knowing what the delta implied by that is. For now, I can only ask simple questions: 1. Is there any timeline for the changes needed to Hackage and cabal-install? 2. Is there any idea of how much extra code will need to be maintained going forward for this? This is an important point, given that both Hackage and Cabal are having trouble keeping up with demand already. 3. Is there any mitigation of eavesdropping attacks on the authorization headers sent to Hackage by uploaders for digest authentication? 4. Is there any mitigation against a compromise of the Hackage server itself, either the code base or the system? Overall, I'm quite wary of a solution stated as "experts devised this, it's good, we'll implement it, everyone stop worrying." I take your point about crypto-humility, but I'm not confident that an approach based on TUF addresses that concern since it involves a new implementation (or copy-pasted implementation) of crypto primitives together with unspecified changes to TUF. Note: wary is *not* a code word for opposed, but I strongly believe that anything we do here warrants far more discussion, and that can't happen until more details are explained. On Thu, Apr 16, 2015 at 12:33 PM Duncan Coutts wrote: > All, > > The IHG members identified Hackage security as an important issue some > time ago and myself and my colleague Austin have been working on a > design and implementation. > > The details are in this blog post: > > http://www.well-typed.com/blog/2015/04/improving-hackage-security > > We should have made more noise earlier about the fact that we're working > on this. We saw that it was important to finally write this up now > because other similar ideas are under active discussion and we don't > want to cause too much unnecessary duplication. > > The summary is this: > > We're implementing a system to significantly improve Hackage security. > It's based on a sensible design (The Update Framework) by proper crypto > experts. The basic system is fully automatic and covers all packages on > Hackage. A proposed extension would give further security improvements > for individual packages at the cost of a modest effort from package > authors. > > http://theupdateframework.com/ > > It will also allow the secure use of untrusted public Hackage mirrors, > which is the simplest route to better Hackage reliability. As a bonus > we're including incremental index downloads to reduce `cabal update` > wait times. And it's all fully backwards compatible. > > > I should also note that our IHG funding covers the first phase of the > design, and for the second phase we would very much welcome others to > get involved with the detailed design and implementation (or join the > IHG and contribute further funding). > > -- > Duncan Coutts, Haskell Consultant > Well-Typed LLP, http://www.well-typed.com/ > > -- > You received this message because you are subscribed to the Google Groups > "Commercial Haskell" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to commercialhaskell+unsubscribe at googlegroups.com. > To post to this group, send email to commercialhaskell at googlegroups.com. > To view this discussion on the web visit > https://groups.google.com/d/msgid/commercialhaskell/1429176835.25663.30.camel%40dunky.localdomain > . > For more options, visit https://groups.google.com/d/optout. 
> -------------- next part -------------- An HTML attachment was scrubbed... URL: From the.dead.shall.rise at gmail.com Thu Apr 16 13:14:47 2015 From: the.dead.shall.rise at gmail.com (Mikhail Glushenkov) Date: Thu, 16 Apr 2015 15:14:47 +0200 Subject: [Haskell-cafe] Ongoing IHG work to improve Hackage security In-Reply-To: <1429176835.25663.30.camel@dunky.localdomain> References: <1429176835.25663.30.camel@dunky.localdomain> Message-ID: Hi, On 16 April 2015 at 11:33, Duncan Coutts wrote: > All, > > The IHG members identified Hackage security as an important issue some > time ago and myself and my colleague Austin have been working on a > design and implementation. > > The details are in this blog post: > > http://www.well-typed.com/blog/2015/04/improving-hackage-security Thank you, this is very exciting. But won't the post-release .cabal update feature interfere with "package index as an append-only log" concept? IIUC, right now it is implemented as a destructive update of the corresponding package index entry, so making the package index immutable will break backwards compatibility. From duncan at well-typed.com Thu Apr 16 13:34:38 2015 From: duncan at well-typed.com (Duncan Coutts) Date: Thu, 16 Apr 2015 14:34:38 +0100 Subject: [Haskell-cafe] Ongoing IHG work to improve Hackage security In-Reply-To: References: <1429176835.25663.30.camel@dunky.localdomain> Message-ID: <1429191278.25663.116.camel@dunky.localdomain> On Thu, 2015-04-16 at 15:14 +0200, Mikhail Glushenkov wrote: > Hi, > > On 16 April 2015 at 11:33, Duncan Coutts wrote: > > All, > > > > The IHG members identified Hackage security as an important issue some > > time ago and myself and my colleague Austin have been working on a > > design and implementation. > > > > The details are in this blog post: > > > > http://www.well-typed.com/blog/2015/04/improving-hackage-security > > Thank you, this is very exciting. But won't the post-release .cabal > update feature interfere with "package index as an append-only log" > concept? IIUC, right now it is implemented as a destructive update of > the corresponding package index entry, so making the package index > immutable will break backwards compatibility. Yes, we can use the tar file in an append-only way while allowing metadata updates because that's the tar file format supports that. The tar file format was originally designed for tape drives where rewinding and updating old entries was far too expensive. So the tar file format allows appending updated file entries to the end of the archive. Compliant tar tools (including the standard unix tools, and cabal-install) understand this and take the last entry in the archive as the current file content. -- Duncan Coutts, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From spam at scientician.net Thu Apr 16 13:35:35 2015 From: spam at scientician.net (Bardur Arantsson) Date: Thu, 16 Apr 2015 15:35:35 +0200 Subject: [Haskell-cafe] Improvements to package hosting and security In-Reply-To: References: <1429176843.25663.31.camel@dunky.localdomain> <1429179169.25663.59.camel@dunky.localdomain> <1429181874.25663.80.camel@dunky.localdomain> <1429185521.25663.103.camel@dunky.localdomain> Message-ID: On 16-04-2015 14:18, Michael Snoyman wrote: [--snip--] > I never claimed nor intended to imply that range requests are non-standard. > In fact, I'm quite familiar with them, given that I implemented that > feature of Warp myself! 
What I *am* claiming as non-standard is using range > requests to implement an incremental update protocol of a tar file. Is > there any prior art to this working correctly? Do you know that web servers > will do what you need and server the byte offsets from the uncompressed tar > file instead of the compressed tar.gz? Why would HTTP servers serve anything other than the raw contents of the file? You usually need special configuration for that sort of thing, e.g. mapping based on requested content type. (Which the client should always supply correctly, regardless.) "Dumb" HTTP servers certainly don't do anything weird here. [--snip--] > On the security front: it seems that we have two options here: > > 1. Use a widely used piece of software (Git), likely already in use by the > vast majority of people reading this mailing list, relied on by countless > companies and individuals, holding source code for the kernel of likely > every mail server between my fingertips and the people reading this email, > to distribute incremental updates. And as an aside: that software has built > in support for securely signing commits and verifying those signatures. > I think the point that was being made was that it might not have been hardened sufficiently against mailicious servers (being much more complicated than a HTTP client, for good reasons). I honestly don't know how much such hardening it has received, but I doubt that it's anywhere close to HTTP clients in general. (As to the HTTP client Cabal uses, I wouldn't know.) [--snip--] > I get that you've been working on this TUF-based system in private for a > while, and are probably heavily invested already in the solutions you came > up with in private. But I'm finding it very difficult to see the reasoning > to reinventing wheels that need to reinventing. > That's pretty... uncharitable. Especially given that you also have a horse in this race. (Especially, also considering that your proposal *doesn't* address some of the vulnerabilities mitigated by the TUF work.) Regards, From the.dead.shall.rise at gmail.com Thu Apr 16 13:56:39 2015 From: the.dead.shall.rise at gmail.com (Mikhail Glushenkov) Date: Thu, 16 Apr 2015 15:56:39 +0200 Subject: [Haskell-cafe] Ongoing IHG work to improve Hackage security In-Reply-To: <1429191278.25663.116.camel@dunky.localdomain> References: <1429176835.25663.30.camel@dunky.localdomain> <1429191278.25663.116.camel@dunky.localdomain> Message-ID: Hi, On 16 April 2015 at 15:34, Duncan Coutts wrote: > Compliant tar tools (including the standard unix tools, and > cabal-install) understand this and take the last entry in the archive as > the current file content. Thanks. I looked at the code again, and while this is not explicitly mentioned in comments, we get this behaviour for free by relying on Map.fromList. From duncan at well-typed.com Thu Apr 16 14:25:00 2015 From: duncan at well-typed.com (Duncan Coutts) Date: Thu, 16 Apr 2015 15:25:00 +0100 Subject: [Haskell-cafe] Ongoing IHG work to improve Hackage security In-Reply-To: <87618w9ndf.fsf@gnu.org> References: <1429176835.25663.30.camel@dunky.localdomain> <87618w9ndf.fsf@gnu.org> Message-ID: <1429194300.25663.125.camel@dunky.localdomain> On Thu, 2015-04-16 at 16:08 +0200, Herbert Valerio Riedel wrote: > On 2015-04-16 at 15:14:47 +0200, Mikhail Glushenkov wrote: > > [...] > > > Thank you, this is very exciting. But won't the post-release .cabal > > update feature interfere with "package index as an append-only log" > > concept? 
IIUC, right now it is implemented as a destructive update of > > the corresponding package index entry, so making the package index > > immutable will break backwards compatibility. > > Being historically a tape-originating archive format, `tar` does support > such destructive updates as a standard-feature by appended content. See > also the '--update' operation in the tar(1) man-page: > > | -u, --update > | Append files which are newer than the corresponding copy in > | the archive. Arguments have the same meaning as with -c and > | -r options. > > Using this, we can actually have *all* historic .cabal revisions in the > 00-index.tar file as hidden entries. This would in theory give > `cabal-install` access to all cabal revisions (if we'd ever want to have > a feature in cabal that requires access to previous cabal revisions) Yes, we can choose to do that. That's sort-of what I was getting at when I mentioned that you could then view the tar file as a transaction log. > I actually find Duncan's approach rather elegant, as it simply builds on > existing standard features of both .tar files and the HTTP protocol > (resuming HTTP downloads). And they don't even require any custom HTTP > server implementation to work, and there's a graceful fallback. And we > don't need to introduce a completely new protocol/tool into the mix > (such as Git, which has dump-http(s), smart-http(s), git, as > transport-layers to chose from, and has various other moving parts such > as a ~/.gitconfig that can affect how Git operates, like redirecting > clone-urls transparently, also it avoids having to handle subtle > platform-specific issues with shelling out to an external command; from > this POV the HTTP/tar approach is the more KISS-compliant one, IMHO) > > Cheers, > hvr > From the.dead.shall.rise at gmail.com Thu Apr 16 14:34:04 2015 From: the.dead.shall.rise at gmail.com (Mikhail Glushenkov) Date: Thu, 16 Apr 2015 16:34:04 +0200 Subject: [Haskell-cafe] Ongoing IHG work to improve Hackage security In-Reply-To: <87618w9ndf.fsf@gnu.org> References: <1429176835.25663.30.camel@dunky.localdomain> <87618w9ndf.fsf@gnu.org> Message-ID: Hi, On 16 April 2015 at 16:08, Herbert Valerio Riedel wrote: > Being historically a tape-originating archive format, `tar` does support > such destructive updates as a standard-feature by appended content. Thanks, I understand this. My question was about whether cabal-install supports this feature of tar (turns out, it does). > Using this, we can actually have *all* historic .cabal revisions in the > 00-index.tar file as hidden entries. I don't think we do this right now, though. From gershomb at gmail.com Thu Apr 16 15:06:46 2015 From: gershomb at gmail.com (Gershom B) Date: Thu, 16 Apr 2015 11:06:46 -0400 Subject: [Haskell-cafe] Improvements to package hosting and security In-Reply-To: References: <1429176843.25663.31.camel@dunky.localdomain> Message-ID: On April 16, 2015 at 8:39:40 AM, Mathieu Boespflug (mboes at tweag.net) wrote: > It ultimately hurts the community when people repeatedly say things to > the effect of, "yep, I hear you, interesting topic, I have a really > cool solution to all of what you're saying - will be done Real Soon > Now(tm)", or are happy to share details but only within a limited > circle of cognoscenti. Because the net result is that other interested > parties either unknowingly duplicate effort, or stall thinking that > others are tackling the issue, sometimes for years. I think this is a valid concern. 
Let me make a suggestion as to why this does not happen as much as we might like as well (other than not-enough-time which is always a common reason). Knowing a little about different people?s style of working on open source projects, I have observed that some people are keen to throw out lots of ideas and blog while their projects are in the very early stages of formation. Sometimes this leads to useful discussions, sometimes it leads to lots of premature bikeshedding. But, often, other people don?t feel comfortable throwing out what they know are rough and unfinished thoughts to the world. They would rather either polish the proposal more fully, or would like to have a sufficient proof-of-concept that they feel confident the idea is actually tractable. I do not mean to suggest one or the other style is ?better? ? just that these are different ways that people are comfortable working, and they are hardwired rather deeply into their habits. In a single commercial development environment, these things are relatively more straightforward to mediate, because project plans are often set top down, and there are in fact people whose job it is to amalgamate information between different developers and teams. In an open source community things are necessarily looser. There are going to be a range of such styles and approaches, and while it is sort of a pain to negotiate between all of them, I don?t really see an alternative. So let me pose the opposite thing too: if there is a set of concerns/ideas involving core infrastructure and possible future plans, it would be good to reach out to the people most involved with that work and check if they have any projects underway but perhaps not widely announced that you might want to be aware of. I know that it feels it would be better to have more frequent updates on what projects are kicking around and what timetables. But contrariwise, it also feels it would be better to have more people investigate more as they start to pursue such projects. Also, it is good to have different proposals on the table, so that we can compare them and stack up what they do and don?t solve more clearly. So, to an extent, I welcome duplication of proposals as long as the discussion doesn?t fragment too far. And it is also good to have a few proofs-of-concept floating about to help pin down the issues better. All this is also very much in the open source spirit. One idea I have been thinking about, is a Birds of a Feather meeting at the upcoming ICFP in Vancouver focused just on Haskell Open-Source Infrastructure. That way a variety of people with a range of different ideas/projects/etc. could all get together in one room and share what they?re worried about and what they?re working on and what they?re maybe vaguely contemplating on working on. It?s great to see so much interest from so many quarters in various systems and improvements. Now to try and facilitate a bit more (loose) coordination between these endeavors! Cheers, Gershom P.S. as a general point to bystanders in this conversation ? it seems to me one of the best ways to help the pace of ?big ticket? cabal/hackage-server work would be to take a look at their outstanding lists of tracker issues and see if you feel comfortable jumping in on the smaller stuff. The more we can keep the little stuff under control, the better for the developers as a whole to start to implement more sweeping changes. 
From michael at snoyman.com Thu Apr 16 15:28:10 2015 From: michael at snoyman.com (Michael Snoyman) Date: Thu, 16 Apr 2015 15:28:10 +0000 Subject: [Haskell-cafe] Improvements to package hosting and security In-Reply-To: <1429185521.25663.103.camel@dunky.localdomain> References: <1429176843.25663.31.camel@dunky.localdomain> <1429179169.25663.59.camel@dunky.localdomain> <1429181874.25663.80.camel@dunky.localdomain> <1429185521.25663.103.camel@dunky.localdomain> Message-ID: Minor update. Some of your points about checking signatures before unpacking made me curious about what Git had to offer in these circumstances. For those like me who were unaware of the functionality, it turns out that Git has the option to reject non-signed commits, just run: git pull --verify-signatures I've set up the Travis job that pulls from Hackage to sign its commits with the GPG key I've attached to this email (fingerprint E595 AD42 14AF A6BB 1552 0B23 E40D 74D6 D6CF 60FD). On Thu, Apr 16, 2015 at 2:58 PM Duncan Coutts wrote: > On Thu, 2015-04-16 at 11:18 +0000, Michael Snoyman wrote: > > On Thu, Apr 16, 2015 at 1:57 PM Duncan Coutts > wrote: > > > > I was not proposing to change the repository format significantly (and > > > only in a backwards compatible way). The existing format is pretty > > > simple, using standard old well understood formats and protocols with > > > wide tool support. > > > > > > The incremental update is fairly unobtrusive. Passive http servers > don't > > > need to know about it, and clients that don't know about it can just > > > download the whole index as they do now. > > > > > > The security extensions for TUF are also compatible with the existing > > > format and clients. > > > > > The theme you seem to be creating here is "compatible with current > format." > > You didn't say it directly, but you've strongly implied that, somehow, > Git > > isn't compatible with existing tooling. Let me make clear that that is, > in > > fact, false[1]: > > Sure, one can use git or rsync or other methods to transfer the set of > files that makes up a repository or repository index. The point is, > existing clients expect both this format and this (http) protocol. > > There's a number of other minor arguments to be made here about what's > simpler and more backwards compatible, but here are two more significant > and positive arguments: > > 1. This incremental update approach works well with the TUF > security design > 2. This approach to transferring the repository index and files has > a much lower security attack surface > > For 1, the basic TUF approach is based on a simple http server serving a > set of files. Because we are implementing TUF for Hackage we picked this > update method to go with it. It's really not exotic, the HTTP spec says > about byte range requests: "Range supports efficient recovery from > partially failed transfers, and supports efficient partial retrieval of > large entities." We're doing an efficient partial retrieval of a large > entity. > > For 2, Mathieu elsewhere in this thread pointed to an academic paper > about attacks on package repositories and update systems. A surprising > number of these are attacks on the download mechanism itself, before you > even get to trying to verify individual package signatures. If you read > the TUF papers you see that they also list these attacks and address > them in various ways. One of them is that the download mechanism needs > to know in advance the size (and content hash) of entities it is going > to download. 
Also, we should strive to minimise the amount of complex > unaudited code that has to run before we get to checking the signature > of the package index (or individual package tarballs). In the TUF > design, the only code that runs before verification is downloading two > files over HTTP (one that's known to be very small, and the other we > already know the length and signed content hash). If we're being > paranoid we shouldn't even run any decompression before signature > verification. With our implementation the C code that runs before > signature verification is either none, or just zlib decompression if we > want to do on-the-fly http transport compression, but that's optional if > we don't want to trust zlib's security record (though it's extremely > widely used). By contrast, if we use rsync or git then there's a massive > amount of unaudited C code that is running with your user credentials > prior to signature verification. In addition it is likely vulnerable to > endless data and slow download attacks (see the papers). > > -- > Duncan Coutts, Haskell Consultant > Well-Typed LLP, http://www.well-typed.com/ > > -- > You received this message because you are subscribed to the Google Groups > "Commercial Haskell" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to commercialhaskell+unsubscribe at googlegroups.com. > To post to this group, send email to commercialhaskell at googlegroups.com. > To view this discussion on the web visit > https://groups.google.com/d/msgid/commercialhaskell/1429185521.25663.103.camel%40dunky.localdomain > . > For more options, visit https://groups.google.com/d/optout. > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- -----BEGIN PGP PUBLIC KEY BLOCK----- Version: GnuPG v1 mQENBFUvs4EBCADOl99iNq7EijkoKfsPjOC5Dq2tLWe5jdWh/slJPhEK1oadenDB mKDU1BO8ysbQxJwxkCRw5/iEPIwAFBjjVnKTRuH6jf9LjPIsfunCQjIARTTFGIX4 RyhchbIRv4qCCyceDfKxLtuy9NU7GYnJSijITX39X2al6BhAMcf7fNW1ztsQMQbC O4EoT5isnHS+wQFs8APuAt5xatRAnkF8fMCdDPDKAUm4oCnojBXLAr2Z03B6sL5B XFygA71CMqOrkQIecLj86YKM4DebluDG24NjQv3NKdX2b/mTSzr3v0nCwaDR07Kj GNmRRNi0W6HJqbHjAyTcwpUHIuvPUpJqryDlABEBAAG0n0NvbW1lcmNpYWwgSGFz a2VsbCBhbGwtY2FiYWwtZmlsZXMgVHJhdmlzIGpvYiAoVXNlZCBleGNsdXNpdmVs eSBvbjogaHR0cHM6Ly9naXRodWIuY29tL2NvbW1lcmNpYWxoYXNrZWxsL2FsbC1j YWJhbC1maWxlcykgPG1pY2hhZWwrYWxsLWNhYmFsLWZpbGVzQHNub3ltYW4uY29t PokBOAQTAQIAIgUCVS+zgQIbAwYLCQgHAwIGFQgCCQoLBBYCAwECHgECF4AACgkQ 5A101tbPYP1kNgf+O/cVwcsE7sgpAuqtg/MItwiN+uz5wjizfWP+oEFaVrjL0r+H 2Zh4Hrj/ztPww1LPp/v0nYhkPqTl36dQKpniUHwLt2fNtjBpmF0ccCcHJI7ZcZVk D28vPnPRPVHl2gj67dP/jcR5tVmXtYHFzwc4Y5hwdbvov+UXbG1rG0Kh40v5Wtmg nqSWVH2QMpYD6KDlxkDVOKhbFOeWP2OEsjM57LWJvQ467CCI9dW3Hx0enXAlazeS ocwbbxfFzDJKbZJ5ZwNVBcoNFOBtrksptJ2GkXuq7WHLQ63akRxwEx9Q85yVv5it TNOUd+wgykyxLdKydefpfpzX//vFqbAccixAZYkBHAQQAQIABgUCVS+60AAKCRCg SOjAV+hodicLB/9mwfV4QI2lOjU+eTONwnKIePSNcYxD4rZmdzRXx3eiN57LtqKd xm7f49IyByIErzJaUXwZdag+InEWJmKgY2a0p1y+bLmnMzbM0GsFwzEFzC3ysQ5u Os8J/mT0jFb1W69panXSlPs5e9MeOQ5fRL82JEn5ymNa81+5mmDqOxu2CffK5rqr eFln+GjmXCrXCFA7SrKSNr8RH+luumhE5Lk3fOVrUOp6aY5l7nMEhLmUnOLH7jZK HgBxxbxm0gi5dlH+CeDK6PChORsZm5aytub87xFUqyXHOOihPQTTBDZVVpBbkyJ+ K1YJOIX5hKoM2vti0BJmeWh15KDD6PyasHstuQENBFUvs4EBCAC3Tz9BhlTWD8ge N3UDDG1mYqoXF+u5AvtVIKG20nsCCTjM3PX/I4JjkFJyV6oqGo5J1CwlnMO9kZ/r MT3aSUipJdx5vGcOophLB/+1HPzCd52mCOW+kpaD/GSKqKRgcxM4tjz1bBPXpTbb LjC1KJEAn/q5Rcv0SmYpc9n6fAsoyas1f4vTOhNL8NXzjhsJBoPImb58/ce0o12D zHhRklueydQauwY/wlglN2XaC6B85rv8JteFYwF6SNmGG0ghRBAFG8ymKdh56xpy 
0I+9kZ4DAnLJbFNzNfTSZxwq1+8jU1jApe+XdLEXr/lcrM8LJ1tqKej+ZdYj4O/p MC16KWkLABEBAAGJAR8EGAECAAkFAlUvs4ECGwwACgkQ5A101tbPYP07GwgAo9xg VZeNt1ft80c8KfW0+7Xvs0LtkRSVEJFgcNb2MWettFf/JEFr8CRM/Y6eeeaD/UwQ zYNqB3EPI/sCMZ1rrrfF8zYTxN6MuOamoL0L9fmd7/m0xS4VCDmj1X7mm6BaLtT6 ecLqVvjQ/ni8YiBPX963owdDHvuk7n+UVpR8W7mBS5O8sQ7X9DEy8FVDVdy7SLjs 3CpV93eQh9L9spxcB/ODx7FauUKuid5CboFyJVonlcFiL4MCTu/EA4Qk62E4yV6U dpq1y+1lBUcv/2GSIrTwA8sFge8AT5UIoCgIDF9yO3mldYsU4mzsiM9ulWvSLJBM 2VhVqxW9Mh1gYUkz5g== =O7Sx -----END PGP PUBLIC KEY BLOCK----- From duncan at well-typed.com Thu Apr 16 16:02:38 2015 From: duncan at well-typed.com (Duncan Coutts) Date: Thu, 16 Apr 2015 17:02:38 +0100 Subject: [Haskell-cafe] Improvements to package hosting and security In-Reply-To: References: <1429176843.25663.31.camel@dunky.localdomain> Message-ID: <1429200158.25663.164.camel@dunky.localdomain> On Thu, 2015-04-16 at 14:39 +0200, Mathieu Boespflug wrote: > I'd like to step back from the technical discussion here for a moment > and expand a bit on a point at the end of my previous email, which is > really about process. I should apologise for not publishing our design earlier. To be fair I did mention several times on the commercialhaskell mailing list earlier this year that we were working on an index signing based approach. Early on in the design process we did not appreciate how much TUF overlaps with a GPG-author-signing based approach, we had thought they were much more orthogonal. My other excuse is that I was on holiday while much of the recent design discussion on Chris and your proposals had been going on. And finally, writing up comprehensible explanations is tricky and time consuming. By ultimately these are just excuses. We do always intend to do things openly in a collaborative way, the Cabal and hackage development is certainly open in that way, and we certainly never hold things back as closed source. In this case Austin and I have been doing intensive design work, and it was easier for us to do that between ourselves initially given that we're doing it on work time. I accept that we should have got this out earlier, especially since it turns out the other designs do have some overlap in terms of goals and guarantees. > Ok, end of meta point, I for one am keen to dive back into the > technical points that have been brought up in this thread already. :) Incidentally, having read your post on splitting things up a bit when I got back from holiday, I agree there are certainly valid complaints there. I'm not at all averse to factoring the hackage-server implementation slightly differently, perhaps so that the core index and package serving is handled by a smaller component (e.g. a dumb http server). For 3rd party services, the goal has always been for the hackage-server impl to provide all of its data in useful formats. No doubt that can be improved. Pull requests gratefully accepted. I see this security stuff as a big deal for the reliability because it will allow us to use public untrusted mirrors. That's why it's important to cover every package. That and perhaps a bit of refactoring of the hackage server should give us a very reliable system. 
-- Duncan Coutts, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From amindfv at gmail.com Thu Apr 16 18:32:48 2015 From: amindfv at gmail.com (amindfv at gmail.com) Date: Thu, 16 Apr 2015 14:32:48 -0400 Subject: [Haskell-cafe] Improvements to package hosting and security In-Reply-To: <1429200158.25663.164.camel@dunky.localdomain> References: <1429176843.25663.31.camel@dunky.localdomain> <1429200158.25663.164.camel@dunky.localdomain> Message-ID: <5D961A0F-409A-43CC-A72A-A95D360A4441@gmail.com> A couple quick points: - The IHG proposal is partly motivated by staying backwards-compatible, but I think we shouldn't put a premium on this. Non-https versions of cabal should imo be deprecated once there's an alternative, so most people will need to upgrade anyway. We can run an "old-hackage" instance for those who can't. - To the extent that it doesn't become an impedence mismatch, deduplicating effort with revision control systems (e.g. git) seems very desirable: a) It can reduce maintainers' work -- e.g. It's probably a long road getting maintainers to all sign their packages and others' revisions, but the majority are already doing this with git. b) It's easier to trust -- the amount of vetting and hardening is orders of magnitude more, and Cabal/cabal-install is already a complex machine c) It's already been built -- including (conservatively) hundreds of security corner-cases that would need to be built/maintained/trusted (see "it's easier to trust") Nix is an example of a package manager that very succesfully defers part of its workload to git and other vcs. Tom El Apr 16, 2015, a las 12:02, Duncan Coutts escribi?: > On Thu, 2015-04-16 at 14:39 +0200, Mathieu Boespflug wrote: >> I'd like to step back from the technical discussion here for a moment >> and expand a bit on a point at the end of my previous email, which is >> really about process. > > I should apologise for not publishing our design earlier. To be fair I > did mention several times on the commercialhaskell mailing list earlier > this year that we were working on an index signing based approach. > > Early on in the design process we did not appreciate how much TUF > overlaps with a GPG-author-signing based approach, we had thought they > were much more orthogonal. > > My other excuse is that I was on holiday while much of the recent design > discussion on Chris and your proposals had been going on. > > And finally, writing up comprehensible explanations is tricky and time > consuming. > > By ultimately these are just excuses. We do always intend to do things > openly in a collaborative way, the Cabal and hackage development is > certainly open in that way, and we certainly never hold things back as > closed source. In this case Austin and I have been doing intensive > design work, and it was easier for us to do that between ourselves > initially given that we're doing it on work time. I accept that we > should have got this out earlier, especially since it turns out the > other designs do have some overlap in terms of goals and guarantees. > >> Ok, end of meta point, I for one am keen to dive back into the >> technical points that have been brought up in this thread already. :) > > Incidentally, having read your post on splitting things up a bit when I > got back from holiday, I agree there are certainly valid complaints > there. I'm not at all averse to factoring the hackage-server > implementation slightly differently, perhaps so that the core index and > package serving is handled by a smaller component (e.g. 
a dumb http > server). For 3rd party services, the goal has always been for the > hackage-server impl to provide all of its data in useful formats. No > doubt that can be improved. Pull requests gratefully accepted. > > I see this security stuff as a big deal for the reliability because it > will allow us to use public untrusted mirrors. That's why it's important > to cover every package. That and perhaps a bit of refactoring of the > hackage server should give us a very reliable system. > > -- > Duncan Coutts, Haskell Consultant > Well-Typed LLP, http://www.well-typed.com/ > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe From kyle.marek.spartz at gmail.com Thu Apr 16 18:41:14 2015 From: kyle.marek.spartz at gmail.com (Kyle Marek-Spartz) Date: Thu, 16 Apr 2015 13:41:14 -0500 Subject: [Haskell-cafe] Coding katas/dojos and functional programming introduction In-Reply-To: References: Message-ID: <2fcp3sr3rjq5kl.fsf@kmarekszmbp3743.stp01.office.gdi> A little out of date, and unsure what level it is aimed at, but there are a few sets ready to go: https://github.com/HaskVan/HaskellKoans https://wiki.haskell.org/H-99:_Ninety-Nine_Haskell_Problems -- Kyle Marek-Spartz From mboes at tweag.net Thu Apr 16 20:40:00 2015 From: mboes at tweag.net (Mathieu Boespflug) Date: Thu, 16 Apr 2015 22:40:00 +0200 Subject: [Haskell-cafe] Improvements to package hosting and security In-Reply-To: References: <1429176843.25663.31.camel@dunky.localdomain> Message-ID: Thank you for that Gershom. I think everything that you're saying in that last email is very much on the mark. Multiple proposals is certainly a good thing for diversity, a good thing to help take our infrastructure in a good direction, and a good thing to help it evolve over time. It's true that most of us are volunteer contributors, working on improving infrastructure only so long as it's fun. So it's not always easy to ask for more upfront clarity and pitch perfect coordination. Then again as a community we make more progress faster when a little bit of process is followed. While a millions different tools or libraries to do the same thing can coexist just fine, with infrastructure that's much more difficult. A single global view of all code that people choose to contribute as open source is much healthier than a fragmented set of sub communities each working with their own infrastructure. So the degree of coordination required to make infrastructure evolve is much higher. To this end, I'd like to strongly encourage all interested parties to publish into the open proposals covering one or both of the topics that are currently hot infrastructure topics in the community: 1. reliable and efficient distribution of package metadata, package content and of incremental updates thereof. 2. robust and convenient checking of the provenance of a package version and policies for rejecting such package versions as potentially unsafe. These two topics overlap of course, so as has been the case so far often folks will be addressing both simultaneously. I submit that it would be most helpful if these proposals were structured as follows: * Requirements addressed by the proposal (including *thread model* where relevant) * Technical details * Ideally, some indication of the resources needed and a timeline. 
I know that the last point is of particular interest to commercial users, who like predictability in order to decide whether or not they need to be chipping in their own meagre resources to make the proposal happen and happen soon. But to some extent so does everyone else: no one likes to see the same discussions drag on for 2+ years. Openness really helps here - if things end up dragging out others can pick up the baton where it was left lying. So far we have at least 2 proposals that cover at least the first two sections above: * Chris Done's package signing proposal: https://github.com/commercialhaskell/commercialhaskell/wiki/Package-signing-proposal * Duncan Coutts and Austin Seipp's proposal for improving Hackage security: http://www.well-typed.com/blog/2015/04/improving-hackage-security/ There are other draft (or "strawman") proposals (including one of mine) floating around out there, mentioned earlier in this thread. And then some (including prototype implementations) that I can say I and others have engaged with via private communication, but it would really help this discussion move forward if they were public. > One idea I have been thinking about, is a Birds of a Feather meeting at the upcoming ICFP in Vancouver focused just on Haskell Open-Source Infrastructure. I think that's a great idea. Best, Mathieu On 16 April 2015 at 17:06, Gershom B wrote: > On April 16, 2015 at 8:39:40 AM, Mathieu Boespflug (mboes at tweag.net) wrote: > >> It ultimately hurts the community when people repeatedly say things to >> the effect of, "yep, I hear you, interesting topic, I have a really >> cool solution to all of what you're saying - will be done Real Soon >> Now(tm)", or are happy to share details but only within a limited >> circle of cognoscenti. Because the net result is that other interested >> parties either unknowingly duplicate effort, or stall thinking that >> others are tackling the issue, sometimes for years. > > I think this is a valid concern. Let me make a suggestion as to why this does not happen as much as we might like as well (other than not-enough-time which is always a common reason). Knowing a little about different people?s style of working on open source projects, I have observed that some people are keen to throw out lots of ideas and blog while their projects are in the very early stages of formation. Sometimes this leads to useful discussions, sometimes it leads to lots of premature bikeshedding. But, often, other people don?t feel comfortable throwing out what they know are rough and unfinished thoughts to the world. They would rather either polish the proposal more fully, or would like to have a sufficient proof-of-concept that they feel confident the idea is actually tractable. I do not mean to suggest one or the other style is ?better? ? just that these are different ways that people are comfortable working, and they are hardwired rather deeply into their habits. > > In a single commercial development environment, these things are relatively more straightforward to mediate, because project plans are often set top down, and there are in fact people whose job it is to amalgamate information between different developers and teams. In an open source community things are necessarily looser. There are going to be a range of such styles and approaches, and while it is sort of a pain to negotiate between all of them, I don?t really see an alternative. 
> > So let me pose the opposite thing too: if there is a set of concerns/ideas involving core infrastructure and possible future plans, it would be good to reach out to the people most involved with that work and check if they have any projects underway but perhaps not widely announced that you might want to be aware of. I know that it feels it would be better to have more frequent updates on what projects are kicking around and what timetables. But contrariwise, it also feels it would be better to have more people investigate more as they start to pursue such projects. > > Also, it is good to have different proposals on the table, so that we can compare them and stack up what they do and don?t solve more clearly. So, to an extent, I welcome duplication of proposals as long as the discussion doesn?t fragment too far. And it is also good to have a few proofs-of-concept floating about to help pin down the issues better. All this is also very much in the open source spirit. > > One idea I have been thinking about, is a Birds of a Feather meeting at the upcoming ICFP in Vancouver focused just on Haskell Open-Source Infrastructure. That way a variety of people with a range of different ideas/projects/etc. could all get together in one room and share what they?re worried about and what they?re working on and what they?re maybe vaguely contemplating on working on. It?s great to see so much interest from so many quarters in various systems and improvements. Now to try and facilitate a bit more (loose) coordination between these endeavors! > > Cheers, > Gershom > > P.S. as a general point to bystanders in this conversation ? it seems to me one of the best ways to help the pace of ?big ticket? cabal/hackage-server work would be to take a look at their outstanding lists of tracker issues and see if you feel comfortable jumping in on the smaller stuff. The more we can keep the little stuff under control, the better for the developers as a whole to start to implement more sweeping changes. > > From mboes at tweag.net Thu Apr 16 21:03:02 2015 From: mboes at tweag.net (Mathieu Boespflug) Date: Thu, 16 Apr 2015 23:03:02 +0200 Subject: [Haskell-cafe] Improvements to package hosting and security In-Reply-To: <1429200158.25663.164.camel@dunky.localdomain> References: <1429176843.25663.31.camel@dunky.localdomain> <1429200158.25663.164.camel@dunky.localdomain> Message-ID: > Incidentally, having read your post on splitting things up a bit when I > got back from holiday, I agree there are certainly valid complaints > there. I'm not at all averse to factoring the hackage-server > implementation slightly differently, perhaps so that the core index and > package serving is handled by a smaller component (e.g. a dumb http > server). For 3rd party services, the goal has always been for the > hackage-server impl to provide all of its data in useful formats. No > doubt that can be improved. Pull requests gratefully accepted. Awesome. Sounds like we're in broad agreement. > I see this security stuff as a big deal for the reliability because it > will allow us to use public untrusted mirrors. That's why it's important > to cover every package. That and perhaps a bit of refactoring of the > hackage server should give us a very reliable system. Indeed - availability by both reliability and redundancy. I still have some catching up to do on the technical content of your proposal and others - let me comment on that later. 
But either way I can certainly agree with the goal of reducing the size of the trusted base while simultaneously expanding the number of points of distribution. In the meantime, mirrors already exist (e.g. http://hackage.fpcomplete.com/), but as you say, they need to be trusted, in addition to having to trust Hackage. Thanks again for your detailed blog post and the context it provides. Best, Mathieu From hjgtuyl at chello.nl Thu Apr 16 21:41:04 2015 From: hjgtuyl at chello.nl (Henk-Jan van Tuyl) Date: Thu, 16 Apr 2015 23:41:04 +0200 Subject: [Haskell-cafe] Scala at position 25 of the Tiobe index, Haskell dropped below 50 Message-ID: L.S., From the Tiobe index page[0] of this month: Another interesting move this month concerns Scala. The functional programming language jumps to position 25 after having been between position 30 and 50 for many years. Scala seems to be ready to enter the top 20 for the first time in history. Haskell dropped from the top 50 last month and hasn't come back. I suppose, if Haskell compiled to JVM, Haskell would have a much wider audience. Regards, Henk-Jan van Tuyl [0] http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html -- Folding at home What if you could share your unused computer power to help find a cure? In just 5 minutes you can join the world's biggest networked computer and get us closer sooner. Watch the video. http://folding.stanford.edu/ http://Van.Tuyl.eu/ http://members.chello.nl/hjgtuyl/tourdemonad.html Haskell programming -- From magnus at therning.org Thu Apr 16 22:01:22 2015 From: magnus at therning.org (Magnus Therning) Date: Fri, 17 Apr 2015 00:01:22 +0200 Subject: [Haskell-cafe] Improvements to package hosting and security In-Reply-To: References: <1429176843.25663.31.camel@dunky.localdomain> <1429179169.25663.59.camel@dunky.localdomain> <1429181874.25663.80.camel@dunky.localdomain> <1429185521.25663.103.camel@dunky.localdomain> Message-ID: <20150416220122.GA4126@tatooine> On Thu, Apr 16, 2015 at 03:28:10PM +0000, Michael Snoyman wrote: > Minor update. Some of your points about checking signatures before > unpacking made me curious about what Git had to offer in these > circumstances. For those like me who were unaware of the > functionality, it turns out that Git has the option to reject > non-signed commits, just run: > > git pull --verify-signatures > > I've set up the Travis job that pulls from Hackage to sign its > commits with the GPG key I've attached to this email (fingerprint > E595 AD42 14AF A6BB 1552 0B23 E40D 74D6 D6CF 60FD). Nice one! One thing I, as a developer of a tool that consumes the Hackage index[1], would like to see is a bit more meta data, in particular - alternative download URLs for the source - hashes of the source (probably needs to be per URL) I thought I saw something about this in the thread, but going through it again I can't seem to find it. Would this sort of thing also be included in "improvements to package hosting"? /M [1]: http://hackage.haskell.org/package/cblrepo -- Magnus Therning OpenPGP: 0xAB4DFBA4 email: magnus at therning.org jabber: magnus at therning.org twitter: magthe http://therning.org/magnus There's a big difference between making something easy to use and making it productive. -- Adam Bosworth -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 163 bytes Desc: not available URL: From gautier.difolco at gmail.com Thu Apr 16 22:31:10 2015 From: gautier.difolco at gmail.com (Gautier DI FOLCO) Date: Thu, 16 Apr 2015 22:31:10 +0000 Subject: [Haskell-cafe] Scala at position 25 of the Tiobe index, Haskell dropped below 50 In-Reply-To: References: Message-ID: 2015-04-16 21:41 GMT+00:00 Henk-Jan van Tuyl : > > L.S., > > From the Tiobe index page[0] of this month: > Another interesting move this month concerns Scala. The functional > programming language jumps to position 25 after having been between > position 30 and 50 for many years. Scala seems to be ready to enter the top > 20 for the first time in history. > > Haskell dropped from the top 50 last month and hasn't come back. I > suppose, if Haskell compiled to JVM, Haskell would have a much wider > audience. > > Regards, > Henk-Jan van Tuyl > > > [0] http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html > > > -- > Folding at home > What if you could share your unused computer power to help find a cure? In > just 5 minutes you can join the world's biggest networked computer and get > us closer sooner. Watch the video. > http://folding.stanford.edu/ > > > http://Van.Tuyl.eu/ > http://members.chello.nl/hjgtuyl/tourdemonad.html > Haskell programming > -- > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > First of all do you know how this ranking is build? It's an agglomeration of nonsense values which aim to give the language trend. Maybe Scala has benefited of the JVM or maybe it has profited of the auto-assigned "functional" because some communities, like the Haskell community, have heavily worked for that during decades. The only side-effect of a growth in this ranking will attract some addicted to Resum? Driven-Development. Popular or not, you can do anything you want with Haskell and no ranking will change that. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gautier.difolco at gmail.com Thu Apr 16 22:47:41 2015 From: gautier.difolco at gmail.com (Gautier DI FOLCO) Date: Thu, 16 Apr 2015 22:47:41 +0000 Subject: [Haskell-cafe] Coding katas/dojos and functional programming introduction In-Reply-To: <2fcp3sr3rjq5kl.fsf@kmarekszmbp3743.stp01.office.gdi> References: <2fcp3sr3rjq5kl.fsf@kmarekszmbp3743.stp01.office.gdi> Message-ID: 2015-04-15 22:40 GMT+00:00 Mike Meyer : > Just clarify, this is a reference to the fable of the blind men and the > elephant. > I obviously lack of culture, thanks for the precision. What you think it is like will depend on how you approach it. > Exactly, each paradigms I have learned (OOP, FP, Logic, Actor, Data-Flow) seemed to be a giant mess until I found a good approach and see a big and coherent unit. I miss a approach to broadcast it for FP. 2015-04-16 5:24 GMT+00:00 Raphael Gaschignard : > Is this aimed for FP beginners who already know something like Java? I > think the thing to do here would be to come up with some tasks that are > genuinely tedious to write in a Java-esque (or Pascal-like) language, and > then present how FP solutions are simpler. > > I'm of the opinion that FP succeeds not just because of the tenants of > FP, but because most of the languages are terse and have code that is > "pretty". 
Showing some quick things involving quick manipulation of tuples > (basically a bunch of list processing) could show that things don't have to > be complicated with a bunch of anonymous classes. > That's currently that we tend to do, but these are too "toy examples", they doesn't stick to the day-to-day problems. > Anyways, I think the essential thing is to present a problem that they, > as programmers, have already experienced. The big one being "well these two > functions are *almost* the same but the inner-part of the function has > different logic" (basically, looking at things like map). Open up the world > of possibilities. It's not things that are only possible in Haskell/Scheme > (after all, all of these languages are turing complete so..), but they're > so much easier to write in these languages. > Good hint. 2015-04-16 18:41 GMT+00:00 Kyle Marek-Spartz : > > A little out of date, and unsure what level it is aimed at, but there > are a few sets ready to go: > > https://github.com/HaskVan/HaskellKoans > > https://wiki.haskell.org/H-99:_Ninety-Nine_Haskell_Problems > I totally forgot the last one, but I think it doesn't emphasize enough on the type part. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jeff at datalinktech.com.au Thu Apr 16 23:06:50 2015 From: jeff at datalinktech.com.au (Jeff) Date: Fri, 17 Apr 2015 09:06:50 +1000 Subject: [Haskell-cafe] Advice needed on how to improve some code In-Reply-To: References: <5742D054-1833-4C53-909B-2016481C768F@datalinktech.com.au> <20150416060314.GR31520@weber> <0D59FF2A-55DA-44EA-9718-F131E0DFDA30@datalinktech.com.au> Message-ID: <304BEF64-C67D-4625-95C2-EBEA451DE4CF@datalinktech.com.au> Thanks Sylvain, I had a suspicion that Applicative might be applicable ( ;-) ) I like Tom Ellis? suggestions but I might try this out. Jeff > On 16 Apr 2015, at 6:24 pm, Sylvain Henry wrote: > > I don't think you need to update record fields of a blank record: you > can create the record using applicative operators instead. Something > like: > > parseDevicePLData :: Bool -> Get PayloadData > parseDevicePLData hasEv = do > rawEvId <- if hasEv then getWord8 else return 0 -- I guessed the 0 value > let evId = toEnum (fromIntegral rawEvId .&. 0x7f) > let statusFlag = testBit rawEvId 7 > mask <- getWord16be > > let parseMaybe e p = if testBit mask (fromEnum e) then Just <$> p > else return Nothing > > DevicePL hasEv statusFlag evId mask > <$> parseMaybe D.GPS parseDeviceGPSData > <*> parseMaybe D.GSM parseDeviceGSMData > <*> parseMaybe D.COT parseDeviceCotData > <*> ... > > parseDevicePL :: Bool -> Get Payload > parseDevicePL hasEv = do > ts <- parseTimestamp > P.Payload "" (Just ts) <$> parseDevicePLData hasEv > > Then you can lift these "parsers" into you Parser monad only when you need it. > > -- Sylvain > > 2015-04-16 8:37 GMT+02:00 Jeff : >> Thanks Tom, David and Claude for your replies. >> >> >> >>> On 16 Apr 2015, at 4:03 pm, Tom Ellis wrote: >>> >>> The first thing you should do is define >>> >>> parseDeviceGPSDataOf constructor parser setField = >>> ( \pl' -> let pld = P.payloadData pl' in >>> if testBit mdm ( fromEnum constructor ) >>> then >>> parser >>= >>> ( \s -> return ( pl' { P.payloadData = setField pld (Just s) } } ) ) >>> else >>> return pl' ) >>> >>> and your chain of binds will become >>> >>> setgpsData pld = pld { P.gpsData = Just s } >>> ... 
>>> >>> parseDeviceDataOf D.GPS parseDeviceGPSData setgpsData >>= >>> parseDeviceDataOf D.GSM parseDeviceDSMData setgsmData >>= >>> parseDeviceDataOf D.COT parseDeviceCOTData setcotData >>= >>> ... >>> >>> Then I would probably write >>> >>> deviceSpecs = [ (D.GPS, parseDeviceGPSData, setgpsData) >>> , (D.GSM, parseDeviceDSMData, setgsmData) >>> , (D.COT, parseDeviceCOTData, setcotData) ] >>> >>> and turn the chain of binds into a fold. >>> >> >> >> I?ll do as you have suggested Tom. Thanks. >> >> Jeff >> >> _______________________________________________ >> Haskell-Cafe mailing list >> Haskell-Cafe at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe From trebla at vex.net Fri Apr 17 00:35:19 2015 From: trebla at vex.net (Albert Y. C. Lai) Date: Thu, 16 Apr 2015 20:35:19 -0400 Subject: [Haskell-cafe] Execution order in IO In-Reply-To: <9038cf9603619d4466c4e95aed1ccde4.squirrel@mail.jschneider.net> References: <9038cf9603619d4466c4e95aed1ccde4.squirrel@mail.jschneider.net> Message-ID: <55305547.3050003@vex.net> On 2015-04-15 05:07 AM, Jon Schneider wrote: > With lazy evaluation where is it written that if you write things with no > dependencies with a "do" things will be done in order ? Or isn't it ? > > Is it a feature of the language we're supposed to accept ? It is an axiomatic feature. It is unfortunately that the Haskell Report is only half-explicit on this. But here it goes. In Chapter 7, opening: "The order of evaluation of expressions in Haskell is constrained only by data dependencies; an implementation has a great deal of freedom in choosing this order. Actions, however, must be ordered in a well-defined manner for program execution ? and I/O in particular ? to be meaningful. Haskell?s I/O monad provides the user with a way to specify the sequential chaining of actions, and an implementation is obliged to preserve this order." It does not say clearly how you specify an order, but it is going to be the >>= operator. For example, main = getLine >>= \_ -> putStrLn "bye" specifies to stall for your input, and then, to tell you "bye". In that order. (Perform an experiment to confirm or refute it!) * It stalls for your input, even if your input is not needed. * It tells you "bye", even if you don't need to hear it. * And it stalls for your input before outputting, not the other way round. There is no laziness or optimizer re-ordering for this. "An implementation is obliged to preserve the order." In the rest of Chapter 7, several I/O actions from the library are described. A few are decribed as "read lazily" --- these are in fact the odd men out who postpone inputting, not the common case. The common case, where it does not say "read lazily", is to grab input here-and-now and produce output here-and-now. Lastly, throughout the Haskell Report, apart from the few I/O actions that "read lazily", there is no other laziness specified. That is, lazy evaluation is *not* specified. "An implementation has a great deal of freedom in choosing this order." 
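To tie this back to the original question: the do-notation being asked about is nothing more than syntactic sugar for the >>= chaining described above, so the ordering guarantee applies to it directly. A minimal illustration of the standard desugaring (independent of any particular implementation):

```
main :: IO ()
main = do
  a <- getLine
  b <- getLine
  putStrLn (b ++ a)

-- The do-block desugars to this explicit (>>=) chain; it is the chain that an
-- implementation is obliged to run in order, which is why 'a' always receives
-- the first line of input and 'b' the second.
mainDesugared :: IO ()
mainDesugared =
  getLine >>= \a ->
  getLine >>= \b ->
  putStrLn (b ++ a)
```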
From david.feuer at gmail.com Fri Apr 17 01:06:51 2015 From: david.feuer at gmail.com (David Feuer) Date: Thu, 16 Apr 2015 21:06:51 -0400 Subject: [Haskell-cafe] Execution order in IO In-Reply-To: <3d77ffb3686ac509e60b0c415bb835b0.squirrel@mail.jschneider.net> References: <9038cf9603619d4466c4e95aed1ccde4.squirrel@mail.jschneider.net> <552E6238.7020308@gmail.com> <3d77ffb3686ac509e60b0c415bb835b0.squirrel@mail.jschneider.net> Message-ID: Others have already discussed this in terms of GHC's model of IO, but as Tom Ellis indicates, this model is a bit screwy, and not really the best way to think about it. I think it is much more useful to think of it in terms of a "free monad". That is, think about the `IO` type as a *data structure*. An `IO a` value is a sort of recipe for producing a value of type `a`. That is, data IO :: * -> * where ReturnIO :: a -> IO a BindIO :: IO a -> (a -> IO b) -> IO b HPutStr :: Handle -> String -> IO () HGetStr :: Handle -> IO String .... And then think about the runtime system as an interpreter whose job is to run the programs represented by these IO values. Perhaps I need to be more specific. main = do a <- getLine b <- getLine Can we say "a" absolutely always receives the first line of input and if so what makes this the case rather than "b" receiving it ? Or do things need to be slightly more complicated to achieve this ? Sorry it's just the engineer in me. I think once I've got this clear I'll be happy to move on. Jon _______________________________________________ Haskell-Cafe mailing list Haskell-Cafe at haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe -------------- next part -------------- An HTML attachment was scrubbed... URL: From erkokl at gmail.com Fri Apr 17 01:13:25 2015 From: erkokl at gmail.com (Levent Erkok) Date: Thu, 16 Apr 2015 18:13:25 -0700 Subject: [Haskell-cafe] Execution order in IO In-Reply-To: References: <9038cf9603619d4466c4e95aed1ccde4.squirrel@mail.jschneider.net> <552E6238.7020308@gmail.com> <3d77ffb3686ac509e60b0c415bb835b0.squirrel@mail.jschneider.net> Message-ID: Simon PJ's "Tackling the awkward squad" has an excellent (and highly readable) account of IO, if you want a more precise treatment. (It also covers concurrency, exceptions, and FFI to a degree.) http://research.microsoft.com/en-us/um/people/simonpj/papers/marktoberdorf/ It's hard to choose a favorite amongst Simon's writings, but this one stands out in my opinion in its lucidity, and how clear it makes these "awkward" parts of Haskell, without giving up any rigor. -Levent. On Thu, Apr 16, 2015 at 6:06 PM, David Feuer wrote: > Others have already discussed this in terms of GHC's model of IO, but as > Tom Ellis indicates, this model is a bit screwy, and not really the best > way to think about it. I think it is much more useful to think of it in > terms of a "free monad". That is, think about the `IO` type as a *data > structure*. An `IO a` value is a sort of recipe for producing a value of > type `a`. That is, > > data IO :: * -> * where > ReturnIO :: a -> IO a > BindIO :: IO a -> (a -> IO b) -> IO b > HPutStr :: Handle -> String -> IO () > HGetStr :: Handle -> IO String > .... > > And then think about the runtime system as an interpreter whose job is to > run the programs represented by these IO values. > Perhaps I need to be more specific. > > main = do > a <- getLine > b <- getLine > > Can we say "a" absolutely always receives the first line of input and if > so what makes this the case rather than "b" receiving it ? 
Or do things > need to be slightly more complicated to achieve this ? > > Sorry it's just the engineer in me. I think once I've got this clear I'll > be happy to move on. > > Jon > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ivan.miljenovic at gmail.com Fri Apr 17 01:23:22 2015 From: ivan.miljenovic at gmail.com (Ivan Lazar Miljenovic) Date: Fri, 17 Apr 2015 11:23:22 +1000 Subject: [Haskell-cafe] Graph diagram tools? In-Reply-To: References: Message-ID: Wow, you really have resurrected an old thread! On 17 April 2015 at 03:12, Ivan Zakharyaschev wrote: > Hi, > > I have some feedback on the API of the graphviz library's monadic API > (resulting form my explorations written down at > http://mathoverflow.net/a/203099/13991 ). > > Le jeudi 23 juin 2011 04:38:21 UTC+4, Ivan Lazar Miljenovic a ?crit : >> >> On 23 June 2011 02:48, Stephen Tetley wrote: >> > Or Andy Gill's Dotgen - simple and stable: >> > >> > http://hackage.haskell.org/package/dotgen >> >> Within the next month, I should hopefully finally finish the new >> version of graphviz. Various improvements include: >> >> As such, I would greatly appreciate knowing what it is that makes you >> >> want to use a different library (admittedly the graphviz API isn't as >> stable as the others, but that's because I keep trying to improve it, >> and typically state in the Changelog exactly what has changed). >> >> > > ### graphviz Haskell library and other ones > > An alternative to "graphviz" Haskell package mentioned in > [haskell-cafe](https://groups.google.com/d/msg/haskell-cafe/ZfZaw2E9a18/xZ0OeHCGzVgJ) > is [dotgen](http://hackage.haskell.org/package/dotgen). > > In [a > follow-up](https://groups.google.com/d/msg/haskell-cafe/ZfZaw2E9a18/9P-dazcd0FsJ) > to the post mentioning `dotgen`, the author of graphviz gives some > comparison between them (and other similar Haskell libs). I assume his > "plans" (about a monadic interface) have been implemented already: Yup: http://hackage.haskell.org/package/graphviz-2999.17.0.2/docs/Data-GraphViz-Types-Monadic.html > >> Within the next month, I should hopefully finally finish the new >> version of graphviz. Various improvements include: >> >> ... >> >> * A Dot graph representation based loosely upon **dotgen**'s monadic >> interface (with Andy's blessing) but with the various Attributes being >> used rather than (String, String). I think I'm going to be able to >> make it such that you can define a graph using the monadic interface >> that will almost look identical to actual Dot code. >> >> ... >> >> I would like to stress to people considering using other bindings to >> Graphviz/Dot (such as **dotgen**, language-dot, or their own >> cobbled-together interface): be very careful about quoting, etc. I >> have spent a _lot_ of time checking how to properly escape different >> values and ensuring correctness under the hood (i.e. there is no need >> to pre-escape your Text/String values; graphviz will do that for you >> when generating the actual Dot code). This, after all, is the point >> of having existing libraries rather than rolling your own each time. > > Both points are related. 
(So, graphviz's monadic iterface is a safer > improvement upon dotgen's one.) > > ### Considering dotgen vs graphviz closer > > But looking into the examples, I see that `dotgen` can use "Haskell > ids" to identify created nodes, whereas in graphviz's monad (see the > example above) one must supply extra strings as the unique ids (by > which we refer to the nodes). I used Strings as an example, as I was directly converting an existing piece of Dot code; the original can be found here: http://hackage.haskell.org/package/graphviz-2999.17.0.2/docs/Data-GraphViz-Types.html But, you can use any type you like for the node identifiers, as long as you make them an instance of the PrintDot class. That's where the `n` in the `Dot n` type comes in. > > I like the first approach more ("Haskell ids"). I admittedly don't have any ability in graphviz to create new identifiers for you. I could (just add a StateT to the internal monadic stack which keeps track of the next unused node identifier) but I think that would _reduce_ the flexibility of being able to use your own type (it would either only work for `Dot Int`, or even if you could apply a mapping function to use something like `GraphID`, but that has a problem if you have a `Double` with the same value - and hence same textual representation - as your Int). The way I see it, graphviz is usually used for converting existing Haskell values into Dot code and then processing with dot, neato, etc. the Monadic interface exists so that you can still use the library for static pre-specified graphs (I wrote the module for a specific use case, but in practice found it not as useful as I thought it would be as I typically don't have a need for static graphs in my Haskell code). > > Cf. dotgen (from > ): > > module Main (main) where > import Text.Dot > -- data Animation = Start > src, box, diamond :: String -> Dot NodeId > src label = node $ [ ("shape","none"),("label",label) ] > box label = node $ [ ("shape","box"),("style","rounded"),("label",label) > ] > diamond label = node $ > [("shape","diamond"),("label",label),("fontsize","10")] > main :: IO () > main = putStrLn $ showDot $ do > attribute ("size","40,15") > attribute ("rankdir","LR") > refSpec <- src "S" > tarSpec <- src "T" > same [refSpec,tarSpec] > c1 <- box "S" > c2 <- box "C" > c3 <- box "F" > same [c1,c2,c3] > refSpec .->. c1 > tarSpec .->. c2 > tarSpec .->. c3 > m1 <- box "x" > m2 <- box "y" > ntm <- box "z" > same [m1,m2,ntm] > c1 .->. m1 > c2 .->. m2 > xilinxSynthesis <- box "x" > c3 .->. xilinxSynthesis > gns <- box "G" > xilinxSynthesis .->. gns > gns .->. ntm > ecs <- sequence > [ diamond "E" > , diamond "E" > , diamond "Eq" > ] > same ecs > m1 .->. (ecs !! 0) > m1 .->. (ecs !! 1) > m2 .->. (ecs !! 0) > m2 .->. (ecs !! 2) > ntm .->. (ecs !! 1) > ntm .->. (ecs !! 2) > _ <- sequence [ do evidence <- src "EE" > n .->. evidence > | n <- ecs > ] > edge refSpec tarSpec > [("label","Engineering\nEffort"),("style","dotted")] > () <- scope $ do v1 <- box "Hello" > v2 <- box "World" > v1 .->. v2 > (x,()) <- cluster $ > do v1 <- box "Hello" > v2 <- box "World" > v1 .->. v2 > -- x .->. m2 > -- for hpc > () <- same [x,x] > v <- box "XYZ" > v .->. v > () <- attribute ("rankdir","LR") > let n1 = userNodeId 1 > let n2 = userNodeId (-1) > () <- n1 `userNode` [ ("shape","box")] > n1 .->. 
n2 > _ <- box "XYZ" > _ <- box "(\n\\n)\"(/\\)" > netlistGraph (\ a -> [("label","X" ++ show a)]) > (\ a -> [succ a `mod` 10,pred a `mod` 10]) > [ (n,n) | n <- [0..9] :: [Int] ] > return () > My preference - and hence overall design with graphviz - is that you would generate the graph first, and _then_ convert it to a Dot representation en masse. > > Cf. graphviz with string ids: > > A short example of the monadic notation from [the > documentation](http://hackage.haskell.org/package/graphviz-2999.16.0.0/docs/Data-GraphViz-Types-Monadic.html): That version is a tad out of date, but shouldn't affect this. > > digraph (Str "G") $ do > > cluster (Int 0) $ do > graphAttrs [style filled, color LightGray] > nodeAttrs [style filled, color White] > "a0" --> "a1" > "a1" --> "a2" > "a2" --> "a3" > graphAttrs [textLabel "process #1"] > > cluster (Int 1) $ do > nodeAttrs [style filled] > "b0" --> "b1" > "b1" --> "b2" > "b2" --> "b3" > graphAttrs [textLabel "process #2", color Blue] > > "start" --> "a0" > "start" --> "b0" > "a1" --> "b3" > "b2" --> "a3" > "a3" --> "end" > "b3" --> "end" > > node "start" [shape MDiamond] > node "end" [shape MSquare] > > Thanks for the packages, and best wishes, > Ivan Z. -- Ivan Lazar Miljenovic Ivan.Miljenovic at gmail.com http://IvanMiljenovic.wordpress.com From hawu.bnu at gmail.com Fri Apr 17 02:11:10 2015 From: hawu.bnu at gmail.com (Jean Lopes) Date: Thu, 16 Apr 2015 19:11:10 -0700 (PDT) Subject: [Haskell-cafe] cabal install glade In-Reply-To: <552F65B3.6010004@gmail.com> References: <9992f880-81e3-4041-8307-3b5857687c97@googlegroups.com> <552F65B3.6010004@gmail.com> Message-ID: Still no success...I am missing some very basic things probably.. Em quinta-feira, 16 de abril de 2015 04:33:20 UTC-3, Zilin Chen escreveu: > > Hi Jean, > > Simply do `$ cabal sandbox add-source ' and then > `$ cabal install --only-dependencies' as normal. I think it should work. > > Cheers, > Zilin > > > On 15/04/15 22:01, Jean Lopes wrote: > > I will try to use your branch before going back to GHC 7.8... > > But, how exactly should I do that ? > Clone your branch; > Build from local source code with cabal ? (I just scrolled this part while > reading cabal tutorials, guess I'll have to take a look now) > What about dependencies ? I should use $ cabal install glade > --only-dependencies and than install glade from your branch ? > > Em quarta-feira, 15 de abril de 2015 05:48:42 UTC-3, Matthew Pickering > escreveu: >> >> Hi Jean, >> >> You can try cloning my branch until a push gets accepted upstream. >> >> https://github.com/mpickering/glade >> >> The fixes to get it working with 7.10 were fairly minimal. >> >> Matt >> >> On Wed, Apr 15, 2015 at 4:33 AM, Jean Lopes wrote: >> > Hello, I am trying to install the Glade package from hackage, and I >> > keep getting exit failure... >> > >> > Hope someone can help me solve it! >> > >> > What I did: >> > $ mkdir ~/haskell/project >> > $ cd ~/haskell/project >> > $ cabal sandbox init >> > $ cabal update >> > $ cabal install alex >> > $ cabal install happy >> > $ cabal install gtk2hs-buildtools >> > $ cabal install gtk #successful until here >> > $ cabal install glade >> > >> > The last statement gave me the following error: >> > >> > $ [1 of 2] Compiling SetupWrapper ( >> > /tmp/cairo-0.12.5.3-5133/cairo-0.12.5.3/SetupWrapper.hs, >> > >> /tmp/cairo-0.12.5.3-5133/cairo-0.12.5.3/dist/dist-sandbox-acbd4b7/setup/SetupWrapper.o >> > ) >> > $ >> > $ /tmp/cairo-0.12.5.3-5133/cairo-0.12.5.3/SetupWrapper.hs:91:17: >> > $ Ambiguous occurrence ?die? 
>> > $ It could refer to either ?Distribution.Simple.Utils.die?, >> > $ imported from >> > ?Distribution.Simple.Utils? at >> > /tmp/cairo-0.12.5.3-5133/cairo-0.12.5.3/SetupWrapper.hs:8:1-32 >> > $ or ?System.Exit.die?, >> > $ imported from ?System.Exit? at >> > /tmp/cairo-0.12.5.3-5133/cairo-0.12.5.3/SetupWrapper.hs:21:1-18 >> > $ Failed to install cairo-0.12.5.3 >> > $ [1 of 2] Compiling SetupWrapper ( >> > /tmp/glib-0.12.5.4-5133/glib-0.12.5.4/SetupWrapper.hs, >> > >> /tmp/glib-0.12.5.4-5133/glib-0.12.5.4/dist/dist-sandbox-acbd4b7/setup/SetupWrapper.o >> > ) >> > $ >> > $ /tmp/glib-0.12.5.4-5133/glib-0.12.5.4/SetupWrapper.hs:91:17: >> > $ Ambiguous occurrence ?die? >> > $ It could refer to either ?Distribution.Simple.Utils.die?, >> > $ imported from >> > ?Distribution.Simple.Utils? at >> > /tmp/glib-0.12.5.4-5133/glib-0.12.5.4/SetupWrapper.hs:8:1-32 >> > $ or ?System.Exit.die?, >> > $ imported from ?System.Exit? at >> > /tmp/glib-0.12.5.4-5133/glib-0.12.5.4/SetupWrapper.hs:21:1-18 >> > $ Failed to install glib-0.12.5.4 >> > $ cabal: Error: some packages failed to install: >> > $ cairo-0.12.5.3 failed during the configure step. The exception was: >> > $ ExitFailure 1 >> > $ gio-0.12.5.3 depends on glib-0.12.5.4 which failed to install. >> > $ glade-0.12.5.0 depends on glib-0.12.5.4 which failed to install. >> > $ glib-0.12.5.4 failed during the configure step. The exception was: >> > $ ExitFailure 1 >> > $ gtk-0.12.5.7 depends on glib-0.12.5.4 which failed to install. >> > $ pango-0.12.5.3 depends on glib-0.12.5.4 which failed to install. >> > >> > Important: You can assume I don't know much. I'm rather new to >> Haskell/cabal >> > _______________________________________________ >> > Haskell-Cafe mailing list >> > Haskel... at haskell.org >> > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> _______________________________________________ >> Haskell-Cafe mailing list >> Haskel... at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> > > > _______________________________________________ > Haskell-Cafe mailing listHaskel... at haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From michael at snoyman.com Fri Apr 17 03:25:17 2015 From: michael at snoyman.com (Michael Snoyman) Date: Fri, 17 Apr 2015 03:25:17 +0000 Subject: [Haskell-cafe] Improvements to package hosting and security In-Reply-To: <20150416220122.GA4126@tatooine> References: <1429176843.25663.31.camel@dunky.localdomain> <1429179169.25663.59.camel@dunky.localdomain> <1429181874.25663.80.camel@dunky.localdomain> <1429185521.25663.103.camel@dunky.localdomain> <20150416220122.GA4126@tatooine> Message-ID: On Fri, Apr 17, 2015 at 1:01 AM Magnus Therning wrote: > On Thu, Apr 16, 2015 at 03:28:10PM +0000, Michael Snoyman wrote: > > Minor update. Some of your points about checking signatures before > > unpacking made me curious about what Git had to offer in these > > circumstances. For those like me who were unaware of the > > functionality, it turns out that Git has the option to reject > > non-signed commits, just run: > > > > git pull --verify-signatures > > > > I've set up the Travis job that pulls from Hackage to sign its > > commits with the GPG key I've attached to this email (fingerprint > > E595 AD42 14AF A6BB 1552 0B23 E40D 74D6 D6CF 60FD). > > Nice one! 
> > One thing I, as a developer of a tool that consumes the Hackage > index[1], would like to see is a bit more meta data, in particular > > - alternative download URLs for the source > - hashes of the source (probably needs to be per URL) > > I thought I saw something about this in the thread, but going through > it again I can't seem to find it. Would this sort of thing also be > included in "improvements to package hosting"? > > /M > > [1]: http://hackage.haskell.org/package/cblrepo > > > My strawman proposal did include the idea of identifying a package via its hash, and then providing redundant URLs for download (some of those URLs possibly being non-HTTP, such as a special URL to refer to contents within a Git repository). But as I keep saying, that was a strawman proposal, not to be taken as a final design. That said, simply adding that information to the 00-index file seems like an easy win. The hashes, at the very least, would fit in well. Michael -------------- next part -------------- An HTML attachment was scrubbed... URL: From michael at snoyman.com Fri Apr 17 03:34:06 2015 From: michael at snoyman.com (Michael Snoyman) Date: Fri, 17 Apr 2015 03:34:06 +0000 Subject: [Haskell-cafe] Improvements to package hosting and security In-Reply-To: References: <1429176843.25663.31.camel@dunky.localdomain> <1429179169.25663.59.camel@dunky.localdomain> <1429181874.25663.80.camel@dunky.localdomain> <1429185521.25663.103.camel@dunky.localdomain> Message-ID: On Thu, Apr 16, 2015 at 4:36 PM Bardur Arantsson wrote: > On 16-04-2015 14:18, Michael Snoyman wrote: > [--snip--] > > I never claimed nor intended to imply that range requests are > non-standard. > > In fact, I'm quite familiar with them, given that I implemented that > > feature of Warp myself! What I *am* claiming as non-standard is using > range > > requests to implement an incremental update protocol of a tar file. Is > > there any prior art to this working correctly? Do you know that web > servers > > will do what you need and server the byte offsets from the uncompressed > tar > > file instead of the compressed tar.gz? > > Why would HTTP servers serve anything other than the raw contents of the > file? You usually need special configuration for that sort of thing, > e.g. mapping based on requested content type. (Which the client should > always supply correctly, regardless.) > > "Dumb" HTTP servers certainly don't do anything weird here. > > There actually is a weird point to browsers and servers around pre-gziped contents, which is what I was trying to get at (but didn't do a clear enough job of doing). There's some ambiguity when sending compressed tarballs as to whether the browser should decompress, for instance. http-client had to implement a workaround for this specifically: https://www.stackage.org/haddock/nightly-2015-04-16/http-client-0.4.11.1/Network-HTTP-Client-Internal.html#v:browserDecompress > [--snip--] > > On the security front: it seems that we have two options here: > > > > 1. Use a widely used piece of software (Git), likely already in use by > the > > vast majority of people reading this mailing list, relied on by countless > > companies and individuals, holding source code for the kernel of likely > > every mail server between my fingertips and the people reading this > email, > > to distribute incremental updates. And as an aside: that software has > built > > in support for securely signing commits and verifying those signatures. 
> > > > I think the point that was being made was that it might not have been > hardened sufficiently against mailicious servers (being much more > complicated than a HTTP client, for good reasons). I honestly don't know > how much such hardening it has received, but I doubt that it's anywhere > close to HTTP clients in general. (As to the HTTP client Cabal uses, I > wouldn't know.) > > AFAIK, neither of these proposals as they stand have anything to do with security against a malicious server. In both cases, we need to simply trust the server to be sending the right data. Using some kind of signing mechanism is a mitigation against that, such as the GPG signatures I added to all-cabal-files. HTTPS from Hackage would help prevent MITM attacks, and having the 00-index file be cryptographically signed would be another (though I don't know what Duncan has planned here). > [--snip--] > > I get that you've been working on this TUF-based system in private for a > > while, and are probably heavily invested already in the solutions you > came > > up with in private. But I'm finding it very difficult to see the > reasoning > > to reinventing wheels that need to reinventing. > > > > That's pretty... uncharitable. Especially given that you also have a > horse in this race. > > (Especially, also considering that your proposal *doesn't* address some > of the vulnerabilities mitigated by the TUF work.) > > > I actually really don't have a horse in this race. It seems like a lot of people missed this from the first email I sent, so to repeat myself: > I wrote up a strawman proposal last week[5] which clearly needs work to be a realistic option. My question is: are people interested in moving forward on this? If there's no interest, and everyone is satisfied with continuing with the current Hackage-central-authority, then we can proceed with having reliable and secure services built around Hackage. But if others- like me- would like to see a more secure system built from the ground up, please say so and let's continue that conversation. My "horse in the race" is a security model that's not around putting all trust in a single entity. Other than that, I'm not invested in any specific direction. Using TUF sounds like a promising idea, but- as I raised in the other thread- I have my concerns. All of that said: the discussion here is about efficient incremental downloads, not package signing. For some reason those two points are getting conflated here. Michael -------------- next part -------------- An HTML attachment was scrubbed... URL: From ruben.astud at gmail.com Fri Apr 17 03:50:40 2015 From: ruben.astud at gmail.com (Ruben Astudillo) Date: Fri, 17 Apr 2015 00:50:40 -0300 Subject: [Haskell-cafe] Recognizing projects schemes Message-ID: <55308310.8020408@gmail.com> I recently came across the following blog-post: http://blog.cleancoder.com/uncle-bob/2015/04/15/DoesOrganizationMatter.html It speaks a bit of simplicity, efficiency and stuff that isn't important. What is important at least to me was the concept of project scheme, summarized on the following phrase: ``And so this gets to the crux of the question that you were really asking. You were asking whether the time required to learn the organization scheme of the system is worth the bother. Learning that organization scheme is hard. Becoming proficient at reading and changing the code within that scheme take time, effort, and practice. 
And that can feel like a waste when you compare it to how simple life was when you only had 100 lines of code.'' And that is something I totally struggle at approaching new projects. The only reason I could understand XMonad for example is because they gave a general overview (thanks) of it on the Developing module. I feel I got a problem of methodology. What approaches are you guys using to understanding new projects schemes on a efficient manner? How long it usually takes you?. Any advices? Thanks in advance. -- Ruben Astudillo. pgp: 0x3C332311 , usala en lo posible :-) Crear un haiku, en diecisiete silabas, es complica... From spam at scientician.net Fri Apr 17 04:44:50 2015 From: spam at scientician.net (Bardur Arantsson) Date: Fri, 17 Apr 2015 06:44:50 +0200 Subject: [Haskell-cafe] Improvements to package hosting and security In-Reply-To: References: <1429176843.25663.31.camel@dunky.localdomain> <1429179169.25663.59.camel@dunky.localdomain> <1429181874.25663.80.camel@dunky.localdomain> <1429185521.25663.103.camel@dunky.localdomain> Message-ID: On 17-04-2015 05:34, Michael Snoyman wrote: > AFAIK, neither of these proposals as they stand have anything to do with > security against a malicious server. In both cases, we need to simply trust > the server to be sending the right data. Using some kind of signing > mechanism is a mitigation against that, such as the GPG signatures I added > to all-cabal-files. HTTPS from Hackage would help prevent MITM attacks, and > having the 00-index file be cryptographically signed would be another > (though I don't know what Duncan has planned here). Well, TUF (at at least if fully implemented) can certainly limit the amount of damage that a malicious (read: compromised) server can do. Obviously it can't magically make a malicious server behave like a non-malicious one, but it does prevent e.g. the "serve stale data" trick or Slowloris-for-clients*. (*) By clients knowing up-front, in a secure manner, how much data there is to download. >> [--snip--] >>> I get that you've been working on this TUF-based system in private for a >>> while, and are probably heavily invested already in the solutions you >> came >>> up with in private. But I'm finding it very difficult to see the >> reasoning >>> to reinventing wheels that need to reinventing. >>> >> > > All of that said: the discussion here is about efficient incremental > downloads, not package signing. For some reason those two points are > getting conflated here. I think you might not have been very clear about stating that you were limiting your comments in this subthread to apply only to said mechanism. (Or at least I didn't notice any such statement, but then I might well have missed it.) Another point is: It's often not very useful to talk about things in complete isolation when discussion security systems since there may be non-trivial interplay between the parts -- though TUF tries to limit the amount of interplay (to limit complexity/understandability). 
Not necessary a major concern in this particular subsystem, but see (*) Regards, From spam at scientician.net Fri Apr 17 04:50:50 2015 From: spam at scientician.net (Bardur Arantsson) Date: Fri, 17 Apr 2015 06:50:50 +0200 Subject: [Haskell-cafe] Improvements to package hosting and security In-Reply-To: References: <1429176843.25663.31.camel@dunky.localdomain> <1429179169.25663.59.camel@dunky.localdomain> <1429181874.25663.80.camel@dunky.localdomain> <1429185521.25663.103.camel@dunky.localdomain> Message-ID: On 17-04-2015 05:34, Michael Snoyman wrote: >> I wrote up a strawman proposal last week[5] which clearly needs work to > be a realistic option. My question is: are people interested in moving > forward on this? If there's no interest, and everyone is satisfied with > continuing with the current Hackage-central-authority, then we can proceed > with having reliable and secure services built around Hackage. But if > others- like me- would like to see a more secure system built from the > ground up, please say so and let's continue that conversation. You say "more secure". Against what? What's the threat model? (Again, sorry if I missed it, it's been a long thread.) Yes, I'd definitely like a more "secure system" against many/all of the threats idenfied in e.g. TUF (perhaps even more, if realistic), but it's hard to evaluate a proposal without an explicitly spelled out threat model. This where adopting bits of TUF seems a lot more appealing than a home-brewed model, at least if we can remain confident that those bits actually mitigates the threats that we want covered. Regards, From michael at snoyman.com Fri Apr 17 05:04:37 2015 From: michael at snoyman.com (Michael Snoyman) Date: Fri, 17 Apr 2015 05:04:37 +0000 Subject: [Haskell-cafe] Improvements to package hosting and security In-Reply-To: References: <1429176843.25663.31.camel@dunky.localdomain> <1429179169.25663.59.camel@dunky.localdomain> <1429181874.25663.80.camel@dunky.localdomain> <1429185521.25663.103.camel@dunky.localdomain> Message-ID: On Fri, Apr 17, 2015 at 7:51 AM Bardur Arantsson wrote: > On 17-04-2015 05:34, Michael Snoyman wrote: > > >> I wrote up a strawman proposal last week[5] which clearly needs work to > > be a realistic option. My question is: are people interested in moving > > forward on this? If there's no interest, and everyone is satisfied with > > continuing with the current Hackage-central-authority, then we can > proceed > > with having reliable and secure services built around Hackage. But if > > others- like me- would like to see a more secure system built from the > > ground up, please say so and let's continue that conversation. > > You say "more secure". Against what? What's the threat model? (Again, > sorry if I missed it, it's been a long thread.) > > Yes, I'd definitely like a more "secure system" against many/all of the > threats idenfied in e.g. TUF (perhaps even more, if realistic), but it's > hard to evaluate a proposal without an explicitly spelled out threat > model. This where adopting bits of TUF seems a lot more appealing than a > home-brewed model, at least if we can remain confident that those bits > actually mitigates the threats that we want covered. 
> > > Instead of copy-pasting bits and pieces of my initial email until the whole thing makes sense, I'll just link to the initial email, which lists some of the security vulnerabilities and gives my disclaimers about my proposal just being a strawman: https://groups.google.com/d/msg/commercialhaskell/PTbC0p_YFvk/8XqS8wDxgqEJ Note that I never intended that list to be exhaustive at all! The point is to see if others have security concerns along these lines as well, which seems to be the case. In this thread others and I have raised a number of other security threats. TUF raises additional threats as well. I've asked Duncan[1] about how TUF would address some specific concerns I raised (such as the Hackage server being compromised), but I haven't heard a response. My guess is that TUF will end up being a necessary but insufficient part of a solution here, but I unfortunately don't know enough about Well Typed's intended implementation to say more than that. Michael [1] Both in the mailing list and on Reddit: http://www.reddit.com/r/haskell/comments/32sezy/ongoing_work_to_improve_hackage_security/cqeco3q -------------- next part -------------- An HTML attachment was scrubbed... URL: From zilinc.dev at gmail.com Fri Apr 17 05:33:52 2015 From: zilinc.dev at gmail.com (Zilin Chen) Date: Fri, 17 Apr 2015 15:33:52 +1000 Subject: [Haskell-cafe] cabal install glade In-Reply-To: References: <9992f880-81e3-4041-8307-3b5857687c97@googlegroups.com> <552F65B3.6010004@gmail.com> Message-ID: <55309B40.5010601@gmail.com> Do you still get the same errors? I think the "Sandboxes: basic usage" section in [0] is what you'd follow. [0] https://www.haskell.org/cabal/users-guide/installing-packages.html#sandboxes-advanced-usage On 17/04/15 12:11, Jean Lopes wrote: > Still no success...I am missing some very basic things probably.. > > On Thursday, 16 April 2015 at 04:33:20 UTC-3, Zilin Chen wrote: > > Hi Jean, > > Simply do `$ cabal sandbox add-source ' > and then `$ cabal install --only-dependencies' as normal. I think > it should work. > > Cheers, > Zilin > > > On 15/04/15 22:01, Jean Lopes wrote: >> I will try to use your branch before going back to GHC 7.8... >> >> But, how exactly should I do that ? >> Clone your branch; >> Build from local source code with cabal ? (I just scrolled this >> part while reading cabal tutorials, guess I'll have to take a >> look now) >> What about dependencies ? I should use $ cabal install glade >> --only-dependencies and then install glade from your branch ? >> >> On Wednesday, 15 April 2015 at 05:48:42 UTC-3, Matthew >> Pickering wrote: >> >> Hi Jean, >> >> You can try cloning my branch until a push gets accepted >> upstream. >> >> https://github.com/mpickering/glade >> >> >> The fixes to get it working with 7.10 were fairly minimal. >> >> Matt >> >> On Wed, Apr 15, 2015 at 4:33 AM, Jean Lopes >> wrote: >> > Hello, I am trying to install the Glade package from >> hackage, and I >> > keep getting exit failure... >> > >> > Hope someone can help me solve it! 
>> > >> > What I did: >> > $ mkdir ~/haskell/project >> > $ cd ~/haskell/project >> > $ cabal sandbox init >> > $ cabal update >> > $ cabal install alex >> > $ cabal install happy >> > $ cabal install gtk2hs-buildtools >> > $ cabal install gtk #successful until here >> > $ cabal install glade >> > >> > The last statement gave me the following error: >> > >> > $ [1 of 2] Compiling SetupWrapper ( >> > /tmp/cairo-0.12.5.3-5133/cairo-0.12.5.3/SetupWrapper.hs, >> > >> /tmp/cairo-0.12.5.3-5133/cairo-0.12.5.3/dist/dist-sandbox-acbd4b7/setup/SetupWrapper.o >> > ) >> > $ >> > $ >> /tmp/cairo-0.12.5.3-5133/cairo-0.12.5.3/SetupWrapper.hs:91:17: >> > $ Ambiguous occurrence ?die? >> > $ It could refer to either ?Distribution.Simple.Utils.die?, >> > $ imported from >> > ?Distribution.Simple.Utils? at >> > /tmp/cairo-0.12.5.3-5133/cairo-0.12.5.3/SetupWrapper.hs:8:1-32 >> > $ or ?System.Exit.die?, >> > $ imported from ?System.Exit? at >> > /tmp/cairo-0.12.5.3-5133/cairo-0.12.5.3/SetupWrapper.hs:21:1-18 >> > $ Failed to install cairo-0.12.5.3 >> > $ [1 of 2] Compiling SetupWrapper ( >> > /tmp/glib-0.12.5.4-5133/glib-0.12.5.4/SetupWrapper.hs, >> > >> /tmp/glib-0.12.5.4-5133/glib-0.12.5.4/dist/dist-sandbox-acbd4b7/setup/SetupWrapper.o >> > ) >> > $ >> > $ /tmp/glib-0.12.5.4-5133/glib-0.12.5.4/SetupWrapper.hs:91:17: >> > $ Ambiguous occurrence ?die? >> > $ It could refer to either ?Distribution.Simple.Utils.die?, >> > $ imported from >> > ?Distribution.Simple.Utils? at >> > /tmp/glib-0.12.5.4-5133/glib-0.12.5.4/SetupWrapper.hs:8:1-32 >> > $ or ?System.Exit.die?, >> > $ imported from ?System.Exit? at >> > /tmp/glib-0.12.5.4-5133/glib-0.12.5.4/SetupWrapper.hs:21:1-18 >> > $ Failed to install glib-0.12.5.4 >> > $ cabal: Error: some packages failed to install: >> > $ cairo-0.12.5.3 failed during the configure step. The >> exception was: >> > $ ExitFailure 1 >> > $ gio-0.12.5.3 depends on glib-0.12.5.4 which failed to >> install. >> > $ glade-0.12.5.0 depends on glib-0.12.5.4 which failed to >> install. >> > $ glib-0.12.5.4 failed during the configure step. The >> exception was: >> > $ ExitFailure 1 >> > $ gtk-0.12.5.7 depends on glib-0.12.5.4 which failed to >> install. >> > $ pango-0.12.5.3 depends on glib-0.12.5.4 which failed to >> install. >> > >> > Important: You can assume I don't know much. I'm rather new >> to Haskell/cabal >> > _______________________________________________ >> > Haskell-Cafe mailing list >> > Haskel... at haskell.org >> > >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> >> _______________________________________________ >> Haskell-Cafe mailing list >> Haskel... at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> >> >> >> >> _______________________________________________ >> Haskell-Cafe mailing list >> Haskel... at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From spam at scientician.net Fri Apr 17 05:38:24 2015 From: spam at scientician.net (Bardur Arantsson) Date: Fri, 17 Apr 2015 07:38:24 +0200 Subject: [Haskell-cafe] Improvements to package hosting and security In-Reply-To: References: <1429176843.25663.31.camel@dunky.localdomain> <1429179169.25663.59.camel@dunky.localdomain> <1429181874.25663.80.camel@dunky.localdomain> <1429185521.25663.103.camel@dunky.localdomain> Message-ID: On 17-04-2015 07:04, Michael Snoyman wrote: > https://groups.google.com/d/msg/commercialhaskell/PTbC0p_YFvk/8XqS8wDxgqEJ > > Note that I never intended that list to be exhaustive at all! The point is > to see if others have security concerns along these lines as well, seems to > be the case. Ok, that's fair enough. And: yes! :) FWIW, I think what people have been asking for is exactly *details*, so that the proposal can be evaluated properly. (I realize that this is a non-trivial amount of work.). For example, a good start would be to evaluate your strawman proposal against the TUF criteria and see where it needs to be fleshed out/beefed up, etc. > > I've asked Duncan[1] about how TUF would address some specific concerns I > raised (such as Hackage server being compromised), but I haven't heard a > response. My guess is that TUF will ended up being a necessary but > insufficient part of a solution here, but I unfortunately don't know enough > about Well Typed's intended implementation to say more than that. > > Michael > > [1] Both in the mailing list and on Reddit: > http://www.reddit.com/r/haskell/comments/32sezy/ongoing_work_to_improve_hackage_security/cqeco3q > I'm reminded of SPJs usual request for a wiki page *with details* discussing pros/cons of all the proposals for new GHC features. Might it be time to start such a page? (Of course this is not meant to imply any particular *rush* per se, but this is obviously becoming a growing concern in the community.) Regards, From edwards.benj at gmail.com Fri Apr 17 06:25:21 2015 From: edwards.benj at gmail.com (Benjamin Edwards) Date: Fri, 17 Apr 2015 06:25:21 +0000 Subject: [Haskell-cafe] Scala at position 25 of the Tiobe index, Haskell dropped below 50 In-Reply-To: References: Message-ID: I have to concede the industry buzz is strong with Scala (currently Scala is my job). The JVM does have a role to play, but mostly in the access it gives you to the Java ecosystem. It's a shame because I'd much rather be using haskell but when you have 10 employees and your business isn't making the platform you can't afford to make the platform. We have nothing like spark or finagle ready to go as far as I am aware. Something finagle like (metrics + pluggable load balancing + platform agnostic + autoscaling (via zk / mdns)) would be huge. I am trying to find the time to hack on these sorts of projects and failing. Ben On Thu, 16 Apr 2015 11:31 pm Gautier DI FOLCO wrote: > 2015-04-16 21:41 GMT+00:00 Henk-Jan van Tuyl : > >> >> L.S., >> >> From the Tiobe index page[0] of this month: >> Another interesting move this month concerns Scala. The functional >> programming language jumps to position 25 after having been between >> position 30 and 50 for many years. Scala seems to be ready to enter the top >> 20 for the first time in history. >> >> Haskell dropped from the top 50 last month and hasn't come back. I >> suppose, if Haskell compiled to JVM, Haskell would have a much wider >> audience. 
>> >> Regards, >> Henk-Jan van Tuyl >> >> >> [0] http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html >> >> >> -- >> Folding at home >> What if you could share your unused computer power to help find a cure? >> In just 5 minutes you can join the world's biggest networked computer and >> get us closer sooner. Watch the video. >> http://folding.stanford.edu/ >> >> >> http://Van.Tuyl.eu/ >> http://members.chello.nl/hjgtuyl/tourdemonad.html >> Haskell programming >> -- >> _______________________________________________ >> Haskell-Cafe mailing list >> Haskell-Cafe at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> > > First of all do you know how this ranking is build? It's an agglomeration > of nonsense values which aim to give the language trend. > Maybe Scala has benefited of the JVM or maybe it has profited of the > auto-assigned "functional" because some communities, like the Haskell > community, have heavily worked for that during decades. > The only side-effect of a growth in this ranking will attract some > addicted to Resum? Driven-Development. > Popular or not, you can do anything you want with Haskell and no ranking > will change that. > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > -------------- next part -------------- An HTML attachment was scrubbed... URL: From edwards.benj at gmail.com Fri Apr 17 06:29:03 2015 From: edwards.benj at gmail.com (Benjamin Edwards) Date: Fri, 17 Apr 2015 06:29:03 +0000 Subject: [Haskell-cafe] Scala at position 25 of the Tiobe index, Haskell dropped below 50 In-Reply-To: References: Message-ID: On Fri, 17 Apr 2015 7:25 am Benjamin Edwards wrote: Sorry, that should read protocol agnostic! I have to concede the industry buzz is strong with Scala (currently Scala is my job). The JVM does have a role to play, but mostly in the access it gives you to the Java ecosystem. It's a shame because I'd much rather be using haskell but when you have 10 employees and your business isn't making the platform you can't afford to make the platform. We have nothing like spark or finagle ready to go as far as I am aware. Something finagle like (metrics + pluggable load balancing + platform agnostic + autoscaling (via zk / mdns)) would be huge. I am trying to find the time to hack on these sorts of projects and failing. Ben -------------- next part -------------- An HTML attachment was scrubbed... URL: From wrwills at gmail.com Fri Apr 17 06:35:54 2015 From: wrwills at gmail.com (Robert Wills) Date: Fri, 17 Apr 2015 07:35:54 +0100 Subject: [Haskell-cafe] Scala at position 25 of the Tiobe index, Haskell dropped below 50 In-Reply-To: References: Message-ID: If the often heard statement that Scala is a gateway drug for Haskell is true, then Haskell should be following Scala up the rankings shortly. On Fri, Apr 17, 2015 at 7:29 AM, Benjamin Edwards wrote: > On Fri, 17 Apr 2015 7:25 am Benjamin Edwards > wrote: > > > Sorry, that should read protocol agnostic! > > > I have to concede the industry buzz is strong with Scala (currently > Scala is my job). The JVM does have a role to play, but mostly in the > access it gives you to the Java ecosystem. It's a shame because I'd much > rather be using haskell but when you have 10 employees and your business > isn't making the platform you can't afford to make the platform. We have > nothing like spark or finagle ready to go as far as I am aware. 
Something > finagle like (metrics + pluggable load balancing + platform agnostic + > autoscaling (via zk / mdns)) would be huge. I am trying to find the time to > hack on these sorts of projects and failing. > > Ben > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dct25-561bs at mythic-beasts.com Fri Apr 17 06:54:17 2015 From: dct25-561bs at mythic-beasts.com (David Turner) Date: Fri, 17 Apr 2015 07:54:17 +0100 Subject: [Haskell-cafe] Scala at position 25 of the Tiobe index, Haskell dropped below 50 In-Reply-To: References: Message-ID: If you like arbitrary rankings, you might like http://githut.info/ which a colleage pointed me at recently. In 2014Q4, in terms of number of active Github repositories, Scala is #19 and Haskell is #21 and yet in terms of open issues, Scala is #3 and Haskell is #23. Make of that what you will. On 16 April 2015 at 22:41, Henk-Jan van Tuyl wrote: > > L.S., > > From the Tiobe index page[0] of this month: > Another interesting move this month concerns Scala. The functional > programming language jumps to position 25 after having been between position > 30 and 50 for many years. Scala seems to be ready to enter the top 20 for > the first time in history. > > Haskell dropped from the top 50 last month and hasn't come back. I suppose, > if Haskell compiled to JVM, Haskell would have a much wider audience. > > Regards, > Henk-Jan van Tuyl > > > [0] http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html > > > -- > Folding at home > What if you could share your unused computer power to help find a cure? In > just 5 minutes you can join the world's biggest networked computer and get > us closer sooner. Watch the video. > http://folding.stanford.edu/ > > > http://Van.Tuyl.eu/ > http://members.chello.nl/hjgtuyl/tourdemonad.html > Haskell programming > -- > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe From nikita at karetnikov.org Fri Apr 17 06:56:06 2015 From: nikita at karetnikov.org (Nikita Karetnikov) Date: Fri, 17 Apr 2015 09:56:06 +0300 Subject: [Haskell-cafe] Improvements to package hosting and security In-Reply-To: (Bardur Arantsson's message of "Fri, 17 Apr 2015 07:38:24 +0200") References: <1429176843.25663.31.camel@dunky.localdomain> <1429179169.25663.59.camel@dunky.localdomain> <1429181874.25663.80.camel@dunky.localdomain> <1429185521.25663.103.camel@dunky.localdomain> Message-ID: <87bninfdkp.fsf@karetnikov.org> > I'm reminded of SPJs usual request for a wiki page *with details* > discussing pros/cons of all the proposals for new GHC features. Might it > be time to start such a page? (Of course this is not meant to imply any > particular *rush* per se, but this is obviously becoming a growing > concern in the community.) I think it must be the first step. Otherwise, it's hard to evaluate the proposals. It would be great if both designs could be compared side by side. I'd suggest to create a file in the commercial haskell repo (so that authors of both designs (and others) could freely edit it) with a list of things that people care about, which should be as specific as possible. For example: | FPComplete | Well-Typed | --------------------------------------------------------------- Design document | https://... 
| https://... | Does this design protect from these attacks? | FPComplete | Well-Typed | -------------------------------------------------------------- Attack1 | yes | no | Attack1Comment | because of so and so | because of so and so | Attack2 | no | yes | Attack2Comment | because of so and so | because of so and so | Attack3 | no | no | Attack3Comment | because of so and so | because of so and so | ... Features: | | FPComplete | Well-Typed | --|---------------------------------------------------------- 1 |Allows for third-party mirrors | yes | yes | 2 |Comment regarding 1 | ... | ... | Estimated effort: | | FPComplete | Well-Typed | --|----------------------------------------------------------------- 1 | Tools required | git, ... | ... | 2 | Tools that need to be changed | ... | ... | 3 | Time required for 2 (hours) | ... | ... | 4 | Size of changes required for 2 (LOC) | ... | ... | Possibly with comments, too. From neto at netowork.me Fri Apr 17 07:47:50 2015 From: neto at netowork.me (Ernesto Rodriguez) Date: Fri, 17 Apr 2015 09:47:50 +0200 Subject: [Haskell-cafe] Coding katas/dojos and functional programming introduction In-Reply-To: References: Message-ID: I agree with the approach. I have never done a intro to FP but I think parser combinators couod be a good start. U begin by showing the high level interface, I mean learning how to write simple parsers using for instance Parsec is.easy and one immediately sees the elegance and simplicity that approach has over say yacc. Then u guide them on implementimg a own basic parser combinator library which in my opinion is not out of this world with a good guideance. That way they see that such a powerful tool aint dark magic with FP. Anyways good luck with your workshop. Cheers, N. On Apr 16, 2015 7:25 AM, "Raphael Gaschignard" wrote: > Is this aimed for FP beginners who already know something like Java? I > think the thing to do here would be to come up with some tasks that are > genuinely tedious to write in a Java-esque (or Pascal-like) language, and > then present how FP solutions are simpler. > > I'm of the opinion that FP succeeds not just because of the tenants of > FP, but because most of the languages are terse and have code that is > "pretty". Showing some quick things involving quick manipulation of tuples > (basically a bunch of list processing) could show that things don't have to > be complicated with a bunch of anonymous classes. > > Anyways, I think the essential thing is to present a problem that they, > as programmers, have already experienced. The big one being "well these two > functions are *almost* the same but the inner-part of the function has > different logic" (basically, looking at things like map). Open up the world > of possibilities. It's not things that are only possible in Haskell/Scheme > (after all, all of these languages are turing complete so..), but they're > so much easier to write in these languages. > > On Thu, Apr 16, 2015 at 7:41 AM Mike Meyer wrote: > >> On Wed, Apr 15, 2015 at 5:28 PM, Gautier DI FOLCO < >> gautier.difolco at gmail.com> wrote: >> >>> 2015-04-15 19:15 GMT+00:00 Mike Meyer : >>> >>>> Well, functional programming is very much like an elephant. >>>> >>> >>> I have the same thought about OOP some years ago, them I discovered then >>> first meaning of it and all was so clear and simple. My goal isn't to teach >>> the full power of FP, my goal is to give them inspiration, to suggest that >>> there is a wider world to explore. 
>>> >> >> Just clarify, this is a reference to the fable of the blind men and the >> elephant. What you think it is like will depend on how you approach it. >> _______________________________________________ >> Haskell-Cafe mailing list >> Haskell-Cafe at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From michael at snoyman.com Fri Apr 17 08:17:52 2015 From: michael at snoyman.com (Michael Snoyman) Date: Fri, 17 Apr 2015 08:17:52 +0000 Subject: [Haskell-cafe] Improvements to package hosting and security In-Reply-To: <87bninfdkp.fsf@karetnikov.org> References: <1429176843.25663.31.camel@dunky.localdomain> <1429179169.25663.59.camel@dunky.localdomain> <1429181874.25663.80.camel@dunky.localdomain> <1429185521.25663.103.camel@dunky.localdomain> <87bninfdkp.fsf@karetnikov.org> Message-ID: On Fri, Apr 17, 2015 at 9:56 AM Nikita Karetnikov wrote: > > I'm reminded of SPJs usual request for a wiki page *with details* > > discussing pros/cons of all the proposals for new GHC features. Might it > > be time to start such a page? (Of course this is not meant to imply any > > particular *rush* per se, but this is obviously becoming a growing > > concern in the community.) > > I think it must be the first step. Otherwise, it's hard to evaluate the > proposals. It would be great if both designs could be compared side by > side. I'd suggest to create a file in the commercial haskell repo (so > that authors of both designs (and others) could freely edit it) with a > list of things that people care about, which should be as specific as > possible. For example: > > | FPComplete | Well-Typed | > --------------------------------------------------------------- > Design document | https://... | https://... | > > Does this design protect from these attacks? > > | FPComplete | Well-Typed | > -------------------------------------------------------------- > Attack1 | yes | no | > Attack1Comment | because of so and so | because of so and so | > Attack2 | no | yes | > Attack2Comment | because of so and so | because of so and so | > Attack3 | no | no | > Attack3Comment | because of so and so | because of so and so | > ... > > Features: > > | | FPComplete | Well-Typed | > --|---------------------------------------------------------- > 1 |Allows for third-party mirrors | yes | yes | > 2 |Comment regarding 1 | ... | ... | > > Estimated effort: > > | | FPComplete | Well-Typed | > --|----------------------------------------------------------------- > 1 | Tools required | git, ... | ... | > 2 | Tools that need to be changed | ... | ... | > 3 | Time required for 2 (hours) | ... | ... | > 4 | Size of changes required for 2 (LOC) | ... | ... | > > Possibly with comments, too. > > This is a great idea, thank you both for raising it. I was discussing something similar with others in a text chat earlier this morning. I've gone ahead and put together a page to cover this discussion: https://github.com/commercialhaskell/commercialhaskell/blob/master/proposal/improved-hackage-security.md The document definitely needs more work, this is just meant to get the ball rolling. As usual with the commercialhaskell repo, if anyone wants edit access, just request it on the issue tracker. 
Or most likely, send a PR and you'll get a commit bit almost magically ;) Michael -------------- next part -------------- An HTML attachment was scrubbed... URL: From magnus at therning.org Fri Apr 17 08:56:46 2015 From: magnus at therning.org (Magnus Therning) Date: Fri, 17 Apr 2015 10:56:46 +0200 Subject: [Haskell-cafe] Improvements to package hosting and security In-Reply-To: References: <1429176843.25663.31.camel@dunky.localdomain> <1429179169.25663.59.camel@dunky.localdomain> <1429181874.25663.80.camel@dunky.localdomain> <1429185521.25663.103.camel@dunky.localdomain> <20150416220122.GA4126@tatooine> Message-ID: On 17 April 2015 at 05:25, Michael Snoyman wrote: > On Fri, Apr 17, 2015 at 1:01 AM Magnus Therning wrote: >> On Thu, Apr 16, 2015 at 03:28:10PM +0000, Michael Snoyman wrote: >> > Minor update. Some of your points about checking signatures before >> > unpacking made me curious about what Git had to offer in these >> > circumstances. For those like me who were unaware of the >> > functionality, it turns out that Git has the option to reject >> > non-signed commits, just run: >> > >> > git pull --verify-signatures >> > >> > I've set up the Travis job that pulls from Hackage to sign its >> > commits with the GPG key I've attached to this email (fingerprint >> > E595 AD42 14AF A6BB 1552 0B23 E40D 74D6 D6CF 60FD). >> >> Nice one! >> >> One thing I, as a developer of a tool that consumes the Hackage >> index[1], would like to see is a bit more meta data, in particular >> >> - alternative download URLs for the source >> - hashes of the source (probably needs to be per URL) >> >> I thought I saw something about this in the thread, but going through >> it again I can't seem to find it. Would this sort of thing also be >> included in "improvements to package hosting"? >> >> /M >> >> [1]: http://hackage.haskell.org/package/cblrepo >> >> > > My strawman proposal did include the idea of identifying a package via its > hash, and then providing redundant URLs for download (some of those URLs > possibly being non-HTTP, such as a special URL to refer to contents within a > Git repository). But as I keep saying, that was a strawman proposal, not to > be taken as a final design. > > That said, simply adding that information to the 00-index file seems like an > easy win. The hashes, at the very least, would fit in well. I knew I'd seen it somewhere :) Yes, the addition of more meta data is an easy win and can be done before the dust has settled on the issue of how to achieve trust :) One thing I personally think is nice with OCaml's opam is that its package database is in a git repo (on github) and that adding packages is a matter of submitting a patch. I'd very much like to see a future where I can get a package onto Hackage by 1. cloning the Hackage package git repo 2. add and commit a .cabal file and meta data about where my package can be found, e.g. something like url="GIT=http://github/myname/mypkg.git;TAG=v1.0.2" sha512="..." 3. 
submit a pull request /M -- Magnus Therning OpenPGP: 0xAB4DFBA4 email: magnus at therning.org jabber: magnus at therning.org twitter: magthe http://therning.org/magnus From duncan at well-typed.com Fri Apr 17 11:23:30 2015 From: duncan at well-typed.com (Duncan Coutts) Date: Fri, 17 Apr 2015 12:23:30 +0100 Subject: [Haskell-cafe] Ongoing IHG work to improve Hackage security In-Reply-To: References: <1429176835.25663.30.camel@dunky.localdomain> <1429191278.25663.116.camel@dunky.localdomain> Message-ID: <1429269810.25663.176.camel@dunky.localdomain> On Thu, 2015-04-16 at 15:56 +0200, Mikhail Glushenkov wrote: > Hi, > > On 16 April 2015 at 15:34, Duncan Coutts wrote: > > Compliant tar tools (including the standard unix tools, and > > cabal-install) understand this and take the last entry in the archive as > > the current file content. > > Thanks. I looked at the code again, and while this is not explicitly > mentioned in comments, we get this behaviour for free by relying on > Map.fromList. Sorry, I should have added more comments there. I was aware of this issue when I wrote the tar package (indeed I found out more about the history of the tar format than is really healthy for anyone). -- Duncan Coutts, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/ From rasen.dubi at gmail.com Fri Apr 17 11:38:25 2015 From: rasen.dubi at gmail.com (Alexey Shmalko) Date: Fri, 17 Apr 2015 14:38:25 +0300 Subject: [Haskell-cafe] Coding katas/dojos and functional programming introduction In-Reply-To: References: Message-ID: When I was a Haskell beginner I was fascinated by writing even a simple CSV parser. It was so clear after flex/bison or parsing by hand in C++. The other cool thing in Haskell is writing JSON parsers with aeson. They also really short and expressive. Regards, Alexey On Fri, Apr 17, 2015 at 10:47 AM, Ernesto Rodriguez wrote: > I agree with the approach. I have never done a intro to FP but I think > parser combinators couod be a good start. U begin by showing the high level > interface, I mean learning how to write simple parsers using for instance > Parsec is.easy and one immediately sees the elegance and simplicity that > approach has over say yacc. Then u guide them on implementimg a own basic > parser combinator library which in my opinion is not out of this world with > a good guideance. That way they see that such a powerful tool aint dark > magic with FP. > > Anyways good luck with your workshop. > > Cheers, > > N. > On Apr 16, 2015 7:25 AM, "Raphael Gaschignard" wrote: > >> Is this aimed for FP beginners who already know something like Java? I >> think the thing to do here would be to come up with some tasks that are >> genuinely tedious to write in a Java-esque (or Pascal-like) language, and >> then present how FP solutions are simpler. >> >> I'm of the opinion that FP succeeds not just because of the tenants of >> FP, but because most of the languages are terse and have code that is >> "pretty". Showing some quick things involving quick manipulation of tuples >> (basically a bunch of list processing) could show that things don't have to >> be complicated with a bunch of anonymous classes. >> >> Anyways, I think the essential thing is to present a problem that they, >> as programmers, have already experienced. The big one being "well these two >> functions are *almost* the same but the inner-part of the function has >> different logic" (basically, looking at things like map). Open up the world >> of possibilities. 
It's not things that are only possible in Haskell/Scheme >> (after all, all of these languages are turing complete so..), but they're >> so much easier to write in these languages. >> >> On Thu, Apr 16, 2015 at 7:41 AM Mike Meyer wrote: >> >>> On Wed, Apr 15, 2015 at 5:28 PM, Gautier DI FOLCO < >>> gautier.difolco at gmail.com> wrote: >>> >>>> 2015-04-15 19:15 GMT+00:00 Mike Meyer : >>>> >>>>> Well, functional programming is very much like an elephant. >>>>> >>>> >>>> I have the same thought about OOP some years ago, them I discovered >>>> then first meaning of it and all was so clear and simple. My goal isn't to >>>> teach the full power of FP, my goal is to give them inspiration, to suggest >>>> that there is a wider world to explore. >>>> >>> >>> Just clarify, this is a reference to the fable of the blind men and the >>> elephant. What you think it is like will depend on how you approach it. >>> _______________________________________________ >>> Haskell-Cafe mailing list >>> Haskell-Cafe at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >>> >> >> _______________________________________________ >> Haskell-Cafe mailing list >> Haskell-Cafe at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> >> > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From guthrie at mum.edu Fri Apr 17 13:21:55 2015 From: guthrie at mum.edu (Gregory Guthrie) Date: Fri, 17 Apr 2015 08:21:55 -0500 Subject: [Haskell-cafe] Scala at position 25 of the Tiobe index, Haskell dropped Message-ID: <08EF9DA445C4B5439C4733E1F35705BA04F834A3C5C8@MAIL.cs.mum.edu> I find that Haskell has a very different learning curve from other languages that I use/know/have-tried, in that the basic language itself is very simple and easy to learn and appreciate. However once one starts using a lot of monads and applicatives and other libraries, it can begin to look more like APL. >>> parser >>= >>> ( \s -> return ( pl' { P.payloadData = setField pld (Just s) } } ) ) Certainly one can learn to parse and read this, but with all of the new operators and thus syntax not familiar to standard IP language users. (Not a complaint, just an observation from teaching this to students new to FP.) And in my experience the cabal problems are the "fatal-flaw"; it is not infrequent that I have had to delete all libraries and start over, and I have only very simple usage. I would not want to have a business project that depended on this, as often I have not found a good solution where I could install all the packages I wanted. (Perhaps I just need to learn more about sandboxing techniques.) I am not a fan of the Scala syntax, but it does seem to be an easier transition because it look-and-feel's more like the typical IPs. ------------------------------------------- > -----Original Message----- ... From toad3k at gmail.com Fri Apr 17 13:44:13 2015 From: toad3k at gmail.com (David McBride) Date: Fri, 17 Apr 2015 09:44:13 -0400 Subject: [Haskell-cafe] Scala at position 25 of the Tiobe index, Haskell dropped In-Reply-To: <08EF9DA445C4B5439C4733E1F35705BA04F834A3C5C8@MAIL.cs.mum.edu> References: <08EF9DA445C4B5439C4733E1F35705BA04F834A3C5C8@MAIL.cs.mum.edu> Message-ID: I wouldn't put much stock in tiobe. They change their algorithm regularly and they apparently did something drastic this month. 
As for haskell, I have never seen as many job offers for haskell developers as I have seen in the last few months. I do think scala is more popular than haskell in industry, but not by as much as tiobe seems to think at this particular moment. On Fri, Apr 17, 2015 at 9:21 AM, Gregory Guthrie wrote: > I find that Haskell has a very different learning curve from other > languages that I use/know/have-tried, in that the basic language itself is > very simple and easy to learn and appreciate. However once one starts using > a lot of monads and applicatives and other libraries, it can begin to look > more like APL. > > >>> parser >>= >>> ( \s -> return ( pl' { > P.payloadData = setField pld (Just s) } } ) ) > > Certainly one can learn to parse and read this, but with all of the new > operators and thus syntax not familiar to standard IP language users. > (Not a complaint, just an observation from teaching this to students new > to FP.) > > And in my experience the cabal problems are the "fatal-flaw"; it is not > infrequent that I have had to delete all libraries and start over, and I > have only very simple usage. I would not want to have a business project > that depended on this, as often I have not found a good solution where I > could install all the packages I wanted. (Perhaps I just need to learn more > about sandboxing techniques.) > > I am not a fan of the Scala syntax, but it does seem to be an easier > transition because it look-and-feel's more like the typical IPs. > > ------------------------------------------- > > -----Original Message----- > ... > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mantkiew at gsd.uwaterloo.ca Fri Apr 17 14:22:15 2015 From: mantkiew at gsd.uwaterloo.ca (Michal Antkiewicz) Date: Fri, 17 Apr 2015 10:22:15 -0400 Subject: [Haskell-cafe] Scala at position 25 of the Tiobe index, Haskell dropped In-Reply-To: References: <08EF9DA445C4B5439C4733E1F35705BA04F834A3C5C8@MAIL.cs.mum.edu> Message-ID: I would not worry too much about rankings like that. What Haskell community should worry about is producing nice empirical software engineering research about usage of Haskell in practice, especially about development vs. maintenance cost for long term projects. It's easy to "learn" and develop something in JavaScript/Python/Java etc. It's much harder to evolve the software when there's sparse documentation and after team members leave (which is the most common situation). I'd say that compared to Haskell, JavaScript and Python codebases are "unmaintainable" in the long term. It's like walking on a mine field, pretty much. I know that doing such studies is very costly but some quantitative evidence should trickle down from the field. I have one anecdotal story to tell. I was doing a very extensive change in my project. It was huge. I approached a change a few times, trying to minimize it but as I was performing the change I was learning a lot about the codebase (I am a maintainer, other people wrote it) with the help of the compiler. You try it and you see the impact. Finally, I found a good way and implemented the change. Once I got it to compile again, one test case failed. I quickly identified the bug I introduced during the change and fixed it in 10min or so. After that, all my test suites and regression tests passed. 
All that without going through lots of manual testing/debugging/etc. Since then, no new bugs related to that huge change were found. That is the kind of power Haskell provides and we need more stories like that. Cheers, Michal On Fri, Apr 17, 2015 at 9:44 AM, David McBride wrote: > I wouldn't put much stock in tiobe. They change their algorithm regularly > and they apparently did something drastic this month. > > As for haskell, I have never seen as many job offers for haskell > developers as I have seen in the last few months. I do think scala is more > popular than haskell in industry, but not by as much as tiobe seems to > think at this particular moment. > > On Fri, Apr 17, 2015 at 9:21 AM, Gregory Guthrie wrote: > >> I find that Haskell has a very different learning curve from other >> languages that I use/know/have-tried, in that the basic language itself is >> very simple and easy to learn and appreciate. However once one starts using >> a lot of monads and applicatives and other libraries, it can begin to look >> more like APL. >> >> >>> parser >>= >>> ( \s -> return ( pl' { >> P.payloadData = setField pld (Just s) } } ) ) >> >> Certainly one can learn to parse and read this, but with all of the new >> operators and thus syntax not familiar to standard IP language users. >> (Not a complaint, just an observation from teaching this to students new >> to FP.) >> >> And in my experience the cabal problems are the "fatal-flaw"; it is not >> infrequent that I have had to delete all libraries and start over, and I >> have only very simple usage. I would not want to have a business project >> that depended on this, as often I have not found a good solution where I >> could install all the packages I wanted. (Perhaps I just need to learn more >> about sandboxing techniques.) >> >> I am not a fan of the Scala syntax, but it does seem to be an easier >> transition because it look-and-feel's more like the typical IPs. >> >> ------------------------------------------- >> > -----Original Message----- >> ... >> _______________________________________________ >> Haskell-Cafe mailing list >> Haskell-Cafe at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> > > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From imz at altlinux.org Fri Apr 17 15:48:22 2015 From: imz at altlinux.org (Ivan Zakharyaschev) Date: Fri, 17 Apr 2015 18:48:22 +0300 Subject: [Haskell-cafe] Graph diagram tools? In-Reply-To: References: Message-ID: Hello! 2015-04-17 4:23 UTC+03:00, Ivan Lazar Miljenovic : >> ### Considering dotgen vs graphviz closer >> >> But looking into the examples, I see that `dotgen` can use "Haskell >> ids" to identify created nodes, whereas in graphviz's monad (see the To bring more clear context for any readers, I put here a short excerpt from that dotgen example: >> refSpec <- src "S" >> c1 <- box "S" >> refSpec .->. c1 >> example above) one must supply extra strings as the unique ids (by >> which we refer to the nodes). 
Short example: >> "start" --> "a0" >> >> node "start" [shape MDiamond] > I used Strings as an example, as I was directly converting an existing > piece of Dot code; the original can be found here: > http://hackage.haskell.org/package/graphviz-2999.17.0.2/docs/Data-GraphViz-Types.html > > But, you can use any type you like for the node identifiers, as long > as you make them an instance of the PrintDot class. That's where the > `n` in the `Dot n` type comes in. Ok, thanks for the valuable information! >> I like the first approach more ("Haskell ids"). > > I admittedly don't have any ability in graphviz to create new > identifiers for you. I could (just add a StateT to the internal > monadic stack which keeps track of the next unused node identifier) Since the API is already monadic, adding another monad into the stack wouldn't impose big difficulties for the users of the API, because they won't need to restructure the code (as if it were a transition from some pure functional code into monadic). > but I think that would _reduce_ the flexibility of being able to use > your own type (it would either only work for `Dot Int`, or even if you > could apply a mapping function to use something like `GraphID`, but > that has a problem if you have a `Double` with the same value - and > hence same textual representation - as your Int). I see: [GraphID](http://hackage.haskell.org/package/graphviz-2999.17.0.2/docs/Data-GraphViz-Types.html#t:GraphID) can have distinct values with the same textual representation. But if we are thinking about automatically creating new IDs, then this problem can simply be treated in the code for tracking which IDs have already been used. There could be two APIs: a "flexible" one with user-supplied IDs, and an "automatic" API. The "automatic" one is implemented on top of the "flexible" one. > The way I see it, graphviz is usually used for converting existing > Haskell values into Dot code and then processing with dot, neato, etc. > My preference - and hence overall design with graphviz - is that you > would generate the graph first, and _then_ convert it to a Dot > representation en masse. If the Haskell representation of the graph doesn't already have unique IDs for the nodes, then such an "automatic" layer would be useful as an intermediate step in the conversion. So it seems it won't be useless even in your standard scenarios. *** You name flexibility for the user as an advantage of the existing approach. As for some advantages of the other approach (with using Haskell ids for the nodes): the compiler could catch more errors. For example, if I make a typo in an identifier when introducing an edge, then Haskell compiler would report this as an unknown identifier. Also the compiler would catch name clashes, if you accidentally give the same id to two different nodes. A potential disadvantage is then an increased verbosity: first, create the nodes, then use them for the edges. Meaning three actions instead of yours single one: "a0" --> "a1" Still, even in the "automatic ids" approach, this can be written compactly in a single line in the spirit of: bindM2 (-->) (node [textLabel "a0"]) (node [textLabel "a1"]) without explicitly giving Haskell ids to the two nodes. Perhaps, this is not important stuff, because--as you write--one is supposed to use Haskell representations of graphs and then convert them with graphviz... 
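To be a bit more concrete about what I mean by an "automatic" layer, here is a tiny self-contained sketch. The Stmt type and the node/(-->) primitives below are made up purely for illustration -- they are not graphviz's or dotgen's real API -- and a real implementation would presumably be a StateT over the existing Dot monad rather than a bare State:

    import Control.Monad.State

    type NodeId = Int

    data Stmt = Node NodeId [String]   -- a node with its attributes
              | Edge NodeId NodeId     -- an edge between two node ids
      deriving Show

    -- The monad hands out fresh ids and accumulates statements.
    type DotM = State (NodeId, [Stmt])

    node :: [String] -> DotM NodeId
    node attrs = do
      (next, stmts) <- get
      put (next + 1, stmts ++ [Node next attrs])
      return next

    (-->) :: NodeId -> NodeId -> DotM ()
    a --> b = modify (\(next, stmts) -> (next, stmts ++ [Edge a b]))

    runDot :: DotM a -> [Stmt]
    runDot m = snd (execState m (0, []))

    -- The user never writes a node id by hand:
    example :: [Stmt]
    example = runDot $ do
      a0 <- node ["label = a0"]
      a1 <- node ["label = a1"]
      a0 --> a1

The ids only exist as values bound in the do-block, so a typo in a node name becomes an out-of-scope error that the compiler catches, and clashes simply cannot happen.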
(I might simply not want to learn another language for representing graphs apart from dot, that's why I'd like to use the monadic API: because it closely follows the known dot format.) My last line of code already looks similar to a code constructing a Haskell representation of a graph. I'm just writing down my comments concerning the API, not that I'm confident that I know a definite way to make it better. Well, after writing this post and thinking it all over while writing, I tend to come to a conclusion resonating with your opinion stating that the monadic API turned out not as useful as you used to think: it seems that while imposing the monadic style onto the programmer, it doesn't give the advantages a monad could give (like generating unique ids automatically and catching errors with undefined or clashing ids). Without this stateful feature, much else can be done purely with dedicated graph structures. What do you think about these comments? As for dotgen: my wishes could be satisfied simply with the dotgen package, but--as you wrote--it is not safe w.r.t. to quoting/escaping user supplied values. Best regards, -- Ivan From sumit.sahrawat.apm13 at iitbhu.ac.in Fri Apr 17 16:24:01 2015 From: sumit.sahrawat.apm13 at iitbhu.ac.in (Sumit Sahrawat, Maths & Computing, IIT (BHU)) Date: Fri, 17 Apr 2015 21:54:01 +0530 Subject: [Haskell-cafe] [Haskell-beginners] texture mapping with SDL In-Reply-To: References: Message-ID: On 17 April 2015 at 19:28, Florian Gillard wrote: > Hi everyone, > > I am not sure my previous message went trough, if so, sorry for the double > post. > > I am trying to make a basic raycaster (something like the first > wolfenstein 3D) using haskell and SDL 1.2 > > So far I have it working using coloured lines and I would like to know if > there is any way to apply transforms to textures loaded in memory using > SDL, in order to achieve basic texture mapping on the walls. > > I looked at the SDL doc but I didn't find anything looking like what I > need. > > The code is there: > > https://github.com/eniac314/maze-generator/blob/master/raycaster.hs > > sreenshot here: > https://github.com/eniac314/maze-generator/blob/master/raycaster.png > > (the last part is not done yet but has nothing to do with the raycaster) > > I would appreciate any suggestion :) > > _______________________________________________ > Beginners mailing list > Beginners at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/beginners > > The message got through. It might be better answered on the haskell-cafe so I'm relaying it there. -- Regards Sumit Sahrawat -------------- next part -------------- An HTML attachment was scrubbed... URL: From joehillen at gmail.com Fri Apr 17 18:21:57 2015 From: joehillen at gmail.com (Joe Hillenbrand) Date: Fri, 17 Apr 2015 11:21:57 -0700 Subject: [Haskell-cafe] Scala at position 25 of the Tiobe index, Haskell dropped In-Reply-To: <08EF9DA445C4B5439C4733E1F35705BA04F834A3C5C8@MAIL.cs.mum.edu> References: <08EF9DA445C4B5439C4733E1F35705BA04F834A3C5C8@MAIL.cs.mum.edu> Message-ID: > On Fri, Apr 17, 2015 at 6:21 AM, Gregory Guthrie wrote: > > And in my experience the cabal problems are the "fatal-flaw"; Big +1 here. Cabal is the biggest thing keeping me from aggressively promoting Haskell in industry. The risk of promoting Haskell now is that people will try out Haskell, hit a cabal issue, give up, and then form a bad opinion of Haskell because of it. There is saying "If a user has a bad experience, that's a bug." 
I've been patiently awaiting the Backpack overhaul before promoting Haskell in the workplace. [1] [1] https://ghc.haskell.org/trac/ghc/wiki/Backpack From cma at bitemyapp.com Fri Apr 17 18:29:20 2015 From: cma at bitemyapp.com (Christopher Allen) Date: Fri, 17 Apr 2015 13:29:20 -0500 Subject: [Haskell-cafe] Scala at position 25 of the Tiobe index, Haskell dropped In-Reply-To: References: <08EF9DA445C4B5439C4733E1F35705BA04F834A3C5C8@MAIL.cs.mum.edu> Message-ID: I work with a lot of Haskell beginners and the Cabal problems went away when sandboxes were added to Cabal and the learners started using a sandbox for every project. I've only seen a handful (one hand, 5 fingers) of problems since then that weren't attributable to, "wasn't using a sandbox". Of those, about half were the user doing something uncommon/unusual. I have a tutorial here http://howistart.org/posts/haskell/1 which among other things, covers the basics of using sandboxes. Library maturity is my only worry with production Haskell. Not enough eyeballs and all that. It's not enough to stop me or my colleagues using it in production though. I can fix libraries, I can't fix Scala. On Fri, Apr 17, 2015 at 1:21 PM, Joe Hillenbrand wrote: > > On Fri, Apr 17, 2015 at 6:21 AM, Gregory Guthrie > wrote: > > > > And in my experience the cabal problems are the "fatal-flaw"; > > Big +1 here. Cabal is the biggest thing keeping me from aggressively > promoting Haskell in industry. The risk of promoting Haskell now is > that people will try out Haskell, hit a cabal issue, give up, and then > form a bad opinion of Haskell because of it. > > There is saying "If a user has a bad experience, that's a bug." > > I've been patiently awaiting the Backpack overhaul before promoting > Haskell in the workplace. [1] > > [1] https://ghc.haskell.org/trac/ghc/wiki/Backpack > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > -------------- next part -------------- An HTML attachment was scrubbed... URL: From guthrie at mum.edu Fri Apr 17 18:37:51 2015 From: guthrie at mum.edu (Gregory Guthrie) Date: Fri, 17 Apr 2015 13:37:51 -0500 Subject: [Haskell-cafe] Scala at position 25 of the Tiobe index, Haskell dropped In-Reply-To: References: <08EF9DA445C4B5439C4733E1F35705BA04F834A3C5C8@MAIL.cs.mum.edu> Message-ID: <08EF9DA445C4B5439C4733E1F35705BA04F834A3C5E4@MAIL.cs.mum.edu> Nice result ? but doesn?t this assume that each package installation is only for one project? Is there some good way to share sandboxes that doesn?t just reintroduce the same version conflicts problems? From: Christopher Allen [mailto:cma at bitemyapp.com] Sent: Friday, April 17, 2015 1:29 PM To: Joe Hillenbrand Cc: Gregory Guthrie; haskell-cafe at haskell.org Subject: Re: [Haskell-cafe] Scala at position 25 of the Tiobe index, Haskell dropped I work with a lot of Haskell beginners and the Cabal problems went away when sandboxes were added to Cabal and the learners started using a sandbox for every project. I've only seen a handful (one hand, 5 fingers) of problems since then that weren't attributable to, "wasn't using a sandbox". Of those, about half were the user doing something uncommon/unusual. I have a tutorial here http://howistart.org/posts/haskell/1 which among other things, covers the basics of using sandboxes. Library maturity is my only worry with production Haskell. Not enough eyeballs and all that. 
It's not enough to stop me or my colleagues using it in production though. I can fix libraries, I can't fix Scala. On Fri, Apr 17, 2015 at 1:21 PM, Joe Hillenbrand > wrote: > On Fri, Apr 17, 2015 at 6:21 AM, Gregory Guthrie > wrote: > > And in my experience the cabal problems are the "fatal-flaw"; Big +1 here. Cabal is the biggest thing keeping me from aggressively promoting Haskell in industry. The risk of promoting Haskell now is that people will try out Haskell, hit a cabal issue, give up, and then form a bad opinion of Haskell because of it. There is saying "If a user has a bad experience, that's a bug." I've been patiently awaiting the Backpack overhaul before promoting Haskell in the workplace. [1] [1] https://ghc.haskell.org/trac/ghc/wiki/Backpack _______________________________________________ Haskell-Cafe mailing list Haskell-Cafe at haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe -------------- next part -------------- An HTML attachment was scrubbed... URL: From joehillen at gmail.com Fri Apr 17 18:47:58 2015 From: joehillen at gmail.com (Joe Hillenbrand) Date: Fri, 17 Apr 2015 11:47:58 -0700 Subject: [Haskell-cafe] Scala at position 25 of the Tiobe index, Haskell dropped In-Reply-To: References: <08EF9DA445C4B5439C4733E1F35705BA04F834A3C5C8@MAIL.cs.mum.edu> Message-ID: Sandboxes are really a bandaid and most tutorials don't promote them enough. You can also still hit the multiple package versions issue in a sandbox. The lack of "cabal upgrade" is another big headache. On Fri, Apr 17, 2015 at 11:29 AM, Christopher Allen wrote: > I work with a lot of Haskell beginners and the Cabal problems went away when > sandboxes were added to Cabal and the learners started using a sandbox for > every project. > > I've only seen a handful (one hand, 5 fingers) of problems since then that > weren't attributable to, "wasn't using a sandbox". Of those, about half were > the user doing something uncommon/unusual. > > I have a tutorial here http://howistart.org/posts/haskell/1 which among > other things, covers the basics of using sandboxes. > > Library maturity is my only worry with production Haskell. Not enough > eyeballs and all that. It's not enough to stop me or my colleagues using it > in production though. I can fix libraries, I can't fix Scala. > > > > On Fri, Apr 17, 2015 at 1:21 PM, Joe Hillenbrand > wrote: >> >> > On Fri, Apr 17, 2015 at 6:21 AM, Gregory Guthrie >> > wrote: >> > >> > And in my experience the cabal problems are the "fatal-flaw"; >> >> Big +1 here. Cabal is the biggest thing keeping me from aggressively >> promoting Haskell in industry. The risk of promoting Haskell now is >> that people will try out Haskell, hit a cabal issue, give up, and then >> form a bad opinion of Haskell because of it. >> >> There is saying "If a user has a bad experience, that's a bug." >> >> I've been patiently awaiting the Backpack overhaul before promoting >> Haskell in the workplace. 
[1] >> >> [1] https://ghc.haskell.org/trac/ghc/wiki/Backpack >> _______________________________________________ >> Haskell-Cafe mailing list >> Haskell-Cafe at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > From cma at bitemyapp.com Fri Apr 17 18:50:11 2015 From: cma at bitemyapp.com (Christopher Allen) Date: Fri, 17 Apr 2015 13:50:11 -0500 Subject: [Haskell-cafe] Scala at position 25 of the Tiobe index, Haskell dropped In-Reply-To: References: <08EF9DA445C4B5439C4733E1F35705BA04F834A3C5C8@MAIL.cs.mum.edu> Message-ID: I'm not sure what you think a `cabal upgrade` should do. You can ask Cabal to install a particular, newer version of a library and it will resolve the package dependencies across the project and prompt for reinstalls for where it's needed. On Fri, Apr 17, 2015 at 1:47 PM, Joe Hillenbrand wrote: > Sandboxes are really a bandaid and most tutorials don't promote them > enough. > > You can also still hit the multiple package versions issue in a sandbox. > > The lack of "cabal upgrade" is another big headache. > > On Fri, Apr 17, 2015 at 11:29 AM, Christopher Allen > wrote: > > I work with a lot of Haskell beginners and the Cabal problems went away > when > > sandboxes were added to Cabal and the learners started using a sandbox > for > > every project. > > > > I've only seen a handful (one hand, 5 fingers) of problems since then > that > > weren't attributable to, "wasn't using a sandbox". Of those, about half > were > > the user doing something uncommon/unusual. > > > > I have a tutorial here http://howistart.org/posts/haskell/1 which among > > other things, covers the basics of using sandboxes. > > > > Library maturity is my only worry with production Haskell. Not enough > > eyeballs and all that. It's not enough to stop me or my colleagues using > it > > in production though. I can fix libraries, I can't fix Scala. > > > > > > > > On Fri, Apr 17, 2015 at 1:21 PM, Joe Hillenbrand > > wrote: > >> > >> > On Fri, Apr 17, 2015 at 6:21 AM, Gregory Guthrie > >> > wrote: > >> > > >> > And in my experience the cabal problems are the "fatal-flaw"; > >> > >> Big +1 here. Cabal is the biggest thing keeping me from aggressively > >> promoting Haskell in industry. The risk of promoting Haskell now is > >> that people will try out Haskell, hit a cabal issue, give up, and then > >> form a bad opinion of Haskell because of it. > >> > >> There is saying "If a user has a bad experience, that's a bug." > >> > >> I've been patiently awaiting the Backpack overhaul before promoting > >> Haskell in the workplace. [1] > >> > >> [1] https://ghc.haskell.org/trac/ghc/wiki/Backpack > >> _______________________________________________ > >> Haskell-Cafe mailing list > >> Haskell-Cafe at haskell.org > >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spam at scientician.net Fri Apr 17 18:58:47 2015 From: spam at scientician.net (Bardur Arantsson) Date: Fri, 17 Apr 2015 20:58:47 +0200 Subject: [Haskell-cafe] Scala at position 25 of the Tiobe index, Haskell dropped In-Reply-To: References: <08EF9DA445C4B5439C4733E1F35705BA04F834A3C5C8@MAIL.cs.mum.edu> Message-ID: On 17-04-2015 20:47, Joe Hillenbrand wrote: > Sandboxes are really a bandaid and most tutorials don't promote them enough. > They *are* a bandaid, but a reasonably well-working one. 
Hopefully we'll have an even more robust Nix-like approach soonish: http://www.well-typed.com/blog/2015/01/how-we-might-abolish-cabal-hell-part-2/ (there's a link to part 1 at the start if you haven't read that) That doesn't solve the diamond dependency problem, but hopefully there'll also be a solution for that at some future point in time (Backpack). I'll also note that I personally don't know of *any* software platform that has actually solved this problem entirely satisfactorily. On the JVM, OSGi gets close (at least in theory), but AFAICT it's a lot more complex than it (ideally) should be. > You can also still hit the multiple package versions issue in a sandbox. Indeed, but my personal experience is that it actually doesn't happen very much in practice. (Anecdotal, I know.) > The lack of "cabal upgrade" is another big headache. > Meh. Speak for yourself! :) Personally I'm more interested in repeatable and consistent package installations rather than avoiding 30 minutes of 100% CPU usage once in a while. (Of course, priorities may vary and reasonable people can disagree!) Regards, From spam at scientician.net Fri Apr 17 21:20:19 2015 From: spam at scientician.net (Bardur Arantsson) Date: Fri, 17 Apr 2015 23:20:19 +0200 Subject: [Haskell-cafe] Improvements to package hosting and security In-Reply-To: References: <1429176843.25663.31.camel@dunky.localdomain> <1429179169.25663.59.camel@dunky.localdomain> <1429181874.25663.80.camel@dunky.localdomain> <1429185521.25663.103.camel@dunky.localdomain> <87bninfdkp.fsf@karetnikov.org> Message-ID: On 17-04-2015 10:17, Michael Snoyman wrote: > This is a great idea, thank you both for raising it. I was discussing > something similar with others in a text chat earlier this morning. I've > gone ahead and put together a page to cover this discussion: > > https://github.com/commercialhaskell/commercialhaskell/blob/master/proposal/improved-hackage-security.md > > The document definitely needs more work, this is just meant to get the ball > rolling. As usual with the commercialhaskell repo, if anyone wants edit > access, just request it on the issue tracker. Or most likely, send a PR and > you'll get a commit bit almost magically ;) Thank you. Just to make sure that I understand -- is this page only meant to cover the original "strawman proposal" at the start of this thread, or...? Maybe you intend for this to be extended in a detailed way under the "Long-term solutions" heading? I was imagining a wiki page which could perhaps start out by collecting all the currently identified possible threats in a table, and then all "participants" could perhaps fill in how their suggestion addresses those threats (or tell us why we shouldn't care about this particular threat). Of course other relevant non-threat considerations might be worth adding to such a table, such as: how prevalent is the software/idea we're basing this on? does this have any prior implementation (e.g. the append-to-tar and expect that web servers will behave sanely thing)? etc. (I realize that I'm asking for a lot of work, but I think it's going to be necessary, at least if there's going to be consensus and not just a de-facto "winner".) Regards, From ivan.miljenovic at gmail.com Fri Apr 17 23:23:59 2015 From: ivan.miljenovic at gmail.com (Ivan Lazar Miljenovic) Date: Sat, 18 Apr 2015 09:23:59 +1000 Subject: [Haskell-cafe] Graph diagram tools? In-Reply-To: References: Message-ID: On 18 April 2015 at 01:48, Ivan Zakharyaschev wrote: > Hello!
> > 2015-04-17 4:23 UTC+03:00, Ivan Lazar Miljenovic : > >>> ### Considering dotgen vs graphviz closer >>> >>> But looking into the examples, I see that `dotgen` can use "Haskell >>> ids" to identify created nodes, whereas in graphviz's monad (see the > > To bring more clear context for any readers, I put here a short > excerpt from that dotgen example: > >>> refSpec <- src "S" >>> c1 <- box "S" >>> refSpec .->. c1 It should be noted that src and box are custom functions and not part of dotgen. > > >>> example above) one must supply extra strings as the unique ids (by >>> which we refer to the nodes). > > Short example: > >>> "start" --> "a0" >>> >>> node "start" [shape MDiamond] > > >> I used Strings as an example, as I was directly converting an existing >> piece of Dot code; the original can be found here: >> http://hackage.haskell.org/package/graphviz-2999.17.0.2/docs/Data-GraphViz-Types.html >> >> But, you can use any type you like for the node identifiers, as long >> as you make them an instance of the PrintDot class. That's where the >> `n` in the `Dot n` type comes in. > > Ok, thanks for the valuable information! > >>> I like the first approach more ("Haskell ids"). >> >> I admittedly don't have any ability in graphviz to create new >> identifiers for you. I could (just add a StateT to the internal >> monadic stack which keeps track of the next unused node identifier) > > Since the API is already monadic, adding another monad into the stack > wouldn't impose big difficulties for the users of the API, because they > won't need to restructure the code (as if it were a transition from > some pure functional code into monadic). Sure, this bit itself isn't a problem. > >> but I think that would _reduce_ the flexibility of being able to use >> your own type (it would either only work for `Dot Int`, or even if you >> could apply a mapping function to use something like `GraphID`, but >> that has a problem if you have a `Double` with the same value - and >> hence same textual representation - as your Int). > > I see: > [GraphID](http://hackage.haskell.org/package/graphviz-2999.17.0.2/docs/Data-GraphViz-Types.html#t:GraphID) > can have distinct values with the same textual representation. > > But if we are thinking about automatically creating new IDs, then this > problem can simply be treated in the code for tracking which IDs have > already been used. Possibly a bit more complicated than its worth: "OK, when I convert this ID to a textual one it appears to be the same as one we've already seen" would require a lot more bookkeeping, and won't help prevent errors from explicit user-defined node IDs defined later (unless we also use some from of backwards state to check for that as well). > > There could be two APIs: a "flexible" one with user-supplied IDs, and > an "automatic" API. The "automatic" one is implemented on top of the > "flexible" one. > >> The way I see it, graphviz is usually used for converting existing >> Haskell values into Dot code and then processing with dot, neato, etc. > >> My preference - and hence overall design with graphviz - is that you >> would generate the graph first, and _then_ convert it to a Dot >> representation en masse. > > If the Haskell representation of the graph doesn't already have unique > IDs for the nodes, then such an "automatic" layer would be useful as > an intermediate step in the conversion. So it seems it won't be > useless even in your standard scenarios. 
> > *** > > You name flexibility for the user as an advantage of the existing > approach. As for some advantages of the other approach (with using > Haskell ids for the nodes): the compiler could catch more errors. > > For example, if I make a typo in an identifier when introducing an > edge, then Haskell compiler would report this as an unknown > identifier. But you can always use variables rather than hard-coding the Strings in... I don't *recommend* hard-coding Strings in, I just did so in that sample usage just so you could compare it to the sample Dot code and notice how similar it was. > > Also the compiler would catch name clashes, if you accidentally give > the same id to two different nodes. > > A potential disadvantage is then an increased verbosity: first, create > the nodes, then use them for the edges. Meaning three actions instead of > yours single one: > > "a0" --> "a1" > > Still, even in the "automatic ids" approach, this can be written > compactly in a single > line in the spirit of: > > bindM2 (-->) (node [textLabel "a0"]) (node [textLabel "a1"]) > > without explicitly giving Haskell ids to the two nodes. > > Perhaps, this is not important stuff, because--as you write--one is > supposed to use Haskell representations of graphs and then convert > them with graphviz... (I might simply not want to learn another > language for representing graphs apart from dot, that's why I'd like > to use the monadic API: because it closely follows the known dot format.) > > My last line of code already looks similar to a code constructing a > Haskell representation of a graph. > > I'm just writing down my comments concerning the API, not that I'm > confident that I know a definite way to make it better. > > Well, after writing this post and thinking it all over while writing, > I tend to come to a conclusion resonating with your opinion stating > that the monadic API turned out not as useful as you used to think: > > it seems that while imposing the monadic style onto the programmer, it > doesn't give the advantages a monad could give (like generating unique > ids automatically and catching errors with undefined or clashing ids). > Without this stateful feature, much else can be done purely with > dedicated graph structures. > > What do you think about these comments? Pretty much. I think I had an actual use-case when I first wrote the Monadic interface (some kind of tutorial from memory), but after I finished it I realised it would be much simpler using the alternative types. If you have a data structure that already represents a graph, then graphElemsToDot will let you convert that into the representation of a Dot graph: http://hackage.haskell.org/package/graphviz-2999.17.0.2/docs/Data-GraphViz.html#v:graphElemsToDot The only real reason I can come up with for using a Monadic interface is when you want to embed a (relatively) static Dot graph into some Haskell code and try and get some safety from the type-checker for attribute values. In that case, some relatively simple mapM_, etc. expressions might come in handy. But unless you have something rather simple in mind, I don't think this is all that common. > As for dotgen: my wishes could be satisfied simply with the dotgen > package, but--as you wrote--it is not safe w.r.t. to quoting/escaping > user supplied values. For simple values it should be OK. 
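(A minimal sketch, assuming graphviz-2999.17.x as linked above, of the graphElemsToDot route just described; the Int identifiers and labels are made up for illustration:)

```
import Data.GraphViz                  -- graphElemsToDot, nonClusteredParams, printDotGraph
import qualified Data.Text.Lazy.IO as TL

-- Caller-chosen node identifiers (plain Ints here) paired with labels,
-- plus (from, to, label) edges; keeping the Ints unique is the caller's job.
nodes :: [(Int, String)]
nodes = [(0, "start"), (1, "a0"), (2, "a1")]

edges :: [(Int, Int, String)]
edges = [(0, 1, ""), (1, 2, "")]

-- nonClusteredParams ignores the labels when rendering; supply fmtNode/fmtEdge
-- in the params if you want them emitted as Dot attributes.
main :: IO ()
main = TL.putStrLn . printDotGraph $ graphElemsToDot nonClusteredParams nodes edges
```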
-- Ivan Lazar Miljenovic Ivan.Miljenovic at gmail.com http://IvanMiljenovic.wordpress.com From mihai.maruseac at gmail.com Sat Apr 18 02:04:47 2015 From: mihai.maruseac at gmail.com (Mihai Maruseac) Date: Fri, 17 Apr 2015 22:04:47 -0400 Subject: [Haskell-cafe] all for Contributions - Haskell Communities and Activities Report, May 2015 edition (28th edition) Message-ID: Dear all, We would like to collect contributions for the 28th edition of the ============================================================ Haskell Communities & Activities Report http://www.haskell.org/haskellwiki/Haskell_Communities_and_Activities_Report Submission deadline: 17 May 2015 (please send your contributions to hcar at haskell.org, in plain text or LaTeX format) ============================================================ This is the short story (one extra point to the story added from previous editions): * If you are working on any project that is in some way related to Haskell, please write a short entry and submit it. Even if the project is very small or unfinished or you think it is not important enough --- please reconsider and submit an entry anyway! * If you are interested in an existing project related to Haskell that has not previously been mentioned in the HCAR, please tell us, so that we can contact the project leaders and ask them to submit an entry. * **NEW**: If you are working on a project that is looking for contributors, please write a short entry and submit it, mentioning that your are looking for contributors. The final report might have an index with such projects, provided we get enough such submissions. * Feel free to pass on this call for contributions to others that might be interested. More detailed information: The Haskell Communities & Activities Report is a bi-annual overview of the state of Haskell as well as Haskell-related projects over the last, and possibly the upcoming six months. If you have only recently been exposed to Haskell, it might be a good idea to browse the previous edition --- you will find interesting projects described as well as several starting points and links that may provide answers to many questions. Contributions will be collected until the submission deadline. They will then be compiled into a coherent report that is published online as soon as it is ready. As always, this is a great opportunity to update your webpages, make new releases, announce or even start new projects, or to talk about developments you want every Haskeller to know about! Looking forward to your contributions, Mihai Maruseac and Alejandro Serrano Mena FAQ: Q: What format should I write in? A: The required format is a LaTeX source file, adhering to the template that is available at: http://haskell.org/communities/05-2015/template.tex There is also a LaTeX style file at http://haskell.org/communities/05-2015/hcar.sty that you can use to preview your entry. If you do not know LaTeX, then use plain text. If you modify an old entry that you have written for an earlier edition of the report, you should soon receive your old entry as a template (provided we have your valid email address). Please modify that template, rather than using your own version of the old entry as a template. Q: Can I include Haskell code? A: Yes. Please use lhs2tex syntax (http://www.andres-loeh.de/lhs2tex/). The report is compiled in mode polycode.fmt. Q: Can I include images? A: Yes, you are even encouraged to do so. Please use .jpg or .png format, then, PNG being preferred for simplicity. 
Q: Should I send files in .zip archives or similar? A: No, plain file attachements are the way. Q: How much should I write? A: Authors are asked to limit entries to about one column of text. A general introduction is helpful. Apart from that, you should focus on recent or upcoming developments. Pointers to online content can be given for more comprehensive or "historic" overviews of a project. Images do not count towards the length limit, so you may want to use this opportunity to pep up entries. There is no minimum length of an entry! The report aims for being as complete as possible, so please consider writing an entry, even if it is only a few lines long. Q: Which topics are relevant? A: All topics which are related to Haskell in some way are relevant. We usually had reports from users of Haskell (private, academic, or commercial), from authors or contributors to projects related to Haskell, from people working on the Haskell language, libraries, on language extensions or variants. We also like reports about distributions of Haskell software, Haskell infrastructure, books and tutorials on Haskell. Reports on past and upcoming events related to Haskell are also relevant. Finally, there might be new topics we do not even think about. As a rule of thumb: if in doubt, then it probably is relevant and has a place in the HCAR. You can also simply ask us. Q: Is unfinished work relevant? Are ideas for projects relevant? A: Yes! You can use the HCAR to talk about projects you are currently working on. You can use it to look for other developers that might help you. You can use HCAR to ask for more contributors to your project, it is a good way to gain visibility and traction. Q: If I do not update my entry, but want to keep it in the report, what should I do? A: Tell us that there are no changes. The old entry will typically be reused in this case, but it might be dropped if it is older than a year, to give more room and more attention to projects that change a lot. Do not resend complete entries if you have not changed them. Q: Will I get confirmation if I send an entry? How do I know whether my email has even reached its destination, and not ended up in a spam folder? A: Prior to publication of the final report, we will send a draft to all contributors, for possible corrections. So if you do not hear from us within two weeks after the deadline, it is safer to send another mail and check whether your first one was received. -- Mihai Maruseac (MM) "If you don't know, the thing to do is not to get scared, but to learn." -- Atlas Shrugged. From aeyakovenko at gmail.com Sat Apr 18 04:58:31 2015 From: aeyakovenko at gmail.com (Anatoly Yakovenko) Date: Fri, 17 Apr 2015 21:58:31 -0700 Subject: [Haskell-cafe] dependency issues on a new install of haskell-platform 2014.2.0.0 64bit on OSX Message-ID: so I keep seeing failures in some modules, but if i install them by hand things get resolved. How do i get cabal to do this for me? so the dependencies: criterion -> monad-par-0.3.4.7 -> parallel-3.2.0.4 which fails but installing parallel-3.2.0.6 unravels the broken dependency chain. Below is the output: cabal: Error: some packages failed to install: criterion-1.1.0.0 depends on monad-par-0.3.4.7 which failed to install. monad-par-0.3.4.7 failed during the building phase. The exception was: ExitFailure 1 statistics-0.13.2.3 depends on monad-par-0.3.4.7 which failed to install. anatolys-MacBook:rbm anatolyy$ cabal install monad-par Resolving dependencies... Configuring monad-par-0.3.4.7... 
Building monad-par-0.3.4.7... Failed to install monad-par-0.3.4.7 Last 10 lines of the build log ( /Users/anatolyy/.cabal/logs/monad-par-0.3.4.7.log ): Building monad-par-0.3.4.7... Preprocessing library monad-par-0.3.4.7... : cannot satisfy -package-id parallel-3.2.0.4-c330f8c64fe6816637464ee78fcb9a93 (use -v for more information) cabal: Error: some packages failed to install: monad-par-0.3.4.7 failed during the building phase. The exception was: ExitFailure 1 anatolys-MacBook:rbm anatolyy$ cabal install parallel Resolving dependencies... Downloading parallel-3.2.0.6... Configuring parallel-3.2.0.6... Building parallel-3.2.0.6... Installed parallel-3.2.0.6 Updating documentation index /Users/anatolyy/Library/Haskell/share/doc/index.html anatolys-MacBook:rbm anatolyy$ cabal install monad-par Resolving dependencies... Configuring monad-par-0.3.4.7... Building monad-par-0.3.4.7... Installed monad-par-0.3.4.7 Updating documentation index /Users/anatolyy/Library/Haskell/share/doc/index.html From elnopintan at gmail.com Sat Apr 18 08:26:56 2015 From: elnopintan at gmail.com (Ignacio Blasco) Date: Sat, 18 Apr 2015 10:26:56 +0200 Subject: [Haskell-cafe] Scala at position 25 of the Tiobe index, Haskell dropped below 50 In-Reply-To: References: Message-ID: Probably this has to do with the popularity of Spark El 17/4/2015 8:54 a. m., "David Turner" escribi?: > If you like arbitrary rankings, you might like http://githut.info/ > which a colleage pointed me at recently. In 2014Q4, in terms of number > of active Github repositories, Scala is #19 and Haskell is #21 and yet > in terms of open issues, Scala is #3 and Haskell is #23. Make of that > what you will. > > On 16 April 2015 at 22:41, Henk-Jan van Tuyl wrote: > > > > L.S., > > > > From the Tiobe index page[0] of this month: > > Another interesting move this month concerns Scala. The functional > > programming language jumps to position 25 after having been between > position > > 30 and 50 for many years. Scala seems to be ready to enter the top 20 for > > the first time in history. > > > > Haskell dropped from the top 50 last month and hasn't come back. I > suppose, > > if Haskell compiled to JVM, Haskell would have a much wider audience. > > > > Regards, > > Henk-Jan van Tuyl > > > > > > [0] http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html > > > > > > -- > > Folding at home > > What if you could share your unused computer power to help find a cure? > In > > just 5 minutes you can join the world's biggest networked computer and > get > > us closer sooner. Watch the video. > > http://folding.stanford.edu/ > > > > > > http://Van.Tuyl.eu/ > > http://members.chello.nl/hjgtuyl/tourdemonad.html > > Haskell programming > > -- > > _______________________________________________ > > Haskell-Cafe mailing list > > Haskell-Cafe at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > -------------- next part -------------- An HTML attachment was scrubbed... URL: From noteed at gmail.com Sat Apr 18 09:51:36 2015 From: noteed at gmail.com (Vo Minh Thu) Date: Sat, 18 Apr 2015 11:51:36 +0200 Subject: [Haskell-cafe] Static executables in minimal Docker containers In-Reply-To: References: <86ca2603-37f2-4645-9cd2-f09703f2be67@googlegroups.com> <552C379B.8080601@vex.net> Message-ID: Michael, thanks for solving this ! 
I wonder how you came up with the need for /lib/ld-linux-x86-64.so.2. I have needed only /lib64/ld-linux-x86-64.so.2 (which you have too), so maybe you added it by mistake. B.t.w, if you don't want to use a Dockerfile, you can do this: > tar -C rootfs -c . | docker import - tiny where rootfs is your repository. 2015-04-14 10:43 GMT+02:00 Michael Snoyman : > Trac ticket created: https://ghc.haskell.org/trac/ghc/ticket/10298#ticket > > I've also put together a Docker image called snoyberg/haskell-scratch > (source at https://github.com/snoyberg/haskell-scratch), which seems to be > working for me. Here's a minimal test I've put together which seems to be > succeeding (note that I've also tried some real life programs): > > #!/bin/bash > > set -e > set -x > > cat > tiny.hs < main :: IO () > main = putStrLn "Hello from a tiny Docker image" > EOF > > ghc tiny.hs > strip tiny > > cat > Dockerfile < FROM snoyberg/haskell-scratch > ADD tiny /tiny > CMD ["/tiny"] > EOF > > docker build -t tiny . > docker run --rm tiny > > > On Tue, Apr 14, 2015 at 9:52 AM Michael Snoyman wrote: >> >> Actually, I seem to have found the problem: >> >> open("/usr/lib/x86_64-linux-gnu/gconv/gconv-modules.cache", O_RDONLY) = -1 >> ENOENT (No such file or directory) >> open("/usr/lib/x86_64-linux-gnu/gconv/gconv-modules", O_RDONLY|O_CLOEXEC) >> = -1 ENOENT (No such file or directory) >> >> I found that I needed to copy over the following files to make my program >> complete: >> >> /usr/lib/x86_64-linux-gnu/gconv/gconv-modules >> /usr/lib/x86_64-linux-gnu/gconv/UTF-32.so >> >> Once I did that, I could get the executable to run in the chroot. However, >> even running the statically linked executable still required most of the >> shared libraries to be present inside the chroot. So it seems that: >> >> * We can come up with a list of a few files that need to be present inside >> a Docker image to provide for minimal GHC-compiled executables >> * There's a bug in the RTS that results in an infinite loop >> >> I'm going to try to put together a semi-robust solution for the first >> problem, and I'll report the RTS issue on Trac. >> >> On Tue, Apr 14, 2015 at 9:25 AM Michael Snoyman >> wrote: >>> >>> I have a bit more information about this. In particular: I'm able to >>> reproduce this using chroot (no Docker required), and it's reproducing with >>> a dynamically linked executable too. Steps I used to reproduce: >>> >>> 1. Write a minimal "foo.hs" containing `main = putStrLn "Hello World"` >>> 2. Compile that executable and put it in an empty directory >>> 3. Run `ldd` on it and copy all necessary libraries inside that directory >>> 4. Run `sudo strace -o log.txt . /foo` >>> >>> I've uploaded the logs to: >>> >>> https://gist.github.com/snoyberg/095efb17e36acc1d6360 >>> >>> Note that, due to size of the output, I killed the process just a few >>> seconds after starting it, but when I let the output run much longer, I >>> didn't see any difference in the results. I'll continue poking at this a >>> bit, but most likely I'll open a GHC Trac ticket about it later today. >>> >>> On Tue, Apr 14, 2015 at 12:39 AM Albert Y. C. Lai wrote: >>>> >>>> I wonder whether you already know the following, and whether it is >>>> relevant to begin with. (Plus, my knowledge is fairly sketchy.) >>>> >>>> Even though you statically link glibc, its code will, at run time, >>>> dlopen a certain part of glibc. 
>>>> >>>> Why: To provide a really uniform abstraction layer over user account >>>> queries, e.g., man 3 getpwnam, regardless of whether the accounts are >>>> from /etc/passwd, LDAP, or whatever. >>>> >>>> Therefore, during run time, glibc first reads some config files of the >>>> host to see what kind of user account database the host uses. If it's >>>> /etc/passwd, then dlopen the implementation of getpwnam and friends for >>>> /etc/passwd; else, if it's LDAP, then dlopen the implementation of >>>> getpwnam and friends for LDAP; etc etc. >>>> >>>> So that later when you call getpwnam, it will happen to "do the right >>>> thing". >>>> >>>> This demands the required *.so files to be accessible during run time. >>>> Moreoever, if you statically link glibc, this also demands the required >>>> *.so files to version-match the glibc you statically link. >>>> >>>> (It is the main reason why most people give up on statically linking >>>> glibc.) >>>> _______________________________________________ >>>> Haskell-Cafe mailing list >>>> Haskell-Cafe at haskell.org >>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > From james at mansionfamily.plus.com Sat Apr 18 11:06:35 2015 From: james at mansionfamily.plus.com (james) Date: Sat, 18 Apr 2015 12:06:35 +0100 Subject: [Haskell-cafe] Scala at position 25 of the Tiobe index, Haskell dropped below 50 In-Reply-To: References: Message-ID: <55323ABB.8010701@mansionfamily.plus.com> > Probably this has to do with the popularity of Spark or Akka, or that there is a choice of IDEs. We have seen candidates with negative Scala experiences, but the trend seems to be to go to Java 8 rather than a different functional language. The negatives seem to relate to speed of compilation (and to some extent execution) and difficulties with debuggers. And that the IDEs, while better than nothing, are not anything like as slick as they are with Java. The extent to which Haskell would be better on any of these things is hard to judge. Runtime and gc is a concern - at least with the JVM you can consider paying for Azul C4. From hawu.bnu at gmail.com Sat Apr 18 16:10:07 2015 From: hawu.bnu at gmail.com (Jean Lopes) Date: Sat, 18 Apr 2015 09:10:07 -0700 (PDT) Subject: [Haskell-cafe] cabal install glade In-Reply-To: <55309B40.5010601@gmail.com> References: <9992f880-81e3-4041-8307-3b5857687c97@googlegroups.com> <552F65B3.6010004@gmail.com> <55309B40.5010601@gmail.com> Message-ID: Ok, I will report the commands I am using: $ cd glade (Matthew Pickering's glade repository clone) $ cabal sandbox init $ cabal update $ cabal install --only-dependencies --dry-run > Resolving dependencies... > In order, the following would be installed (use -v for more details): > mtl-2.2.1 > utf8-string-0.3.8 (latest: 1) > cairo-0.12.5.3 (latest: 0.13.1.0) > glib-0.12.5.4 (latest: 0.13.1.0) > gio-0.12.5.3 (latest: 0.13.1.0) > pango-0.12.5.3 (latest: 0.13.1.0) > gtk-0.12.5.7 (latest: 0.13.6) $ cabal install --only-dependencies > ... here comes the first error ... > [1 of 2] Compiling SetupWrapper ( /tmp/cairo-0.12.5.3-1305/cairo-0.12.5.3/SetupWrapper.hs, /tmp/cairo-0.12.5.3-1305/cairo-0.12.5.3/dist/dist-sandbox-de3654e1/setup/SetupWrapper.o ) > > /tmp/cairo-0.12.5.3-1305/cairo-0.12.5.3/SetupWrapper.hs:91:17: > Ambiguous occurrence ?die? 
> It could refer to either ?Distribution.Simple.Utils.die?, > imported from ?Distribution.Simple.Utils? at /tmp/cairo-0.12.5.3-1305/cairo-0.12.5.3/SetupWrapper.hs:8:1-32 > or ?System.Exit.die?, > imported from ?System.Exit? at /tmp/cairo-0.12.5.3-1305/cairo-0.12.5.3/SetupWrapper.hs:21:1-18 > Failed to install cairo-0.12.5.3 > [1 of 2] Compiling SetupWrapper ( /tmp/glib-0.12.5.4-1305/glib-0.12.5.4/SetupWrapper.hs, /tmp/glib-0.12.5.4-1305/glib-0.12.5.4/dist/dist-sandbox-de3654e1/setup/SetupWrapper.o ) > > /tmp/glib-0.12.5.4-1305/glib-0.12.5.4/SetupWrapper.hs:91:17: > Ambiguous occurrence ?die? > It could refer to either ?Distribution.Simple.Utils.die?, > imported from ?Distribution.Simple.Utils? at /tmp/glib-0.12.5.4-1305/glib-0.12.5.4/SetupWrapper.hs:8:1-32 > or ?System.Exit.die?, > imported from ?System.Exit? at /tmp/glib-0.12.5.4-1305/glib-0.12.5.4/SetupWrapper.hs:21:1-18 > Failed to install glib-0.12.5.4 > cabal: Error: some packages failed to install: > cairo-0.12.5.3 failed during the configure step. The exception was: > ExitFailure 1 > gio-0.12.5.3 depends on glib-0.12.5.4 which failed to install. > glib-0.12.5.4 failed during the configure step. The exception was: > ExitFailure 1 > gtk-0.12.5.7 depends on glib-0.12.5.4 which failed to install. > pango-0.12.5.3 depends on glib-0.12.5.4 which failed to install. So I supose the problem lies within the cairo package, right? Em sexta-feira, 17 de abril de 2015 02:34:00 UTC-3, Zilin Chen escreveu: > > Do you still get the same errors? I think the "Sandboxes: basic usage" > section in [0] is what you'd follow. > > [0] > https://www.haskell.org/cabal/users-guide/installing-packages.html#sandboxes-advanced-usage > > On 17/04/15 12:11, Jean Lopes wrote: > > Still no success...I am missing some very basic things probably.. > > Em quinta-feira, 16 de abril de 2015 04:33:20 UTC-3, Zilin Chen escreveu: >> >> Hi Jean, >> >> Simply do `$ cabal sandbox add-source ' and then >> `$ cabal install --only-dependencies' as normal. I think it should work. >> >> Cheers, >> Zilin >> >> >> On 15/04/15 22:01, Jean Lopes wrote: >> >> I will try to use your branch before going back to GHC 7.8... >> >> But, how exactly should I do that ? >> Clone your branch; >> Build from local source code with cabal ? (I just scrolled this part >> while reading cabal tutorials, guess I'll have to take a look now) >> What about dependencies ? I should use $ cabal install glade >> --only-dependencies and than install glade from your branch ? >> >> Em quarta-feira, 15 de abril de 2015 05:48:42 UTC-3, Matthew Pickering >> escreveu: >>> >>> Hi Jean, >>> >>> You can try cloning my branch until a push gets accepted upstream. >>> >>> https://github.com/mpickering/glade >>> >>> The fixes to get it working with 7.10 were fairly minimal. >>> >>> Matt >>> >>> On Wed, Apr 15, 2015 at 4:33 AM, Jean Lopes wrote: >>> > Hello, I am trying to install the Glade package from hackage, and I >>> > keep getting exit failure... >>> > >>> > Hope someone can help me solve it! 
>>> > >>> > What I did: >>> > $ mkdir ~/haskell/project >>> > $ cd ~/haskell/project >>> > $ cabal sandbox init >>> > $ cabal update >>> > $ cabal install alex >>> > $ cabal install happy >>> > $ cabal install gtk2hs-buildtools >>> > $ cabal install gtk #successful until here >>> > $ cabal install glade >>> > >>> > The last statement gave me the following error: >>> > >>> > $ [1 of 2] Compiling SetupWrapper ( >>> > /tmp/cairo-0.12.5.3-5133/cairo-0.12.5.3/SetupWrapper.hs, >>> > >>> /tmp/cairo-0.12.5.3-5133/cairo-0.12.5.3/dist/dist-sandbox-acbd4b7/setup/SetupWrapper.o >>> > ) >>> > $ >>> > $ /tmp/cairo-0.12.5.3-5133/cairo-0.12.5.3/SetupWrapper.hs:91:17: >>> > $ Ambiguous occurrence ?die? >>> > $ It could refer to either ?Distribution.Simple.Utils.die?, >>> > $ imported from >>> > ?Distribution.Simple.Utils? at >>> > /tmp/cairo-0.12.5.3-5133/cairo-0.12.5.3/SetupWrapper.hs:8:1-32 >>> > $ or ?System.Exit.die?, >>> > $ imported from ?System.Exit? at >>> > /tmp/cairo-0.12.5.3-5133/cairo-0.12.5.3/SetupWrapper.hs:21:1-18 >>> > $ Failed to install cairo-0.12.5.3 >>> > $ [1 of 2] Compiling SetupWrapper ( >>> > /tmp/glib-0.12.5.4-5133/glib-0.12.5.4/SetupWrapper.hs, >>> > >>> /tmp/glib-0.12.5.4-5133/glib-0.12.5.4/dist/dist-sandbox-acbd4b7/setup/SetupWrapper.o >>> > ) >>> > $ >>> > $ /tmp/glib-0.12.5.4-5133/glib-0.12.5.4/SetupWrapper.hs:91:17: >>> > $ Ambiguous occurrence ?die? >>> > $ It could refer to either ?Distribution.Simple.Utils.die?, >>> > $ imported from >>> > ?Distribution.Simple.Utils? at >>> > /tmp/glib-0.12.5.4-5133/glib-0.12.5.4/SetupWrapper.hs:8:1-32 >>> > $ or ?System.Exit.die?, >>> > $ imported from ?System.Exit? at >>> > /tmp/glib-0.12.5.4-5133/glib-0.12.5.4/SetupWrapper.hs:21:1-18 >>> > $ Failed to install glib-0.12.5.4 >>> > $ cabal: Error: some packages failed to install: >>> > $ cairo-0.12.5.3 failed during the configure step. The exception was: >>> > $ ExitFailure 1 >>> > $ gio-0.12.5.3 depends on glib-0.12.5.4 which failed to install. >>> > $ glade-0.12.5.0 depends on glib-0.12.5.4 which failed to install. >>> > $ glib-0.12.5.4 failed during the configure step. The exception was: >>> > $ ExitFailure 1 >>> > $ gtk-0.12.5.7 depends on glib-0.12.5.4 which failed to install. >>> > $ pango-0.12.5.3 depends on glib-0.12.5.4 which failed to install. >>> > >>> > Important: You can assume I don't know much. I'm rather new to >>> Haskell/cabal >>> > _______________________________________________ >>> > Haskell-Cafe mailing list >>> > Haskel... at haskell.org >>> > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >>> _______________________________________________ >>> Haskell-Cafe mailing list >>> Haskel... at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >>> >> >> >> _______________________________________________ >> Haskell-Cafe mailing listHaskel... at haskell.orghttp://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> >> >> > > _______________________________________________ > Haskell-Cafe mailing listHaskel... at haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From michael at snoyman.com Sat Apr 18 18:11:49 2015 From: michael at snoyman.com (Michael Snoyman) Date: Sat, 18 Apr 2015 18:11:49 +0000 Subject: [Haskell-cafe] Improvements to package hosting and security In-Reply-To: References: <1429176843.25663.31.camel@dunky.localdomain> <1429179169.25663.59.camel@dunky.localdomain> <1429181874.25663.80.camel@dunky.localdomain> <1429185521.25663.103.camel@dunky.localdomain> <87bninfdkp.fsf@karetnikov.org> Message-ID: On Sat, Apr 18, 2015 at 12:20 AM Bardur Arantsson wrote: > On 17-04-2015 10:17, Michael Snoyman wrote: > > This is a great idea, thank you both for raising it. I was discussing > > something similar with others in a text chat earlier this morning. I've > > gone ahead and put together a page to cover this discussion: > > > > > https://github.com/commercialhaskell/commercialhaskell/blob/master/proposal/improved-hackage-security.md > > > > The document definitely needs more work, this is just meant to get the > ball > > rolling. As usual with the commercialhaskell repo, if anyone wants edit > > access, just request it on the issue tracker. Or most likely, send a PR > and > > you'll get a commit bit almost magically ;) > > Thank you. Just to make sure that I understand -- is this page only > meant to cover the original "strawman proposal" at the start of this > thread, or...? > > Maybe you intend for this to be extended in a detailed way under the > "Long-term solutions" heading? > > I was imagining a wiki page which could perhaps start out by collecting > all the currently identified possible threats in a table, and then all > "participants" could perhaps fill in how their suggestion addresses > those threats (or tell us why we shouldn't care about this particular > threat). Of course other relevent non-threat considerations might be > relevant to add to such a table, such as: how prevalent is the > software/idea we're basing this on? does this have any prior > implementation (e.g. the append-to-tar and expect that web servers will > behave sanely thing)? etc. > > (I realize that I'm asking for a lot of work, but I think it's going to > be necessary, at least if there's going to be consensus and not just a > de-facto "winner".) > > > Hi Bardur, I don't think I have any different intention for this page than you've identified. In fact, I thought that I had clearly said exactly what you described when I said: > There are various ideas at play already. The bullets are not intended to be full representations of the proposals, but rather high level summaries. We should continue to expand this page with more details going forward. If this is unclear somehow, please tell me. But my intention absolutely is that many people can edit this page to add their ideas and we can flesh out a complete solution. Michael -------------- next part -------------- An HTML attachment was scrubbed... URL: From michael at snoyman.com Sat Apr 18 18:51:19 2015 From: michael at snoyman.com (Michael Snoyman) Date: Sat, 18 Apr 2015 18:51:19 +0000 Subject: [Haskell-cafe] Static executables in minimal Docker containers In-Reply-To: References: <86ca2603-37f2-4645-9cd2-f09703f2be67@googlegroups.com> <552C379B.8080601@vex.net> Message-ID: The /lib/ version of the file was necessary for statically linked executables I believe. It's arguably better to just remove it and tell people "don't do static executables" I suppose, given that they're clearly not *actually* static in reality ;) On Sat, Apr 18, 2015, 12:51 PM Vo Minh Thu wrote: > Michael, thanks for solving this ! 
> > I wonder how you came up with the need for /lib/ld-linux-x86-64.so.2. > I have needed only /lib64/ld-linux-x86-64.so.2 (which you have too), > so maybe you added it by mistake. > > B.t.w, if you don't want to use a Dockerfile, you can do this: > > > tar -C rootfs -c . | docker import - tiny > > where rootfs is your repository. > > 2015-04-14 10:43 GMT+02:00 Michael Snoyman : > > Trac ticket created: > https://ghc.haskell.org/trac/ghc/ticket/10298#ticket > > > > I've also put together a Docker image called snoyberg/haskell-scratch > > (source at https://github.com/snoyberg/haskell-scratch), which seems to > be > > working for me. Here's a minimal test I've put together which seems to be > > succeeding (note that I've also tried some real life programs): > > > > #!/bin/bash > > > > set -e > > set -x > > > > cat > tiny.hs < > main :: IO () > > main = putStrLn "Hello from a tiny Docker image" > > EOF > > > > ghc tiny.hs > > strip tiny > > > > cat > Dockerfile < > FROM snoyberg/haskell-scratch > > ADD tiny /tiny > > CMD ["/tiny"] > > EOF > > > > docker build -t tiny . > > docker run --rm tiny > > > > > > On Tue, Apr 14, 2015 at 9:52 AM Michael Snoyman > wrote: > >> > >> Actually, I seem to have found the problem: > >> > >> open("/usr/lib/x86_64-linux-gnu/gconv/gconv-modules.cache", O_RDONLY) = > -1 > >> ENOENT (No such file or directory) > >> open("/usr/lib/x86_64-linux-gnu/gconv/gconv-modules", > O_RDONLY|O_CLOEXEC) > >> = -1 ENOENT (No such file or directory) > >> > >> I found that I needed to copy over the following files to make my > program > >> complete: > >> > >> /usr/lib/x86_64-linux-gnu/gconv/gconv-modules > >> /usr/lib/x86_64-linux-gnu/gconv/UTF-32.so > >> > >> Once I did that, I could get the executable to run in the chroot. > However, > >> even running the statically linked executable still required most of the > >> shared libraries to be present inside the chroot. So it seems that: > >> > >> * We can come up with a list of a few files that need to be present > inside > >> a Docker image to provide for minimal GHC-compiled executables > >> * There's a bug in the RTS that results in an infinite loop > >> > >> I'm going to try to put together a semi-robust solution for the first > >> problem, and I'll report the RTS issue on Trac. > >> > >> On Tue, Apr 14, 2015 at 9:25 AM Michael Snoyman > >> wrote: > >>> > >>> I have a bit more information about this. In particular: I'm able to > >>> reproduce this using chroot (no Docker required), and it's reproducing > with > >>> a dynamically linked executable too. Steps I used to reproduce: > >>> > >>> 1. Write a minimal "foo.hs" containing `main = putStrLn "Hello World"` > >>> 2. Compile that executable and put it in an empty directory > >>> 3. Run `ldd` on it and copy all necessary libraries inside that > directory > >>> 4. Run `sudo strace -o log.txt . /foo` > >>> > >>> I've uploaded the logs to: > >>> > >>> https://gist.github.com/snoyberg/095efb17e36acc1d6360 > >>> > >>> Note that, due to size of the output, I killed the process just a few > >>> seconds after starting it, but when I let the output run much longer, I > >>> didn't see any difference in the results. I'll continue poking at this > a > >>> bit, but most likely I'll open a GHC Trac ticket about it later today. > >>> > >>> On Tue, Apr 14, 2015 at 12:39 AM Albert Y. C. Lai > wrote: > >>>> > >>>> I wonder whether you already know the following, and whether it is > >>>> relevant to begin with. (Plus, my knowledge is fairly sketchy.) 
> >>>> > >>>> Even though you statically link glibc, its code will, at run time, > >>>> dlopen a certain part of glibc. > >>>> > >>>> Why: To provide a really uniform abstraction layer over user account > >>>> queries, e.g., man 3 getpwnam, regardless of whether the accounts are > >>>> from /etc/passwd, LDAP, or whatever. > >>>> > >>>> Therefore, during run time, glibc first reads some config files of the > >>>> host to see what kind of user account database the host uses. If it's > >>>> /etc/passwd, then dlopen the implementation of getpwnam and friends > for > >>>> /etc/passwd; else, if it's LDAP, then dlopen the implementation of > >>>> getpwnam and friends for LDAP; etc etc. > >>>> > >>>> So that later when you call getpwnam, it will happen to "do the right > >>>> thing". > >>>> > >>>> This demands the required *.so files to be accessible during run time. > >>>> Moreoever, if you statically link glibc, this also demands the > required > >>>> *.so files to version-match the glibc you statically link. > >>>> > >>>> (It is the main reason why most people give up on statically linking > >>>> glibc.) > >>>> _______________________________________________ > >>>> Haskell-Cafe mailing list > >>>> Haskell-Cafe at haskell.org > >>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > > > > > _______________________________________________ > > Haskell-Cafe mailing list > > Haskell-Cafe at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tmorris at tmorris.net Sun Apr 19 08:55:23 2015 From: tmorris at tmorris.net (Tony Morris) Date: Sun, 19 Apr 2015 18:55:23 +1000 Subject: [Haskell-cafe] Scala at position 25 of the Tiobe index, Haskell dropped below 50 In-Reply-To: References: Message-ID: Scala cracks me up, since 2007. On Fri, Apr 17, 2015 at 4:25 PM, Benjamin Edwards wrote: > I have to concede the industry buzz is strong with Scala (currently Scala > is my job). The JVM does have a role to play, but mostly in the access it > gives you to the Java ecosystem. It's a shame because I'd much rather be > using haskell but when you have 10 employees and your business isn't making > the platform you can't afford to make the platform. We have nothing like > spark or finagle ready to go as far as I am aware. Something finagle like > (metrics + pluggable load balancing + platform agnostic + autoscaling (via > zk / mdns)) would be huge. I am trying to find the time to hack on these > sorts of projects and failing. > > Ben > > On Thu, 16 Apr 2015 11:31 pm Gautier DI FOLCO > wrote: > >> 2015-04-16 21:41 GMT+00:00 Henk-Jan van Tuyl : >> >>> >>> L.S., >>> >>> From the Tiobe index page[0] of this month: >>> Another interesting move this month concerns Scala. The functional >>> programming language jumps to position 25 after having been between >>> position 30 and 50 for many years. Scala seems to be ready to enter the top >>> 20 for the first time in history. >>> >>> Haskell dropped from the top 50 last month and hasn't come back. I >>> suppose, if Haskell compiled to JVM, Haskell would have a much wider >>> audience. >>> >>> Regards, >>> Henk-Jan van Tuyl >>> >>> >>> [0] http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html >>> >>> >>> -- >>> Folding at home >>> What if you could share your unused computer power to help find a cure? >>> In just 5 minutes you can join the world's biggest networked computer and >>> get us closer sooner. Watch the video. 
>>> http://folding.stanford.edu/ >>> >>> >>> http://Van.Tuyl.eu/ >>> http://members.chello.nl/hjgtuyl/tourdemonad.html >>> Haskell programming >>> -- >>> _______________________________________________ >>> Haskell-Cafe mailing list >>> Haskell-Cafe at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >>> >> >> First of all do you know how this ranking is build? It's an agglomeration >> of nonsense values which aim to give the language trend. >> Maybe Scala has benefited of the JVM or maybe it has profited of the >> auto-assigned "functional" because some communities, like the Haskell >> community, have heavily worked for that during decades. >> The only side-effect of a growth in this ranking will attract some >> addicted to Resum? Driven-Development. >> Popular or not, you can do anything you want with Haskell and no ranking >> will change that. >> _______________________________________________ >> Haskell-Cafe mailing list >> Haskell-Cafe at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sven.bartscher at weltraumschlangen.de Sun Apr 19 14:15:15 2015 From: sven.bartscher at weltraumschlangen.de (Sven Bartscher) Date: Sun, 19 Apr 2015 16:15:15 +0200 Subject: [Haskell-cafe] compiling GHC 7.8 on raspberry pi Message-ID: <20150419161515.57bdd83b@sven.bartscher> Greetings, I'm trying to get a haskell program to run on a raspberry pi (running raspbian). Unfortunately it requires template haskell. Since the GHC included in raspbian wheezy doesn't support TH I'm trying to compile GHC 7.8.4 on the rpi. Most of the compilation worked fine. I got problems with the memory consumption, but adding a lot of swapspace solved this problem. During the final phase the compilation process complains about a "strange closure type 49200" (the exact number is varying, but most often it's 49200). Does anyone here have experience, how to compile GHC 7.8 on a raspberry pi? As a side note: The compilation is running in QEMU while the compiled program should run on a real rpi. Regards Sven -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: Digitale Signatur von OpenPGP URL: From nomeata at debian.org Sun Apr 19 15:42:38 2015 From: nomeata at debian.org (Joachim Breitner) Date: Sun, 19 Apr 2015 17:42:38 +0200 Subject: [Haskell-cafe] compiling GHC 7.8 on raspberry pi In-Reply-To: <20150419161515.57bdd83b@sven.bartscher> References: <20150419161515.57bdd83b@sven.bartscher> Message-ID: <1429458158.7729.7.camel@debian.org> Hi, Am Sonntag, den 19.04.2015, 16:15 +0200 schrieb Sven Bartscher: > I'm trying to get a haskell program to run on a raspberry pi (running > raspbian). Unfortunately it requires template haskell. > Since the GHC included in raspbian wheezy doesn't support TH I'm trying > to compile GHC 7.8.4 on the rpi. > Most of the compilation worked fine. I got problems with the memory > consumption, but adding a lot of swapspace solved this problem. > During the final phase the compilation process complains about a > "strange closure type 49200" (the exact number is varying, but most > often it's 49200). > Does anyone here have experience, how to compile GHC 7.8 on a raspberry > pi? 
> > As a side note: The compilation is running in QEMU while the compiled > program should run on a real rpi. you might be interested in the patches that Debian applies to GHC, in particular the ARM-related one, even more in particular the one that enforces the use of gold as the linker: https://sources.debian.net/src/ghc/7.8.20141223-1/debian/patches/ Gru?, Joachim -- Joachim "nomeata" Breitner Debian Developer nomeata at debian.org | ICQ# 74513189 | GPG-Keyid: F0FBF51F JID: nomeata at joachim-breitner.de | http://people.debian.org/~nomeata -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: This is a digitally signed message part URL: From marlowsd at gmail.com Sun Apr 19 19:54:23 2015 From: marlowsd at gmail.com (Simon Marlow) Date: Sun, 19 Apr 2015 12:54:23 -0700 Subject: [Haskell-cafe] Static executables in minimal Docker containers In-Reply-To: References: <86ca2603-37f2-4645-9cd2-f09703f2be67@googlegroups.com> Message-ID: <553407EF.1030303@gmail.com> Hi Michael, This rang a bell for me. It might be the same as these: https://ghc.haskell.org/trac/ghc/ticket/7695 https://ghc.haskell.org/trac/ghc/ticket/8928 I think the conclusion was that the IO library is failing to start iconv, and printing the error messages causes it to retry loading iconv, ad infinitum (or something like that). There's no fix yet, but it probably isn't hard to fix, just that nobody got around to it yet. Cheers, Simon On 13/04/2015 11:50, Michael Snoyman wrote: > I'm not sure if this issue would show up, but I can try it in Fedora > tomorrow. I didn't address the linker warning at all right now, it seems > to not have been triggered, though I suppose it is possible that it's > the cause of this issue. > > On Mon, Apr 13, 2015 at 7:10 PM Greg Weber > wrote: > > Haskell is not that great at producing statically linked libraries > independent of the OS. > The issue you are running into would likely show up in another > non-ubuntu image (or even possibly a different version of an ubuntu > image), so you could probably use a Fedora image that has tracing. > > How are you addressing the linker warning about needing a particular > glibc version at runtime? > > On Mon, Apr 13, 2015 at 3:28 AM, Sharif Olorin > > wrote: > > Unfortunately, strace and ltrace aren't available in that > Docker image, but it's a good idea to see if I can get them > running there somehow. > > > Failing that, you might be able to get useful information of the > same kind by running docker (the server, not the `docker run` > command) under perf[0] and then running your busybox container. > It should at least give you an idea of what it's doing when it > explodes. > > Sharif > > [0]: https://perf.wiki.kernel.org/index.php/Tutorial > > -- > You received this message because you are subscribed to the > Google Groups "Commercial Haskell" group. > To unsubscribe from this group and stop receiving emails from > it, send an email to > commercialhaskell+unsubscribe at googlegroups.com > . > To post to this group, send email to > commercialhaskell at googlegroups.com > . > To view this discussion on the web visit > https://groups.google.com/d/msgid/commercialhaskell/86ca2603-37f2-4645-9cd2-f09703f2be67%40googlegroups.com > . > > For more options, visit https://groups.google.com/d/optout. > > > -- > You received this message because you are subscribed to the Google > Groups "Commercial Haskell" group. 
> To unsubscribe from this group and stop receiving emails from it, send > an email to commercialhaskell+unsubscribe at googlegroups.com > . > To post to this group, send email to commercialhaskell at googlegroups.com > . > To view this discussion on the web visit > https://groups.google.com/d/msgid/commercialhaskell/CAKA2Jg%2B%3DzJiXmak2FU_5GWPO1Dcn%2BvwsiB_xWj%2B8GfHvMkoBjw%40mail.gmail.com > . > For more options, visit https://groups.google.com/d/optout. From rasen.dubi at gmail.com Sun Apr 19 20:18:45 2015 From: rasen.dubi at gmail.com (Alexey Shmalko) Date: Sun, 19 Apr 2015 23:18:45 +0300 Subject: [Haskell-cafe] ANN: hsreadability-1.0.0 Message-ID: I'm pleased to announce the first release of hsreadability: Haskell bindings to the Readability API [1]. The Readability provides three services for developers: Reader, Parser and Shortener: - Reader provides access to the user's info, bookmarks, articles and tags; - Parser is an interface to the powerful web content parser; - Shortener is a URL shortener (doesn't require authentication). With hsreadability you're able to access all the functionality of these services from Haskell. If you have any issues, report them on the GitHub page [2]. I'd also love to hear any other feedback so feel free to comment here or email me. Best regards, Alexey Shmalko [1] https://readability.com/developers/api [2] https://github.com/rasendubi/hsreadability From zilinc.dev at gmail.com Mon Apr 20 02:00:22 2015 From: zilinc.dev at gmail.com (Zilin Chen) Date: Mon, 20 Apr 2015 12:00:22 +1000 Subject: [Haskell-cafe] cabal install glade In-Reply-To: References: <9992f880-81e3-4041-8307-3b5857687c97@googlegroups.com> <552F65B3.6010004@gmail.com> <55309B40.5010601@gmail.com> Message-ID: <55345DB6.10802@gmail.com> Sorry for late reply. This issue has been fixed as per https://github.com/gtk2hs/gtk2hs/issues/100 which appears in a later version of cairo. It seems than glade requires gtk < 0.13 ==> cairo < 0.13 and doesn't include the fix. On 19/04/15 02:10, Jean Lopes wrote: > Ok, I will report the commands I am using: > $ cd glade (Matthew Pickering's glade repository clone) > $ cabal sandbox init > $ cabal update > $ cabal install --only-dependencies --dry-run > > Resolving dependencies... > > In order, the following would be installed (use -v for more details): > > mtl-2.2.1 > > utf8-string-0.3.8 (latest: 1) > > cairo-0.12.5.3 (latest: 0.13.1.0) > > glib-0.12.5.4 (latest: 0.13.1.0) > > gio-0.12.5.3 (latest: 0.13.1.0) > > pango-0.12.5.3 (latest: 0.13.1.0) > > gtk-0.12.5.7 (latest: 0.13.6) > $ cabal install --only-dependencies > > ... here comes the first error ... > > [1 of 2] Compiling SetupWrapper ( > /tmp/cairo-0.12.5.3-1305/cairo-0.12.5.3/SetupWrapper.hs, > /tmp/cairo-0.12.5.3-1305/cairo-0.12.5.3/dist/dist-sandbox-de3654e1/setup/SetupWrapper.o > ) > > > > /tmp/cairo-0.12.5.3-1305/cairo-0.12.5.3/SetupWrapper.hs:91:17: > > Ambiguous occurrence ?die? > > It could refer to either ?Distribution.Simple.Utils.die?, > > imported from > ?Distribution.Simple.Utils? at > /tmp/cairo-0.12.5.3-1305/cairo-0.12.5.3/SetupWrapper.hs:8:1-32 > > or ?System.Exit.die?, > > imported from ?System.Exit? 
at > /tmp/cairo-0.12.5.3-1305/cairo-0.12.5.3/SetupWrapper.hs:21:1-18 > > Failed to install cairo-0.12.5.3 > > [1 of 2] Compiling SetupWrapper ( > /tmp/glib-0.12.5.4-1305/glib-0.12.5.4/SetupWrapper.hs, > /tmp/glib-0.12.5.4-1305/glib-0.12.5.4/dist/dist-sandbox-de3654e1/setup/SetupWrapper.o > ) > > > > /tmp/glib-0.12.5.4-1305/glib-0.12.5.4/SetupWrapper.hs:91:17: > > Ambiguous occurrence ?die? > > It could refer to either ?Distribution.Simple.Utils.die?, > > imported from > ?Distribution.Simple.Utils? at > /tmp/glib-0.12.5.4-1305/glib-0.12.5.4/SetupWrapper.hs:8:1-32 > > or ?System.Exit.die?, > > imported from ?System.Exit? at > /tmp/glib-0.12.5.4-1305/glib-0.12.5.4/SetupWrapper.hs:21:1-18 > > Failed to install glib-0.12.5.4 > > cabal: Error: some packages failed to install: > > cairo-0.12.5.3 failed during the configure step. The exception was: > > ExitFailure 1 > > gio-0.12.5.3 depends on glib-0.12.5.4 which failed to install. > > glib-0.12.5.4 failed during the configure step. The exception was: > > ExitFailure 1 > > gtk-0.12.5.7 depends on glib-0.12.5.4 which failed to install. > > pango-0.12.5.3 depends on glib-0.12.5.4 which failed to install. > > So I supose the problem lies within the cairo package, right? > > Em sexta-feira, 17 de abril de 2015 02:34:00 UTC-3, Zilin Chen escreveu: > > Do you still get the same errors? I think the "Sandboxes: basic > usage" section in [0] is what you'd follow. > > [0] > https://www.haskell.org/cabal/users-guide/installing-packages.html#sandboxes-advanced-usage > > > On 17/04/15 12:11, Jean Lopes wrote: >> Still no success...I am missing some very basic things probably.. >> >> Em quinta-feira, 16 de abril de 2015 04:33:20 UTC-3, Zilin Chen >> escreveu: >> >> Hi Jean, >> >> Simply do `$ cabal sandbox add-source > glade>' and then `$ cabal install --only-dependencies' as >> normal. I think it should work. >> >> Cheers, >> Zilin >> >> >> On 15/04/15 22:01, Jean Lopes wrote: >>> I will try to use your branch before going back to GHC 7.8... >>> >>> But, how exactly should I do that ? >>> Clone your branch; >>> Build from local source code with cabal ? (I just scrolled >>> this part while reading cabal tutorials, guess I'll have to >>> take a look now) >>> What about dependencies ? I should use $ cabal install glade >>> --only-dependencies and than install glade from your branch ? >>> >>> Em quarta-feira, 15 de abril de 2015 05:48:42 UTC-3, Matthew >>> Pickering escreveu: >>> >>> Hi Jean, >>> >>> You can try cloning my branch until a push gets accepted >>> upstream. >>> >>> https://github.com/mpickering/glade >>> >>> >>> The fixes to get it working with 7.10 were fairly minimal. >>> >>> Matt >>> >>> On Wed, Apr 15, 2015 at 4:33 AM, Jean Lopes >>> wrote: >>> > Hello, I am trying to install the Glade package from >>> hackage, and I >>> > keep getting exit failure... >>> > >>> > Hope someone can help me solve it! 
>>> > >>> > What I did: >>> > $ mkdir ~/haskell/project >>> > $ cd ~/haskell/project >>> > $ cabal sandbox init >>> > $ cabal update >>> > $ cabal install alex >>> > $ cabal install happy >>> > $ cabal install gtk2hs-buildtools >>> > $ cabal install gtk #successful until here >>> > $ cabal install glade >>> > >>> > The last statement gave me the following error: >>> > >>> > $ [1 of 2] Compiling SetupWrapper ( >>> > /tmp/cairo-0.12.5.3-5133/cairo-0.12.5.3/SetupWrapper.hs, >>> > >>> /tmp/cairo-0.12.5.3-5133/cairo-0.12.5.3/dist/dist-sandbox-acbd4b7/setup/SetupWrapper.o >>> > ) >>> > $ >>> > $ >>> /tmp/cairo-0.12.5.3-5133/cairo-0.12.5.3/SetupWrapper.hs:91:17: >>> > $ Ambiguous occurrence ?die? >>> > $ It could refer to either >>> ?Distribution.Simple.Utils.die?, >>> > $ imported from >>> > ?Distribution.Simple.Utils? at >>> > >>> /tmp/cairo-0.12.5.3-5133/cairo-0.12.5.3/SetupWrapper.hs:8:1-32 >>> > $ or ?System.Exit.die?, >>> > $ imported from ?System.Exit? at >>> > >>> /tmp/cairo-0.12.5.3-5133/cairo-0.12.5.3/SetupWrapper.hs:21:1-18 >>> > $ Failed to install cairo-0.12.5.3 >>> > $ [1 of 2] Compiling SetupWrapper ( >>> > /tmp/glib-0.12.5.4-5133/glib-0.12.5.4/SetupWrapper.hs, >>> > >>> /tmp/glib-0.12.5.4-5133/glib-0.12.5.4/dist/dist-sandbox-acbd4b7/setup/SetupWrapper.o >>> > ) >>> > $ >>> > $ >>> /tmp/glib-0.12.5.4-5133/glib-0.12.5.4/SetupWrapper.hs:91:17: >>> > $ Ambiguous occurrence ?die? >>> > $ It could refer to either >>> ?Distribution.Simple.Utils.die?, >>> > $ imported from >>> > ?Distribution.Simple.Utils? at >>> > >>> /tmp/glib-0.12.5.4-5133/glib-0.12.5.4/SetupWrapper.hs:8:1-32 >>> > $ or ?System.Exit.die?, >>> > $ imported from ?System.Exit? at >>> > >>> /tmp/glib-0.12.5.4-5133/glib-0.12.5.4/SetupWrapper.hs:21:1-18 >>> > $ Failed to install glib-0.12.5.4 >>> > $ cabal: Error: some packages failed to install: >>> > $ cairo-0.12.5.3 failed during the configure step. The >>> exception was: >>> > $ ExitFailure 1 >>> > $ gio-0.12.5.3 depends on glib-0.12.5.4 which failed >>> to install. >>> > $ glade-0.12.5.0 depends on glib-0.12.5.4 which failed >>> to install. >>> > $ glib-0.12.5.4 failed during the configure step. The >>> exception was: >>> > $ ExitFailure 1 >>> > $ gtk-0.12.5.7 depends on glib-0.12.5.4 which failed >>> to install. >>> > $ pango-0.12.5.3 depends on glib-0.12.5.4 which failed >>> to install. >>> > >>> > Important: You can assume I don't know much. I'm >>> rather new to Haskell/cabal >>> > _______________________________________________ >>> > Haskell-Cafe mailing list >>> > Haskel... at haskell.org >>> > >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >>> >>> _______________________________________________ >>> Haskell-Cafe mailing list >>> Haskel... at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >>> >>> >>> >>> >>> _______________________________________________ >>> Haskell-Cafe mailing list >>> Haskel... at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> >> >> >> _______________________________________________ >> Haskell-Cafe mailing list >> Haskel... at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe -------------- next part -------------- An HTML attachment was scrubbed... 
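What the error means: base 4.8, which ships with GHC 7.10, added a `die` function to System.Exit, so a Setup-style module that also imports Distribution.Simple.Utils unqualified now sees two functions named `die`, and GHC refuses to guess. The usual fix is to hide or qualify one of the two imports. The following is a hypothetical sketch of that fix, not the actual gtk2hs patch:

```haskell
-- Hypothetical Setup-style module, not the real gtk2hs SetupWrapper.hs.
module Main (main) where

import Distribution.Simple.Utils (die)
import System.Exit hiding (die)   -- base 4.8 / GHC 7.10 added System.Exit.die
-- (Alternatively: import qualified System.Exit as Exit and write Exit.die.)

main :: IO ()
main = do
  let configured = False              -- stand-in for a real configure check
  if configured
    then exitSuccess                  -- still taken from System.Exit
    else die "configure step failed"  -- unambiguously Distribution.Simple.Utils.die
```

Since the failing SetupWrapper.hs ships inside the released cairo and glib tarballs, the practical options are the ones discussed in the thread: install a release that already contains the fix, or build from a patched source tree such as the branch linked above.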
URL: From michael at snoyman.com Mon Apr 20 04:34:49 2015 From: michael at snoyman.com (Michael Snoyman) Date: Mon, 20 Apr 2015 04:34:49 +0000 Subject: [Haskell-cafe] Static executables in minimal Docker containers In-Reply-To: <553407EF.1030303@gmail.com> References: <86ca2603-37f2-4645-9cd2-f09703f2be67@googlegroups.com> <553407EF.1030303@gmail.com> Message-ID: Thanks for the update Simon. On Sun, Apr 19, 2015 at 10:54 PM Simon Marlow wrote: > Hi Michael, > > This rang a bell for me. It might be the same as these: > https://ghc.haskell.org/trac/ghc/ticket/7695 > https://ghc.haskell.org/trac/ghc/ticket/8928 > > I think the conclusion was that the IO library is failing to start > iconv, and printing the error messages causes it to retry loading iconv, > ad infinitum (or something like that). There's no fix yet, but it > probably isn't hard to fix, just that nobody got around to it yet. > > Cheers, > Simon > > On 13/04/2015 11:50, Michael Snoyman wrote: > > I'm not sure if this issue would show up, but I can try it in Fedora > > tomorrow. I didn't address the linker warning at all right now, it seems > > to not have been triggered, though I suppose it is possible that it's > > the cause of this issue. > > > > On Mon, Apr 13, 2015 at 7:10 PM Greg Weber > > wrote: > > > > Haskell is not that great at producing statically linked libraries > > independent of the OS. > > The issue you are running into would likely show up in another > > non-ubuntu image (or even possibly a different version of an ubuntu > > image), so you could probably use a Fedora image that has tracing. > > > > How are you addressing the linker warning about needing a particular > > glibc version at runtime? > > > > On Mon, Apr 13, 2015 at 3:28 AM, Sharif Olorin > > > wrote: > > > > Unfortunately, strace and ltrace aren't available in that > > Docker image, but it's a good idea to see if I can get them > > running there somehow. > > > > > > Failing that, you might be able to get useful information of the > > same kind by running docker (the server, not the `docker run` > > command) under perf[0] and then running your busybox container. > > It should at least give you an idea of what it's doing when it > > explodes. > > > > Sharif > > > > [0]: https://perf.wiki.kernel.org/index.php/Tutorial > > > > -- > > You received this message because you are subscribed to the > > Google Groups "Commercial Haskell" group. > > To unsubscribe from this group and stop receiving emails from > > it, send an email to > > commercialhaskell+unsubscribe at googlegroups.com > > . > > To post to this group, send email to > > commercialhaskell at googlegroups.com > > . > > To view this discussion on the web visit > > > https://groups.google.com/d/msgid/commercialhaskell/86ca2603-37f2-4645-9cd2-f09703f2be67%40googlegroups.com > > < > https://groups.google.com/d/msgid/commercialhaskell/86ca2603-37f2-4645-9cd2-f09703f2be67%40googlegroups.com?utm_medium=email&utm_source=footer > >. > > > > For more options, visit https://groups.google.com/d/optout. > > > > > > -- > > You received this message because you are subscribed to the Google > > Groups "Commercial Haskell" group. > > To unsubscribe from this group and stop receiving emails from it, send > > an email to commercialhaskell+unsubscribe at googlegroups.com > > . > > To post to this group, send email to commercialhaskell at googlegroups.com > > . 
> > To view this discussion on the web visit > > > https://groups.google.com/d/msgid/commercialhaskell/CAKA2Jg%2B%3DzJiXmak2FU_5GWPO1Dcn%2BvwsiB_xWj%2B8GfHvMkoBjw%40mail.gmail.com > > < > https://groups.google.com/d/msgid/commercialhaskell/CAKA2Jg%2B%3DzJiXmak2FU_5GWPO1Dcn%2BvwsiB_xWj%2B8GfHvMkoBjw%40mail.gmail.com?utm_medium=email&utm_source=footer > >. > > For more options, visit https://groups.google.com/d/optout. > > -- > You received this message because you are subscribed to the Google Groups > "Commercial Haskell" group. > To unsubscribe from this group and stop receiving emails from it, send an > email to commercialhaskell+unsubscribe at googlegroups.com. > To post to this group, send email to commercialhaskell at googlegroups.com. > To view this discussion on the web visit > https://groups.google.com/d/msgid/commercialhaskell/553407EF.1030303%40gmail.com > . > For more options, visit https://groups.google.com/d/optout. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From J.Hage at uu.nl Mon Apr 20 08:36:46 2015 From: J.Hage at uu.nl (Jurriaan Hage) Date: Mon, 20 Apr 2015 10:36:46 +0200 Subject: [Haskell-cafe] [ANN]: the Helium compiler, version 1.8.1 Message-ID: <647A7C27-F466-4975-9A51-B866FD417342@uu.nl> Dear all, we have recently uploaded Helium 1.8.1, the novice friendly Haskell compiler, to Hackage. Improvements in this version - Helium can again work together with our Java-based programming environment Hint. The jar file for Hint itself can be downloaded from the Helium website at: http://foswiki.cs.uu.nl/foswiki/Helium which also has some more documentation on how to use Hint and helium. - the svn location if you are interested in the sources is now the correct one To install Helium simply type cabal install helium cabal install lvmrun Helium compiles with GHC 7.6.3 and 7.8.x, but does not yet compile with 7.10. Any questions and feedback are welcome at helium at cs.uu.nl. best regards, The Helium Team From agocorona at gmail.com Mon Apr 20 10:26:48 2015 From: agocorona at gmail.com (Alberto G. Corona ) Date: Mon, 20 Apr 2015 12:26:48 +0200 Subject: [Haskell-cafe] [ANN]: the Helium compiler, version 1.8.1 In-Reply-To: <647A7C27-F466-4975-9A51-B866FD417342@uu.nl> References: <647A7C27-F466-4975-9A51-B866FD417342@uu.nl> Message-ID: Great! How the type rules detailed in the "scripting the type inference engine" paper are implemented? it is possible to script the inference engine with such rules? If so, are there some examples? 2015-04-20 10:36 GMT+02:00 Jurriaan Hage : > Dear all, > > we have recently uploaded Helium 1.8.1, the novice friendly Haskell > compiler, to Hackage. > > Improvements in this version > - Helium can again work together with our Java-based programming > environment Hint. > The jar file for Hint itself can be downloaded from the Helium website > at: > http://foswiki.cs.uu.nl/foswiki/Helium > which also has some more documentation on how to use Hint and helium. > - the svn location if you are interested in the sources is now the > correct one > > > To install Helium simply type > > cabal install helium > cabal install lvmrun > > Helium compiles with GHC 7.6.3 and 7.8.x, but does not yet compile with > 7.10. > > Any questions and feedback are welcome at helium at cs.uu.nl. > > best regards, > The Helium Team > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > -- Alberto. 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From adam at bergmark.nl Mon Apr 20 13:03:08 2015 From: adam at bergmark.nl (Adam Bergmark) Date: Mon, 20 Apr 2015 15:03:08 +0200 Subject: [Haskell-cafe] Maintenance of feed package Message-ID: Hi caf?, Sigbjorn. I emailed Sigbjorn a week ago asking whether he was still maintaining the `feed' package. I haven't received a reply, and I haven't been able to find any sign of him online recently. Unless he resurfaces I'd like to take over as maintainer of the package to publish the changes required for GHC 7.10 compatibility and other dependencies that have fallen out of date. See: https://github.com/sof https://github.com/sof/feed/pulls https://github.com/haskell-infra/hackage-trustees/issues/12 Regards, Adam -------------- next part -------------- An HTML attachment was scrubbed... URL: From michael at snoyman.com Mon Apr 20 13:12:00 2015 From: michael at snoyman.com (Michael Snoyman) Date: Mon, 20 Apr 2015 13:12:00 +0000 Subject: [Haskell-cafe] Maintenance of feed package In-Reply-To: References: Message-ID: +1, dependency issues with feed are currently affecting at least one submission to Stackage, and other packages have had to change their dependencies to avoid it. On Mon, Apr 20, 2015 at 4:03 PM Adam Bergmark wrote: > Hi caf?, Sigbjorn. > > I emailed Sigbjorn a week ago asking whether he was still maintaining the > `feed' package. I haven't received a reply, and I haven't been able to find > any sign of him online recently. > > Unless he resurfaces I'd like to take over as maintainer of the package to > publish the changes required for GHC 7.10 compatibility and other > dependencies that have fallen out of date. > > See: > https://github.com/sof > https://github.com/sof/feed/pulls > https://github.com/haskell-infra/hackage-trustees/issues/12 > > Regards, > Adam > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean at functionaljobs.com Mon Apr 20 16:00:01 2015 From: sean at functionaljobs.com (Functional Jobs) Date: Mon, 20 Apr 2015 12:00:01 -0400 Subject: [Haskell-cafe] New Functional Programming Job Opportunities Message-ID: <55352282d85bf@functionaljobs.com> Here are some functional programming job opportunities that were posted recently: Mid/Senior Software Development Engineer at Lookingglass Cyber Solutions http://functionaljobs.com/jobs/8810-mid-senior-software-development-engineer-at-lookingglass-cyber-solutions Senior/Principal Software Development Engineer at Lookingglass Cyber Solutions http://functionaljobs.com/jobs/8809-senior-principal-software-development-engineer-at-lookingglass-cyber-solutions Cheers, Sean Murphy FunctionalJobs.com From ryan at galois.com Mon Apr 20 17:01:46 2015 From: ryan at galois.com (Ryan Wright) Date: Mon, 20 Apr 2015 10:01:46 -0700 Subject: [Haskell-cafe] Fall Internship Opportunities at Galois Message-ID: Galois has some exciting opportunities for interns in software engineering/research for the Fall of 2015. Most of our projects use Haskell to some extent, so this is a great chance to get experience applying functional programming to important real-world problems. Please pass along this posting to anyone you think might be interested--even if they don't yet know Haskell! Thank you, Ryan Wright ? 
# Galois Software Engineering/Research Intern # Galois is currently seeking software engineering and research interns for Fall 2015 at all educational levels. We are committed to matching interns with exciting and engaging engineering work that fits their particular interests, creating lasting value for interns, Galois, and our community. A Galois internship is a chance to tackle cutting-edge, meaningful problems in a uniquely collaborative environment with world-leading researchers. Roles may include technology research and development, requirements gathering, implementation, testing, formal verification, and infrastructure development. Past interns have integrated formal methods tools into larger projects, built comprehensive validation suites, synthesized high-performance cryptographic algorithms, written autopilots for quad-copters, designed the syntax and semantics of scripting languages, and researched type system extensions for academic publication. We deeply believe in providing comprehensive support and mentorship to all of our employees, particularly interns. We provide our employees with a steward who regularly checks in to ensure that they feel welcome and safe in the Galois community while gaining real value from their experiences. ## Important Dates ## Applications due: June 1st, 2015 Internship period (flexible): September through December, 2015 ## About Galois ## Our mission is to create trustworthiness in critical systems. We?re in the business of taking blue-sky ideas and turning them into real-world technology solutions. We?ve been developing real-world systems for over ten years using functional programming, language design, and formal methods. Galois values diversity. We believe that differing viewpoints and experiences are essential to the process of innovation. We look broadly, including outside of established communities, to deliver innovation. ## How to Prepare ## An internship is an opportunity for learning and growth as an engineer. To make the most of the opportunity, we ask that candidates have experience reading, writing, and maintaining code in a realistic project. Many university courses involve multi-week collaborative projects that provide this type of experience. Most of our projects use the Haskell programming language and the git version control system. These tools aren?t often taught in computer science classes, but there are many free resources available that we recommend for learning: - [Learn You a Haskell] for Great Good! [1] by Miran Lipova?a - [Real World Haskell] [2] by Bryan O?Sullivan, Don Stewart, and John Goerzen - [tryGit] [3] by Code School - [Pro Git] [4] by Scott Chacon [1]: http://learnyouahaskell.com/ [2]: http://book.realworldhaskell.org/ [3]: http://try.github.io [4]: http://git-scm.com/book ## Qualifications ## The ability to be geographically located in Portland during the internship Experience reading, writing, and maintaining code in a project as described above Proficiency in software development practices such as design, documentation, testing, and the use of version control Well-developed verbal and written communication skills; comfort in a collaborative team environment The following skills are not required, but may be relevant to a particular project. 
- Proficiency in Haskell or other programming languages with rich type systems (eg., Scala, OCaml, Standard ML) - Experience using C and assembly languages for low-level systems programming - Development experience in high assurance systems or security software - Specific experience in an area of Galois? expertise, such as: - Assured information sharing - Software modeling and formal verification - Cyber-physical systems and control systems - Operating systems, virtualization and secure platforms - Networking and mobile technology - Cyber defense systems - Scientific computing - Program analysis and software evaluation - Web security ## Logistics ## The length and start date of the internship are negotiable: starting any time from the new year through next spring is acceptable, but an intern must be at Galois for at least three continuous months. The internship is paid competitively, and interns are responsible for living arrangements (although we can certainly help you find arrangements). Galois is located in the heart of downtown Portland with multiple public transportation options available and world-class bicycle infrastructure. ## Application Details ## We?re looking for people who can invent, learn, think, and inspire. We reward creativity and thrive on collaboration. If you are interested, please send your cover letter and resume to us via http://galois-inc.hiringthing.com. From clintonmead at gmail.com Mon Apr 20 18:01:43 2015 From: clintonmead at gmail.com (Clinton Mead) Date: Tue, 21 Apr 2015 04:01:43 +1000 Subject: [Haskell-cafe] Attempt to emulate subclasses in Haskell, am I reinventing the wheel? Message-ID: I've linked to an ugly attempt to emulate subclasses in Haskell, and I'm wondering if anyone has done what I have perhaps in a cleaner way. Firstly, the code is here: http://ideone.com/znHfSG To explain what I've done, I first thought that a "method" basically takes some "input" (which I've called "i"), an object (which I've called "c" for class) and returns some output "o" and a potentially modified object "c". I've captured this behaviour in the badly named class "C". I've then made a class "User", with methods "getFirstName" and "putFirstName" and defined them appropriately. Furthermore, I've then made a data type "Age", and then "ExtendedUser" which combines "User" with "Age". At this point, I can still call "getFirstName" and "putFirstName" on "ExtendedUser", as would be hoped. I also defined "getAge", which naturally works on ExtendedUser. Furthermore, I can override "getFirstName" on "ExtendedUser", which I have done to instead return a capitalised version. Is what I've done of any practical use? And has someone done it better than me? -------------- next part -------------- An HTML attachment was scrubbed... URL: From spam at scientician.net Mon Apr 20 18:26:01 2015 From: spam at scientician.net (Bardur Arantsson) Date: Mon, 20 Apr 2015 20:26:01 +0200 Subject: [Haskell-cafe] Attempt to emulate subclasses in Haskell, am I reinventing the wheel? In-Reply-To: References: Message-ID: On 20-04-2015 20:01, Clinton Mead wrote: > I've linked to an ugly attempt to emulate subclasses in Haskell, and I'm > wondering if anyone has done what I have perhaps in a cleaner way. > Well, I would generally advise against trying to emulate a flawed concept in the first place. "OverlappingInstances" and "UndecidableInstances" are a red flag unless you know *exactly* what you are doing. 
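For readers who do not follow the ideone link (its contents are not reproduced in this archive), a rough, hypothetical sketch of the encoding being described is: a "method" is a function from an input and an object to an output plus a possibly modified object, and the "subclass" simply embeds the base type.

```haskell
import Data.Char (toUpper)

-- A "method" consumes an input and an object, and returns an output
-- together with a possibly modified object.
type Method i c o = i -> c -> (o, c)

data User = User { firstName :: String } deriving Show

getFirstName :: Method () User String
getFirstName () u = (firstName u, u)

putFirstName :: Method String User ()
putFirstName n u = ((), u { firstName = n })

-- "Subclassing" by embedding: an ExtendedUser is a User plus an age.
data ExtendedUser = ExtendedUser { baseUser :: User, age :: Int } deriving Show

-- Reuse a User method on an ExtendedUser by routing it through the
-- embedded User and writing the updated User back.
liftUser :: Method i User o -> Method i ExtendedUser o
liftUser m i e = let (o, u') = m i (baseUser e) in (o, e { baseUser = u' })

getAge :: Method () ExtendedUser Int
getAge () e = (age e, e)

-- An "override": the extended type reports an upper-cased first name.
getFirstNameExt :: Method () ExtendedUser String
getFirstNameExt () e = (map toUpper (firstName (baseUser e)), e)

-- fst (liftUser getFirstName () (ExtendedUser (User "alice") 30)) == "alice"
-- fst (getFirstNameExt      () (ExtendedUser (User "alice") 30)) == "ALICE"
```

Routing every base-type method through something like `liftUser` is exactly the boilerplate cost of the plain-aggregation route mentioned next.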
A much better way (IMO, of course) is to use either 1) normal aggregation (that's the simpl{e,istic} answer), at the cost of some boilerplate. If the boilerplate exceeds practicality, I would look at 2) extensible records/row types as in e.g. "vinyl" Regards, From alex.solla at gmail.com Mon Apr 20 18:35:53 2015 From: alex.solla at gmail.com (Alexander Solla) Date: Mon, 20 Apr 2015 11:35:53 -0700 Subject: [Haskell-cafe] Attempt to emulate subclasses in Haskell, am I reinventing the wheel? In-Reply-To: References: Message-ID: Don't do this. It isn't a "bad" idea, it's just that you're not using the language to its full potential, and will end up with a lot of annoying (and not quite trivial) boilerplate. Read 'Data types a la carte'.[1] There really ought to be a "standard" (even if unofficial) library to do open data types, but rolling your own is really easy. [1]: http://www.cs.ru.nl/~W.Swierstra/Publications/DataTypesALaCarte.pdf On Mon, Apr 20, 2015 at 11:01 AM, Clinton Mead wrote: > I've linked to an ugly attempt to emulate subclasses in Haskell, and I'm > wondering if anyone has done what I have perhaps in a cleaner way. > > Firstly, the code is here: http://ideone.com/znHfSG > > To explain what I've done, I first thought that a "method" basically takes > some "input" (which I've called "i"), an object (which I've called "c" for > class) and returns some output "o" and a potentially modified object "c". > I've captured this behaviour in the badly named class "C". > > I've then made a class "User", with methods "getFirstName" and > "putFirstName" and defined them appropriately. > > Furthermore, I've then made a data type "Age", and then "ExtendedUser" > which combines "User" with "Age". > > At this point, I can still call "getFirstName" and "putFirstName" on > "ExtendedUser", as would be hoped. > > I also defined "getAge", which naturally works on ExtendedUser. > > Furthermore, I can override "getFirstName" on "ExtendedUser", which I have > done to instead return a capitalised version. > > Is what I've done of any practical use? And has someone done it better > than me? > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.feuer at gmail.com Mon Apr 20 19:25:33 2015 From: david.feuer at gmail.com (David Feuer) Date: Mon, 20 Apr 2015 15:25:33 -0400 Subject: [Haskell-cafe] Attempt to emulate subclasses in Haskell, am I reinventing the wheel? In-Reply-To: References: Message-ID: On Apr 20, 2015 2:26 PM, "Bardur Arantsson" wrote: > Well, I would generally advise against trying to emulate a flawed > concept in the first place. "OverlappingInstances" and > "UndecidableInstances" are a red flag unless you know *exactly* what you > are doing. While I completely agree about OverlappingInstances, I completely disagree about UndecidableInstances. Most of the time, that's only needed because the termination check for FlexibleInstances and FlexibleContexts is too primitive. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gershomb at gmail.com Mon Apr 20 19:30:53 2015 From: gershomb at gmail.com (Gershom B) Date: Mon, 20 Apr 2015 15:30:53 -0400 Subject: [Haskell-cafe] Help wanted with Wiki.Haskell.Org Message-ID: One of the central repositories of knowledge in the Haskell world is the HaskellWiki (https://wiki.haskell.org). 
This wiki has been with the Haskell community for years, and contains a wealth of knowledge. Like other services on the haskell.org domain and with haskell.org equipment, ultimate responsibility for maintaining it falls on the Haskell Committee. However, it is a community wiki, and requires care and maintenance and contributions from all of us.? The wiki has been, and continues to be, a vital component in the Haskell world. Like any wiki, it is fueled by contributions from many people, and in this sense it is thriving.? However, it could use a certain amount of attention and work in three key areas. We are looking for volunteers to step up in these regards,? 1) Account Creation Management: Account creation is manual only because spambots otherwise destroy us. It would be worth investigating if a full upgrade and new plugins could help this issue. In the meantime, the responsibility for creating new accounts has fallen on only one person for years. This is not a good situation. We would like to set up a mail alias for wiki admins and extend account creation rights to a range of people. If you would be willing to be one of a team of responders to account creation requests, please write and let us know.? 2) Technical and design oversight: Now that we have a new haskell.org homepage, the current wiki frontpage could use a redesign. For that matter, the whole wiki could use a bit of a redesign to bring it into a more modern style. Along with that, it may be the case that additional plugins ? such as for typesetting code or equations better ? could be quite helpful. It would be good to have a mediawiki admin who wants to help improve the technical capacities of the site, as well as to overhaul its look. Again, if you are interested in taking charge of this, please let us know. 3) Content curation: ?One issue with a large collection of documents written by different people is the lack of curation. Some pages fall out of date, information is spread across multiple pages instead of collected together, and quality varies greatly. Without a central authority, no one is responsible (or empowered) to fix the situation. There is a balance between keeping things up-to-date and preserving the historic content of the wiki, and without people feeling empowered to make big changes, the tendency will always fall towards the latter. We're looking for people in the community to volunteer to help improve this status quo. The task, generally speaking, is to be responsible for curating and improving the content of the wiki, but that's clearly a vague description with lots of room for individual embellishment. This doesn't need to be a single person either: a team working in a coordinated fashion could be incredibly effective.? Once more, depending on response, we?d be happy to designate and empower people to make broader changes on the wiki or to organize a team to do so. If you are interested in this as well, please let us know. Gershom, for the Haskell.org Committee From gleber.p at gmail.com Mon Apr 20 21:12:00 2015 From: gleber.p at gmail.com (Gleb Peregud) Date: Mon, 20 Apr 2015 23:12:00 +0200 Subject: [Haskell-cafe] Is there a name for this algebraic structure? Message-ID: Hello I am wondering if there's a well known algebraic structure which follows the following patterns. 
Let's call it S:

It's update-able with some opaque "a" (which can be an element or an operation with an element):

update :: S -> a -> S

There's a well defined zero for it:

empty :: S

Operations on it are idempotent:

update s a == update (update s a) a

Every S can be reconstructed from a sequence of updates:

forall s. exists [a]. s == foldl update empty [a]

An example of this would be Data.Set:

empty = Set.empty
update = flip Set.insert

Is there something like this in algebra?

Cheers,
Gleb
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From lucas.dicioccio at gmail.com Mon Apr 20 21:57:06 2015
From: lucas.dicioccio at gmail.com (lucas di cioccio)
Date: Mon, 20 Apr 2015 22:57:06 +0100
Subject: [Haskell-cafe] Is there a name for this algebraic structure?
In-Reply-To:
References:
Message-ID:

Hi Gleb,

I think join/meet-semilattices captures idempotence (in a sense, convexity and monotonicity) pretty well, but they may require a slightly different/stronger structure than what you may need. Depending on the properties you really need, maybe a Poset is enough.

For example, the "meet" function maps pairs of elements of "S" to elements of "S". Whereas your "update" takes an "S" and an arbitrary "a", which is slightly different. However, if you have an existing "update" function for some given a, one possibility is to project any "a" to an "S" with "update empty :: a -> S". Then you can work using the semilattice.

Cheers,
--Lucas

2015-04-20 22:12 GMT+01:00 Gleb Peregud :

> Hello
>
> I am wondering if there's a well known algebraic structure which follows
> the following patterns. Let's call it S:
>
> It's update-able with some opaque "a" (which can be an element or an
> operation with an element):
>
> update :: S -> a -> S
>
> There's a well defined zero for it:
>
> empty :: S
>
> Operations on it are idempotent:
>
> update s a == update (update s a) a
>
> Every S can be reconstructed from a sequence of updates:
>
> forall s. exists [a]. s == foldl update empty [a]
>
>
> An example of this would be Data.Set:
>
> empty = Set.empty
> update = flip Set.insert
>
> Is there something like this in algebra?
>
> Cheers,
> Gleb
>
> _______________________________________________
> Haskell-Cafe mailing list
> Haskell-Cafe at haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ertesx at gmx.de Mon Apr 20 22:53:43 2015
From: ertesx at gmx.de (Ertugrul =?utf-8?Q?S=C3=B6ylemez?=)
Date: Tue, 21 Apr 2015 00:53:43 +0200
Subject: [Haskell-cafe] haskell.org
Message-ID:

Hi everybody,

I'd like to note that the prime "sieve" example that is sitting at the
top of the homepage is not a real sieve and will more likely make people
with number theory experience (like me) feel highly irritated rather
than fascinated. A real sieve does not only run a million times (!)
faster and consumes far less memory, but is also much longer, even in
Haskell. Here is a real one:

I don't want to make a mountain out of a molehill, but please note: If
I'd be new to Haskell, that example would have turned me off, because it
would have hurt my ability to take Haskell programmers seriously. You
can easily promote your tools when you claim that they can build a car
easily, except in reality it's just a toy bicycle.

It's the same feeling to cryptographers when people call a regular
stream cipher a "one-time pad" and promote it as such. It rings the
"this is snake oil!" alarm bell.
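The code that followed "Here is a real one:" above did not survive in the archive. Purely as an illustrative stand-in, and not the original attachment, a genuine sieve in this sense can be written over a mutable bit array: each prime only crosses off its multiples, and nothing is ever divided.

```haskell
import Control.Monad (forM_, when)
import Data.Array.ST (newArray, readArray, writeArray, runSTUArray)
import Data.Array.Unboxed (UArray, assocs)

-- Sieve of Eratosthenes: mark composites by crossing off multiples.
primesUpTo :: Int -> [Int]
primesUpTo n = [i | (i, True) <- assocs table]
  where
    table :: UArray Int Bool
    table = runSTUArray $ do
      isPrime <- newArray (2, n) True
      forM_ [2 .. isqrt n] $ \p -> do
        stillPrime <- readArray isPrime p
        when stillPrime $
          forM_ [p * p, p * p + p .. n] $ \multiple ->
            writeArray isPrime multiple False
      return isPrime
    isqrt = floor . sqrt . (fromIntegral :: Int -> Double)

-- primesUpTo 30 == [2,3,5,7,11,13,17,19,23,29]
```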
So I propose to either rename the 'sieve' function to something more appropriate (like `trialDiv`) or replace the example altogether. I would suggest an example that truly shows Haskell's strengths. Trial division search is really just a bad substitute for the more common and equally inappropriate list quicksort example. Greets, Ertugrul -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From anton.kholomiov at gmail.com Mon Apr 20 23:07:10 2015 From: anton.kholomiov at gmail.com (Anton Kholomiov) Date: Tue, 21 Apr 2015 02:07:10 +0300 Subject: [Haskell-cafe] [ANN] Haskell Music - Two tracks in the Jungle style Message-ID: Dear list. I'd like to share two new tracks made in haskell (with csound-expression lib). They are two etudes in the style of the Jungle. One track is just a joke. I hope that Simon doesn't mind me sampling him. The track is made in Haskell :) So it's a musical thank-you for the wonderful language! https://soundcloud.com/anton-kho/jungle-etude-1 https://soundcloud.com/anton-kho/jungle-etude-2-feat-simon-peyton-jones Anton -------------- next part -------------- An HTML attachment was scrubbed... URL: From amindfv at gmail.com Mon Apr 20 23:27:57 2015 From: amindfv at gmail.com (amindfv at gmail.com) Date: Mon, 20 Apr 2015 19:27:57 -0400 Subject: [Haskell-cafe] [ANN] Haskell Music - Two tracks in the Jungle style In-Reply-To: References: Message-ID: <57979660-87BA-4350-A9DD-560E39F609E7@gmail.com> Great! It's perfect that the chorus of an spj track is simon asking "does that make sense?" Care to share the code? Tom El Apr 20, 2015, a las 19:07, Anton Kholomiov escribi?: > Dear list. I'd like to share two new tracks made in haskell (with csound-expression lib). They are two etudes in the style of the Jungle. > > One track is just a joke. I hope that Simon doesn't mind me sampling him. The track is made in Haskell :) So it's a musical thank-you for the wonderful language! > > https://soundcloud.com/anton-kho/jungle-etude-1 > > https://soundcloud.com/anton-kho/jungle-etude-2-feat-simon-peyton-jones > > Anton > > > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe -------------- next part -------------- An HTML attachment was scrubbed... URL: From simon at joyful.com Mon Apr 20 23:35:49 2015 From: simon at joyful.com (Simon Michael) Date: Mon, 20 Apr 2015 16:35:49 -0700 Subject: [Haskell-cafe] ANN: ssh, darcsden vulnerability Message-ID: <53D5E248-8F42-4935-99BD-4622EF07A2DA@joyful.com> We recently learned of a serious undocumented vulnerability in the ssh package. This is a minimal ssh server implementation used by darcsden to support darcs push/pull. If you use the ssh package, or you have darcsden?s darcsden-ssh server running, you should upgrade to/rebuild with the imminent ssh-0.3 release right away. Or if you know of someone like that, please let them know. Also, if you're interested in cryptography/security, additional help and patches for the ssh and darcsden packages would be very welcome. I've blogged more details at http://joyful.com/blog/2015-04-20-ssh-darcs-hub-vulnerability.html (if you're a Darcs Hub user, hopefully you've already seen it). Best - Simon -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ivan.miljenovic at gmail.com Tue Apr 21 00:27:23 2015 From: ivan.miljenovic at gmail.com (Ivan Lazar Miljenovic) Date: Tue, 21 Apr 2015 10:27:23 +1000 Subject: [Haskell-cafe] haskell.org In-Reply-To: References: Message-ID: On 21 April 2015 at 08:53, Ertugrul S?ylemez wrote: > Hi everybody, > > I'd like to note that the prime "sieve" example that is sitting at the > top of the homepage is not a real sieve and will more likely make people > with number theory experience (like me) feel highly irritated rather > than fascinated. A real sieve does not only run a million times (!) > faster and consumes far less memory, but is also much longer, even in > Haskell. Here is a real one: > > > > I don't want to make a mountain out of a molehill, but please note: If > I'd be new to Haskell, that example would have turned me off, because it > would have hurt my ability to take Haskell programmers seriously. You > can easily promote your tools when you claim that they can build a car > easily, except in reality it's just a toy bicycle. > > It's the same feeling to cryptographers when people call a regular > stream cipher a "one-time pad" and promote it as such. It rings the > "this is snake oil!" alarm bell. > > So I propose to either rename the 'sieve' function to something more > appropriate (like `trialDiv`) or replace the example altogether. I > would suggest an example that truly shows Haskell's strengths. Trial > division search is really just a bad substitute for the more common and > equally inappropriate list quicksort example. My understanding is that it *is* a sieve, just not the Sieve of Eratosthenes (because it's a bit hard to fit that into that small little sample box up the top of the page :p). -- Ivan Lazar Miljenovic Ivan.Miljenovic at gmail.com http://IvanMiljenovic.wordpress.com From amindfv at gmail.com Tue Apr 21 00:22:32 2015 From: amindfv at gmail.com (amindfv at gmail.com) Date: Mon, 20 Apr 2015 20:22:32 -0400 Subject: [Haskell-cafe] haskell.org In-Reply-To: References: Message-ID: <289C7652-366D-4213-AF2B-9A860A16EB7C@gmail.com> While we're at it, the "foldr (:) [] [1,2,3]" example probably isn't going to cause anyone to give away their worldly possessions and dedicate their lives to haskell. Tom El Apr 20, 2015, a las 18:53, Ertugrul S?ylemez escribi?: > Hi everybody, > > I'd like to note that the prime "sieve" example that is sitting at the > top of the homepage is not a real sieve and will more likely make people > with number theory experience (like me) feel highly irritated rather > than fascinated. A real sieve does not only run a million times (!) > faster and consumes far less memory, but is also much longer, even in > Haskell. Here is a real one: > > > > I don't want to make a mountain out of a molehill, but please note: If > I'd be new to Haskell, that example would have turned me off, because it > would have hurt my ability to take Haskell programmers seriously. You > can easily promote your tools when you claim that they can build a car > easily, except in reality it's just a toy bicycle. > > It's the same feeling to cryptographers when people call a regular > stream cipher a "one-time pad" and promote it as such. It rings the > "this is snake oil!" alarm bell. > > So I propose to either rename the 'sieve' function to something more > appropriate (like `trialDiv`) or replace the example altogether. I > would suggest an example that truly shows Haskell's strengths. 
Trial > division search is really just a bad substitute for the more common and > equally inappropriate list quicksort example. > > > Greets, > Ertugrul > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe From tikhon at jelv.is Tue Apr 21 00:47:54 2015 From: tikhon at jelv.is (Tikhon Jelvis) Date: Mon, 20 Apr 2015 17:47:54 -0700 Subject: [Haskell-cafe] haskell.org In-Reply-To: <289C7652-366D-4213-AF2B-9A860A16EB7C@gmail.com> References: <289C7652-366D-4213-AF2B-9A860A16EB7C@gmail.com> Message-ID: Are we constraining the examples to not use any external libraries? I can see why that's a good idea, but it also makes it hard to show something both pithy and useful. On Mon, Apr 20, 2015 at 5:22 PM, wrote: > While we're at it, the "foldr (:) [] [1,2,3]" example probably isn't going > to cause anyone to give away their worldly possessions and dedicate their > lives to haskell. > > Tom > > > El Apr 20, 2015, a las 18:53, Ertugrul S?ylemez escribi?: > > > Hi everybody, > > > > I'd like to note that the prime "sieve" example that is sitting at the > > top of the homepage is not a real sieve and will more likely make people > > with number theory experience (like me) feel highly irritated rather > > than fascinated. A real sieve does not only run a million times (!) > > faster and consumes far less memory, but is also much longer, even in > > Haskell. Here is a real one: > > > > > > > > I don't want to make a mountain out of a molehill, but please note: If > > I'd be new to Haskell, that example would have turned me off, because it > > would have hurt my ability to take Haskell programmers seriously. You > > can easily promote your tools when you claim that they can build a car > > easily, except in reality it's just a toy bicycle. > > > > It's the same feeling to cryptographers when people call a regular > > stream cipher a "one-time pad" and promote it as such. It rings the > > "this is snake oil!" alarm bell. > > > > So I propose to either rename the 'sieve' function to something more > > appropriate (like `trialDiv`) or replace the example altogether. I > > would suggest an example that truly shows Haskell's strengths. Trial > > division search is really just a bad substitute for the more common and > > equally inappropriate list quicksort example. > > > > > > Greets, > > Ertugrul > > _______________________________________________ > > Haskell-Cafe mailing list > > Haskell-Cafe at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ivan.miljenovic at gmail.com Tue Apr 21 01:02:37 2015 From: ivan.miljenovic at gmail.com (Ivan Lazar Miljenovic) Date: Tue, 21 Apr 2015 11:02:37 +1000 Subject: [Haskell-cafe] haskell.org In-Reply-To: <289C7652-366D-4213-AF2B-9A860A16EB7C@gmail.com> References: <289C7652-366D-4213-AF2B-9A860A16EB7C@gmail.com> Message-ID: On 21 April 2015 at 10:22, wrote: > While we're at it, the "foldr (:) [] [1,2,3]" example probably isn't going to cause anyone to give away their worldly possessions and dedicate their lives to haskell. I'd hope not: it's a bit hard to write and share awesome Haskell packages without a computer to write and test them on! 
:p > > Tom > > > El Apr 20, 2015, a las 18:53, Ertugrul S?ylemez escribi?: > >> Hi everybody, >> >> I'd like to note that the prime "sieve" example that is sitting at the >> top of the homepage is not a real sieve and will more likely make people >> with number theory experience (like me) feel highly irritated rather >> than fascinated. A real sieve does not only run a million times (!) >> faster and consumes far less memory, but is also much longer, even in >> Haskell. Here is a real one: >> >> >> >> I don't want to make a mountain out of a molehill, but please note: If >> I'd be new to Haskell, that example would have turned me off, because it >> would have hurt my ability to take Haskell programmers seriously. You >> can easily promote your tools when you claim that they can build a car >> easily, except in reality it's just a toy bicycle. >> >> It's the same feeling to cryptographers when people call a regular >> stream cipher a "one-time pad" and promote it as such. It rings the >> "this is snake oil!" alarm bell. >> >> So I propose to either rename the 'sieve' function to something more >> appropriate (like `trialDiv`) or replace the example altogether. I >> would suggest an example that truly shows Haskell's strengths. Trial >> division search is really just a bad substitute for the more common and >> equally inappropriate list quicksort example. >> >> >> Greets, >> Ertugrul >> _______________________________________________ >> Haskell-Cafe mailing list >> Haskell-Cafe at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe -- Ivan Lazar Miljenovic Ivan.Miljenovic at gmail.com http://IvanMiljenovic.wordpress.com From gershomb at gmail.com Tue Apr 21 01:08:28 2015 From: gershomb at gmail.com (Gershom B) Date: Mon, 20 Apr 2015 21:08:28 -0400 Subject: [Haskell-cafe] haskell.org In-Reply-To: References: <289C7652-366D-4213-AF2B-9A860A16EB7C@gmail.com> Message-ID: The code sample at the top of the page is much debated, and despite the evident problems with it, alternatives proposed have seemed worse for various reasons. There?s an open ticket that discusses this and other issues with that code sample code here:?https://github.com/haskell-infra/hl/issues/46 Anybody that wants to make a stab at redesigning the top so that it A) has a code sample demonstrating many language features, B) can?t be poked at for not being the ?real? this or that, and C) can be typed or translated directly into the REPL, please have at! -g On April 20, 2015 at 8:48:05 PM, Tikhon Jelvis (tikhon at jelv.is) wrote: > Are we constraining the examples to not use any external libraries? I can > see why that's a good idea, but it also makes it hard to show something > both pithy and useful. > > On Mon, Apr 20, 2015 at 5:22 PM, wrote: > > > While we're at it, the "foldr (:) [] [1,2,3]" example probably isn't going > > to cause anyone to give away their worldly possessions and dedicate their > > lives to haskell. > > > > Tom > > > > > > El Apr 20, 2015, a las 18:53, Ertugrul S?ylemez escribi?: > > > > > Hi everybody, > > > > > > I'd like to note that the prime "sieve" example that is sitting at the > > > top of the homepage is not a real sieve and will more likely make people > > > with number theory experience (like me) feel highly irritated rather > > > than fascinated. A real sieve does not only run a million times (!) 
> > > faster and consumes far less memory, but is also much longer, even in > > > Haskell. Here is a real one: > > > > > > > > > > > > I don't want to make a mountain out of a molehill, but please note: If > > > I'd be new to Haskell, that example would have turned me off, because it > > > would have hurt my ability to take Haskell programmers seriously. You > > > can easily promote your tools when you claim that they can build a car > > > easily, except in reality it's just a toy bicycle. > > > > > > It's the same feeling to cryptographers when people call a regular > > > stream cipher a "one-time pad" and promote it as such. It rings the > > > "this is snake oil!" alarm bell. > > > > > > So I propose to either rename the 'sieve' function to something more > > > appropriate (like `trialDiv`) or replace the example altogether. I > > > would suggest an example that truly shows Haskell's strengths. Trial > > > division search is really just a bad substitute for the more common and > > > equally inappropriate list quicksort example. > > > > > > > > > Greets, > > > Ertugrul > > > _______________________________________________ > > > Haskell-Cafe mailing list > > > Haskell-Cafe at haskell.org > > > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > _______________________________________________ > > Haskell-Cafe mailing list > > Haskell-Cafe at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > From ertesx at gmx.de Tue Apr 21 01:11:45 2015 From: ertesx at gmx.de (Ertugrul =?utf-8?Q?S=C3=B6ylemez?=) Date: Tue, 21 Apr 2015 03:11:45 +0200 Subject: [Haskell-cafe] haskell.org In-Reply-To: References: Message-ID: >> I'd like to note that the prime "sieve" example that is sitting at >> the top of the homepage is not a real sieve [...] > > My understanding is that it *is* a sieve, just not the Sieve of > Eratosthenes (because it's a bit hard to fit that into that small > little sample box up the top of the page :p). The main characteristic of a sieve is that it does not divide and that it eliminates all multiples of a prime without a test. Check one bit, eliminate many. In general if you see any of `mod`, `div` and friends, then it's very unlikely to be a sieve. The only real advantage of the example is that it uses shared primes to use trial division only against primes (instead of probable primes). This gives a slight speedup at the expense of needing a lot of memory. Greets, Ertugrul -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From amindfv at gmail.com Tue Apr 21 01:12:15 2015 From: amindfv at gmail.com (amindfv at gmail.com) Date: Mon, 20 Apr 2015 21:12:15 -0400 Subject: [Haskell-cafe] haskell.org In-Reply-To: References: <289C7652-366D-4213-AF2B-9A860A16EB7C@gmail.com> Message-ID: As a replacement, what about: [ n + 10 | n <- [1..9], odd n ] Pithy and not equivalent to "id" :P Tom El Apr 20, 2015, a las 20:47, Tikhon Jelvis escribi?: > Are we constraining the examples to not use any external libraries? I can see why that's a good idea, but it also makes it hard to show something both pithy and useful. 
> > On Mon, Apr 20, 2015 at 5:22 PM, wrote: >> While we're at it, the "foldr (:) [] [1,2,3]" example probably isn't going to cause anyone to give away their worldly possessions and dedicate their lives to haskell. >> >> Tom >> >> >> El Apr 20, 2015, a las 18:53, Ertugrul S?ylemez escribi?: >> >> > Hi everybody, >> > >> > I'd like to note that the prime "sieve" example that is sitting at the >> > top of the homepage is not a real sieve and will more likely make people >> > with number theory experience (like me) feel highly irritated rather >> > than fascinated. A real sieve does not only run a million times (!) >> > faster and consumes far less memory, but is also much longer, even in >> > Haskell. Here is a real one: >> > >> > >> > >> > I don't want to make a mountain out of a molehill, but please note: If >> > I'd be new to Haskell, that example would have turned me off, because it >> > would have hurt my ability to take Haskell programmers seriously. You >> > can easily promote your tools when you claim that they can build a car >> > easily, except in reality it's just a toy bicycle. >> > >> > It's the same feeling to cryptographers when people call a regular >> > stream cipher a "one-time pad" and promote it as such. It rings the >> > "this is snake oil!" alarm bell. >> > >> > So I propose to either rename the 'sieve' function to something more >> > appropriate (like `trialDiv`) or replace the example altogether. I >> > would suggest an example that truly shows Haskell's strengths. Trial >> > division search is really just a bad substitute for the more common and >> > equally inappropriate list quicksort example. >> > >> > >> > Greets, >> > Ertugrul >> > _______________________________________________ >> > Haskell-Cafe mailing list >> > Haskell-Cafe at haskell.org >> > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> _______________________________________________ >> Haskell-Cafe mailing list >> Haskell-Cafe at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ertesx at gmx.de Tue Apr 21 01:13:08 2015 From: ertesx at gmx.de (Ertugrul =?utf-8?Q?S=C3=B6ylemez?=) Date: Tue, 21 Apr 2015 03:13:08 +0200 Subject: [Haskell-cafe] haskell.org In-Reply-To: References: <289C7652-366D-4213-AF2B-9A860A16EB7C@gmail.com> Message-ID: > The code sample at the top of the page is much debated, and despite > the evident problems with it, alternatives proposed have seemed worse > for various reasons. This really suggests that the code is fine as a little example. Just change the word "sieve" to something else, and everything is fine. =) Greets, Ertugrul -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From ok at cs.otago.ac.nz Tue Apr 21 02:51:16 2015 From: ok at cs.otago.ac.nz (Richard A. O'Keefe) Date: Tue, 21 Apr 2015 14:51:16 +1200 Subject: [Haskell-cafe] Is there a name for this algebraic structure? In-Reply-To: References: Message-ID: <5234BE03-8D7E-4EF9-92BE-7F134C17E165@cs.otago.ac.nz> You said in words that > Every S can be reconstructed from a sequence of updates: but your formula > forall s. exists [a]. s == foldl update empty [a] says that a (not necessarily unique) sequence of updates can be reconstructed from every S. I think you meant something like "there are no elements of S but those that can be constructed by a sequence of updates". 
I'm a little confused by "a" being lower case.

There's a little wrinkle that tells me this isn't going to be simple.

    type A = Bool
    newtype S = S [A]

    empty :: S
    empty = S []

    update :: S -> A -> S
    update o@(S (x:xs)) y | x == y = o
    update (S xs) y = S (y:xs)

    reconstruct :: S -> [A]
    reconstruct (S xs) = xs

Here update is *locally* idempotent:

    update (update s a) a == update s a

But it's not *globally* idempotent: you can build up a list of any
desired length, such as S [False,True,False,True,False], as long as
the elements alternate.

Perhaps I have misunderstood your specification.

From david.feuer at gmail.com Tue Apr 21 03:43:05 2015
From: david.feuer at gmail.com (David Feuer)
Date: Mon, 20 Apr 2015 23:43:05 -0400
Subject: [Haskell-cafe] haskell.org
In-Reply-To:
References:
Message-ID:
It's perfect that the chorus of an spj track is simon asking "does > that make sense?" > > Care to share the code? > > Tom > > > El Apr 20, 2015, a las 19:07, Anton Kholomiov > escribi?: > > Dear list. I'd like to share two new tracks made in haskell (with > csound-expression lib). They are two etudes in the style of the Jungle. > > One track is just a joke. I hope that Simon doesn't mind me sampling him. > The track is made in Haskell :) So it's a musical thank-you for the > wonderful language! > > https://soundcloud.com/anton-kho/jungle-etude-1 > > https://soundcloud.com/anton-kho/jungle-etude-2-feat-simon-peyton-jones > > Anton > > > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gale at sefer.org Tue Apr 21 09:13:19 2015 From: gale at sefer.org (Yitzchak Gale) Date: Tue, 21 Apr 2015 12:13:19 +0300 Subject: [Haskell-cafe] haskell.org In-Reply-To: References: Message-ID: So how about this: Leave the example. But change the blurb to say: This is inspired by the Sieve of Eratosthenes. For a true Sieve of Eratosthenes, see [link to O'Neil]. On Tue, Apr 21, 2015 at 6:43 AM, David Feuer wrote: > If you want to use bit fiddly mutable vector stuff to make the classic Sieve > of Eratosthenes fast and compact, I think it makes a lot of sense to use the > bitvec package instead of doing the bit fiddling by hand. > > On the other hand, I think the O'Neill prime sieve makes an excellent > example, much prettier than a mutable-vector-based sieve. Her paper is at > https://www.cs.hmc.edu/~oneill/papers/Sieve-JFP.pdf and her actual > implementation (much better than the one described directly in the paper) is > at > https://hackage.haskell.org/package/NumberSieves/docs/Math-Sieve-ONeill.html > . It can be optimized in various ways, most obviously by specializing from > Integral to Word, but probably also by switching from a tree-based heap to > one based on a mutable vector. I'm not sure how a really carefully optimized > version would compare to Eratosthenes. > > > On Mon, Apr 20, 2015 at 9:11 PM, Ertugrul S?ylemez wrote: >> >> >> I'd like to note that the prime "sieve" example that is sitting at >> >> the top of the homepage is not a real sieve [...] >> > >> > My understanding is that it *is* a sieve, just not the Sieve of >> > Eratosthenes (because it's a bit hard to fit that into that small >> > little sample box up the top of the page :p). >> >> The main characteristic of a sieve is that it does not divide and that >> it eliminates all multiples of a prime without a test. Check one bit, >> eliminate many. >> >> In general if you see any of `mod`, `div` and friends, then it's very >> unlikely to be a sieve. The only real advantage of the example is that >> it uses shared primes to use trial division only against primes (instead >> of probable primes). This gives a slight speedup at the expense of >> needing a lot of memory. 
>> >> >> Greets, >> Ertugrul >> >> _______________________________________________ >> Haskell-Cafe mailing list >> Haskell-Cafe at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> > > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > From gleber.p at gmail.com Tue Apr 21 09:18:30 2015 From: gleber.p at gmail.com (Gleb Peregud) Date: Tue, 21 Apr 2015 11:18:30 +0200 Subject: [Haskell-cafe] Is there a name for this algebraic structure? In-Reply-To: <5234BE03-8D7E-4EF9-92BE-7F134C17E165@cs.otago.ac.nz> References: <5234BE03-8D7E-4EF9-92BE-7F134C17E165@cs.otago.ac.nz> Message-ID: Thanks for answers and sorry for goofy definitions and laws. I didn't think it thoroughly enough. In general I think I was looking for something slightly less powerful than this CRDTs: https://en.wikipedia.org/wiki/Conflict-free_replicated_data_type Basically I would like to find an algebraic structure which corresponds to a versioned shared data-structure which can be synchronized using log replication between multiple actors/applications/devices. Think if a structure which can be used to synchronize chat room with messages or friends list or notification panel content, etc. It should work over intermittent connection, with a single source of truth (a server), can be incrementally updated to the latest version based on some cached stale version, etc. I think I need to think a bit more about this to find a proper definitions and laws. Cheers, Gleb On Tue, Apr 21, 2015 at 4:51 AM, Richard A. O'Keefe wrote: > You said in words that > > > Every S can be reconstructed from a sequence of updates: > > but your formula > > > forall s. exists [a]. s == foldl update empty [a] > > says that a (not necessarily unique) sequence of updates > can be reconstructed from every S. I think you meant > something like "there are no elements of S but those > that can be constructed by a sequence of updates". > > I'm a little confused by "a" being lower case. > > There's a little wrinkle that tells me this isn't going to > be simple. > > type A = Bool > newtype S = S [A] > > empty :: S > > empty = S [] > > update :: S -> A -> S > > update o@(S (x:xs)) y | x == y = o > update (S xs) y = S (y:xs) > > reconstruct :: S -> [A] > > reconstruct (S xs) = xs > > Here update is *locally* idempotent: > update (update s a) a == update s a > But it's not *globally* idempotent: > you can build up a list of any desired length, > such as S [False,True,False,True,False], > as long as the elements alternate. > > Perhaps I have misunderstood your specification. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From magicloud.magiclouds at gmail.com Tue Apr 21 09:22:42 2015 From: magicloud.magiclouds at gmail.com (Magicloud Magiclouds) Date: Tue, 21 Apr 2015 17:22:42 +0800 Subject: [Haskell-cafe] How to use an crypto with hackage cereal? Message-ID: Hi, I am trying to work with some binary data that encrypted by field instead of the result of serialization. I'd like to use Data.Serialize to wrap the data structure. But I could not figure out how to apply an runtime specified cipher method to the bytestring. Any idea? Or I should use totally other solution? Thanks. -- ??????? ??????? And for G+, please use magiclouds#gmail.com. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gale at sefer.org Tue Apr 21 09:28:51 2015 From: gale at sefer.org (Yitzchak Gale) Date: Tue, 21 Apr 2015 12:28:51 +0300 Subject: [Haskell-cafe] Help wanted with Wiki.Haskell.Org In-Reply-To: References: Message-ID: Gershom wrote: > ...the HaskellWiki ...requires care and maintenance > and contributions from all of us. I replied to this in the reddit thread at: http://www.reddit.com/r/haskell/comments/339qxm/haskellcafe_help_wanted_with_wikihaskellorg/ Short summary: > 1) Account Creation Management I believe we should migrate off of MediaWiki which would, among other benefits, make ACM unnecessary. But if we don't do that, I volunteer to be on the list of responders for this. But on condition that I don't become the only active responder, which is what ended up happening with community.haskell.org. > 2) Technical and design oversight We should focus that effort on migrating the wiki to a modern markdown-based wiki, such as a github wiki, not on legacy MedaWiki administration. > 3) Content curation I think the real problem here is that much of the community either doesn't know about the wiki or has forgotten about it. If we make the wiki more visible and get more buzz about it, then people will use it. And if they use it, they will also update it. Thanks, Yitz From lemming at henning-thielemann.de Tue Apr 21 09:53:07 2015 From: lemming at henning-thielemann.de (Henning Thielemann) Date: Tue, 21 Apr 2015 11:53:07 +0200 (CEST) Subject: [Haskell-cafe] [Haskell] Help wanted with Wiki.Haskell.Org In-Reply-To: References: Message-ID: On Tue, 21 Apr 2015, Yitzchak Gale wrote: >> 2) Technical and design oversight > > We should focus that effort on migrating the wiki to a > modern markdown-based wiki, such as a github wiki, > not on legacy MedaWiki administration. I'd much prefer to have a gitit/pandoc/markdown Wiki were I can pull the whole wiki and manipulate it offline. On the other hand we already had a lot of conversions, from the first wiki (what was its name?) to hawiki to haskellwiki. Each time making a lot of content obsolete and it was no fun for me to update each article via the web form. Somewhen in the past MediaWiki for haskellwiki was updated which formatted the markup differently (e.g. linebreaks after inline code). Maybe even these changes were the cause that made people move to their own blogs. From gale at sefer.org Tue Apr 21 10:43:45 2015 From: gale at sefer.org (Yitzchak Gale) Date: Tue, 21 Apr 2015 13:43:45 +0300 Subject: [Haskell-cafe] How to use an crypto with hackage cereal? In-Reply-To: References: Message-ID: Magicloud Magiclouds wrote: > I am trying to work with some binary data that encrypted by field instead of > the result of serialization. I'd like to use Data.Serialize to wrap the data > structure. But I could not figure out how to apply an runtime specified > cipher method to the bytestring. Are you using the set of crypto libraries written by Victor Hanquez, such as cryptocipher-types, crypto-pubkey-types, and cryptohash? Or the set of libraries written by Thomas DuBuisson, such as crypto-api, cipher-aes128, etc.? Here is an example of decoding for Victor's libraries. Encoding would be similar using Put instead of Get. Thomas' libraries would be similar using the other API. Let's say you have a type like this: data MyCipher = MyAES | MyBlowfish | ... 
Then in your cereal code you would have a Get monad expression something like this (assuming you have written all of the functions called parseSomething): getStuff = do cipher <- parseCipher :: Get MyCipher clearText <- case cipher of MyAES -> do keyBS <- parseAESKey :: Get ByteString let key = either (error "bad AES key") id $ makeKey keyBS cipher = cipherInit key cipherText <- parseAESCipherText :: Get ByteString return $ ecbDecrypt cipher cipherText MyBlowfish -> do ... etc. Hope this helps, Yitz From gale at sefer.org Tue Apr 21 10:50:36 2015 From: gale at sefer.org (Yitzchak Gale) Date: Tue, 21 Apr 2015 13:50:36 +0300 Subject: [Haskell-cafe] How to use an crypto with hackage cereal? In-Reply-To: References: Message-ID: Sorry, you need to specify which cipher type you are using of course. So: let key :: Key AES key = either (error "bad AES key") id $ makeKey keyBS cipher = cipherInit key etc. From gale at sefer.org Tue Apr 21 11:02:16 2015 From: gale at sefer.org (Yitzchak Gale) Date: Tue, 21 Apr 2015 14:02:16 +0300 Subject: [Haskell-cafe] [Haskell] Help wanted with Wiki.Haskell.Org In-Reply-To: References: Message-ID: Henning Thielemann wrote: > I'd much prefer to have a gitit/pandoc/markdown > Wiki were I can pull the whole wiki and manipulate > it offline. Yes true. I'm not also happy about the idea of putting it under control of GitHub Inc., and being left to their mercy. But that would mean we would need to build and maintain our own site. Not too hard, but someone must do it, and maintain it. And we are back to the problem of Account Creation Management, and dealing with wiki spam ourselves. > On the other hand we already had a lot > of conversions, from the first wiki (what was its name?) > to hawiki to haskellwiki. Each time making a lot of > content obsolete and it was no fun for me to update > each article via the web form. Somewhen in the past > MediaWiki for haskellwiki was updated which formatted > the markup differently (e.g. linebreaks after inline code). > Maybe even these changes were the cause > that made people move to their own blogs. Agreed. But what choice have we? Technology moves forward. Leaving the wiki on an aging platform would lead to its death, because it would be too hard to maintain, and because newcomers would not be comfortable with using it. Perhaps the migration would be easier this time due to better tooling. Does pandoc support MediaWiki input? If not, we could write a backend for it. From chemistmail at gmail.com Tue Apr 21 12:46:33 2015 From: chemistmail at gmail.com (=?UTF-8?B?0JDQu9C10LrRgdC10Lkg0KHQvNC40YDQvdC+0LI=?=) Date: Tue, 21 Apr 2015 15:46:33 +0300 Subject: [Haskell-cafe] ANN: Upload agentx, Library for write extensible SNMP agents. Message-ID: Can be useful for create monitoring tools. Usage example in docs. get, set, walk, context is work. Bulk work as simple get. Tables don't work, Indexes don't work. https://hackage.haskell.org/package/agentx https://github.com/chemist/agentx -- Alexey Smirnov. -------------- next part -------------- An HTML attachment was scrubbed... URL: From magicloud.magiclouds at gmail.com Tue Apr 21 13:58:42 2015 From: magicloud.magiclouds at gmail.com (Magicloud Magiclouds) Date: Tue, 21 Apr 2015 21:58:42 +0800 Subject: [Haskell-cafe] How to use an crypto with hackage cereal? In-Reply-To: References: Message-ID: Thank you. But how if the cipher was specified outside the binary data? I mean I need to pass the decrypt/encrypt function to get/put while they do not accept parameters. Should I use Reader here? 
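What I have in mind is roughly the shape below. The two-field record and the 32-bit length prefixes are made up, just to show passing the runtime-chosen function in as an ordinary argument rather than through Reader:

    import           Data.ByteString (ByteString)
    import           Data.Serialize.Get

    -- Hypothetical record with two encrypted fields.
    data MyData = MyData ByteString ByteString

    -- Instead of a fixed `get`, take the decryption function as an
    -- ordinary argument (assumes each field is stored as a 32-bit
    -- length prefix followed by the ciphertext).
    getMyData :: (ByteString -> ByteString) -> Get MyData
    getMyData decrypt = do
      f1 <- getField
      f2 <- getField
      return (MyData f1 f2)
      where
        getField = do
          len <- fmap fromIntegral getWord32be
          fmap decrypt (getByteString len)

At the call site that would be run as `runGet (getMyData decrypt) bytes`, with `decrypt` picked from the config.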
On Tue, Apr 21, 2015 at 6:43 PM, Yitzchak Gale wrote: > Magicloud Magiclouds wrote: > > I am trying to work with some binary data that encrypted by field > instead of > > the result of serialization. I'd like to use Data.Serialize to wrap the > data > > structure. But I could not figure out how to apply an runtime specified > > cipher method to the bytestring. > > Are you using the set of crypto libraries written by > Victor Hanquez, such as cryptocipher-types, > crypto-pubkey-types, and cryptohash? > > Or the set of libraries written by Thomas DuBuisson, > such as crypto-api, cipher-aes128, etc.? > > Here is an example of decoding for Victor's libraries. > Encoding would be similar using Put instead of Get. > Thomas' libraries would be similar using the other > API. > > Let's say you have a type like this: > > data MyCipher = MyAES | MyBlowfish | ... > > Then in your cereal code you would have a Get monad > expression something like this (assuming you have > written all of the functions called parseSomething): > > getStuff = do > cipher <- parseCipher :: Get MyCipher > clearText <- case cipher of > MyAES -> do > keyBS <- parseAESKey :: Get ByteString > let key = either (error "bad AES key") id $ makeKey keyBS > cipher = cipherInit key > cipherText <- parseAESCipherText :: Get ByteString > return $ ecbDecrypt cipher cipherText > MyBlowfish -> do ... > > etc. > > Hope this helps, > Yitz > -- ??????? ??????? And for G+, please use magiclouds#gmail.com. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ivan.miljenovic at gmail.com Tue Apr 21 14:08:29 2015 From: ivan.miljenovic at gmail.com (Ivan Lazar Miljenovic) Date: Wed, 22 Apr 2015 00:08:29 +1000 Subject: [Haskell-cafe] How to use an crypto with hackage cereal? In-Reply-To: References: Message-ID: On 21 April 2015 at 23:58, Magicloud Magiclouds wrote: > Thank you. But how if the cipher was specified outside the binary data? I > mean I need to pass the decrypt/encrypt function to get/put while they do > not accept parameters. Should I use Reader here? Maybe you could explain what you're doing better. I would envisage that you would get a Bytestring/Text value, then encrypt/decrypt and then put it back (though if you're dealing with Bytestrings, unless you're wanting to compose them with others there's no real need to use Get and Put as you'll have the resulting Bytestring already...). Or are you wanting to implement your own encryption/decryption scheme? In which case, you might want to either: a) write custom functions in the Get and Put monads OR b) write custom parsers (e.g. attoparsec) and builders (using the Builder module in bytestring); this is probably going to suit you better. > > On Tue, Apr 21, 2015 at 6:43 PM, Yitzchak Gale wrote: >> >> Magicloud Magiclouds wrote: >> > I am trying to work with some binary data that encrypted by field >> > instead of >> > the result of serialization. I'd like to use Data.Serialize to wrap the >> > data >> > structure. But I could not figure out how to apply an runtime specified >> > cipher method to the bytestring. >> >> Are you using the set of crypto libraries written by >> Victor Hanquez, such as cryptocipher-types, >> crypto-pubkey-types, and cryptohash? >> >> Or the set of libraries written by Thomas DuBuisson, >> such as crypto-api, cipher-aes128, etc.? >> >> Here is an example of decoding for Victor's libraries. >> Encoding would be similar using Put instead of Get. >> Thomas' libraries would be similar using the other >> API. 
>> >> Let's say you have a type like this: >> >> data MyCipher = MyAES | MyBlowfish | ... >> >> Then in your cereal code you would have a Get monad >> expression something like this (assuming you have >> written all of the functions called parseSomething): >> >> getStuff = do >> cipher <- parseCipher :: Get MyCipher >> clearText <- case cipher of >> MyAES -> do >> keyBS <- parseAESKey :: Get ByteString >> let key = either (error "bad AES key") id $ makeKey keyBS >> cipher = cipherInit key >> cipherText <- parseAESCipherText :: Get ByteString >> return $ ecbDecrypt cipher cipherText >> MyBlowfish -> do ... >> >> etc. >> >> Hope this helps, >> Yitz > > > > > -- > ??????? > ??????? > > And for G+, please use magiclouds#gmail.com. > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > -- Ivan Lazar Miljenovic Ivan.Miljenovic at gmail.com http://IvanMiljenovic.wordpress.com From hesselink at gmail.com Tue Apr 21 14:13:40 2015 From: hesselink at gmail.com (Erik Hesselink) Date: Tue, 21 Apr 2015 16:13:40 +0200 Subject: [Haskell-cafe] How to use an crypto with hackage cereal? In-Reply-To: References: Message-ID: One thing I've done in the past is, instead of giving an 'instance Serialize YourType', give an 'instance Serialize (Input -> YourType)'. This way you can get access to the input in the instance, but you have to provide the input when you can the deserialization function. Regards, Erik On Tue, Apr 21, 2015 at 3:58 PM, Magicloud Magiclouds wrote: > Thank you. But how if the cipher was specified outside the binary data? I > mean I need to pass the decrypt/encrypt function to get/put while they do > not accept parameters. Should I use Reader here? > > On Tue, Apr 21, 2015 at 6:43 PM, Yitzchak Gale wrote: >> >> Magicloud Magiclouds wrote: >> > I am trying to work with some binary data that encrypted by field >> > instead of >> > the result of serialization. I'd like to use Data.Serialize to wrap the >> > data >> > structure. But I could not figure out how to apply an runtime specified >> > cipher method to the bytestring. >> >> Are you using the set of crypto libraries written by >> Victor Hanquez, such as cryptocipher-types, >> crypto-pubkey-types, and cryptohash? >> >> Or the set of libraries written by Thomas DuBuisson, >> such as crypto-api, cipher-aes128, etc.? >> >> Here is an example of decoding for Victor's libraries. >> Encoding would be similar using Put instead of Get. >> Thomas' libraries would be similar using the other >> API. >> >> Let's say you have a type like this: >> >> data MyCipher = MyAES | MyBlowfish | ... >> >> Then in your cereal code you would have a Get monad >> expression something like this (assuming you have >> written all of the functions called parseSomething): >> >> getStuff = do >> cipher <- parseCipher :: Get MyCipher >> clearText <- case cipher of >> MyAES -> do >> keyBS <- parseAESKey :: Get ByteString >> let key = either (error "bad AES key") id $ makeKey keyBS >> cipher = cipherInit key >> cipherText <- parseAESCipherText :: Get ByteString >> return $ ecbDecrypt cipher cipherText >> MyBlowfish -> do ... >> >> etc. >> >> Hope this helps, >> Yitz > > > > > -- > ??????? > ??????? > > And for G+, please use magiclouds#gmail.com. 
> > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > From sven.bartscher at weltraumschlangen.de Tue Apr 21 16:37:29 2015 From: sven.bartscher at weltraumschlangen.de (Sven Bartscher) Date: Tue, 21 Apr 2015 18:37:29 +0200 Subject: [Haskell-cafe] compiling GHC 7.8 on raspberry pi In-Reply-To: <1429458158.7729.7.camel@debian.org> References: <20150419161515.57bdd83b@sven.bartscher> <1429458158.7729.7.camel@debian.org> Message-ID: <20150421183729.6555acce@sven.bartscher> On Sun, 19 Apr 2015 17:42:38 +0200 Joachim Breitner wrote: > Hi, > > Am Sonntag, den 19.04.2015, 16:15 +0200 schrieb Sven Bartscher: > > I'm trying to get a haskell program to run on a raspberry pi (running > > raspbian). Unfortunately it requires template haskell. > > Since the GHC included in raspbian wheezy doesn't support TH I'm trying > > to compile GHC 7.8.4 on the rpi. > > Most of the compilation worked fine. I got problems with the memory > > consumption, but adding a lot of swapspace solved this problem. > > During the final phase the compilation process complains about a > > "strange closure type 49200" (the exact number is varying, but most > > often it's 49200). > > Does anyone here have experience, how to compile GHC 7.8 on a raspberry > > pi? > > > > As a side note: The compilation is running in QEMU while the compiled > > program should run on a real rpi. > > you might be interested in the patches that Debian applies to GHC, in > particular the ARM-related one, even more in particular the one that > enforces the use of gold as the linker: > https://sources.debian.net/src/ghc/7.8.20141223-1/debian/patches/ Many thanks. I will try that, but it will take some time, until I know whether it worked. Regards Sven -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: Digitale Signatur von OpenPGP URL: From kyle.marek.spartz at gmail.com Tue Apr 21 16:55:14 2015 From: kyle.marek.spartz at gmail.com (Kyle Marek-Spartz) Date: Tue, 21 Apr 2015 11:55:14 -0500 Subject: [Haskell-cafe] [Haskell] Help wanted with Wiki.Haskell.Org In-Reply-To: References: Message-ID: <2fcp3s618pjua5.fsf@kmarekszmbp3743.stp01.office.gdi> Yitzchak Gale writes: > And we are back to the problem > of Account Creation Management, and dealing with > wiki spam ourselves. Not necessarily. The wiki spam bots tend to target mediawiki wikis as they are low hanging fruit. I'd be interested in helping with a migration to gitit if that's the route we choose. -- Kyle Marek-Spartz From kyle.marek.spartz at gmail.com Tue Apr 21 17:00:02 2015 From: kyle.marek.spartz at gmail.com (Kyle Marek-Spartz) Date: Tue, 21 Apr 2015 12:00:02 -0500 Subject: [Haskell-cafe] Is there a name for this algebraic structure? In-Reply-To: References: <5234BE03-8D7E-4EF9-92BE-7F134C17E165@cs.otago.ac.nz> Message-ID: <2fcp3s4mo9ju25.fsf@kmarekszmbp3743.stp01.office.gdi> A CRDT may be a commutative semi-group, where the operation is merge. Given two conflicting versions, the merge operation yields an updated version. Associativity is important for this use case since one does not know the order that versions will be reconciled. Commutativity is important for this use case since version A merged with version B should be the same as version B merged with version A. Hopefully this helps. Gleb Peregud writes: > Thanks for answers and sorry for goofy definitions and laws. 
I didn't think > it thoroughly enough. > > In general I think I was looking for something slightly less powerful than > this CRDTs: > https://en.wikipedia.org/wiki/Conflict-free_replicated_data_type > > Basically I would like to find an algebraic structure which corresponds to > a versioned shared data-structure which can be synchronized using log > replication between multiple actors/applications/devices. Think if a > structure which can be used to synchronize chat room with messages or > friends list or notification panel content, etc. It should work over > intermittent connection, with a single source of truth (a server), can be > incrementally updated to the latest version based on some cached stale > version, etc. I think I need to think a bit more about this to find a > proper definitions and laws. > > Cheers, > Gleb > > On Tue, Apr 21, 2015 at 4:51 AM, Richard A. O'Keefe > wrote: > >> You said in words that >> >> > Every S can be reconstructed from a sequence of updates: >> >> but your formula >> >> > forall s. exists [a]. s == foldl update empty [a] >> >> says that a (not necessarily unique) sequence of updates >> can be reconstructed from every S. I think you meant >> something like "there are no elements of S but those >> that can be constructed by a sequence of updates". >> >> I'm a little confused by "a" being lower case. >> >> There's a little wrinkle that tells me this isn't going to >> be simple. >> >> type A = Bool >> newtype S = S [A] >> >> empty :: S >> >> empty = S [] >> >> update :: S -> A -> S >> >> update o@(S (x:xs)) y | x == y = o >> update (S xs) y = S (y:xs) >> >> reconstruct :: S -> [A] >> >> reconstruct (S xs) = xs >> >> Here update is *locally* idempotent: >> update (update s a) a == update s a >> But it's not *globally* idempotent: >> you can build up a list of any desired length, >> such as S [False,True,False,True,False], >> as long as the elements alternate. >> >> Perhaps I have misunderstood your specification. >> >> > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe -- Kyle Marek-Spartz From jonathangfischoff at gmail.com Tue Apr 21 17:01:21 2015 From: jonathangfischoff at gmail.com (Jonathan Fischoff) Date: Tue, 21 Apr 2015 13:01:21 -0400 Subject: [Haskell-cafe] Looking for Senior Software Engineers Message-ID: Dear Haskellers, There are two senior software engineer openings at skedge.me, a Haskell and NixOS based company located in Manhattan NY. Here are the job postings. http://www.indeed.com/cmp/skedge.me/jobs/Senior-Software-Engineer-18d7f042d40b589d http://www.indeed.com/cmp/skedge.me/jobs/Senior-Software-Engineer-b62e690290fe72b3 Professional Haskell experience is not required. Provided you have experience maintaining and developing web services, this is your chance to become a professional Haskeller (if you're not one already). Relocation will be provided, but remote is not an option. Please send your resumes to jobs at skedge.me if you are interested. Sincerely, Jonathan Fischoff VP of Engineering at skege.me -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jgm at berkeley.edu Tue Apr 21 18:21:06 2015 From: jgm at berkeley.edu (John MacFarlane) Date: Tue, 21 Apr 2015 11:21:06 -0700 Subject: [Haskell-cafe] Help wanted with Wiki.Haskell.Org In-Reply-To: References: Message-ID: <20150421182106.GA15633@protagoras.berkeley.edu> +++ Yitzchak Gale [Apr 21 15 14:02 ]: >Henning Thielemann wrote: >> I'd much prefer to have a gitit/pandoc/markdown >> Wiki were I can pull the whole wiki and manipulate >> it offline. >Perhaps the migration would be easier this time due >to better tooling. Does pandoc support MediaWiki input? >If not, we could write a backend for it. Yes, pandoc can convert from MediaWiki (not perfectly, but pretty well). Last time this issue came up (2012), I produced a proof-of-concept gitit clone of the haskellwiki. I've reactivated it here: http://haskellwiki.gitit.net/ (Of course, the last commit was in 2012.) I used a custom script (https://github.com/jgm/hw2gitit/blob/master/hw2gitit.hs) to convert it. At the time, pandoc didn't read MediaWiki, so I parsed the HTML and converted this to Markdown. This worked fairly well. It might probably make more sense now to directly parse the MediaWiki source using pandoc. John From ertesx at gmx.de Tue Apr 21 18:32:11 2015 From: ertesx at gmx.de (Ertugrul =?utf-8?Q?S=C3=B6ylemez?=) Date: Tue, 21 Apr 2015 20:32:11 +0200 Subject: [Haskell-cafe] haskell.org In-Reply-To: References: Message-ID: > Leave the example. But change the blurb to say: > This is inspired by the Sieve of Eratosthenes. For a true Sieve of > Eratosthenes, see [link to O'Neil]. Unfortunately trial division is in no way inspired by the SoE. It's about the most bruteforcy way to find primes. Basically all it does is to perform a full primality test for every single candidate. The only advantage of the example code is that it uses laziness and sharing to keep a list of the primes found so far to make this trial division slightly less expensive. In fact it can be improved quadratically and still wouldn't be a sieve. On my current machine in 1 second the following code finds the first 60000 primes while the homepage example finds only 3700 primes, both specialised to [Int]: primes = 2 : filter isPrime [3..] where isPrime x = all (\p -> mod x p /= 0) . takeWhile (\p -> p*p <= x) $ primes The SoE on the other hand exploits the global structure of the distribution of a prime's multiples. Without any division at all it simply deduces that after each crossing operation the smallest remaining integer *must* be prime. This is pretty much the same improvement the quadratic sieve makes to Dixon's factoring method and the number field sieve makes to the index calculus method. It gets rid of primality tests and uses the distributions of multiples. Thus it finds lots and lots of relations in one go rather than testing every single relation. It would be equally wrong to call Dixon's method or index calculus sieves. How about simply changing `sieve` to `trialDiv`? It's not that I don't like the given example, because it gives a very small use case for laziness that is difficult enough to reproduce in an eagerly evaluated language. And with the new name for the local function it stops turning the stomach of a number theoretician. Greets, Ertugrul -------------- next part -------------- A non-text attachment was scrubbed... 
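To make Ertugrul's distinction concrete: a genuine bounded Sieve of Eratosthenes never divides, it only crosses off multiples. A rough, unoptimised sketch over a mutable bit array (array-based, much like the mutable-vector version David mentions):

    import Control.Monad (forM_, when)
    import Data.Array.ST (newArray, readArray, writeArray, runSTUArray)
    import Data.Array.Unboxed (UArray, assocs)

    primesUpTo :: Int -> [Int]
    primesUpTo n
      | n < 2     = []
      | otherwise = [p | (p, True) <- assocs sieve]
      where
        sieve :: UArray Int Bool
        sieve = runSTUArray $ do
          s <- newArray (2, n) True
          forM_ [2 .. floor (sqrt (fromIntegral n :: Double))] $ \p -> do
            stillPrime <- readArray s p         -- check one bit ...
            when stillPrime $
              forM_ [p * p, p * p + p .. n] $ \m ->
                writeArray s m False            -- ... eliminate many
          return s

No `mod` or `div` anywhere: whatever is still marked True at the end must be prime.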
Name: signature.asc Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From ertesx at gmx.de Tue Apr 21 18:35:20 2015 From: ertesx at gmx.de (Ertugrul =?utf-8?Q?S=C3=B6ylemez?=) Date: Tue, 21 Apr 2015 20:35:20 +0200 Subject: [Haskell-cafe] haskell.org In-Reply-To: References: Message-ID: > If you want to use bit fiddly mutable vector stuff to make the classic > Sieve of Eratosthenes fast and compact, I think it makes a lot of > sense to use the bitvec package instead of doing the bit fiddling by > hand. Indeed. However, the purpose of this code is to outperform the equally well optimised version in C and compiled with GCC or Clang. That's also the sole reason why I used TH to precompute a value that should never have been noticable in the first place. The code was written for GHC 7.6. More recent versions might no longer require TH to outperform GCC/Clang. Greets, Ertugrul -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From andrew.gibiansky at gmail.com Tue Apr 21 18:42:50 2015 From: andrew.gibiansky at gmail.com (Andrew Gibiansky) Date: Tue, 21 Apr 2015 11:42:50 -0700 Subject: [Haskell-cafe] haskell.org In-Reply-To: References: Message-ID: I think the code sample should be replaced with a number of different code samples, demonstrating different things. Then we don't have to try to fit all the syntax into one bit of code, and so we can avoid using a fake sieve entirely. See the way Racket does their code demos: http://racket-lang.org/ I also posted this for discussion on the Github issue: https://github.com/haskell-infra/hl/issues/46 -- Andrew On Tue, Apr 21, 2015 at 11:32 AM, Ertugrul S?ylemez wrote: > > Leave the example. But change the blurb to say: > > This is inspired by the Sieve of Eratosthenes. For a true Sieve of > > Eratosthenes, see [link to O'Neil]. > > Unfortunately trial division is in no way inspired by the SoE. It's > about the most bruteforcy way to find primes. Basically all it does is > to perform a full primality test for every single candidate. The only > advantage of the example code is that it uses laziness and sharing to > keep a list of the primes found so far to make this trial division > slightly less expensive. > > In fact it can be improved quadratically and still wouldn't be a sieve. > On my current machine in 1 second the following code finds the first > 60000 primes while the homepage example finds only 3700 primes, both > specialised to [Int]: > > primes = 2 : filter isPrime [3..] > where > isPrime x = > all (\p -> mod x p /= 0) . > takeWhile (\p -> p*p <= x) $ primes > > The SoE on the other hand exploits the global structure of the > distribution of a prime's multiples. Without any division at all it > simply deduces that after each crossing operation the smallest remaining > integer *must* be prime. > > This is pretty much the same improvement the quadratic sieve makes to > Dixon's factoring method and the number field sieve makes to the index > calculus method. It gets rid of primality tests and uses the > distributions of multiples. Thus it finds lots and lots of relations in > one go rather than testing every single relation. It would be equally > wrong to call Dixon's method or index calculus sieves. > > How about simply changing `sieve` to `trialDiv`? 
It's not that I don't > like the given example, because it gives a very small use case for > laziness that is difficult enough to reproduce in an eagerly evaluated > language. And with the new name for the local function it stops turning > the stomach of a number theoretician. > > > Greets, > Ertugrul > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gagliardi.curtis at gmail.com Tue Apr 21 20:32:06 2015 From: gagliardi.curtis at gmail.com (Curtis Gagliardi) Date: Tue, 21 Apr 2015 20:32:06 +0000 Subject: [Haskell-cafe] haskell.org In-Reply-To: References: Message-ID: Nobody wants to see a number theory example anyway. I agree with Andrew that racket's system with multiple demos is nice, but also think each demo is much more interesting because they're problems you're more likely to run into day to day. On Tue, Apr 21, 2015 at 6:42 PM, Andrew Gibiansky < andrew.gibiansky at gmail.com> wrote: > I think the code sample should be replaced with a number of different code > samples, demonstrating different things. Then we don't have to try to fit > all the syntax into one bit of code, and so we can avoid using a fake sieve > entirely. See the way Racket does their code demos: > > http://racket-lang.org/ > > I also posted this for discussion on the Github issue: > > https://github.com/haskell-infra/hl/issues/46 > > -- Andrew > > On Tue, Apr 21, 2015 at 11:32 AM, Ertugrul S?ylemez wrote: > >> > Leave the example. But change the blurb to say: >> > This is inspired by the Sieve of Eratosthenes. For a true Sieve of >> > Eratosthenes, see [link to O'Neil]. >> >> Unfortunately trial division is in no way inspired by the SoE. It's >> about the most bruteforcy way to find primes. Basically all it does is >> to perform a full primality test for every single candidate. The only >> advantage of the example code is that it uses laziness and sharing to >> keep a list of the primes found so far to make this trial division >> slightly less expensive. >> >> In fact it can be improved quadratically and still wouldn't be a sieve. >> On my current machine in 1 second the following code finds the first >> 60000 primes while the homepage example finds only 3700 primes, both >> specialised to [Int]: >> >> primes = 2 : filter isPrime [3..] >> where >> isPrime x = >> all (\p -> mod x p /= 0) . >> takeWhile (\p -> p*p <= x) $ primes >> >> The SoE on the other hand exploits the global structure of the >> distribution of a prime's multiples. Without any division at all it >> simply deduces that after each crossing operation the smallest remaining >> integer *must* be prime. >> >> This is pretty much the same improvement the quadratic sieve makes to >> Dixon's factoring method and the number field sieve makes to the index >> calculus method. It gets rid of primality tests and uses the >> distributions of multiples. Thus it finds lots and lots of relations in >> one go rather than testing every single relation. It would be equally >> wrong to call Dixon's method or index calculus sieves. >> >> How about simply changing `sieve` to `trialDiv`? It's not that I don't >> like the given example, because it gives a very small use case for >> laziness that is difficult enough to reproduce in an eagerly evaluated >> language. 
And with the new name for the local function it stops turning >> the stomach of a number theoretician. >> >> >> Greets, >> Ertugrul >> >> _______________________________________________ >> Haskell-Cafe mailing list >> Haskell-Cafe at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> >> > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk Tue Apr 21 20:36:59 2015 From: tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk (Tom Ellis) Date: Tue, 21 Apr 2015 21:36:59 +0100 Subject: [Haskell-cafe] Is there a name for this algebraic structure? In-Reply-To: <5234BE03-8D7E-4EF9-92BE-7F134C17E165@cs.otago.ac.nz> References: <5234BE03-8D7E-4EF9-92BE-7F134C17E165@cs.otago.ac.nz> Message-ID: <20150421203659.GI19341@weber> On Tue, Apr 21, 2015 at 02:51:16PM +1200, Richard A. O'Keefe wrote: > You said in words that > > > Every S can be reconstructed from a sequence of updates: > > but your formula > > > forall s. exists [a]. s == foldl update empty [a] > > says that a (not necessarily unique) sequence of updates > can be reconstructed from every S. I think you meant > something like "there are no elements of S but those > that can be constructed by a sequence of updates". Those sound like the same thing to me! From hjgtuyl at chello.nl Tue Apr 21 22:32:13 2015 From: hjgtuyl at chello.nl (Henk-Jan van Tuyl) Date: Wed, 22 Apr 2015 00:32:13 +0200 Subject: [Haskell-cafe] Help wanted with Wiki.Haskell.Org In-Reply-To: <20150421182106.GA15633@protagoras.berkeley.edu> References: <20150421182106.GA15633@protagoras.berkeley.edu> Message-ID: On Tue, 21 Apr 2015 20:21:06 +0200, John MacFarlane wrote: > Yes, pandoc can convert from MediaWiki (not perfectly, > but pretty well). > > Last time this issue came up (2012), I produced a > proof-of-concept gitit clone of the haskellwiki. > I've reactivated it here: http://haskellwiki.gitit.net/ > (Of course, the last commit was in 2012.) I just looked at the Lojban page; if you compare https://wiki.haskell.org/Lojban with http://haskellwiki.gitit.net/Lojban you can see that the gitit page needs a lot of manual labor to get it right. Regards, Henk-Jan van Tuyl -- Folding at home What if you could share your unused computer power to help find a cure? In just 5 minutes you can join the world's biggest networked computer and get us closer sooner. Watch the video. http://folding.stanford.edu/ http://Van.Tuyl.eu/ http://members.chello.nl/hjgtuyl/tourdemonad.html Haskell programming -- From jgm at berkeley.edu Tue Apr 21 22:56:43 2015 From: jgm at berkeley.edu (John MacFarlane) Date: Tue, 21 Apr 2015 15:56:43 -0700 Subject: [Haskell-cafe] Help wanted with Wiki.Haskell.Org In-Reply-To: References: <20150421182106.GA15633@protagoras.berkeley.edu> Message-ID: <20150421225643.GB8927@dhcp-128-32-252-59.lips.berkeley.edu> +++ Henk-Jan van Tuyl [Apr 22 15 00:32 ]: >I just looked at the Lojban page; if you compare > https://wiki.haskell.org/Lojban >with > http://haskellwiki.gitit.net/Lojban >you can see that the gitit page needs a lot of manual labor to get it >right. Sure. I didn't put enough time into it to get a perfect automated translation. 
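The conversion step itself is only a couple of library calls; roughly, against the pandoc 1.x pure API that was current then (options left at their defaults, and the reader's type changed in later versions, so adjust for whatever pandoc you have):

    import Text.Pandoc

    -- Rough sketch: one MediaWiki page in, Markdown out. In pandoc 1.x
    -- readMediaWiki returns a Pandoc directly; later versions wrap the
    -- result in an error type.
    mediaWikiToMarkdown :: String -> String
    mediaWikiToMarkdown = writeMarkdown def . readMediaWiki def
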
Part of the problem here is that pandoc's internal model of tables contains neither colspans nor rowspans, nor borders, so lots of information gets lost in translation in a table-heavy page like this. The script could be improved to pass through some tables as raw HTML. And we might have better results doing it again with pandoc's mediawiki reader (though we'd no doubt also find bugs in the reader that could be fixed). From magicloud.magiclouds at gmail.com Wed Apr 22 01:52:53 2015 From: magicloud.magiclouds at gmail.com (Magicloud Magiclouds) Date: Wed, 22 Apr 2015 09:52:53 +0800 Subject: [Haskell-cafe] How to use an crypto with hackage cereal? In-Reply-To: References: Message-ID: This seems like what I need. Thanks. On Tue, Apr 21, 2015 at 10:13 PM, Erik Hesselink wrote: > One thing I've done in the past is, instead of giving an 'instance > Serialize YourType', give an 'instance Serialize (Input -> YourType)'. > This way you can get access to the input in the instance, but you have > to provide the input when you can the deserialization function. > > Regards, > > Erik > > On Tue, Apr 21, 2015 at 3:58 PM, Magicloud Magiclouds > wrote: > > Thank you. But how if the cipher was specified outside the binary data? I > > mean I need to pass the decrypt/encrypt function to get/put while they do > > not accept parameters. Should I use Reader here? > > > > On Tue, Apr 21, 2015 at 6:43 PM, Yitzchak Gale wrote: > >> > >> Magicloud Magiclouds wrote: > >> > I am trying to work with some binary data that encrypted by field > >> > instead of > >> > the result of serialization. I'd like to use Data.Serialize to wrap > the > >> > data > >> > structure. But I could not figure out how to apply an runtime > specified > >> > cipher method to the bytestring. > >> > >> Are you using the set of crypto libraries written by > >> Victor Hanquez, such as cryptocipher-types, > >> crypto-pubkey-types, and cryptohash? > >> > >> Or the set of libraries written by Thomas DuBuisson, > >> such as crypto-api, cipher-aes128, etc.? > >> > >> Here is an example of decoding for Victor's libraries. > >> Encoding would be similar using Put instead of Get. > >> Thomas' libraries would be similar using the other > >> API. > >> > >> Let's say you have a type like this: > >> > >> data MyCipher = MyAES | MyBlowfish | ... > >> > >> Then in your cereal code you would have a Get monad > >> expression something like this (assuming you have > >> written all of the functions called parseSomething): > >> > >> getStuff = do > >> cipher <- parseCipher :: Get MyCipher > >> clearText <- case cipher of > >> MyAES -> do > >> keyBS <- parseAESKey :: Get ByteString > >> let key = either (error "bad AES key") id $ makeKey keyBS > >> cipher = cipherInit key > >> cipherText <- parseAESCipherText :: Get ByteString > >> return $ ecbDecrypt cipher cipherText > >> MyBlowfish -> do ... > >> > >> etc. > >> > >> Hope this helps, > >> Yitz > > > > > > > > > > -- > > ??????? > > ??????? > > > > And for G+, please use magiclouds#gmail.com. > > > > _______________________________________________ > > Haskell-Cafe mailing list > > Haskell-Cafe at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > > -- ??????? ??????? And for G+, please use magiclouds#gmail.com. -------------- next part -------------- An HTML attachment was scrubbed... 
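Spelled out, the "instance Serialize (Input -> YourType)" idea is roughly this shape. Config and MyType are stand-ins, the length prefix is an assumption, and only the Get side is sketched:

    {-# LANGUAGE FlexibleInstances #-}
    import           Data.ByteString (ByteString)
    import           Data.Serialize

    -- Stand-in types: Config carries whatever the parser cannot know on
    -- its own, here the runtime-chosen decryption function.
    newtype Config = Config { decryptWith :: ByteString -> ByteString }
    data    MyType = MyType ByteString

    -- The instance is for a *function*, so `get` stays pure and the
    -- caller applies the result to the Config afterwards.
    instance Serialize (Config -> MyType) where
      put _ = error "sketch: only the Get side is shown"
      get = do
        len <- fmap fromIntegral getWord32be   -- assumed length prefix
        raw <- getByteString len
        return (\cfg -> MyType (decryptWith cfg raw))

At the call site that is `fmap ($ cfg) (decode bytes)`: decode to the function first, then apply the config.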
URL: From magicloud.magiclouds at gmail.com Wed Apr 22 02:49:10 2015 From: magicloud.magiclouds at gmail.com (Magicloud Magiclouds) Date: Wed, 22 Apr 2015 10:49:10 +0800 Subject: [Haskell-cafe] How to use an crypto with hackage cereal? In-Reply-To: References: Message-ID: Similar as you envisaged. I would receive a bytestring data and a config point out what cipher to use. Then I deserialize the data to a data type with some fields. The serialize process is something like: msum $ map (encrypt . encode) [field1, field2, field3] I could parse the bytestring outside Get/Put monads. But I think that looks ugly. I really want to embed the decrypt process into Get/Put monads. On Tue, Apr 21, 2015 at 10:08 PM, Ivan Lazar Miljenovic < ivan.miljenovic at gmail.com> wrote: > On 21 April 2015 at 23:58, Magicloud Magiclouds > wrote: > > Thank you. But how if the cipher was specified outside the binary data? I > > mean I need to pass the decrypt/encrypt function to get/put while they do > > not accept parameters. Should I use Reader here? > > Maybe you could explain what you're doing better. > > I would envisage that you would get a Bytestring/Text value, then > encrypt/decrypt and then put it back (though if you're dealing with > Bytestrings, unless you're wanting to compose them with others there's > no real need to use Get and Put as you'll have the resulting > Bytestring already...). > > Or are you wanting to implement your own encryption/decryption scheme? > In which case, you might want to either: > > a) write custom functions in the Get and Put monads OR > > b) write custom parsers (e.g. attoparsec) and builders (using the > Builder module in bytestring); this is probably going to suit you > better. > > > > > On Tue, Apr 21, 2015 at 6:43 PM, Yitzchak Gale wrote: > >> > >> Magicloud Magiclouds wrote: > >> > I am trying to work with some binary data that encrypted by field > >> > instead of > >> > the result of serialization. I'd like to use Data.Serialize to wrap > the > >> > data > >> > structure. But I could not figure out how to apply an runtime > specified > >> > cipher method to the bytestring. > >> > >> Are you using the set of crypto libraries written by > >> Victor Hanquez, such as cryptocipher-types, > >> crypto-pubkey-types, and cryptohash? > >> > >> Or the set of libraries written by Thomas DuBuisson, > >> such as crypto-api, cipher-aes128, etc.? > >> > >> Here is an example of decoding for Victor's libraries. > >> Encoding would be similar using Put instead of Get. > >> Thomas' libraries would be similar using the other > >> API. > >> > >> Let's say you have a type like this: > >> > >> data MyCipher = MyAES | MyBlowfish | ... > >> > >> Then in your cereal code you would have a Get monad > >> expression something like this (assuming you have > >> written all of the functions called parseSomething): > >> > >> getStuff = do > >> cipher <- parseCipher :: Get MyCipher > >> clearText <- case cipher of > >> MyAES -> do > >> keyBS <- parseAESKey :: Get ByteString > >> let key = either (error "bad AES key") id $ makeKey keyBS > >> cipher = cipherInit key > >> cipherText <- parseAESCipherText :: Get ByteString > >> return $ ecbDecrypt cipher cipherText > >> MyBlowfish -> do ... > >> > >> etc. > >> > >> Hope this helps, > >> Yitz > > > > > > > > > > -- > > ??????? > > ??????? > > > > And for G+, please use magiclouds#gmail.com. 
> > > > _______________________________________________ > > Haskell-Cafe mailing list > > Haskell-Cafe at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > > > > > -- > Ivan Lazar Miljenovic > Ivan.Miljenovic at gmail.com > http://IvanMiljenovic.wordpress.com > -- ??????? ??????? And for G+, please use magiclouds#gmail.com. -------------- next part -------------- An HTML attachment was scrubbed... URL: From magicloud.magiclouds at gmail.com Wed Apr 22 02:52:03 2015 From: magicloud.magiclouds at gmail.com (Magicloud Magiclouds) Date: Wed, 22 Apr 2015 10:52:03 +0800 Subject: [Haskell-cafe] How to use an crypto with hackage cereal? In-Reply-To: References: Message-ID: Apparently I did not think this through. For "instance (BlockCipher k, Byteable iv) => Serialize ((k, iv), Destination)", how should I code get function? I mean in get, I should give out (k, iv), instead of using them. On Wed, Apr 22, 2015 at 9:52 AM, Magicloud Magiclouds < magicloud.magiclouds at gmail.com> wrote: > This seems like what I need. Thanks. > > On Tue, Apr 21, 2015 at 10:13 PM, Erik Hesselink > wrote: > >> One thing I've done in the past is, instead of giving an 'instance >> Serialize YourType', give an 'instance Serialize (Input -> YourType)'. >> This way you can get access to the input in the instance, but you have >> to provide the input when you can the deserialization function. >> >> Regards, >> >> Erik >> >> On Tue, Apr 21, 2015 at 3:58 PM, Magicloud Magiclouds >> wrote: >> > Thank you. But how if the cipher was specified outside the binary data? >> I >> > mean I need to pass the decrypt/encrypt function to get/put while they >> do >> > not accept parameters. Should I use Reader here? >> > >> > On Tue, Apr 21, 2015 at 6:43 PM, Yitzchak Gale wrote: >> >> >> >> Magicloud Magiclouds wrote: >> >> > I am trying to work with some binary data that encrypted by field >> >> > instead of >> >> > the result of serialization. I'd like to use Data.Serialize to wrap >> the >> >> > data >> >> > structure. But I could not figure out how to apply an runtime >> specified >> >> > cipher method to the bytestring. >> >> >> >> Are you using the set of crypto libraries written by >> >> Victor Hanquez, such as cryptocipher-types, >> >> crypto-pubkey-types, and cryptohash? >> >> >> >> Or the set of libraries written by Thomas DuBuisson, >> >> such as crypto-api, cipher-aes128, etc.? >> >> >> >> Here is an example of decoding for Victor's libraries. >> >> Encoding would be similar using Put instead of Get. >> >> Thomas' libraries would be similar using the other >> >> API. >> >> >> >> Let's say you have a type like this: >> >> >> >> data MyCipher = MyAES | MyBlowfish | ... >> >> >> >> Then in your cereal code you would have a Get monad >> >> expression something like this (assuming you have >> >> written all of the functions called parseSomething): >> >> >> >> getStuff = do >> >> cipher <- parseCipher :: Get MyCipher >> >> clearText <- case cipher of >> >> MyAES -> do >> >> keyBS <- parseAESKey :: Get ByteString >> >> let key = either (error "bad AES key") id $ makeKey keyBS >> >> cipher = cipherInit key >> >> cipherText <- parseAESCipherText :: Get ByteString >> >> return $ ecbDecrypt cipher cipherText >> >> MyBlowfish -> do ... >> >> >> >> etc. >> >> >> >> Hope this helps, >> >> Yitz >> > >> > >> > >> > >> > -- >> > ??????? >> > ??????? >> > >> > And for G+, please use magiclouds#gmail.com. 
>> > >> > _______________________________________________ >> > Haskell-Cafe mailing list >> > Haskell-Cafe at haskell.org >> > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> > >> > > > > -- > ??????? > ??????? > > And for G+, please use magiclouds#gmail.com. > -- ??????? ??????? And for G+, please use magiclouds#gmail.com. -------------- next part -------------- An HTML attachment was scrubbed... URL: From blaze at ruddy.ru Wed Apr 22 03:05:18 2015 From: blaze at ruddy.ru (Andrey Sverdlichenko) Date: Tue, 21 Apr 2015 20:05:18 -0700 (PDT) Subject: [Haskell-cafe] How to use an crypto with hackage cereal? In-Reply-To: References: Message-ID: <1429671918206.52445bf@Nodemailer> You probably should not merge decrypt and decode operations, it is bad crypto habit. Until you decrypted and verified integrity of data, parsing is dangerous and opening your service to attacks. Correct way of implementing this would be to pass ciphertext to decryption function and run parser only if decryption is successful. If bytestring is too big to be decrypted in one piece, consider encrypting it in blocks and feeding decrypted parts to parser. On Tue, Apr 21, 2015 at 7:49 PM, Magicloud Magiclouds wrote: > Similar as you envisaged. I would receive a bytestring data and a config > point out what cipher to use. Then I deserialize the data to a data type > with some fields. The serialize process is something like: > msum $ map (encrypt . encode) [field1, field2, field3] > I could parse the bytestring outside Get/Put monads. But I think that looks > ugly. I really want to embed the decrypt process into Get/Put monads. > On Tue, Apr 21, 2015 at 10:08 PM, Ivan Lazar Miljenovic < > ivan.miljenovic at gmail.com> wrote: >> On 21 April 2015 at 23:58, Magicloud Magiclouds >> wrote: >> > Thank you. But how if the cipher was specified outside the binary data? I >> > mean I need to pass the decrypt/encrypt function to get/put while they do >> > not accept parameters. Should I use Reader here? >> >> Maybe you could explain what you're doing better. >> >> I would envisage that you would get a Bytestring/Text value, then >> encrypt/decrypt and then put it back (though if you're dealing with >> Bytestrings, unless you're wanting to compose them with others there's >> no real need to use Get and Put as you'll have the resulting >> Bytestring already...). >> >> Or are you wanting to implement your own encryption/decryption scheme? >> In which case, you might want to either: >> >> a) write custom functions in the Get and Put monads OR >> >> b) write custom parsers (e.g. attoparsec) and builders (using the >> Builder module in bytestring); this is probably going to suit you >> better. >> >> > >> > On Tue, Apr 21, 2015 at 6:43 PM, Yitzchak Gale wrote: >> >> >> >> Magicloud Magiclouds wrote: >> >> > I am trying to work with some binary data that encrypted by field >> >> > instead of >> >> > the result of serialization. I'd like to use Data.Serialize to wrap >> the >> >> > data >> >> > structure. But I could not figure out how to apply an runtime >> specified >> >> > cipher method to the bytestring. >> >> >> >> Are you using the set of crypto libraries written by >> >> Victor Hanquez, such as cryptocipher-types, >> >> crypto-pubkey-types, and cryptohash? >> >> >> >> Or the set of libraries written by Thomas DuBuisson, >> >> such as crypto-api, cipher-aes128, etc.? >> >> >> >> Here is an example of decoding for Victor's libraries. >> >> Encoding would be similar using Put instead of Get. 
>> >> Thomas' libraries would be similar using the other >> >> API. >> >> >> >> Let's say you have a type like this: >> >> >> >> data MyCipher = MyAES | MyBlowfish | ... >> >> >> >> Then in your cereal code you would have a Get monad >> >> expression something like this (assuming you have >> >> written all of the functions called parseSomething): >> >> >> >> getStuff = do >> >> cipher <- parseCipher :: Get MyCipher >> >> clearText <- case cipher of >> >> MyAES -> do >> >> keyBS <- parseAESKey :: Get ByteString >> >> let key = either (error "bad AES key") id $ makeKey keyBS >> >> cipher = cipherInit key >> >> cipherText <- parseAESCipherText :: Get ByteString >> >> return $ ecbDecrypt cipher cipherText >> >> MyBlowfish -> do ... >> >> >> >> etc. >> >> >> >> Hope this helps, >> >> Yitz >> > >> > >> > >> > >> > -- >> > ??????? >> > ??????? >> > >> > And for G+, please use magiclouds#gmail.com. >> > >> > _______________________________________________ >> > Haskell-Cafe mailing list >> > Haskell-Cafe at haskell.org >> > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> > >> >> >> >> -- >> Ivan Lazar Miljenovic >> Ivan.Miljenovic at gmail.com >> http://IvanMiljenovic.wordpress.com >> > -- > ??????? > ??????? > And for G+, please use magiclouds#gmail.com. -------------- next part -------------- An HTML attachment was scrubbed... URL: From magicloud.magiclouds at gmail.com Wed Apr 22 03:12:18 2015 From: magicloud.magiclouds at gmail.com (Magicloud Magiclouds) Date: Wed, 22 Apr 2015 11:12:18 +0800 Subject: [Haskell-cafe] How to use an crypto with hackage cereal? In-Reply-To: <1429671918206.52445bf@Nodemailer> References: <1429671918206.52445bf@Nodemailer> Message-ID: How about fail in Get monad if decrypt failed? So decrypt failure would lead to a result of "Left String" on decode. On Wed, Apr 22, 2015 at 11:05 AM, Andrey Sverdlichenko wrote: > You probably should not merge decrypt and decode operations, it is bad > crypto habit. Until you decrypted and verified integrity of data, parsing > is dangerous and opening your service to attacks. Correct way of > implementing this would be to pass ciphertext to decryption function and > run parser only if decryption is successful. If bytestring is too big to be > decrypted in one piece, consider encrypting it in blocks and feeding > decrypted parts to parser. > > > > On Tue, Apr 21, 2015 at 7:49 PM, Magicloud Magiclouds < > magicloud.magiclouds at gmail.com> wrote: > >> Similar as you envisaged. I would receive a bytestring data and a config >> point out what cipher to use. Then I deserialize the data to a data type >> with some fields. The serialize process is something like: >> >> msum $ map (encrypt . encode) [field1, field2, field3] >> >> I could parse the bytestring outside Get/Put monads. But I think that >> looks ugly. I really want to embed the decrypt process into Get/Put monads. >> >> On Tue, Apr 21, 2015 at 10:08 PM, Ivan Lazar Miljenovic < >> ivan.miljenovic at gmail.com> wrote: >> >>> On 21 April 2015 at 23:58, Magicloud Magiclouds >>> wrote: >>> > Thank you. But how if the cipher was specified outside the binary >>> data? I >>> > mean I need to pass the decrypt/encrypt function to get/put while they >>> do >>> > not accept parameters. Should I use Reader here? >>> >>> Maybe you could explain what you're doing better. 
>>> >>> I would envisage that you would get a Bytestring/Text value, then >>> encrypt/decrypt and then put it back (though if you're dealing with >>> Bytestrings, unless you're wanting to compose them with others there's >>> no real need to use Get and Put as you'll have the resulting >>> Bytestring already...). >>> >>> Or are you wanting to implement your own encryption/decryption scheme? >>> In which case, you might want to either: >>> >>> a) write custom functions in the Get and Put monads OR >>> >>> b) write custom parsers (e.g. attoparsec) and builders (using the >>> Builder module in bytestring); this is probably going to suit you >>> better. >>> >>> > >>> > On Tue, Apr 21, 2015 at 6:43 PM, Yitzchak Gale wrote: >>> >> >>> >> Magicloud Magiclouds wrote: >>> >> > I am trying to work with some binary data that encrypted by field >>> >> > instead of >>> >> > the result of serialization. I'd like to use Data.Serialize to wrap >>> the >>> >> > data >>> >> > structure. But I could not figure out how to apply an runtime >>> specified >>> >> > cipher method to the bytestring. >>> >> >>> >> Are you using the set of crypto libraries written by >>> >> Victor Hanquez, such as cryptocipher-types, >>> >> crypto-pubkey-types, and cryptohash? >>> >> >>> >> Or the set of libraries written by Thomas DuBuisson, >>> >> such as crypto-api, cipher-aes128, etc.? >>> >> >>> >> Here is an example of decoding for Victor's libraries. >>> >> Encoding would be similar using Put instead of Get. >>> >> Thomas' libraries would be similar using the other >>> >> API. >>> >> >>> >> Let's say you have a type like this: >>> >> >>> >> data MyCipher = MyAES | MyBlowfish | ... >>> >> >>> >> Then in your cereal code you would have a Get monad >>> >> expression something like this (assuming you have >>> >> written all of the functions called parseSomething): >>> >> >>> >> getStuff = do >>> >> cipher <- parseCipher :: Get MyCipher >>> >> clearText <- case cipher of >>> >> MyAES -> do >>> >> keyBS <- parseAESKey :: Get ByteString >>> >> let key = either (error "bad AES key") id $ makeKey keyBS >>> >> cipher = cipherInit key >>> >> cipherText <- parseAESCipherText :: Get ByteString >>> >> return $ ecbDecrypt cipher cipherText >>> >> MyBlowfish -> do ... >>> >> >>> >> etc. >>> >> >>> >> Hope this helps, >>> >> Yitz >>> > >>> > >>> > >>> > >>> > -- >>> > ??????? >>> > ??????? >>> > >>> > And for G+, please use magiclouds#gmail.com. >>> > >>> > _______________________________________________ >>> > Haskell-Cafe mailing list >>> > Haskell-Cafe at haskell.org >>> > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >>> > >>> >>> >>> >>> -- >>> Ivan Lazar Miljenovic >>> Ivan.Miljenovic at gmail.com >>> http://IvanMiljenovic.wordpress.com >>> >> >> >> >> -- >> ??????? >> ??????? >> >> And for G+, please use magiclouds#gmail.com. >> > > -- ??????? ??????? And for G+, please use magiclouds#gmail.com. -------------- next part -------------- An HTML attachment was scrubbed... URL: From blaze at ruddy.ru Wed Apr 22 03:26:05 2015 From: blaze at ruddy.ru (Andrey Sverdlichenko) Date: Tue, 21 Apr 2015 20:26:05 -0700 (PDT) Subject: [Haskell-cafe] How to use an crypto with hackage cereal? In-Reply-To: References: Message-ID: <1429673165007.73279961@Nodemailer> You can't really modify source bytestring inside Get monad, and this is what decryption effectively do. The only option I know about is to run another parser inside Get monad. 
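In code that nested-parser option looks roughly like this; the 32-bit length prefix and the two argument names are placeholders, not anything from cereal itself:

    import qualified Data.ByteString as B
    import           Data.Serialize.Get

    -- Grab the ciphertext as opaque bytes, decrypt outside of Get, then
    -- run a second Get parser over the recovered plaintext. A decryption
    -- or parse failure becomes an ordinary parse failure via `fail`.
    getEncryptedBlock :: (B.ByteString -> Either String B.ByteString)  -- decrypt
                      -> Get a                                         -- plaintext parser
                      -> Get a
    getEncryptedBlock decrypt plainParser = do
      len        <- fmap fromIntegral getWord32be   -- assumed length prefix
      ciphertext <- getByteString len
      case decrypt ciphertext >>= runGet plainParser of
        Left err  -> fail err
        Right val -> return val
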
I'd rather write decrypt-runGetPartial-decrypt-runGetPartial loop and return Fail from it on decryption error. On Tue, Apr 21, 2015 at 8:12 PM, Magicloud Magiclouds wrote: > How about fail in Get monad if decrypt failed? So decrypt failure would > lead to a result of "Left String" on decode. > On Wed, Apr 22, 2015 at 11:05 AM, Andrey Sverdlichenko > wrote: >> You probably should not merge decrypt and decode operations, it is bad >> crypto habit. Until you decrypted and verified integrity of data, parsing >> is dangerous and opening your service to attacks. Correct way of >> implementing this would be to pass ciphertext to decryption function and >> run parser only if decryption is successful. If bytestring is too big to be >> decrypted in one piece, consider encrypting it in blocks and feeding >> decrypted parts to parser. >> >> >> >> On Tue, Apr 21, 2015 at 7:49 PM, Magicloud Magiclouds < >> magicloud.magiclouds at gmail.com> wrote: >> >>> Similar as you envisaged. I would receive a bytestring data and a config >>> point out what cipher to use. Then I deserialize the data to a data type >>> with some fields. The serialize process is something like: >>> >>> msum $ map (encrypt . encode) [field1, field2, field3] >>> >>> I could parse the bytestring outside Get/Put monads. But I think that >>> looks ugly. I really want to embed the decrypt process into Get/Put monads. >>> >>> On Tue, Apr 21, 2015 at 10:08 PM, Ivan Lazar Miljenovic < >>> ivan.miljenovic at gmail.com> wrote: >>> >>>> On 21 April 2015 at 23:58, Magicloud Magiclouds >>>> wrote: >>>> > Thank you. But how if the cipher was specified outside the binary >>>> data? I >>>> > mean I need to pass the decrypt/encrypt function to get/put while they >>>> do >>>> > not accept parameters. Should I use Reader here? >>>> >>>> Maybe you could explain what you're doing better. >>>> >>>> I would envisage that you would get a Bytestring/Text value, then >>>> encrypt/decrypt and then put it back (though if you're dealing with >>>> Bytestrings, unless you're wanting to compose them with others there's >>>> no real need to use Get and Put as you'll have the resulting >>>> Bytestring already...). >>>> >>>> Or are you wanting to implement your own encryption/decryption scheme? >>>> In which case, you might want to either: >>>> >>>> a) write custom functions in the Get and Put monads OR >>>> >>>> b) write custom parsers (e.g. attoparsec) and builders (using the >>>> Builder module in bytestring); this is probably going to suit you >>>> better. >>>> >>>> > >>>> > On Tue, Apr 21, 2015 at 6:43 PM, Yitzchak Gale wrote: >>>> >> >>>> >> Magicloud Magiclouds wrote: >>>> >> > I am trying to work with some binary data that encrypted by field >>>> >> > instead of >>>> >> > the result of serialization. I'd like to use Data.Serialize to wrap >>>> the >>>> >> > data >>>> >> > structure. But I could not figure out how to apply an runtime >>>> specified >>>> >> > cipher method to the bytestring. >>>> >> >>>> >> Are you using the set of crypto libraries written by >>>> >> Victor Hanquez, such as cryptocipher-types, >>>> >> crypto-pubkey-types, and cryptohash? >>>> >> >>>> >> Or the set of libraries written by Thomas DuBuisson, >>>> >> such as crypto-api, cipher-aes128, etc.? >>>> >> >>>> >> Here is an example of decoding for Victor's libraries. >>>> >> Encoding would be similar using Put instead of Get. >>>> >> Thomas' libraries would be similar using the other >>>> >> API. 
>>>> >> >>>> >> Let's say you have a type like this: >>>> >> >>>> >> data MyCipher = MyAES | MyBlowfish | ... >>>> >> >>>> >> Then in your cereal code you would have a Get monad >>>> >> expression something like this (assuming you have >>>> >> written all of the functions called parseSomething): >>>> >> >>>> >> getStuff = do >>>> >> cipher <- parseCipher :: Get MyCipher >>>> >> clearText <- case cipher of >>>> >> MyAES -> do >>>> >> keyBS <- parseAESKey :: Get ByteString >>>> >> let key = either (error "bad AES key") id $ makeKey keyBS >>>> >> cipher = cipherInit key >>>> >> cipherText <- parseAESCipherText :: Get ByteString >>>> >> return $ ecbDecrypt cipher cipherText >>>> >> MyBlowfish -> do ... >>>> >> >>>> >> etc. >>>> >> >>>> >> Hope this helps, >>>> >> Yitz >>>> > >>>> > >>>> > >>>> > >>>> > -- >>>> > ??????? >>>> > ??????? >>>> > >>>> > And for G+, please use magiclouds#gmail.com. >>>> > >>>> > _______________________________________________ >>>> > Haskell-Cafe mailing list >>>> > Haskell-Cafe at haskell.org >>>> > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >>>> > >>>> >>>> >>>> >>>> -- >>>> Ivan Lazar Miljenovic >>>> Ivan.Miljenovic at gmail.com >>>> http://IvanMiljenovic.wordpress.com >>>> >>> >>> >>> >>> -- >>> ??????? >>> ??????? >>> >>> And for G+, please use magiclouds#gmail.com. >>> >> >> > -- > ??????? > ??????? > And for G+, please use magiclouds#gmail.com. -------------- next part -------------- An HTML attachment was scrubbed... URL: From magicloud.magiclouds at gmail.com Wed Apr 22 03:41:54 2015 From: magicloud.magiclouds at gmail.com (Magicloud Magiclouds) Date: Wed, 22 Apr 2015 11:41:54 +0800 Subject: [Haskell-cafe] How to use an crypto with hackage cereal? In-Reply-To: <1429673165007.73279961@Nodemailer> References: <1429673165007.73279961@Nodemailer> Message-ID: That is the ugliness of the original binary data. The encryption is not by fixed block size. So decrypt cannot be run before the get* helpers. So decrypt-runGetPartial-decrypt-runGetPartial loop would not work. I need a "post process" in Get. For example, "portNumber <- liftM decrypt getWord16be; return $ MyDataType portNumber". But currently I could not pass decrypt into get function. On Wed, Apr 22, 2015 at 11:26 AM, Andrey Sverdlichenko wrote: > You can't really modify source bytestring inside Get monad, and this is > what decryption effectively do. The only option I know about is to run > another parser inside Get monad. I'd rather write > decrypt-runGetPartial-decrypt-runGetPartial loop and return Fail from it on > decryption error. > > > > On Tue, Apr 21, 2015 at 8:12 PM, Magicloud Magiclouds < > magicloud.magiclouds at gmail.com> wrote: > >> How about fail in Get monad if decrypt failed? So decrypt failure would >> lead to a result of "Left String" on decode. >> >> On Wed, Apr 22, 2015 at 11:05 AM, Andrey Sverdlichenko >> wrote: >> >>> You probably should not merge decrypt and decode operations, it is bad >>> crypto habit. Until you decrypted and verified integrity of data, parsing >>> is dangerous and opening your service to attacks. Correct way of >>> implementing this would be to pass ciphertext to decryption function and >>> run parser only if decryption is successful. If bytestring is too big to be >>> decrypted in one piece, consider encrypting it in blocks and feeding >>> decrypted parts to parser. >>> >>> >>> >>> On Tue, Apr 21, 2015 at 7:49 PM, Magicloud Magiclouds < >>> magicloud.magiclouds at gmail.com> wrote: >>> >>>> Similar as you envisaged. 
I would receive a bytestring data and a >>>> config point out what cipher to use. Then I deserialize the data to a data >>>> type with some fields. The serialize process is something like: >>>> >>>> msum $ map (encrypt . encode) [field1, field2, field3] >>>> >>>> I could parse the bytestring outside Get/Put monads. But I think that >>>> looks ugly. I really want to embed the decrypt process into Get/Put monads. >>>> >>>> On Tue, Apr 21, 2015 at 10:08 PM, Ivan Lazar Miljenovic < >>>> ivan.miljenovic at gmail.com> wrote: >>>> >>>>> On 21 April 2015 at 23:58, Magicloud Magiclouds >>>>> wrote: >>>>> > Thank you. But how if the cipher was specified outside the binary >>>>> data? I >>>>> > mean I need to pass the decrypt/encrypt function to get/put while >>>>> they do >>>>> > not accept parameters. Should I use Reader here? >>>>> >>>>> Maybe you could explain what you're doing better. >>>>> >>>>> I would envisage that you would get a Bytestring/Text value, then >>>>> encrypt/decrypt and then put it back (though if you're dealing with >>>>> Bytestrings, unless you're wanting to compose them with others there's >>>>> no real need to use Get and Put as you'll have the resulting >>>>> Bytestring already...). >>>>> >>>>> Or are you wanting to implement your own encryption/decryption scheme? >>>>> In which case, you might want to either: >>>>> >>>>> a) write custom functions in the Get and Put monads OR >>>>> >>>>> b) write custom parsers (e.g. attoparsec) and builders (using the >>>>> Builder module in bytestring); this is probably going to suit you >>>>> better. >>>>> >>>>> > >>>>> > On Tue, Apr 21, 2015 at 6:43 PM, Yitzchak Gale >>>>> wrote: >>>>> >> >>>>> >> Magicloud Magiclouds wrote: >>>>> >> > I am trying to work with some binary data that encrypted by field >>>>> >> > instead of >>>>> >> > the result of serialization. I'd like to use Data.Serialize to >>>>> wrap the >>>>> >> > data >>>>> >> > structure. But I could not figure out how to apply an runtime >>>>> specified >>>>> >> > cipher method to the bytestring. >>>>> >> >>>>> >> Are you using the set of crypto libraries written by >>>>> >> Victor Hanquez, such as cryptocipher-types, >>>>> >> crypto-pubkey-types, and cryptohash? >>>>> >> >>>>> >> Or the set of libraries written by Thomas DuBuisson, >>>>> >> such as crypto-api, cipher-aes128, etc.? >>>>> >> >>>>> >> Here is an example of decoding for Victor's libraries. >>>>> >> Encoding would be similar using Put instead of Get. >>>>> >> Thomas' libraries would be similar using the other >>>>> >> API. >>>>> >> >>>>> >> Let's say you have a type like this: >>>>> >> >>>>> >> data MyCipher = MyAES | MyBlowfish | ... >>>>> >> >>>>> >> Then in your cereal code you would have a Get monad >>>>> >> expression something like this (assuming you have >>>>> >> written all of the functions called parseSomething): >>>>> >> >>>>> >> getStuff = do >>>>> >> cipher <- parseCipher :: Get MyCipher >>>>> >> clearText <- case cipher of >>>>> >> MyAES -> do >>>>> >> keyBS <- parseAESKey :: Get ByteString >>>>> >> let key = either (error "bad AES key") id $ makeKey keyBS >>>>> >> cipher = cipherInit key >>>>> >> cipherText <- parseAESCipherText :: Get ByteString >>>>> >> return $ ecbDecrypt cipher cipherText >>>>> >> MyBlowfish -> do ... >>>>> >> >>>>> >> etc. >>>>> >> >>>>> >> Hope this helps, >>>>> >> Yitz >>>>> > >>>>> > >>>>> > >>>>> > >>>>> > -- >>>>> > ??????? >>>>> > ??????? >>>>> > >>>>> > And for G+, please use magiclouds#gmail.com. 
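A minimal sketch of the decrypt-then-parse loop Andrey describes above, using cereal's incremental interface (runGetPartial and the Fail/Partial/Done constructors from Data.Serialize.Get, as in cereal 0.4 and later). It assumes the ciphertext arrives in independently decryptable blocks, which, as Magicloud points out elsewhere in the thread, is not actually true of the format in question; DecryptBlock is a hypothetical stand-in for whatever cipher the runtime config selects.

```
module DecryptThenParse where

import qualified Data.ByteString as BS
import           Data.Serialize.Get (Get, Result (..), runGetPartial)

-- Hypothetical stand-in for whatever cipher the runtime config selects:
-- it turns one ciphertext block into plaintext, or reports an error.
type DecryptBlock = BS.ByteString -> Either String BS.ByteString

-- Decrypt block by block and hand each plaintext chunk to the parser, so
-- the parser never sees ciphertext and a failed decryption aborts cleanly.
decryptThenParse :: DecryptBlock -> Get a -> [BS.ByteString] -> Either String a
decryptThenParse decrypt parser = go (runGetPartial parser)
  where
    go _ [] = Left "decryptThenParse: parser wanted more input than was supplied"
    go k (block : rest) =
      case decrypt block of
        Left err        -> Left ("decryption failed: " ++ err)
        Right plaintext ->
          case k plaintext of
            Fail err _ -> Left err      -- parse error inside the plaintext
            Done x _   -> Right x       -- trailing bytes are ignored here
            Partial k' -> go k' rest    -- parser wants the next block
```

As with a `fail` inside `Get`, both a bad block and a malformed payload surface as a `Left`, so callers see the same `Either String` shape that `decode` gives.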
>>>>> > >>>>> > _______________________________________________ >>>>> > Haskell-Cafe mailing list >>>>> > Haskell-Cafe at haskell.org >>>>> > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >>>>> > >>>>> >>>>> >>>>> >>>>> -- >>>>> Ivan Lazar Miljenovic >>>>> Ivan.Miljenovic at gmail.com >>>>> http://IvanMiljenovic.wordpress.com >>>>> >>>> >>>> >>>> >>>> -- >>>> ??????? >>>> ??????? >>>> >>>> And for G+, please use magiclouds#gmail.com. >>>> >>> >>> >> >> >> -- >> ??????? >> ??????? >> >> And for G+, please use magiclouds#gmail.com. >> > > -- ??????? ??????? And for G+, please use magiclouds#gmail.com. -------------- next part -------------- An HTML attachment was scrubbed... URL: From blaze at ruddy.ru Wed Apr 22 03:44:44 2015 From: blaze at ruddy.ru (Andrey Sverdlichenko) Date: Tue, 21 Apr 2015 20:44:44 -0700 (PDT) Subject: [Haskell-cafe] How to use an crypto with hackage cereal? In-Reply-To: References: Message-ID: <1429674283598.189c259d@Nodemailer> Could you describe encrypted data format? I can't understand problem with decryption. On Tue, Apr 21, 2015 at 8:41 PM, Magicloud Magiclouds wrote: > That is the ugliness of the original binary data. The encryption is not by > fixed block size. So decrypt cannot be run before the get* helpers. > So decrypt-runGetPartial-decrypt-runGetPartial loop would not work. > I need a "post process" in Get. For example, "portNumber <- liftM decrypt > getWord16be; return $ MyDataType portNumber". But currently I could not > pass decrypt into get function. > On Wed, Apr 22, 2015 at 11:26 AM, Andrey Sverdlichenko > wrote: >> You can't really modify source bytestring inside Get monad, and this is >> what decryption effectively do. The only option I know about is to run >> another parser inside Get monad. I'd rather write >> decrypt-runGetPartial-decrypt-runGetPartial loop and return Fail from it on >> decryption error. >> >> >> >> On Tue, Apr 21, 2015 at 8:12 PM, Magicloud Magiclouds < >> magicloud.magiclouds at gmail.com> wrote: >> >>> How about fail in Get monad if decrypt failed? So decrypt failure would >>> lead to a result of "Left String" on decode. >>> >>> On Wed, Apr 22, 2015 at 11:05 AM, Andrey Sverdlichenko >>> wrote: >>> >>>> You probably should not merge decrypt and decode operations, it is bad >>>> crypto habit. Until you decrypted and verified integrity of data, parsing >>>> is dangerous and opening your service to attacks. Correct way of >>>> implementing this would be to pass ciphertext to decryption function and >>>> run parser only if decryption is successful. If bytestring is too big to be >>>> decrypted in one piece, consider encrypting it in blocks and feeding >>>> decrypted parts to parser. >>>> >>>> >>>> >>>> On Tue, Apr 21, 2015 at 7:49 PM, Magicloud Magiclouds < >>>> magicloud.magiclouds at gmail.com> wrote: >>>> >>>>> Similar as you envisaged. I would receive a bytestring data and a >>>>> config point out what cipher to use. Then I deserialize the data to a data >>>>> type with some fields. The serialize process is something like: >>>>> >>>>> msum $ map (encrypt . encode) [field1, field2, field3] >>>>> >>>>> I could parse the bytestring outside Get/Put monads. But I think that >>>>> looks ugly. I really want to embed the decrypt process into Get/Put monads. >>>>> >>>>> On Tue, Apr 21, 2015 at 10:08 PM, Ivan Lazar Miljenovic < >>>>> ivan.miljenovic at gmail.com> wrote: >>>>> >>>>>> On 21 April 2015 at 23:58, Magicloud Magiclouds >>>>>> wrote: >>>>>> > Thank you. 
But how if the cipher was specified outside the binary >>>>>> data? I >>>>>> > mean I need to pass the decrypt/encrypt function to get/put while >>>>>> they do >>>>>> > not accept parameters. Should I use Reader here? >>>>>> >>>>>> Maybe you could explain what you're doing better. >>>>>> >>>>>> I would envisage that you would get a Bytestring/Text value, then >>>>>> encrypt/decrypt and then put it back (though if you're dealing with >>>>>> Bytestrings, unless you're wanting to compose them with others there's >>>>>> no real need to use Get and Put as you'll have the resulting >>>>>> Bytestring already...). >>>>>> >>>>>> Or are you wanting to implement your own encryption/decryption scheme? >>>>>> In which case, you might want to either: >>>>>> >>>>>> a) write custom functions in the Get and Put monads OR >>>>>> >>>>>> b) write custom parsers (e.g. attoparsec) and builders (using the >>>>>> Builder module in bytestring); this is probably going to suit you >>>>>> better. >>>>>> >>>>>> > >>>>>> > On Tue, Apr 21, 2015 at 6:43 PM, Yitzchak Gale >>>>>> wrote: >>>>>> >> >>>>>> >> Magicloud Magiclouds wrote: >>>>>> >> > I am trying to work with some binary data that encrypted by field >>>>>> >> > instead of >>>>>> >> > the result of serialization. I'd like to use Data.Serialize to >>>>>> wrap the >>>>>> >> > data >>>>>> >> > structure. But I could not figure out how to apply an runtime >>>>>> specified >>>>>> >> > cipher method to the bytestring. >>>>>> >> >>>>>> >> Are you using the set of crypto libraries written by >>>>>> >> Victor Hanquez, such as cryptocipher-types, >>>>>> >> crypto-pubkey-types, and cryptohash? >>>>>> >> >>>>>> >> Or the set of libraries written by Thomas DuBuisson, >>>>>> >> such as crypto-api, cipher-aes128, etc.? >>>>>> >> >>>>>> >> Here is an example of decoding for Victor's libraries. >>>>>> >> Encoding would be similar using Put instead of Get. >>>>>> >> Thomas' libraries would be similar using the other >>>>>> >> API. >>>>>> >> >>>>>> >> Let's say you have a type like this: >>>>>> >> >>>>>> >> data MyCipher = MyAES | MyBlowfish | ... >>>>>> >> >>>>>> >> Then in your cereal code you would have a Get monad >>>>>> >> expression something like this (assuming you have >>>>>> >> written all of the functions called parseSomething): >>>>>> >> >>>>>> >> getStuff = do >>>>>> >> cipher <- parseCipher :: Get MyCipher >>>>>> >> clearText <- case cipher of >>>>>> >> MyAES -> do >>>>>> >> keyBS <- parseAESKey :: Get ByteString >>>>>> >> let key = either (error "bad AES key") id $ makeKey keyBS >>>>>> >> cipher = cipherInit key >>>>>> >> cipherText <- parseAESCipherText :: Get ByteString >>>>>> >> return $ ecbDecrypt cipher cipherText >>>>>> >> MyBlowfish -> do ... >>>>>> >> >>>>>> >> etc. >>>>>> >> >>>>>> >> Hope this helps, >>>>>> >> Yitz >>>>>> > >>>>>> > >>>>>> > >>>>>> > >>>>>> > -- >>>>>> > ??????? >>>>>> > ??????? >>>>>> > >>>>>> > And for G+, please use magiclouds#gmail.com. >>>>>> > >>>>>> > _______________________________________________ >>>>>> > Haskell-Cafe mailing list >>>>>> > Haskell-Cafe at haskell.org >>>>>> > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >>>>>> > >>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> Ivan Lazar Miljenovic >>>>>> Ivan.Miljenovic at gmail.com >>>>>> http://IvanMiljenovic.wordpress.com >>>>>> >>>>> >>>>> >>>>> >>>>> -- >>>>> ??????? >>>>> ??????? >>>>> >>>>> And for G+, please use magiclouds#gmail.com. >>>>> >>>> >>>> >>> >>> >>> -- >>> ??????? >>> ??????? >>> >>> And for G+, please use magiclouds#gmail.com. >>> >> >> > -- > ??????? 
> ??????? > And for G+, please use magiclouds#gmail.com. -------------- next part -------------- An HTML attachment was scrubbed... URL: From magicloud.magiclouds at gmail.com Wed Apr 22 03:52:08 2015 From: magicloud.magiclouds at gmail.com (Magicloud Magiclouds) Date: Wed, 22 Apr 2015 11:52:08 +0800 Subject: [Haskell-cafe] How to use an crypto with hackage cereal? In-Reply-To: <1429674283598.189c259d@Nodemailer> References: <1429674283598.189c259d@Nodemailer> Message-ID: Say the data structure is: data Person = Person { name :: String , gender :: Gender , age :: Int } Then the process to generate the binary is: msum $ map (encrypt . encode) [ length $ name person, name person, gender person, age person ] Above process is just persudo in Haskell, the actual is not coded in Haskell. On Wed, Apr 22, 2015 at 11:44 AM, Andrey Sverdlichenko wrote: > Could you describe encrypted data format? I can't understand problem with > decryption. > > > > On Tue, Apr 21, 2015 at 8:41 PM, Magicloud Magiclouds < > magicloud.magiclouds at gmail.com> wrote: > >> That is the ugliness of the original binary data. The encryption is not >> by fixed block size. So decrypt cannot be run before the get* helpers. >> So decrypt-runGetPartial-decrypt-runGetPartial loop would not work. >> >> I need a "post process" in Get. For example, "portNumber <- liftM decrypt >> getWord16be; return $ MyDataType portNumber". But currently I could not >> pass decrypt into get function. >> >> On Wed, Apr 22, 2015 at 11:26 AM, Andrey Sverdlichenko >> wrote: >> >>> You can't really modify source bytestring inside Get monad, and this is >>> what decryption effectively do. The only option I know about is to run >>> another parser inside Get monad. I'd rather write >>> decrypt-runGetPartial-decrypt-runGetPartial loop and return Fail from it on >>> decryption error. >>> >>> >>> >>> On Tue, Apr 21, 2015 at 8:12 PM, Magicloud Magiclouds < >>> magicloud.magiclouds at gmail.com> wrote: >>> >>>> How about fail in Get monad if decrypt failed? So decrypt failure would >>>> lead to a result of "Left String" on decode. >>>> >>>> On Wed, Apr 22, 2015 at 11:05 AM, Andrey Sverdlichenko >>>> wrote: >>>> >>>>> You probably should not merge decrypt and decode operations, it is bad >>>>> crypto habit. Until you decrypted and verified integrity of data, parsing >>>>> is dangerous and opening your service to attacks. Correct way of >>>>> implementing this would be to pass ciphertext to decryption function and >>>>> run parser only if decryption is successful. If bytestring is too big to be >>>>> decrypted in one piece, consider encrypting it in blocks and feeding >>>>> decrypted parts to parser. >>>>> >>>>> >>>>> >>>>> On Tue, Apr 21, 2015 at 7:49 PM, Magicloud Magiclouds < >>>>> magicloud.magiclouds at gmail.com> wrote: >>>>> >>>>>> Similar as you envisaged. I would receive a bytestring data and a >>>>>> config point out what cipher to use. Then I deserialize the data to a data >>>>>> type with some fields. The serialize process is something like: >>>>>> >>>>>> msum $ map (encrypt . encode) [field1, field2, field3] >>>>>> >>>>>> I could parse the bytestring outside Get/Put monads. But I think that >>>>>> looks ugly. I really want to embed the decrypt process into Get/Put monads. >>>>>> >>>>>> On Tue, Apr 21, 2015 at 10:08 PM, Ivan Lazar Miljenovic < >>>>>> ivan.miljenovic at gmail.com> wrote: >>>>>> >>>>>>> On 21 April 2015 at 23:58, Magicloud Magiclouds >>>>>>> wrote: >>>>>>> > Thank you. But how if the cipher was specified outside the binary >>>>>>> data? 
I >>>>>>> > mean I need to pass the decrypt/encrypt function to get/put while >>>>>>> they do >>>>>>> > not accept parameters. Should I use Reader here? >>>>>>> >>>>>>> Maybe you could explain what you're doing better. >>>>>>> >>>>>>> I would envisage that you would get a Bytestring/Text value, then >>>>>>> encrypt/decrypt and then put it back (though if you're dealing with >>>>>>> Bytestrings, unless you're wanting to compose them with others >>>>>>> there's >>>>>>> no real need to use Get and Put as you'll have the resulting >>>>>>> Bytestring already...). >>>>>>> >>>>>>> Or are you wanting to implement your own encryption/decryption >>>>>>> scheme? >>>>>>> In which case, you might want to either: >>>>>>> >>>>>>> a) write custom functions in the Get and Put monads OR >>>>>>> >>>>>>> b) write custom parsers (e.g. attoparsec) and builders (using the >>>>>>> Builder module in bytestring); this is probably going to suit you >>>>>>> better. >>>>>>> >>>>>>> > >>>>>>> > On Tue, Apr 21, 2015 at 6:43 PM, Yitzchak Gale >>>>>>> wrote: >>>>>>> >> >>>>>>> >> Magicloud Magiclouds wrote: >>>>>>> >> > I am trying to work with some binary data that encrypted by >>>>>>> field >>>>>>> >> > instead of >>>>>>> >> > the result of serialization. I'd like to use Data.Serialize to >>>>>>> wrap the >>>>>>> >> > data >>>>>>> >> > structure. But I could not figure out how to apply an runtime >>>>>>> specified >>>>>>> >> > cipher method to the bytestring. >>>>>>> >> >>>>>>> >> Are you using the set of crypto libraries written by >>>>>>> >> Victor Hanquez, such as cryptocipher-types, >>>>>>> >> crypto-pubkey-types, and cryptohash? >>>>>>> >> >>>>>>> >> Or the set of libraries written by Thomas DuBuisson, >>>>>>> >> such as crypto-api, cipher-aes128, etc.? >>>>>>> >> >>>>>>> >> Here is an example of decoding for Victor's libraries. >>>>>>> >> Encoding would be similar using Put instead of Get. >>>>>>> >> Thomas' libraries would be similar using the other >>>>>>> >> API. >>>>>>> >> >>>>>>> >> Let's say you have a type like this: >>>>>>> >> >>>>>>> >> data MyCipher = MyAES | MyBlowfish | ... >>>>>>> >> >>>>>>> >> Then in your cereal code you would have a Get monad >>>>>>> >> expression something like this (assuming you have >>>>>>> >> written all of the functions called parseSomething): >>>>>>> >> >>>>>>> >> getStuff = do >>>>>>> >> cipher <- parseCipher :: Get MyCipher >>>>>>> >> clearText <- case cipher of >>>>>>> >> MyAES -> do >>>>>>> >> keyBS <- parseAESKey :: Get ByteString >>>>>>> >> let key = either (error "bad AES key") id $ makeKey keyBS >>>>>>> >> cipher = cipherInit key >>>>>>> >> cipherText <- parseAESCipherText :: Get ByteString >>>>>>> >> return $ ecbDecrypt cipher cipherText >>>>>>> >> MyBlowfish -> do ... >>>>>>> >> >>>>>>> >> etc. >>>>>>> >> >>>>>>> >> Hope this helps, >>>>>>> >> Yitz >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> > -- >>>>>>> > ??????? >>>>>>> > ??????? >>>>>>> > >>>>>>> > And for G+, please use magiclouds#gmail.com. >>>>>>> > >>>>>>> > _______________________________________________ >>>>>>> > Haskell-Cafe mailing list >>>>>>> > Haskell-Cafe at haskell.org >>>>>>> > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >>>>>>> > >>>>>>> >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> Ivan Lazar Miljenovic >>>>>>> Ivan.Miljenovic at gmail.com >>>>>>> http://IvanMiljenovic.wordpress.com >>>>>>> >>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> ??????? >>>>>> ??????? >>>>>> >>>>>> And for G+, please use magiclouds#gmail.com. >>>>>> >>>>> >>>>> >>>> >>>> >>>> -- >>>> ??????? >>>> ??????? 
>>>> >>>> And for G+, please use magiclouds#gmail.com. >>>> >>> >>> >> >> >> -- >> ??????? >> ??????? >> >> And for G+, please use magiclouds#gmail.com. >> > > -- ??????? ??????? And for G+, please use magiclouds#gmail.com. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ivan.miljenovic at gmail.com Wed Apr 22 03:56:14 2015 From: ivan.miljenovic at gmail.com (Ivan Lazar Miljenovic) Date: Wed, 22 Apr 2015 13:56:14 +1000 Subject: [Haskell-cafe] How to use an crypto with hackage cereal? In-Reply-To: References: <1429674283598.189c259d@Nodemailer> Message-ID: On 22 April 2015 at 13:52, Magicloud Magiclouds wrote: > Say the data structure is: > > data Person = Person { name :: String > , gender :: Gender > , age :: Int } > > Then the process to generate the binary is: > > msum $ map (encrypt . encode) [ length $ name person, name person, gender > person, age person ] > > Above process is just persudo in Haskell, the actual is not coded in > Haskell. Except that binary and cereal are for serializing Haskell values directly; you seem to be wanting to parse and generate a particular encoding for a value. In which case, I don't think binary or cereal is really appropriate. > > On Wed, Apr 22, 2015 at 11:44 AM, Andrey Sverdlichenko > wrote: >> >> Could you describe encrypted data format? I can't understand problem with >> decryption. >> >> >> >> On Tue, Apr 21, 2015 at 8:41 PM, Magicloud Magiclouds >> wrote: >>> >>> That is the ugliness of the original binary data. The encryption is not >>> by fixed block size. So decrypt cannot be run before the get* helpers. So >>> decrypt-runGetPartial-decrypt-runGetPartial loop would not work. >>> >>> I need a "post process" in Get. For example, "portNumber <- liftM decrypt >>> getWord16be; return $ MyDataType portNumber". But currently I could not pass >>> decrypt into get function. >>> >>> On Wed, Apr 22, 2015 at 11:26 AM, Andrey Sverdlichenko >>> wrote: >>>> >>>> You can't really modify source bytestring inside Get monad, and this is >>>> what decryption effectively do. The only option I know about is to run >>>> another parser inside Get monad. I'd rather write >>>> decrypt-runGetPartial-decrypt-runGetPartial loop and return Fail from it on >>>> decryption error. >>>> >>>> >>>> >>>> On Tue, Apr 21, 2015 at 8:12 PM, Magicloud Magiclouds >>>> wrote: >>>>> >>>>> How about fail in Get monad if decrypt failed? So decrypt failure would >>>>> lead to a result of "Left String" on decode. >>>>> >>>>> On Wed, Apr 22, 2015 at 11:05 AM, Andrey Sverdlichenko >>>>> wrote: >>>>>> >>>>>> You probably should not merge decrypt and decode operations, it is bad >>>>>> crypto habit. Until you decrypted and verified integrity of data, parsing is >>>>>> dangerous and opening your service to attacks. Correct way of implementing >>>>>> this would be to pass ciphertext to decryption function and run parser only >>>>>> if decryption is successful. If bytestring is too big to be decrypted in one >>>>>> piece, consider encrypting it in blocks and feeding decrypted parts to >>>>>> parser. >>>>>> >>>>>> >>>>>> >>>>>> On Tue, Apr 21, 2015 at 7:49 PM, Magicloud Magiclouds >>>>>> wrote: >>>>>>> >>>>>>> Similar as you envisaged. I would receive a bytestring data and a >>>>>>> config point out what cipher to use. Then I deserialize the data to a data >>>>>>> type with some fields. The serialize process is something like: >>>>>>> >>>>>>> msum $ map (encrypt . 
encode) [field1, field2, field3] >>>>>>> >>>>>>> I could parse the bytestring outside Get/Put monads. But I think that >>>>>>> looks ugly. I really want to embed the decrypt process into Get/Put monads. >>>>>>> >>>>>>> On Tue, Apr 21, 2015 at 10:08 PM, Ivan Lazar Miljenovic >>>>>>> wrote: >>>>>>>> >>>>>>>> On 21 April 2015 at 23:58, Magicloud Magiclouds >>>>>>>> wrote: >>>>>>>> > Thank you. But how if the cipher was specified outside the binary >>>>>>>> > data? I >>>>>>>> > mean I need to pass the decrypt/encrypt function to get/put while >>>>>>>> > they do >>>>>>>> > not accept parameters. Should I use Reader here? >>>>>>>> >>>>>>>> Maybe you could explain what you're doing better. >>>>>>>> >>>>>>>> I would envisage that you would get a Bytestring/Text value, then >>>>>>>> encrypt/decrypt and then put it back (though if you're dealing with >>>>>>>> Bytestrings, unless you're wanting to compose them with others >>>>>>>> there's >>>>>>>> no real need to use Get and Put as you'll have the resulting >>>>>>>> Bytestring already...). >>>>>>>> >>>>>>>> Or are you wanting to implement your own encryption/decryption >>>>>>>> scheme? >>>>>>>> In which case, you might want to either: >>>>>>>> >>>>>>>> a) write custom functions in the Get and Put monads OR >>>>>>>> >>>>>>>> b) write custom parsers (e.g. attoparsec) and builders (using the >>>>>>>> Builder module in bytestring); this is probably going to suit you >>>>>>>> better. >>>>>>>> >>>>>>>> > >>>>>>>> > On Tue, Apr 21, 2015 at 6:43 PM, Yitzchak Gale >>>>>>>> > wrote: >>>>>>>> >> >>>>>>>> >> Magicloud Magiclouds wrote: >>>>>>>> >> > I am trying to work with some binary data that encrypted by >>>>>>>> >> > field >>>>>>>> >> > instead of >>>>>>>> >> > the result of serialization. I'd like to use Data.Serialize to >>>>>>>> >> > wrap the >>>>>>>> >> > data >>>>>>>> >> > structure. But I could not figure out how to apply an runtime >>>>>>>> >> > specified >>>>>>>> >> > cipher method to the bytestring. >>>>>>>> >> >>>>>>>> >> Are you using the set of crypto libraries written by >>>>>>>> >> Victor Hanquez, such as cryptocipher-types, >>>>>>>> >> crypto-pubkey-types, and cryptohash? >>>>>>>> >> >>>>>>>> >> Or the set of libraries written by Thomas DuBuisson, >>>>>>>> >> such as crypto-api, cipher-aes128, etc.? >>>>>>>> >> >>>>>>>> >> Here is an example of decoding for Victor's libraries. >>>>>>>> >> Encoding would be similar using Put instead of Get. >>>>>>>> >> Thomas' libraries would be similar using the other >>>>>>>> >> API. >>>>>>>> >> >>>>>>>> >> Let's say you have a type like this: >>>>>>>> >> >>>>>>>> >> data MyCipher = MyAES | MyBlowfish | ... >>>>>>>> >> >>>>>>>> >> Then in your cereal code you would have a Get monad >>>>>>>> >> expression something like this (assuming you have >>>>>>>> >> written all of the functions called parseSomething): >>>>>>>> >> >>>>>>>> >> getStuff = do >>>>>>>> >> cipher <- parseCipher :: Get MyCipher >>>>>>>> >> clearText <- case cipher of >>>>>>>> >> MyAES -> do >>>>>>>> >> keyBS <- parseAESKey :: Get ByteString >>>>>>>> >> let key = either (error "bad AES key") id $ makeKey keyBS >>>>>>>> >> cipher = cipherInit key >>>>>>>> >> cipherText <- parseAESCipherText :: Get ByteString >>>>>>>> >> return $ ecbDecrypt cipher cipherText >>>>>>>> >> MyBlowfish -> do ... >>>>>>>> >> >>>>>>>> >> etc. >>>>>>>> >> >>>>>>>> >> Hope this helps, >>>>>>>> >> Yitz >>>>>>>> > >>>>>>>> > >>>>>>>> > >>>>>>>> > >>>>>>>> > -- >>>>>>>> > ??????? >>>>>>>> > ??????? >>>>>>>> > >>>>>>>> > And for G+, please use magiclouds#gmail.com. 
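On the "Should I use Reader here?" and "could not pass decrypt into get function" point: `Get` carries no environment, but the parser can simply be an ordinary function of the cipher, and the inner plaintext can be decoded by running another parser inside `Get`, as Andrey suggests. A minimal sketch, assuming (purely for illustration, the thread never fixes this) that each field is written as a 16-bit length followed by that many encrypted bytes; `Cipher` and `decryptBS` are invented names.

```
module FieldWiseGet where

import qualified Data.ByteString as BS
import           Data.Serialize     (Serialize (..))
import           Data.Serialize.Get (Get, getByteString, getWord16be, runGet)

-- Invented interface for the cipher picked out of the runtime config.
newtype Cipher = Cipher
  { decryptBS :: BS.ByteString -> Either String BS.ByteString }

-- Assumed wire layout (illustrative only): a 16-bit big-endian length,
-- then that many encrypted bytes holding one cereal-encoded field.
getEncryptedField :: Serialize a => Cipher -> Get a
getEncryptedField cipher = do
  len        <- getWord16be
  ciphertext <- getByteString (fromIntegral len)
  case decryptBS cipher ciphertext >>= runGet get of
    Left err -> fail err    -- shows up as Left <message> from decode/runGet
    Right x  -> return x
```

A full record parser is then just a plain function such as `Cipher -> Get Person` built from calls like this, so the "environment" is threaded as an ordinary argument rather than through a Reader.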
>>>>>>>> > >>>>>>>> > _______________________________________________ >>>>>>>> > Haskell-Cafe mailing list >>>>>>>> > Haskell-Cafe at haskell.org >>>>>>>> > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >>>>>>>> > >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> -- >>>>>>>> Ivan Lazar Miljenovic >>>>>>>> Ivan.Miljenovic at gmail.com >>>>>>>> http://IvanMiljenovic.wordpress.com >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> ??????? >>>>>>> ??????? >>>>>>> >>>>>>> And for G+, please use magiclouds#gmail.com. >>>>>> >>>>>> >>>>> >>>>> >>>>> >>>>> -- >>>>> ??????? >>>>> ??????? >>>>> >>>>> And for G+, please use magiclouds#gmail.com. >>>> >>>> >>> >>> >>> >>> -- >>> ??????? >>> ??????? >>> >>> And for G+, please use magiclouds#gmail.com. >> >> > > > > -- > ??????? > ??????? > > And for G+, please use magiclouds#gmail.com. > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > -- Ivan Lazar Miljenovic Ivan.Miljenovic at gmail.com http://IvanMiljenovic.wordpress.com From magicloud.magiclouds at gmail.com Wed Apr 22 03:58:18 2015 From: magicloud.magiclouds at gmail.com (Magicloud Magiclouds) Date: Wed, 22 Apr 2015 11:58:18 +0800 Subject: [Haskell-cafe] How to use an crypto with hackage cereal? In-Reply-To: References: <1429674283598.189c259d@Nodemailer> Message-ID: Ah, sorry, the "encode" function is the one from cereal, not to "generate a particular encoding for a value". On Wed, Apr 22, 2015 at 11:56 AM, Ivan Lazar Miljenovic < ivan.miljenovic at gmail.com> wrote: > On 22 April 2015 at 13:52, Magicloud Magiclouds > wrote: > > Say the data structure is: > > > > data Person = Person { name :: String > > , gender :: Gender > > , age :: Int } > > > > Then the process to generate the binary is: > > > > msum $ map (encrypt . encode) [ length $ name person, name person, gender > > person, age person ] > > > > Above process is just persudo in Haskell, the actual is not coded in > > Haskell. > > Except that binary and cereal are for serializing Haskell values > directly; you seem to be wanting to parse and generate a particular > encoding for a value. In which case, I don't think binary or cereal > is really appropriate. > > > > > On Wed, Apr 22, 2015 at 11:44 AM, Andrey Sverdlichenko > > wrote: > >> > >> Could you describe encrypted data format? I can't understand problem > with > >> decryption. > >> > >> > >> > >> On Tue, Apr 21, 2015 at 8:41 PM, Magicloud Magiclouds > >> wrote: > >>> > >>> That is the ugliness of the original binary data. The encryption is not > >>> by fixed block size. So decrypt cannot be run before the get* helpers. > So > >>> decrypt-runGetPartial-decrypt-runGetPartial loop would not work. > >>> > >>> I need a "post process" in Get. For example, "portNumber <- liftM > decrypt > >>> getWord16be; return $ MyDataType portNumber". But currently I could > not pass > >>> decrypt into get function. > >>> > >>> On Wed, Apr 22, 2015 at 11:26 AM, Andrey Sverdlichenko > > >>> wrote: > >>>> > >>>> You can't really modify source bytestring inside Get monad, and this > is > >>>> what decryption effectively do. The only option I know about is to run > >>>> another parser inside Get monad. I'd rather write > >>>> decrypt-runGetPartial-decrypt-runGetPartial loop and return Fail from > it on > >>>> decryption error. > >>>> > >>>> > >>>> > >>>> On Tue, Apr 21, 2015 at 8:12 PM, Magicloud Magiclouds > >>>> wrote: > >>>>> > >>>>> How about fail in Get monad if decrypt failed? 
So decrypt failure > would > >>>>> lead to a result of "Left String" on decode. > >>>>> > >>>>> On Wed, Apr 22, 2015 at 11:05 AM, Andrey Sverdlichenko < > blaze at ruddy.ru> > >>>>> wrote: > >>>>>> > >>>>>> You probably should not merge decrypt and decode operations, it is > bad > >>>>>> crypto habit. Until you decrypted and verified integrity of data, > parsing is > >>>>>> dangerous and opening your service to attacks. Correct way of > implementing > >>>>>> this would be to pass ciphertext to decryption function and run > parser only > >>>>>> if decryption is successful. If bytestring is too big to be > decrypted in one > >>>>>> piece, consider encrypting it in blocks and feeding decrypted parts > to > >>>>>> parser. > >>>>>> > >>>>>> > >>>>>> > >>>>>> On Tue, Apr 21, 2015 at 7:49 PM, Magicloud Magiclouds > >>>>>> wrote: > >>>>>>> > >>>>>>> Similar as you envisaged. I would receive a bytestring data and a > >>>>>>> config point out what cipher to use. Then I deserialize the data > to a data > >>>>>>> type with some fields. The serialize process is something like: > >>>>>>> > >>>>>>> msum $ map (encrypt . encode) [field1, field2, field3] > >>>>>>> > >>>>>>> I could parse the bytestring outside Get/Put monads. But I think > that > >>>>>>> looks ugly. I really want to embed the decrypt process into > Get/Put monads. > >>>>>>> > >>>>>>> On Tue, Apr 21, 2015 at 10:08 PM, Ivan Lazar Miljenovic > >>>>>>> wrote: > >>>>>>>> > >>>>>>>> On 21 April 2015 at 23:58, Magicloud Magiclouds > >>>>>>>> wrote: > >>>>>>>> > Thank you. But how if the cipher was specified outside the > binary > >>>>>>>> > data? I > >>>>>>>> > mean I need to pass the decrypt/encrypt function to get/put > while > >>>>>>>> > they do > >>>>>>>> > not accept parameters. Should I use Reader here? > >>>>>>>> > >>>>>>>> Maybe you could explain what you're doing better. > >>>>>>>> > >>>>>>>> I would envisage that you would get a Bytestring/Text value, then > >>>>>>>> encrypt/decrypt and then put it back (though if you're dealing > with > >>>>>>>> Bytestrings, unless you're wanting to compose them with others > >>>>>>>> there's > >>>>>>>> no real need to use Get and Put as you'll have the resulting > >>>>>>>> Bytestring already...). > >>>>>>>> > >>>>>>>> Or are you wanting to implement your own encryption/decryption > >>>>>>>> scheme? > >>>>>>>> In which case, you might want to either: > >>>>>>>> > >>>>>>>> a) write custom functions in the Get and Put monads OR > >>>>>>>> > >>>>>>>> b) write custom parsers (e.g. attoparsec) and builders (using the > >>>>>>>> Builder module in bytestring); this is probably going to suit you > >>>>>>>> better. > >>>>>>>> > >>>>>>>> > > >>>>>>>> > On Tue, Apr 21, 2015 at 6:43 PM, Yitzchak Gale > >>>>>>>> > wrote: > >>>>>>>> >> > >>>>>>>> >> Magicloud Magiclouds wrote: > >>>>>>>> >> > I am trying to work with some binary data that encrypted by > >>>>>>>> >> > field > >>>>>>>> >> > instead of > >>>>>>>> >> > the result of serialization. I'd like to use Data.Serialize > to > >>>>>>>> >> > wrap the > >>>>>>>> >> > data > >>>>>>>> >> > structure. But I could not figure out how to apply an runtime > >>>>>>>> >> > specified > >>>>>>>> >> > cipher method to the bytestring. > >>>>>>>> >> > >>>>>>>> >> Are you using the set of crypto libraries written by > >>>>>>>> >> Victor Hanquez, such as cryptocipher-types, > >>>>>>>> >> crypto-pubkey-types, and cryptohash? > >>>>>>>> >> > >>>>>>>> >> Or the set of libraries written by Thomas DuBuisson, > >>>>>>>> >> such as crypto-api, cipher-aes128, etc.? 
> >>>>>>>> >> > >>>>>>>> >> Here is an example of decoding for Victor's libraries. > >>>>>>>> >> Encoding would be similar using Put instead of Get. > >>>>>>>> >> Thomas' libraries would be similar using the other > >>>>>>>> >> API. > >>>>>>>> >> > >>>>>>>> >> Let's say you have a type like this: > >>>>>>>> >> > >>>>>>>> >> data MyCipher = MyAES | MyBlowfish | ... > >>>>>>>> >> > >>>>>>>> >> Then in your cereal code you would have a Get monad > >>>>>>>> >> expression something like this (assuming you have > >>>>>>>> >> written all of the functions called parseSomething): > >>>>>>>> >> > >>>>>>>> >> getStuff = do > >>>>>>>> >> cipher <- parseCipher :: Get MyCipher > >>>>>>>> >> clearText <- case cipher of > >>>>>>>> >> MyAES -> do > >>>>>>>> >> keyBS <- parseAESKey :: Get ByteString > >>>>>>>> >> let key = either (error "bad AES key") id $ makeKey keyBS > >>>>>>>> >> cipher = cipherInit key > >>>>>>>> >> cipherText <- parseAESCipherText :: Get ByteString > >>>>>>>> >> return $ ecbDecrypt cipher cipherText > >>>>>>>> >> MyBlowfish -> do ... > >>>>>>>> >> > >>>>>>>> >> etc. > >>>>>>>> >> > >>>>>>>> >> Hope this helps, > >>>>>>>> >> Yitz > >>>>>>>> > > >>>>>>>> > > >>>>>>>> > > >>>>>>>> > > >>>>>>>> > -- > >>>>>>>> > ??????? > >>>>>>>> > ??????? > >>>>>>>> > > >>>>>>>> > And for G+, please use magiclouds#gmail.com. > >>>>>>>> > > >>>>>>>> > _______________________________________________ > >>>>>>>> > Haskell-Cafe mailing list > >>>>>>>> > Haskell-Cafe at haskell.org > >>>>>>>> > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > >>>>>>>> > > >>>>>>>> > >>>>>>>> > >>>>>>>> > >>>>>>>> -- > >>>>>>>> Ivan Lazar Miljenovic > >>>>>>>> Ivan.Miljenovic at gmail.com > >>>>>>>> http://IvanMiljenovic.wordpress.com > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> -- > >>>>>>> ??????? > >>>>>>> ??????? > >>>>>>> > >>>>>>> And for G+, please use magiclouds#gmail.com. > >>>>>> > >>>>>> > >>>>> > >>>>> > >>>>> > >>>>> -- > >>>>> ??????? > >>>>> ??????? > >>>>> > >>>>> And for G+, please use magiclouds#gmail.com. > >>>> > >>>> > >>> > >>> > >>> > >>> -- > >>> ??????? > >>> ??????? > >>> > >>> And for G+, please use magiclouds#gmail.com. > >> > >> > > > > > > > > -- > > ??????? > > ??????? > > > > And for G+, please use magiclouds#gmail.com. > > > > _______________________________________________ > > Haskell-Cafe mailing list > > Haskell-Cafe at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > > > > > -- > Ivan Lazar Miljenovic > Ivan.Miljenovic at gmail.com > http://IvanMiljenovic.wordpress.com > -- ??????? ??????? And for G+, please use magiclouds#gmail.com. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ivan.miljenovic at gmail.com Wed Apr 22 04:01:05 2015 From: ivan.miljenovic at gmail.com (Ivan Lazar Miljenovic) Date: Wed, 22 Apr 2015 14:01:05 +1000 Subject: [Haskell-cafe] How to use an crypto with hackage cereal? In-Reply-To: References: <1429674283598.189c259d@Nodemailer> Message-ID: On 22 April 2015 at 13:58, Magicloud Magiclouds wrote: > Ah, sorry, the "encode" function is the one from cereal, not to "generate a > particular encoding for a value". I meant "encode" as in "use a specific representation rather than directly converting it on a constructor-by-constructor basis as is the default for Get/Put". 
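Ivan's distinction here is the difference between letting cereal use its default constructor-by-constructor encoding and pinning the wire format down by hand. A sketch of the hand-written version for the Person type from earlier in the thread, with an invented layout (length-prefixed name bytes, a one-byte gender tag, a 16-bit age) chosen only for illustration:

```
module PersonWire where

import qualified Data.ByteString       as BS
import qualified Data.ByteString.Char8 as BC
import           Data.Serialize        (Serialize (..))
import           Data.Serialize.Get
import           Data.Serialize.Put

data Gender = Male | Female deriving (Show, Eq)

data Person = Person
  { name   :: String
  , gender :: Gender
  , age    :: Int
  } deriving (Show, Eq)

-- A fixed, explicit representation instead of the default one.
instance Serialize Person where
  put p = do
    -- Char8.pack is fine for ASCII names; real code would go through UTF-8.
    let nameBytes = BC.pack (name p)
    putWord16be (fromIntegral (BS.length nameBytes))
    putByteString nameBytes
    putWord8 (case gender p of Male -> 0; Female -> 1)
    putWord16be (fromIntegral (age p))
  get = do
    len       <- getWord16be
    nameBytes <- getByteString (fromIntegral len)
    tag       <- getWord8
    g         <- case tag of
                   0 -> return Male
                   1 -> return Female
                   _ -> fail "unknown gender tag"
    a         <- getWord16be
    return (Person (BC.unpack nameBytes) g (fromIntegral a))
```

With an instance like this, `encode` and `decode` work as before but the bytes on the wire are fully specified. It still does not turn cereal into a parser for an arbitrary externally defined format, which is Ivan's point.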
> > On Wed, Apr 22, 2015 at 11:56 AM, Ivan Lazar Miljenovic > wrote: >> >> On 22 April 2015 at 13:52, Magicloud Magiclouds >> wrote: >> > Say the data structure is: >> > >> > data Person = Person { name :: String >> > , gender :: Gender >> > , age :: Int } >> > >> > Then the process to generate the binary is: >> > >> > msum $ map (encrypt . encode) [ length $ name person, name person, >> > gender >> > person, age person ] >> > >> > Above process is just persudo in Haskell, the actual is not coded in >> > Haskell. >> >> Except that binary and cereal are for serializing Haskell values >> directly; you seem to be wanting to parse and generate a particular >> encoding for a value. In which case, I don't think binary or cereal >> is really appropriate. >> >> > >> > On Wed, Apr 22, 2015 at 11:44 AM, Andrey Sverdlichenko >> > wrote: >> >> >> >> Could you describe encrypted data format? I can't understand problem >> >> with >> >> decryption. >> >> >> >> >> >> >> >> On Tue, Apr 21, 2015 at 8:41 PM, Magicloud Magiclouds >> >> wrote: >> >>> >> >>> That is the ugliness of the original binary data. The encryption is >> >>> not >> >>> by fixed block size. So decrypt cannot be run before the get* helpers. >> >>> So >> >>> decrypt-runGetPartial-decrypt-runGetPartial loop would not work. >> >>> >> >>> I need a "post process" in Get. For example, "portNumber <- liftM >> >>> decrypt >> >>> getWord16be; return $ MyDataType portNumber". But currently I could >> >>> not pass >> >>> decrypt into get function. >> >>> >> >>> On Wed, Apr 22, 2015 at 11:26 AM, Andrey Sverdlichenko >> >>> >> >>> wrote: >> >>>> >> >>>> You can't really modify source bytestring inside Get monad, and this >> >>>> is >> >>>> what decryption effectively do. The only option I know about is to >> >>>> run >> >>>> another parser inside Get monad. I'd rather write >> >>>> decrypt-runGetPartial-decrypt-runGetPartial loop and return Fail from >> >>>> it on >> >>>> decryption error. >> >>>> >> >>>> >> >>>> >> >>>> On Tue, Apr 21, 2015 at 8:12 PM, Magicloud Magiclouds >> >>>> wrote: >> >>>>> >> >>>>> How about fail in Get monad if decrypt failed? So decrypt failure >> >>>>> would >> >>>>> lead to a result of "Left String" on decode. >> >>>>> >> >>>>> On Wed, Apr 22, 2015 at 11:05 AM, Andrey Sverdlichenko >> >>>>> >> >>>>> wrote: >> >>>>>> >> >>>>>> You probably should not merge decrypt and decode operations, it is >> >>>>>> bad >> >>>>>> crypto habit. Until you decrypted and verified integrity of data, >> >>>>>> parsing is >> >>>>>> dangerous and opening your service to attacks. Correct way of >> >>>>>> implementing >> >>>>>> this would be to pass ciphertext to decryption function and run >> >>>>>> parser only >> >>>>>> if decryption is successful. If bytestring is too big to be >> >>>>>> decrypted in one >> >>>>>> piece, consider encrypting it in blocks and feeding decrypted parts >> >>>>>> to >> >>>>>> parser. >> >>>>>> >> >>>>>> >> >>>>>> >> >>>>>> On Tue, Apr 21, 2015 at 7:49 PM, Magicloud Magiclouds >> >>>>>> wrote: >> >>>>>>> >> >>>>>>> Similar as you envisaged. I would receive a bytestring data and a >> >>>>>>> config point out what cipher to use. Then I deserialize the data >> >>>>>>> to a data >> >>>>>>> type with some fields. The serialize process is something like: >> >>>>>>> >> >>>>>>> msum $ map (encrypt . encode) [field1, field2, field3] >> >>>>>>> >> >>>>>>> I could parse the bytestring outside Get/Put monads. But I think >> >>>>>>> that >> >>>>>>> looks ugly. 
I really want to embed the decrypt process into >> >>>>>>> Get/Put monads. >> >>>>>>> >> >>>>>>> On Tue, Apr 21, 2015 at 10:08 PM, Ivan Lazar Miljenovic >> >>>>>>> wrote: >> >>>>>>>> >> >>>>>>>> On 21 April 2015 at 23:58, Magicloud Magiclouds >> >>>>>>>> wrote: >> >>>>>>>> > Thank you. But how if the cipher was specified outside the >> >>>>>>>> > binary >> >>>>>>>> > data? I >> >>>>>>>> > mean I need to pass the decrypt/encrypt function to get/put >> >>>>>>>> > while >> >>>>>>>> > they do >> >>>>>>>> > not accept parameters. Should I use Reader here? >> >>>>>>>> >> >>>>>>>> Maybe you could explain what you're doing better. >> >>>>>>>> >> >>>>>>>> I would envisage that you would get a Bytestring/Text value, then >> >>>>>>>> encrypt/decrypt and then put it back (though if you're dealing >> >>>>>>>> with >> >>>>>>>> Bytestrings, unless you're wanting to compose them with others >> >>>>>>>> there's >> >>>>>>>> no real need to use Get and Put as you'll have the resulting >> >>>>>>>> Bytestring already...). >> >>>>>>>> >> >>>>>>>> Or are you wanting to implement your own encryption/decryption >> >>>>>>>> scheme? >> >>>>>>>> In which case, you might want to either: >> >>>>>>>> >> >>>>>>>> a) write custom functions in the Get and Put monads OR >> >>>>>>>> >> >>>>>>>> b) write custom parsers (e.g. attoparsec) and builders (using the >> >>>>>>>> Builder module in bytestring); this is probably going to suit you >> >>>>>>>> better. >> >>>>>>>> >> >>>>>>>> > >> >>>>>>>> > On Tue, Apr 21, 2015 at 6:43 PM, Yitzchak Gale >> >>>>>>>> > wrote: >> >>>>>>>> >> >> >>>>>>>> >> Magicloud Magiclouds wrote: >> >>>>>>>> >> > I am trying to work with some binary data that encrypted by >> >>>>>>>> >> > field >> >>>>>>>> >> > instead of >> >>>>>>>> >> > the result of serialization. I'd like to use Data.Serialize >> >>>>>>>> >> > to >> >>>>>>>> >> > wrap the >> >>>>>>>> >> > data >> >>>>>>>> >> > structure. But I could not figure out how to apply an >> >>>>>>>> >> > runtime >> >>>>>>>> >> > specified >> >>>>>>>> >> > cipher method to the bytestring. >> >>>>>>>> >> >> >>>>>>>> >> Are you using the set of crypto libraries written by >> >>>>>>>> >> Victor Hanquez, such as cryptocipher-types, >> >>>>>>>> >> crypto-pubkey-types, and cryptohash? >> >>>>>>>> >> >> >>>>>>>> >> Or the set of libraries written by Thomas DuBuisson, >> >>>>>>>> >> such as crypto-api, cipher-aes128, etc.? >> >>>>>>>> >> >> >>>>>>>> >> Here is an example of decoding for Victor's libraries. >> >>>>>>>> >> Encoding would be similar using Put instead of Get. >> >>>>>>>> >> Thomas' libraries would be similar using the other >> >>>>>>>> >> API. >> >>>>>>>> >> >> >>>>>>>> >> Let's say you have a type like this: >> >>>>>>>> >> >> >>>>>>>> >> data MyCipher = MyAES | MyBlowfish | ... >> >>>>>>>> >> >> >>>>>>>> >> Then in your cereal code you would have a Get monad >> >>>>>>>> >> expression something like this (assuming you have >> >>>>>>>> >> written all of the functions called parseSomething): >> >>>>>>>> >> >> >>>>>>>> >> getStuff = do >> >>>>>>>> >> cipher <- parseCipher :: Get MyCipher >> >>>>>>>> >> clearText <- case cipher of >> >>>>>>>> >> MyAES -> do >> >>>>>>>> >> keyBS <- parseAESKey :: Get ByteString >> >>>>>>>> >> let key = either (error "bad AES key") id $ makeKey >> >>>>>>>> >> keyBS >> >>>>>>>> >> cipher = cipherInit key >> >>>>>>>> >> cipherText <- parseAESCipherText :: Get ByteString >> >>>>>>>> >> return $ ecbDecrypt cipher cipherText >> >>>>>>>> >> MyBlowfish -> do ... >> >>>>>>>> >> >> >>>>>>>> >> etc. 
>> >>>>>>>> >> >> >>>>>>>> >> Hope this helps, >> >>>>>>>> >> Yitz >> >>>>>>>> > >> >>>>>>>> > >> >>>>>>>> > >> >>>>>>>> > >> >>>>>>>> > -- >> >>>>>>>> > ??????? >> >>>>>>>> > ??????? >> >>>>>>>> > >> >>>>>>>> > And for G+, please use magiclouds#gmail.com. >> >>>>>>>> > >> >>>>>>>> > _______________________________________________ >> >>>>>>>> > Haskell-Cafe mailing list >> >>>>>>>> > Haskell-Cafe at haskell.org >> >>>>>>>> > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> >>>>>>>> > >> >>>>>>>> >> >>>>>>>> >> >>>>>>>> >> >>>>>>>> -- >> >>>>>>>> Ivan Lazar Miljenovic >> >>>>>>>> Ivan.Miljenovic at gmail.com >> >>>>>>>> http://IvanMiljenovic.wordpress.com >> >>>>>>> >> >>>>>>> >> >>>>>>> >> >>>>>>> >> >>>>>>> -- >> >>>>>>> ??????? >> >>>>>>> ??????? >> >>>>>>> >> >>>>>>> And for G+, please use magiclouds#gmail.com. >> >>>>>> >> >>>>>> >> >>>>> >> >>>>> >> >>>>> >> >>>>> -- >> >>>>> ??????? >> >>>>> ??????? >> >>>>> >> >>>>> And for G+, please use magiclouds#gmail.com. >> >>>> >> >>>> >> >>> >> >>> >> >>> >> >>> -- >> >>> ??????? >> >>> ??????? >> >>> >> >>> And for G+, please use magiclouds#gmail.com. >> >> >> >> >> > >> > >> > >> > -- >> > ??????? >> > ??????? >> > >> > And for G+, please use magiclouds#gmail.com. >> > >> > _______________________________________________ >> > Haskell-Cafe mailing list >> > Haskell-Cafe at haskell.org >> > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> > >> >> >> >> -- >> Ivan Lazar Miljenovic >> Ivan.Miljenovic at gmail.com >> http://IvanMiljenovic.wordpress.com > > > > > -- > ??????? > ??????? > > And for G+, please use magiclouds#gmail.com. -- Ivan Lazar Miljenovic Ivan.Miljenovic at gmail.com http://IvanMiljenovic.wordpress.com From magicloud.magiclouds at gmail.com Wed Apr 22 04:02:07 2015 From: magicloud.magiclouds at gmail.com (Magicloud Magiclouds) Date: Wed, 22 Apr 2015 12:02:07 +0800 Subject: [Haskell-cafe] How to use an crypto with hackage cereal? In-Reply-To: References: <1429674283598.189c259d@Nodemailer> Message-ID: I see. Thank you. So I will find other solutions. On Wed, Apr 22, 2015 at 12:01 PM, Ivan Lazar Miljenovic < ivan.miljenovic at gmail.com> wrote: > On 22 April 2015 at 13:58, Magicloud Magiclouds > wrote: > > Ah, sorry, the "encode" function is the one from cereal, not to > "generate a > > particular encoding for a value". > > I meant "encode" as in "use a specific representation rather than > directly converting it on a constructor-by-constructor basis as is the > default for Get/Put". > > > > > On Wed, Apr 22, 2015 at 11:56 AM, Ivan Lazar Miljenovic > > wrote: > >> > >> On 22 April 2015 at 13:52, Magicloud Magiclouds > >> wrote: > >> > Say the data structure is: > >> > > >> > data Person = Person { name :: String > >> > , gender :: Gender > >> > , age :: Int } > >> > > >> > Then the process to generate the binary is: > >> > > >> > msum $ map (encrypt . encode) [ length $ name person, name person, > >> > gender > >> > person, age person ] > >> > > >> > Above process is just persudo in Haskell, the actual is not coded in > >> > Haskell. > >> > >> Except that binary and cereal are for serializing Haskell values > >> directly; you seem to be wanting to parse and generate a particular > >> encoding for a value. In which case, I don't think binary or cereal > >> is really appropriate. > >> > >> > > >> > On Wed, Apr 22, 2015 at 11:44 AM, Andrey Sverdlichenko < > blaze at ruddy.ru> > >> > wrote: > >> >> > >> >> Could you describe encrypted data format? 
I can't understand problem > >> >> with > >> >> decryption. > >> >> > >> >> > >> >> > >> >> On Tue, Apr 21, 2015 at 8:41 PM, Magicloud Magiclouds > >> >> wrote: > >> >>> > >> >>> That is the ugliness of the original binary data. The encryption is > >> >>> not > >> >>> by fixed block size. So decrypt cannot be run before the get* > helpers. > >> >>> So > >> >>> decrypt-runGetPartial-decrypt-runGetPartial loop would not work. > >> >>> > >> >>> I need a "post process" in Get. For example, "portNumber <- liftM > >> >>> decrypt > >> >>> getWord16be; return $ MyDataType portNumber". But currently I could > >> >>> not pass > >> >>> decrypt into get function. > >> >>> > >> >>> On Wed, Apr 22, 2015 at 11:26 AM, Andrey Sverdlichenko > >> >>> > >> >>> wrote: > >> >>>> > >> >>>> You can't really modify source bytestring inside Get monad, and > this > >> >>>> is > >> >>>> what decryption effectively do. The only option I know about is to > >> >>>> run > >> >>>> another parser inside Get monad. I'd rather write > >> >>>> decrypt-runGetPartial-decrypt-runGetPartial loop and return Fail > from > >> >>>> it on > >> >>>> decryption error. > >> >>>> > >> >>>> > >> >>>> > >> >>>> On Tue, Apr 21, 2015 at 8:12 PM, Magicloud Magiclouds > >> >>>> wrote: > >> >>>>> > >> >>>>> How about fail in Get monad if decrypt failed? So decrypt failure > >> >>>>> would > >> >>>>> lead to a result of "Left String" on decode. > >> >>>>> > >> >>>>> On Wed, Apr 22, 2015 at 11:05 AM, Andrey Sverdlichenko > >> >>>>> > >> >>>>> wrote: > >> >>>>>> > >> >>>>>> You probably should not merge decrypt and decode operations, it > is > >> >>>>>> bad > >> >>>>>> crypto habit. Until you decrypted and verified integrity of data, > >> >>>>>> parsing is > >> >>>>>> dangerous and opening your service to attacks. Correct way of > >> >>>>>> implementing > >> >>>>>> this would be to pass ciphertext to decryption function and run > >> >>>>>> parser only > >> >>>>>> if decryption is successful. If bytestring is too big to be > >> >>>>>> decrypted in one > >> >>>>>> piece, consider encrypting it in blocks and feeding decrypted > parts > >> >>>>>> to > >> >>>>>> parser. > >> >>>>>> > >> >>>>>> > >> >>>>>> > >> >>>>>> On Tue, Apr 21, 2015 at 7:49 PM, Magicloud Magiclouds > >> >>>>>> wrote: > >> >>>>>>> > >> >>>>>>> Similar as you envisaged. I would receive a bytestring data and > a > >> >>>>>>> config point out what cipher to use. Then I deserialize the data > >> >>>>>>> to a data > >> >>>>>>> type with some fields. The serialize process is something like: > >> >>>>>>> > >> >>>>>>> msum $ map (encrypt . encode) [field1, field2, field3] > >> >>>>>>> > >> >>>>>>> I could parse the bytestring outside Get/Put monads. But I think > >> >>>>>>> that > >> >>>>>>> looks ugly. I really want to embed the decrypt process into > >> >>>>>>> Get/Put monads. > >> >>>>>>> > >> >>>>>>> On Tue, Apr 21, 2015 at 10:08 PM, Ivan Lazar Miljenovic > >> >>>>>>> wrote: > >> >>>>>>>> > >> >>>>>>>> On 21 April 2015 at 23:58, Magicloud Magiclouds > >> >>>>>>>> wrote: > >> >>>>>>>> > Thank you. But how if the cipher was specified outside the > >> >>>>>>>> > binary > >> >>>>>>>> > data? I > >> >>>>>>>> > mean I need to pass the decrypt/encrypt function to get/put > >> >>>>>>>> > while > >> >>>>>>>> > they do > >> >>>>>>>> > not accept parameters. Should I use Reader here? > >> >>>>>>>> > >> >>>>>>>> Maybe you could explain what you're doing better. 
> >> >>>>>>>> > >> >>>>>>>> I would envisage that you would get a Bytestring/Text value, > then > >> >>>>>>>> encrypt/decrypt and then put it back (though if you're dealing > >> >>>>>>>> with > >> >>>>>>>> Bytestrings, unless you're wanting to compose them with others > >> >>>>>>>> there's > >> >>>>>>>> no real need to use Get and Put as you'll have the resulting > >> >>>>>>>> Bytestring already...). > >> >>>>>>>> > >> >>>>>>>> Or are you wanting to implement your own encryption/decryption > >> >>>>>>>> scheme? > >> >>>>>>>> In which case, you might want to either: > >> >>>>>>>> > >> >>>>>>>> a) write custom functions in the Get and Put monads OR > >> >>>>>>>> > >> >>>>>>>> b) write custom parsers (e.g. attoparsec) and builders (using > the > >> >>>>>>>> Builder module in bytestring); this is probably going to suit > you > >> >>>>>>>> better. > >> >>>>>>>> > >> >>>>>>>> > > >> >>>>>>>> > On Tue, Apr 21, 2015 at 6:43 PM, Yitzchak Gale < > gale at sefer.org> > >> >>>>>>>> > wrote: > >> >>>>>>>> >> > >> >>>>>>>> >> Magicloud Magiclouds wrote: > >> >>>>>>>> >> > I am trying to work with some binary data that encrypted > by > >> >>>>>>>> >> > field > >> >>>>>>>> >> > instead of > >> >>>>>>>> >> > the result of serialization. I'd like to use > Data.Serialize > >> >>>>>>>> >> > to > >> >>>>>>>> >> > wrap the > >> >>>>>>>> >> > data > >> >>>>>>>> >> > structure. But I could not figure out how to apply an > >> >>>>>>>> >> > runtime > >> >>>>>>>> >> > specified > >> >>>>>>>> >> > cipher method to the bytestring. > >> >>>>>>>> >> > >> >>>>>>>> >> Are you using the set of crypto libraries written by > >> >>>>>>>> >> Victor Hanquez, such as cryptocipher-types, > >> >>>>>>>> >> crypto-pubkey-types, and cryptohash? > >> >>>>>>>> >> > >> >>>>>>>> >> Or the set of libraries written by Thomas DuBuisson, > >> >>>>>>>> >> such as crypto-api, cipher-aes128, etc.? > >> >>>>>>>> >> > >> >>>>>>>> >> Here is an example of decoding for Victor's libraries. > >> >>>>>>>> >> Encoding would be similar using Put instead of Get. > >> >>>>>>>> >> Thomas' libraries would be similar using the other > >> >>>>>>>> >> API. > >> >>>>>>>> >> > >> >>>>>>>> >> Let's say you have a type like this: > >> >>>>>>>> >> > >> >>>>>>>> >> data MyCipher = MyAES | MyBlowfish | ... > >> >>>>>>>> >> > >> >>>>>>>> >> Then in your cereal code you would have a Get monad > >> >>>>>>>> >> expression something like this (assuming you have > >> >>>>>>>> >> written all of the functions called parseSomething): > >> >>>>>>>> >> > >> >>>>>>>> >> getStuff = do > >> >>>>>>>> >> cipher <- parseCipher :: Get MyCipher > >> >>>>>>>> >> clearText <- case cipher of > >> >>>>>>>> >> MyAES -> do > >> >>>>>>>> >> keyBS <- parseAESKey :: Get ByteString > >> >>>>>>>> >> let key = either (error "bad AES key") id $ makeKey > >> >>>>>>>> >> keyBS > >> >>>>>>>> >> cipher = cipherInit key > >> >>>>>>>> >> cipherText <- parseAESCipherText :: Get ByteString > >> >>>>>>>> >> return $ ecbDecrypt cipher cipherText > >> >>>>>>>> >> MyBlowfish -> do ... > >> >>>>>>>> >> > >> >>>>>>>> >> etc. > >> >>>>>>>> >> > >> >>>>>>>> >> Hope this helps, > >> >>>>>>>> >> Yitz > >> >>>>>>>> > > >> >>>>>>>> > > >> >>>>>>>> > > >> >>>>>>>> > > >> >>>>>>>> > -- > >> >>>>>>>> > ??????? > >> >>>>>>>> > ??????? > >> >>>>>>>> > > >> >>>>>>>> > And for G+, please use magiclouds#gmail.com. 
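The "other solutions" at this point in the thread are essentially what Ivan suggested near the start: an attoparsec parser for reading an externally fixed format and a bytestring Builder for writing it. A small self-contained sketch, again with an invented length-prefixed framing rather than the real (unspecified) one:

```
module ExternalFormat where

import qualified Data.Attoparsec.ByteString as A
import           Data.Bits                  (shiftL, (.|.))
import qualified Data.ByteString            as BS
import qualified Data.ByteString.Builder    as B
import qualified Data.ByteString.Lazy       as BL
import           Data.Monoid                ((<>))
import           Data.Word                  (Word16)

-- Reading side: a 16-bit big-endian length prefix, then that many bytes.
word16be :: A.Parser Word16
word16be = do
  hi <- A.anyWord8
  lo <- A.anyWord8
  return (fromIntegral hi `shiftL` 8 .|. fromIntegral lo)

lengthPrefixed :: A.Parser BS.ByteString
lengthPrefixed = do
  n <- word16be
  A.take (fromIntegral n)

-- Writing side: the matching encoder as a Builder.
buildLengthPrefixed :: BS.ByteString -> B.Builder
buildLengthPrefixed bs =
  B.word16BE (fromIntegral (BS.length bs)) <> B.byteString bs

-- Round trip, mostly as a usage example.
roundTrip :: BS.ByteString -> Either String BS.ByteString
roundTrip bs =
  A.parseOnly lengthPrefixed
              (BL.toStrict (B.toLazyByteString (buildLengthPrefixed bs)))
```

attoparsec's incremental `parse` can also be fed decrypted chunks one at a time, much like the `runGetPartial` loop sketched earlier in the thread.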
> >> >>>>>>>> > > >> >>>>>>>> > _______________________________________________ > >> >>>>>>>> > Haskell-Cafe mailing list > >> >>>>>>>> > Haskell-Cafe at haskell.org > >> >>>>>>>> > > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > >> >>>>>>>> > > >> >>>>>>>> > >> >>>>>>>> > >> >>>>>>>> > >> >>>>>>>> -- > >> >>>>>>>> Ivan Lazar Miljenovic > >> >>>>>>>> Ivan.Miljenovic at gmail.com > >> >>>>>>>> http://IvanMiljenovic.wordpress.com > >> >>>>>>> > >> >>>>>>> > >> >>>>>>> > >> >>>>>>> > >> >>>>>>> -- > >> >>>>>>> ??????? > >> >>>>>>> ??????? > >> >>>>>>> > >> >>>>>>> And for G+, please use magiclouds#gmail.com. > >> >>>>>> > >> >>>>>> > >> >>>>> > >> >>>>> > >> >>>>> > >> >>>>> -- > >> >>>>> ??????? > >> >>>>> ??????? > >> >>>>> > >> >>>>> And for G+, please use magiclouds#gmail.com. > >> >>>> > >> >>>> > >> >>> > >> >>> > >> >>> > >> >>> -- > >> >>> ??????? > >> >>> ??????? > >> >>> > >> >>> And for G+, please use magiclouds#gmail.com. > >> >> > >> >> > >> > > >> > > >> > > >> > -- > >> > ??????? > >> > ??????? > >> > > >> > And for G+, please use magiclouds#gmail.com. > >> > > >> > _______________________________________________ > >> > Haskell-Cafe mailing list > >> > Haskell-Cafe at haskell.org > >> > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > >> > > >> > >> > >> > >> -- > >> Ivan Lazar Miljenovic > >> Ivan.Miljenovic at gmail.com > >> http://IvanMiljenovic.wordpress.com > > > > > > > > > > -- > > ??????? > > ??????? > > > > And for G+, please use magiclouds#gmail.com. > > > > -- > Ivan Lazar Miljenovic > Ivan.Miljenovic at gmail.com > http://IvanMiljenovic.wordpress.com > -- ??????? ??????? And for G+, please use magiclouds#gmail.com. -------------- next part -------------- An HTML attachment was scrubbed... URL: From hawu.bnu at gmail.com Wed Apr 22 11:58:13 2015 From: hawu.bnu at gmail.com (Jean Lopes) Date: Wed, 22 Apr 2015 04:58:13 -0700 (PDT) Subject: [Haskell-cafe] cabal install glade In-Reply-To: <55345DB6.10802@gmail.com> References: <9992f880-81e3-4041-8307-3b5857687c97@googlegroups.com> <552F65B3.6010004@gmail.com> <55309B40.5010601@gmail.com> <55345DB6.10802@gmail.com> Message-ID: <518a65ee-7eb3-431d-9fee-1dd4300d5d2a@googlegroups.com> Hi, I did wipe the constraints of gtk and cairo < 0.13 from the glade.cabal file, and it installed correctly only to see a error happening because I used the wrong glade version to generate the UI, later I found out the gtk3 library... which installed correctly from scratch and worked just fine.. the only downside so far is lack of samples written in haskell for the gtk3... I found more about gtk2... and there is some changes on "how to write"... the first difference for me was on how to set events for widgets... I did download the sources now, and started looking in the gtk3 demos... Em domingo, 19 de abril de 2015 23:00:28 UTC-3, Zilin Chen escreveu: > > Sorry for late reply. This issue has been fixed as per > https://github.com/gtk2hs/gtk2hs/issues/100 which appears in a later > version of cairo. It seems than glade requires gtk < 0.13 ==> cairo < 0.13 > and doesn't include the fix. > > On 19/04/15 02:10, Jean Lopes wrote: > > Ok, I will report the commands I am using: > $ cd glade (Matthew Pickering's glade repository clone) > $ cabal sandbox init > $ cabal update > $ cabal install --only-dependencies --dry-run > > Resolving dependencies... 
> > In order, the following would be installed (use -v for more details): > > mtl-2.2.1 > > utf8-string-0.3.8 (latest: 1) > > cairo-0.12.5.3 (latest: 0.13.1.0) > > glib-0.12.5.4 (latest: 0.13.1.0) > > gio-0.12.5.3 (latest: 0.13.1.0) > > pango-0.12.5.3 (latest: 0.13.1.0) > > gtk-0.12.5.7 (latest: 0.13.6) > $ cabal install --only-dependencies > > ... here comes the first error ... > > [1 of 2] Compiling SetupWrapper ( > /tmp/cairo-0.12.5.3-1305/cairo-0.12.5.3/SetupWrapper.hs, > /tmp/cairo-0.12.5.3-1305/cairo-0.12.5.3/dist/dist-sandbox-de3654e1/setup/SetupWrapper.o > ) > > > > /tmp/cairo-0.12.5.3-1305/cairo-0.12.5.3/SetupWrapper.hs:91:17: > > Ambiguous occurrence ?die? > > It could refer to either ?Distribution.Simple.Utils.die?, > > imported from ?Distribution.Simple.Utils? > at /tmp/cairo-0.12.5.3-1305/cairo-0.12.5.3/SetupWrapper.hs:8:1-32 > > or ?System.Exit.die?, > > imported from ?System.Exit? at > /tmp/cairo-0.12.5.3-1305/cairo-0.12.5.3/SetupWrapper.hs:21:1-18 > > Failed to install cairo-0.12.5.3 > > [1 of 2] Compiling SetupWrapper ( > /tmp/glib-0.12.5.4-1305/glib-0.12.5.4/SetupWrapper.hs, > /tmp/glib-0.12.5.4-1305/glib-0.12.5.4/dist/dist-sandbox-de3654e1/setup/SetupWrapper.o > ) > > > > /tmp/glib-0.12.5.4-1305/glib-0.12.5.4/SetupWrapper.hs:91:17: > > Ambiguous occurrence ?die? > > It could refer to either ?Distribution.Simple.Utils.die?, > > imported from ?Distribution.Simple.Utils? > at /tmp/glib-0.12.5.4-1305/glib-0.12.5.4/SetupWrapper.hs:8:1-32 > > or ?System.Exit.die?, > > imported from ?System.Exit? at > /tmp/glib-0.12.5.4-1305/glib-0.12.5.4/SetupWrapper.hs:21:1-18 > > Failed to install glib-0.12.5.4 > > cabal: Error: some packages failed to install: > > cairo-0.12.5.3 failed during the configure step. The exception was: > > ExitFailure 1 > > gio-0.12.5.3 depends on glib-0.12.5.4 which failed to install. > > glib-0.12.5.4 failed during the configure step. The exception was: > > ExitFailure 1 > > gtk-0.12.5.7 depends on glib-0.12.5.4 which failed to install. > > pango-0.12.5.3 depends on glib-0.12.5.4 which failed to install. > > So I supose the problem lies within the cairo package, right? > > Em sexta-feira, 17 de abril de 2015 02:34:00 UTC-3, Zilin Chen escreveu: >> >> Do you still get the same errors? I think the "Sandboxes: basic usage" >> section in [0] is what you'd follow. >> >> [0] >> https://www.haskell.org/cabal/users-guide/installing-packages.html#sandboxes-advanced-usage >> >> On 17/04/15 12:11, Jean Lopes wrote: >> >> Still no success...I am missing some very basic things probably.. >> >> Em quinta-feira, 16 de abril de 2015 04:33:20 UTC-3, Zilin Chen escreveu: >>> >>> Hi Jean, >>> >>> Simply do `$ cabal sandbox add-source ' and >>> then `$ cabal install --only-dependencies' as normal. I think it should >>> work. >>> >>> Cheers, >>> Zilin >>> >>> >>> On 15/04/15 22:01, Jean Lopes wrote: >>> >>> I will try to use your branch before going back to GHC 7.8... >>> >>> But, how exactly should I do that ? >>> Clone your branch; >>> Build from local source code with cabal ? (I just scrolled this part >>> while reading cabal tutorials, guess I'll have to take a look now) >>> What about dependencies ? I should use $ cabal install glade >>> --only-dependencies and than install glade from your branch ? >>> >>> Em quarta-feira, 15 de abril de 2015 05:48:42 UTC-3, Matthew Pickering >>> escreveu: >>>> >>>> Hi Jean, >>>> >>>> You can try cloning my branch until a push gets accepted upstream. 
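For anyone hitting the same ambiguous 'die' failure quoted above: SetupWrapper.hs imports both Distribution.Simple.Utils and System.Exit unqualified, and base 4.8 (shipped with GHC 7.10) added its own System.Exit.die, so the two names now clash. A stand-alone illustration of how such a Setup-style module can disambiguate by hiding one of the imports (only an illustration, not the patch that eventually landed upstream):

```
-- Wants Cabal's die. On GHC 7.10 an unqualified 'import System.Exit'
-- also brings System.Exit.die into scope, so hide it (or import qualified).
import Distribution.Simple.Utils (die)
import System.Exit hiding (die)

main :: IO ()
main = die "configure step failed"   -- unambiguously Distribution.Simple.Utils.die
```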
>>>> >>>> https://github.com/mpickering/glade >>>> >>>> The fixes to get it working with 7.10 were fairly minimal. >>>> >>>> Matt >>>> >>>> On Wed, Apr 15, 2015 at 4:33 AM, Jean Lopes wrote: >>>> > Hello, I am trying to install the Glade package from hackage, and I >>>> > keep getting exit failure... >>>> > >>>> > Hope someone can help me solve it! >>>> > >>>> > What I did: >>>> > $ mkdir ~/haskell/project >>>> > $ cd ~/haskell/project >>>> > $ cabal sandbox init >>>> > $ cabal update >>>> > $ cabal install alex >>>> > $ cabal install happy >>>> > $ cabal install gtk2hs-buildtools >>>> > $ cabal install gtk #successful until here >>>> > $ cabal install glade >>>> > >>>> > The last statement gave me the following error: >>>> > >>>> > $ [1 of 2] Compiling SetupWrapper ( >>>> > /tmp/cairo-0.12.5.3-5133/cairo-0.12.5.3/SetupWrapper.hs, >>>> > >>>> /tmp/cairo-0.12.5.3-5133/cairo-0.12.5.3/dist/dist-sandbox-acbd4b7/setup/SetupWrapper.o >>>> > ) >>>> > $ >>>> > $ /tmp/cairo-0.12.5.3-5133/cairo-0.12.5.3/SetupWrapper.hs:91:17: >>>> > $ Ambiguous occurrence ?die? >>>> > $ It could refer to either ?Distribution.Simple.Utils.die?, >>>> > $ imported from >>>> > ?Distribution.Simple.Utils? at >>>> > /tmp/cairo-0.12.5.3-5133/cairo-0.12.5.3/SetupWrapper.hs:8:1-32 >>>> > $ or ?System.Exit.die?, >>>> > $ imported from ?System.Exit? at >>>> > /tmp/cairo-0.12.5.3-5133/cairo-0.12.5.3/SetupWrapper.hs:21:1-18 >>>> > $ Failed to install cairo-0.12.5.3 >>>> > $ [1 of 2] Compiling SetupWrapper ( >>>> > /tmp/glib-0.12.5.4-5133/glib-0.12.5.4/SetupWrapper.hs, >>>> > >>>> /tmp/glib-0.12.5.4-5133/glib-0.12.5.4/dist/dist-sandbox-acbd4b7/setup/SetupWrapper.o >>>> > ) >>>> > $ >>>> > $ /tmp/glib-0.12.5.4-5133/glib-0.12.5.4/SetupWrapper.hs:91:17: >>>> > $ Ambiguous occurrence ?die? >>>> > $ It could refer to either ?Distribution.Simple.Utils.die?, >>>> > $ imported from >>>> > ?Distribution.Simple.Utils? at >>>> > /tmp/glib-0.12.5.4-5133/glib-0.12.5.4/SetupWrapper.hs:8:1-32 >>>> > $ or ?System.Exit.die?, >>>> > $ imported from ?System.Exit? at >>>> > /tmp/glib-0.12.5.4-5133/glib-0.12.5.4/SetupWrapper.hs:21:1-18 >>>> > $ Failed to install glib-0.12.5.4 >>>> > $ cabal: Error: some packages failed to install: >>>> > $ cairo-0.12.5.3 failed during the configure step. The exception was: >>>> > $ ExitFailure 1 >>>> > $ gio-0.12.5.3 depends on glib-0.12.5.4 which failed to install. >>>> > $ glade-0.12.5.0 depends on glib-0.12.5.4 which failed to install. >>>> > $ glib-0.12.5.4 failed during the configure step. The exception was: >>>> > $ ExitFailure 1 >>>> > $ gtk-0.12.5.7 depends on glib-0.12.5.4 which failed to install. >>>> > $ pango-0.12.5.3 depends on glib-0.12.5.4 which failed to install. >>>> > >>>> > Important: You can assume I don't know much. I'm rather new to >>>> Haskell/cabal >>>> > _______________________________________________ >>>> > Haskell-Cafe mailing list >>>> > Haskel... at haskell.org >>>> > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >>>> _______________________________________________ >>>> Haskell-Cafe mailing list >>>> Haskel... at haskell.org >>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >>>> >>> >>> >>> _______________________________________________ >>> Haskell-Cafe mailing listHaskel... at haskell.orghttp://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >>> >>> >>> >> >> _______________________________________________ >> Haskell-Cafe mailing listHaskel... 
at haskell.orghttp://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> >> >> > > _______________________________________________ > Haskell-Cafe mailing listHaskel... at haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From k-bx at k-bx.com Wed Apr 22 16:14:07 2015 From: k-bx at k-bx.com (Kostiantyn Rybnikov) Date: Wed, 22 Apr 2015 19:14:07 +0300 Subject: [Haskell-cafe] Timeout on pure code Message-ID: Hi! Our company's main commercial product is a Snap-based web app which we compile with GHC 7.8.4. It works on four app-servers currently load-balanced behind Haproxy. I recently implemented a new piece of functionality, which led to weird behavior which I have no idea how to debug, so I'm asking here for help and ideas! The new functionality is this: on specific url-handler, we need to query n external services concurrently with a timeout, gather and render results. Easy (in Haskell)! The implementation looks, as you might imagine, something like this (sorry for almost-real-haskell, I'm sure I forgot tons of imports and other things, but I hope everything is clear as-is, if not -- I'll be glad to update gist to make things more specific): https://gist.github.com/k-bx/0cf7035aaf1ad6306e76 Now, this works wonderful for some time, and in logs I can see both, successful fetches of external-content, and also lots of timeouts from our external providers. Life is good. But! After several days of work (sometimes a day, sometimes couple days), apps on all 4 servers go crazy. It might take some interval (like 20 minutes) before they're all crazy, so it's not super-synchronous. Now: how crazy, exactly? First of all, this endpoint timeouts. Haproxy requests for a response, and response times out, so they "hang". Secondly, logs are interesting. If you look at the code from gist once again, you can see, that some of CandidateProvider's don't actually require any networking work, so all they do is actually just logging that they're working (I added this as part of debugging actually) and return pure data. So what's weird is that they timeout also! Here's how output of our logs starts to look like after the bug happens: ``` [2015-04-22 09:56:20] provider: CandidateProvider1 [2015-04-22 09:56:20] provider: CandidateProvider2 [2015-04-22 09:56:21] Got timeout while requesting CandidateProvider1 [2015-04-22 09:56:21] Got timeout while requesting CandidateProvider2 [2015-04-22 09:56:22] provider: CandidateProvider1 [2015-04-22 09:56:22] provider: CandidateProvider2 [2015-04-22 09:56:23] Got timeout while requesting CandidateProvider1 [2015-04-22 09:56:23] Got timeout while requesting CandidateProvider2 ... and so on ``` What's also weird is that, even after timeout is logged, the string ""Got responses!" never gets logged also! So hanging happens somewhere in-between. I have to say I'm sorry that I don't have strace output now, I'll have to wait until this situation happens once again, but I'll get later to you with this info. So, how is this possible that almost-pure code gets timed-out? And why does it hang afterwards? CPU and other resource usage is quite low, number of open file-descriptors also (it seems). Thanks for all the suggestions in advance! -------------- next part -------------- An HTML attachment was scrubbed... 
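For readers who cannot reach the gist, the overall shape being described (fire off every provider concurrently, give each one a fixed timeout, collect whatever came back) looks roughly like the sketch below. Provider, fetch and the half-second figure are stand-ins, not the production code:

```
import Control.Concurrent.Async (mapConcurrently)
import System.Timeout (timeout)

data Provider = Provider
    { providerName :: String
    , fetch        :: IO String   -- a network call, or pure data wrapped in return
    }

-- Query all providers at once; Nothing marks a provider that timed out.
fetchAll :: [Provider] -> IO [(String, Maybe String)]
fetchAll = mapConcurrently $ \p -> do
    r <- timeout 500000 (fetch p)          -- 500 ms, in microseconds
    return (providerName p, r)
```

Because timeout wraps the whole IO action, a provider that merely returns pure data should come back as Just almost immediately, which is what makes the logged timeouts in this thread so surprising.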
URL: From greg at gregorycollins.net Wed Apr 22 16:56:12 2015 From: greg at gregorycollins.net (Gregory Collins) Date: Wed, 22 Apr 2015 09:56:12 -0700 Subject: [Haskell-cafe] Timeout on pure code In-Reply-To: References: Message-ID: Given your gist, the timeout on your requests is set to a half-second so it's conceivable that a highly-loaded server might have GC pause times approaching that long. Smells to me like a classic Haskell memory leak (that's why the problem occurs after the server has been up for a while): run your program with the heap profiler, and audit any shared tables/IORefs/MVars to make sure you are not building up thunks there. Greg On Wed, Apr 22, 2015 at 9:14 AM, Kostiantyn Rybnikov wrote: > Hi! > > Our company's main commercial product is a Snap-based web app which we > compile with GHC 7.8.4. It works on four app-servers currently > load-balanced behind Haproxy. > > I recently implemented a new piece of functionality, which led to weird > behavior which I have no idea how to debug, so I'm asking here for help and > ideas! > > The new functionality is this: on specific url-handler, we need to query n > external services concurrently with a timeout, gather and render results. > Easy (in Haskell)! > > The implementation looks, as you might imagine, something like this (sorry > for almost-real-haskell, I'm sure I forgot tons of imports and other > things, but I hope everything is clear as-is, if not -- I'll be glad to > update gist to make things more specific): > > https://gist.github.com/k-bx/0cf7035aaf1ad6306e76 > > Now, this works wonderful for some time, and in logs I can see both, > successful fetches of external-content, and also lots of timeouts from our > external providers. Life is good. > > But! After several days of work (sometimes a day, sometimes couple days), > apps on all 4 servers go crazy. It might take some interval (like 20 > minutes) before they're all crazy, so it's not super-synchronous. Now: how > crazy, exactly? > > First of all, this endpoint timeouts. Haproxy requests for a response, and > response times out, so they "hang". > > Secondly, logs are interesting. If you look at the code from gist once > again, you can see, that some of CandidateProvider's don't actually require > any networking work, so all they do is actually just logging that they're > working (I added this as part of debugging actually) and return pure data. > So what's weird is that they timeout also! Here's how output of our logs > starts to look like after the bug happens: > > ``` > [2015-04-22 09:56:20] provider: CandidateProvider1 > [2015-04-22 09:56:20] provider: CandidateProvider2 > [2015-04-22 09:56:21] Got timeout while requesting CandidateProvider1 > [2015-04-22 09:56:21] Got timeout while requesting CandidateProvider2 > [2015-04-22 09:56:22] provider: CandidateProvider1 > [2015-04-22 09:56:22] provider: CandidateProvider2 > [2015-04-22 09:56:23] Got timeout while requesting CandidateProvider1 > [2015-04-22 09:56:23] Got timeout while requesting CandidateProvider2 > ... and so on > ``` > > What's also weird is that, even after timeout is logged, the string ""Got > responses!" never gets logged also! So hanging happens somewhere in-between. > > I have to say I'm sorry that I don't have strace output now, I'll have to > wait until this situation happens once again, but I'll get later to you > with this info. > > So, how is this possible that almost-pure code gets timed-out? And why > does it hang afterwards? 
> > CPU and other resource usage is quite low, number of open file-descriptors > also (it seems). > > Thanks for all the suggestions in advance! > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > -- Gregory Collins -------------- next part -------------- An HTML attachment was scrubbed... URL: From k-bx at k-bx.com Wed Apr 22 17:14:09 2015 From: k-bx at k-bx.com (Kostiantyn Rybnikov) Date: Wed, 22 Apr 2015 20:14:09 +0300 Subject: [Haskell-cafe] Timeout on pure code In-Reply-To: References: Message-ID: Gregory, Servers are far from being highly-overloaded, since they're currently under a much less load they used to be. Memory consumption is stable and low, and there's a lot of free RAM also. Would you say that given these factors this scenario is unlikely? On Wed, Apr 22, 2015 at 7:56 PM, Gregory Collins wrote: > Given your gist, the timeout on your requests is set to a half-second so > it's conceivable that a highly-loaded server might have GC pause times > approaching that long. Smells to me like a classic Haskell memory leak > (that's why the problem occurs after the server has been up for a while): > run your program with the heap profiler, and audit any shared > tables/IORefs/MVars to make sure you are not building up thunks there. > > Greg > > On Wed, Apr 22, 2015 at 9:14 AM, Kostiantyn Rybnikov > wrote: > >> Hi! >> >> Our company's main commercial product is a Snap-based web app which we >> compile with GHC 7.8.4. It works on four app-servers currently >> load-balanced behind Haproxy. >> >> I recently implemented a new piece of functionality, which led to weird >> behavior which I have no idea how to debug, so I'm asking here for help and >> ideas! >> >> The new functionality is this: on specific url-handler, we need to query >> n external services concurrently with a timeout, gather and render results. >> Easy (in Haskell)! >> >> The implementation looks, as you might imagine, something like this >> (sorry for almost-real-haskell, I'm sure I forgot tons of imports and other >> things, but I hope everything is clear as-is, if not -- I'll be glad to >> update gist to make things more specific): >> >> https://gist.github.com/k-bx/0cf7035aaf1ad6306e76 >> >> Now, this works wonderful for some time, and in logs I can see both, >> successful fetches of external-content, and also lots of timeouts from our >> external providers. Life is good. >> >> But! After several days of work (sometimes a day, sometimes couple days), >> apps on all 4 servers go crazy. It might take some interval (like 20 >> minutes) before they're all crazy, so it's not super-synchronous. Now: how >> crazy, exactly? >> >> First of all, this endpoint timeouts. Haproxy requests for a response, >> and response times out, so they "hang". >> >> Secondly, logs are interesting. If you look at the code from gist once >> again, you can see, that some of CandidateProvider's don't actually require >> any networking work, so all they do is actually just logging that they're >> working (I added this as part of debugging actually) and return pure data. >> So what's weird is that they timeout also! 
Here's how output of our logs >> starts to look like after the bug happens: >> >> ``` >> [2015-04-22 09:56:20] provider: CandidateProvider1 >> [2015-04-22 09:56:20] provider: CandidateProvider2 >> [2015-04-22 09:56:21] Got timeout while requesting CandidateProvider1 >> [2015-04-22 09:56:21] Got timeout while requesting CandidateProvider2 >> [2015-04-22 09:56:22] provider: CandidateProvider1 >> [2015-04-22 09:56:22] provider: CandidateProvider2 >> [2015-04-22 09:56:23] Got timeout while requesting CandidateProvider1 >> [2015-04-22 09:56:23] Got timeout while requesting CandidateProvider2 >> ... and so on >> ``` >> >> What's also weird is that, even after timeout is logged, the string ""Got >> responses!" never gets logged also! So hanging happens somewhere in-between. >> >> I have to say I'm sorry that I don't have strace output now, I'll have to >> wait until this situation happens once again, but I'll get later to you >> with this info. >> >> So, how is this possible that almost-pure code gets timed-out? And why >> does it hang afterwards? >> >> CPU and other resource usage is quite low, number of open >> file-descriptors also (it seems). >> >> Thanks for all the suggestions in advance! >> >> _______________________________________________ >> Haskell-Cafe mailing list >> Haskell-Cafe at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> >> > > > -- > Gregory Collins > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail at nh2.me Wed Apr 22 17:30:10 2015 From: mail at nh2.me (=?windows-1252?Q?Niklas_Hamb=FCchen?=) Date: Thu, 23 Apr 2015 02:30:10 +0900 Subject: [Haskell-cafe] Timeout on pure code In-Reply-To: References: Message-ID: <5537DAA2.9070208@nh2.me> You might already have considered it, but the GHC eventlog together with Threadscope and ghc-events-analyze (http://www.well-typed.com/blog/2014/02/ghc-events-analyze/) can be very helpful to debug such issues. From k-bx at k-bx.com Wed Apr 22 17:46:28 2015 From: k-bx at k-bx.com (Kostiantyn Rybnikov) Date: Wed, 22 Apr 2015 20:46:28 +0300 Subject: [Haskell-cafe] Timeout on pure code In-Reply-To: <5537DAA2.9070208@nh2.me> References: <5537DAA2.9070208@nh2.me> Message-ID: Niklas, This seems a very helpful tool indeed, but I'm not completely sure how could it be helpful in for specific problem. Was there anything specific you would suggest to measure with it, or just consider it as a general nice tool while approaching the problem? Thanks! On Wed, Apr 22, 2015 at 8:30 PM, Niklas Hamb?chen wrote: > You might already have considered it, but the GHC eventlog together with > Threadscope and ghc-events-analyze > (http://www.well-typed.com/blog/2014/02/ghc-events-analyze/) can be very > helpful to debug such issues. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mail at nh2.me Wed Apr 22 18:44:55 2015 From: mail at nh2.me (=?UTF-8?B?TmlrbGFzIEhhbWLDvGNoZW4=?=) Date: Thu, 23 Apr 2015 03:44:55 +0900 Subject: [Haskell-cafe] Timeout on pure code In-Reply-To: References: <5537DAA2.9070208@nh2.me> Message-ID: <5537EC27.3080500@nh2.me> I meant in the general sense of approaching the problem, and the linked blog post takes as an example situation a web server that doesn't respond well after some amount of time, I associated that with your problem description. On 23/04/15 02:46, Kostiantyn Rybnikov wrote: > Niklas, > > This seems a very helpful tool indeed, but I'm not completely sure how > could it be helpful in for specific problem. 
Was there anything specific > you would suggest to measure with it, or just consider it as a general > nice tool while approaching the problem? > > Thanks! > > On Wed, Apr 22, 2015 at 8:30 PM, Niklas Hamb?chen > wrote: > > You might already have considered it, but the GHC eventlog together with > Threadscope and ghc-events-analyze > (http://www.well-typed.com/blog/2014/02/ghc-events-analyze/) can be very > helpful to debug such issues. > > From greg at gregorycollins.net Wed Apr 22 20:09:33 2015 From: greg at gregorycollins.net (Gregory Collins) Date: Wed, 22 Apr 2015 13:09:33 -0700 Subject: [Haskell-cafe] Timeout on pure code In-Reply-To: References: Message-ID: Maybe but it would be helpful to rule the scenario out. Johan's ekg library is also useful, it exports a webserver on a different port that you can use to track metrics like gc times, etc. Other options for further debugging include gathering strace logs from the binary. You'll have to do some data gathering to narrow down the cause unfortunately -- http client? your code? Snap server? GHC event manager (System.timeout is implemented here)? GC? etc G On Wed, Apr 22, 2015 at 10:14 AM, Kostiantyn Rybnikov wrote: > Gregory, > > Servers are far from being highly-overloaded, since they're currently > under a much less load they used to be. Memory consumption is stable and > low, and there's a lot of free RAM also. > > Would you say that given these factors this scenario is unlikely? > > On Wed, Apr 22, 2015 at 7:56 PM, Gregory Collins > wrote: > >> Given your gist, the timeout on your requests is set to a half-second so >> it's conceivable that a highly-loaded server might have GC pause times >> approaching that long. Smells to me like a classic Haskell memory leak >> (that's why the problem occurs after the server has been up for a while): >> run your program with the heap profiler, and audit any shared >> tables/IORefs/MVars to make sure you are not building up thunks there. >> >> Greg >> >> On Wed, Apr 22, 2015 at 9:14 AM, Kostiantyn Rybnikov >> wrote: >> >>> Hi! >>> >>> Our company's main commercial product is a Snap-based web app which we >>> compile with GHC 7.8.4. It works on four app-servers currently >>> load-balanced behind Haproxy. >>> >>> I recently implemented a new piece of functionality, which led to weird >>> behavior which I have no idea how to debug, so I'm asking here for help and >>> ideas! >>> >>> The new functionality is this: on specific url-handler, we need to query >>> n external services concurrently with a timeout, gather and render results. >>> Easy (in Haskell)! >>> >>> The implementation looks, as you might imagine, something like this >>> (sorry for almost-real-haskell, I'm sure I forgot tons of imports and other >>> things, but I hope everything is clear as-is, if not -- I'll be glad to >>> update gist to make things more specific): >>> >>> https://gist.github.com/k-bx/0cf7035aaf1ad6306e76 >>> >>> Now, this works wonderful for some time, and in logs I can see both, >>> successful fetches of external-content, and also lots of timeouts from our >>> external providers. Life is good. >>> >>> But! After several days of work (sometimes a day, sometimes couple >>> days), apps on all 4 servers go crazy. It might take some interval (like 20 >>> minutes) before they're all crazy, so it's not super-synchronous. Now: how >>> crazy, exactly? >>> >>> First of all, this endpoint timeouts. Haproxy requests for a response, >>> and response times out, so they "hang". >>> >>> Secondly, logs are interesting. 
If you look at the code from gist once >>> again, you can see, that some of CandidateProvider's don't actually require >>> any networking work, so all they do is actually just logging that they're >>> working (I added this as part of debugging actually) and return pure data. >>> So what's weird is that they timeout also! Here's how output of our logs >>> starts to look like after the bug happens: >>> >>> ``` >>> [2015-04-22 09:56:20] provider: CandidateProvider1 >>> [2015-04-22 09:56:20] provider: CandidateProvider2 >>> [2015-04-22 09:56:21] Got timeout while requesting CandidateProvider1 >>> [2015-04-22 09:56:21] Got timeout while requesting CandidateProvider2 >>> [2015-04-22 09:56:22] provider: CandidateProvider1 >>> [2015-04-22 09:56:22] provider: CandidateProvider2 >>> [2015-04-22 09:56:23] Got timeout while requesting CandidateProvider1 >>> [2015-04-22 09:56:23] Got timeout while requesting CandidateProvider2 >>> ... and so on >>> ``` >>> >>> What's also weird is that, even after timeout is logged, the string >>> ""Got responses!" never gets logged also! So hanging happens somewhere >>> in-between. >>> >>> I have to say I'm sorry that I don't have strace output now, I'll have >>> to wait until this situation happens once again, but I'll get later to you >>> with this info. >>> >>> So, how is this possible that almost-pure code gets timed-out? And why >>> does it hang afterwards? >>> >>> CPU and other resource usage is quite low, number of open >>> file-descriptors also (it seems). >>> >>> Thanks for all the suggestions in advance! >>> >>> _______________________________________________ >>> Haskell-Cafe mailing list >>> Haskell-Cafe at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >>> >>> >> >> >> -- >> Gregory Collins >> > > -- Gregory Collins -------------- next part -------------- An HTML attachment was scrubbed... URL: From erkokl at gmail.com Thu Apr 23 02:20:54 2015 From: erkokl at gmail.com (Levent Erkok) Date: Wed, 22 Apr 2015 19:20:54 -0700 Subject: [Haskell-cafe] [FUN] Cheryl's birthday; solved using Haskell/SBV Message-ID: For those of us who'd rather have Haskell do the thinking for us: https://gist.github.com/LeventErkok/654a86a3ec7d3799b624 Honestly, this is more an exercise in how to formalize such puzzles as opposed to demonstrating the capabilities of SBV or SMT-solvers in general; but fun nonetheless. The backend SMT solver (I used Z3) solves the puzzle instantly. Enjoy.. -Levent. PS. Thanks to Amit Goel for suggesting the formalization strategy used in the encoding. -------------- next part -------------- An HTML attachment was scrubbed... URL: From k-bx at k-bx.com Thu Apr 23 07:30:15 2015 From: k-bx at k-bx.com (Kostiantyn Rybnikov) Date: Thu, 23 Apr 2015 10:30:15 +0300 Subject: [Haskell-cafe] compiling GHC 7.8 on raspberry pi In-Reply-To: <20150421183729.6555acce@sven.bartscher> References: <20150419161515.57bdd83b@sven.bartscher> <1429458158.7729.7.camel@debian.org> <20150421183729.6555acce@sven.bartscher> Message-ID: Please keep us informed on the progress (and share binary if it'll work :) ). Thanks! 21 ????. 2015 19:37 "Sven Bartscher" ????: > On Sun, 19 Apr 2015 17:42:38 +0200 > Joachim Breitner wrote: > > > Hi, > > > > Am Sonntag, den 19.04.2015, 16:15 +0200 schrieb Sven Bartscher: > > > I'm trying to get a haskell program to run on a raspberry pi (running > > > raspbian). Unfortunately it requires template haskell. 
> > > Since the GHC included in raspbian wheezy doesn't support TH I'm trying > > > to compile GHC 7.8.4 on the rpi. > > > Most of the compilation worked fine. I got problems with the memory > > > consumption, but adding a lot of swapspace solved this problem. > > > During the final phase the compilation process complains about a > > > "strange closure type 49200" (the exact number is varying, but most > > > often it's 49200). > > > Does anyone here have experience, how to compile GHC 7.8 on a raspberry > > > pi? > > > > > > As a side note: The compilation is running in QEMU while the compiled > > > program should run on a real rpi. > > > > you might be interested in the patches that Debian applies to GHC, in > > particular the ARM-related one, even more in particular the one that > > enforces the use of gold as the linker: > > https://sources.debian.net/src/ghc/7.8.20141223-1/debian/patches/ > > Many thanks. I will try that, but it will take some time, until I know > whether it worked. > > Regards > Sven > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From J.Hage at uu.nl Thu Apr 23 07:50:27 2015 From: J.Hage at uu.nl (Jurriaan Hage) Date: Thu, 23 Apr 2015 09:50:27 +0200 Subject: [Haskell-cafe] [ANN]: the Helium compiler, version 1.8.1 In-Reply-To: References: <647A7C27-F466-4975-9A51-B866FD417342@uu.nl> Message-ID: <7B6059BA-36E0-47AD-B5AD-617B3F706972@uu.nl> On 20Apr, 2015, at 12:26, Alberto G. Corona wrote: > Great! > Hi Alberto, > How the type rules detailed in the "scripting the type inference engine" paper are implemented? Euh? I guess you have to consult the implementation of the compiler. Most of the code you need to look at is in src/Helium/StaticAnalysis/Directives/ Essentially, we ``replace?? the original constraints by the explicitly written down constraints. With these constraints a function is associated that given the necessary context information can produce the domain specific report. The replacement is performed by pattern matching on the AST. > it is possible to script the inference engine with such rules? Sure > If so, are there some examples? Just run heliumpath Then take the path that ends in lib/helium-1.8.1/share add /lib to the path, and then you can find in that directory files that have extension .type. Those can serve as examples. best, Jur From gale at sefer.org Thu Apr 23 08:46:24 2015 From: gale at sefer.org (Yitzchak Gale) Date: Thu, 23 Apr 2015 11:46:24 +0300 Subject: [Haskell-cafe] How to use an crypto with hackage cereal? In-Reply-To: References: <1429674283598.189c259d@Nodemailer> Message-ID: Ivan Lazar Miljenovic wrote: >>> Except that binary and cereal are for serializing Haskell values >>> directly; you seem to be wanting to parse and generate a particular >>> encoding for a value. In which case, I don't think binary or cereal >>> is really appropriate. > I meant "encode" as in "use a specific representation rather than > directly converting it on a constructor-by-constructor basis as is the > default for Get/Put". I must still not be understanding what you meant in your original comment. Because in my view, cereal and binary are *exactly* designed for that use, and do it extremely well. 
The only place where a particular "constructor-by-constructor" approach is built in is in the facilities for automated generation of parsers using generic techniques. But when writing parsers directly, you have full flexibility, and a powerful set of built-in tools to back you up. From tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk Thu Apr 23 08:58:15 2015 From: tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk (Tom Ellis) Date: Thu, 23 Apr 2015 09:58:15 +0100 Subject: [Haskell-cafe] How to use an crypto with hackage cereal? In-Reply-To: References: Message-ID: <20150423085815.GM19341@weber> On Tue, Apr 21, 2015 at 09:58:42PM +0800, Magicloud Magiclouds wrote: > Thank you. But how if the cipher was specified outside the binary data? I > mean I need to pass the decrypt/encrypt function to get/put while they do > not accept parameters. Should I use Reader here? I think the crux of the problem is that you are trying to use the Serialize typeclass. Don't. Just write functions. Tom From magicloud.magiclouds at gmail.com Thu Apr 23 09:17:57 2015 From: magicloud.magiclouds at gmail.com (Magicloud Magiclouds) Date: Thu, 23 Apr 2015 17:17:57 +0800 Subject: [Haskell-cafe] How to use an crypto with hackage cereal? In-Reply-To: <20150423085815.GM19341@weber> References: <20150423085815.GM19341@weber> Message-ID: Yes, I ended up writing my own FieldCryptoSerialize class. On Thu, Apr 23, 2015 at 4:58 PM, Tom Ellis < tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk> wrote: > On Tue, Apr 21, 2015 at 09:58:42PM +0800, Magicloud Magiclouds wrote: > > Thank you. But how if the cipher was specified outside the binary data? I > > mean I need to pass the decrypt/encrypt function to get/put while they do > > not accept parameters. Should I use Reader here? > > I think the crux of the problem is that you are trying to use the Serialize > typeclass. Don't. Just write functions. > > Tom > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > -- ??????? ??????? And for G+, please use magiclouds#gmail.com. -------------- next part -------------- An HTML attachment was scrubbed... URL: From erik.dominikus71 at gmail.com Thu Apr 23 10:41:07 2015 From: erik.dominikus71 at gmail.com (Erik Dominikus) Date: Thu, 23 Apr 2015 17:41:07 +0700 Subject: [Haskell-cafe] ANN: dns-server: forward DNS queries to StatDNS REST API Message-ID: Bad news: ISP is intercepting packets to UDP port 53. Good news: There is DNS resolution over HTTP (http://www.statdns.com/api/). Bad news: The software bridging DNS clients and that HTTP service is missing. Good news: I made (a small but working part of) it. I've been using it on my computer. The code is here: https://github.com/edom/dns-server From allbery.b at gmail.com Thu Apr 23 10:46:55 2015 From: allbery.b at gmail.com (Brandon Allbery) Date: Thu, 23 Apr 2015 06:46:55 -0400 Subject: [Haskell-cafe] ANN: dns-server: forward DNS queries to StatDNS REST API In-Reply-To: References: Message-ID: On Thu, Apr 23, 2015 at 6:41 AM, Erik Dominikus wrote: > Bad news: ISP is intercepting packets to UDP port 53. > > Good news: There is DNS resolution over HTTP (http://www.statdns.com/api/ > > ). > Bad news: you're going to be trusting your ISP's DNS to get there, unless they can guarantee their IPv4 and/or IPv6 addresses won't change *and* you can remember those addresses *and* they're not using name based virtual hosts or other very common modern HTTP features. 
-- brandon s allbery kf8nh sine nomine associates allbery.b at gmail.com ballbery at sinenomine.net unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From apfelmus at quantentunnel.de Thu Apr 23 10:49:16 2015 From: apfelmus at quantentunnel.de (Heinrich Apfelmus) Date: Thu, 23 Apr 2015 12:49:16 +0200 Subject: [Haskell-cafe] Is there a name for this algebraic structure? In-Reply-To: References: Message-ID: Gleb Peregud wrote: > I am wondering if there's a well known algebraic structure which follows > the following patterns. Let's call it S: > > It's update-able with some opaque "a" (which can be an element or an > operation with an element): > > update :: S -> a -> S > > There's a well defined zero for it: > > empty :: S > > Operations on it are idempotent: > > update s a == update (update s a) a > > Every S can be reconstructed from a sequence of updates: > > forall s. exists [a]. s == foldl update empty [a] If you reverse the arguments of the `update` function, then you get a map update :: A -> (S -> S) from the type `A` of "updates" to the monoid of maps Endo S . Another way to look at it is to consider a monoid `M[A]` which is generated by elements of the type `A`, subject to the condition that these elements are idempotent. In other words, you consider the following quotient M[A] = words in A / x^2 = x for each x\in A Then, the update function is a morphism of monoids: update :: M[A] -> S Mathematicians like to conflate the type `S` and the map `update` a little bit and speak of this as *representation* of your monoid. This is a useful way to look at things, for instance when it comes to the representation of groups. In any case, this is how I would approach the problem of turning this into an algebraic structure, there are probably many other ways to do that as well. Best regards, Heinrich Apfelmus -- http://apfelmus.nfshost.com From erik.dominikus71 at gmail.com Thu Apr 23 11:46:44 2015 From: erik.dominikus71 at gmail.com (Erik Dominikus) Date: Thu, 23 Apr 2015 18:46:44 +0700 Subject: [Haskell-cafe] ANN: dns-server: forward DNS queries to StatDNS REST API In-Reply-To: References: Message-ID: True, but a little correction: name-based virtual host is no problem; http-client allows specifying the IP address and the Host header separately. Fortunately they are not using 'other very common modern HTTP features'. But yes, the problem persists. On Thu, Apr 23, 2015 at 5:46 PM, Brandon Allbery wrote: > On Thu, Apr 23, 2015 at 6:41 AM, Erik Dominikus > wrote: >> >> Bad news: ISP is intercepting packets to UDP port 53. >> >> Good news: There is DNS resolution over HTTP (http://www.statdns.com/api/ >> >> ). > > > Bad news: you're going to be trusting your ISP's DNS to get there, unless > they can guarantee their IPv4 and/or IPv6 addresses won't change *and* you > can remember those addresses *and* they're not using name based virtual > hosts or other very common modern HTTP features. 
> > -- > brandon s allbery kf8nh sine nomine associates > allbery.b at gmail.com ballbery at sinenomine.net > unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net From ertesx at gmx.de Thu Apr 23 11:49:20 2015 From: ertesx at gmx.de (Ertugrul =?utf-8?Q?S=C3=B6ylemez?=) Date: Thu, 23 Apr 2015 13:49:20 +0200 Subject: [Haskell-cafe] ANN: dns-server: forward DNS queries to StatDNS REST API In-Reply-To: References: Message-ID: > https://github.com/edom/dns-server Good news: It makes me happy each time I see a new Haskell package against any kind of oppression. Thanks for the initial work! Bad news: At some point someone might want to develop an actual DNS server rather than a proxy, and a really appropriate name will be already taken, if you upload your package to Hackage. Greets, Ertugrul -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From erik.dominikus71 at gmail.com Thu Apr 23 12:11:18 2015 From: erik.dominikus71 at gmail.com (Erik Dominikus) Date: Thu, 23 Apr 2015 19:11:18 +0700 Subject: [Haskell-cafe] ANN: dns-server: forward DNS queries to StatDNS REST API In-Reply-To: References: Message-ID: You're welcome! It's my pleasure. I'm aware of the naming issue. Actually, I had an actual DNS server in mind. This package would then be a library for making a DNS server where StatDNS is just one back-end among many (such as a hosts file or a SQL server). It's still a long way to that. This package is not yet good for Hackage. Best, Erik On Thu, Apr 23, 2015 at 6:49 PM, Ertugrul S?ylemez wrote: >> https://github.com/edom/dns-server > > Good news: It makes me happy each time I see a new Haskell package > against any kind of oppression. Thanks for the initial work! > > Bad news: At some point someone might want to develop an actual DNS > server rather than a proxy, and a really appropriate name will be > already taken, if you upload your package to Hackage. > > > Greets, > Ertugrul From ky3 at atamo.com Thu Apr 23 12:32:28 2015 From: ky3 at atamo.com (Kim-Ee Yeoh) Date: Thu, 23 Apr 2015 19:32:28 +0700 Subject: [Haskell-cafe] ANN: dns-server: forward DNS queries to StatDNS REST API In-Reply-To: References: Message-ID: On Thu, Apr 23, 2015 at 5:46 PM, Brandon Allbery wrote: > Bad news: you're going to be trusting your ISP's DNS to get there, unless > they can guarantee their IPv4 and/or IPv6 addresses won't change *and* you > can remember those addresses *and* they're not using name based virtual > hosts or other very common modern HTTP features. Erik didn't quite spell out the use-case. My guess is that it's to deal with national policies that restrict access to certain sites by blanking out at the ISP DNS level. So trusting the ISP to get to statdns.com should be fine, assuming that the ISP is only doing the barest minimum to obey the law. Certainly, given the scenario, there are multiple ways to route around the firewall. But Erik's is a cost-effective, low-maintenance solution. -- Kim-Ee -------------- next part -------------- An HTML attachment was scrubbed... URL: From k-bx at k-bx.com Thu Apr 23 13:08:44 2015 From: k-bx at k-bx.com (Kostiantyn Rybnikov) Date: Thu, 23 Apr 2015 16:08:44 +0300 Subject: [Haskell-cafe] Timeout on pure code In-Reply-To: References: Message-ID: All right, good news! 
After adding ekg, gathering its data via bosun and seeing nothing useful I actually figured out that I could try harder to reproduce issue by myself instead of waiting for users to do that. And I succeeded! :) So, after launching 20 infinite curl loops to that handler's url I was quickly able to reproduce the issue, so the task seems clear now: keep reducing the code, reproduce locally, possibly without external services etc. I'll write up after I get to something. Thanks. On Wed, Apr 22, 2015 at 11:09 PM, Gregory Collins wrote: > Maybe but it would be helpful to rule the scenario out. Johan's ekg > library is also useful, it exports a webserver on a different port that you > can use to track metrics like gc times, etc. > > Other options for further debugging include gathering strace logs from the > binary. You'll have to do some data gathering to narrow down the cause > unfortunately -- http client? your code? Snap server? GHC event manager > (System.timeout is implemented here)? GC? etc > > G > > On Wed, Apr 22, 2015 at 10:14 AM, Kostiantyn Rybnikov > wrote: > >> Gregory, >> >> Servers are far from being highly-overloaded, since they're currently >> under a much less load they used to be. Memory consumption is stable and >> low, and there's a lot of free RAM also. >> >> Would you say that given these factors this scenario is unlikely? >> >> On Wed, Apr 22, 2015 at 7:56 PM, Gregory Collins > > wrote: >> >>> Given your gist, the timeout on your requests is set to a half-second so >>> it's conceivable that a highly-loaded server might have GC pause times >>> approaching that long. Smells to me like a classic Haskell memory leak >>> (that's why the problem occurs after the server has been up for a while): >>> run your program with the heap profiler, and audit any shared >>> tables/IORefs/MVars to make sure you are not building up thunks there. >>> >>> Greg >>> >>> On Wed, Apr 22, 2015 at 9:14 AM, Kostiantyn Rybnikov >>> wrote: >>> >>>> Hi! >>>> >>>> Our company's main commercial product is a Snap-based web app which we >>>> compile with GHC 7.8.4. It works on four app-servers currently >>>> load-balanced behind Haproxy. >>>> >>>> I recently implemented a new piece of functionality, which led to weird >>>> behavior which I have no idea how to debug, so I'm asking here for help and >>>> ideas! >>>> >>>> The new functionality is this: on specific url-handler, we need to >>>> query n external services concurrently with a timeout, gather and render >>>> results. Easy (in Haskell)! >>>> >>>> The implementation looks, as you might imagine, something like this >>>> (sorry for almost-real-haskell, I'm sure I forgot tons of imports and other >>>> things, but I hope everything is clear as-is, if not -- I'll be glad to >>>> update gist to make things more specific): >>>> >>>> https://gist.github.com/k-bx/0cf7035aaf1ad6306e76 >>>> >>>> Now, this works wonderful for some time, and in logs I can see both, >>>> successful fetches of external-content, and also lots of timeouts from our >>>> external providers. Life is good. >>>> >>>> But! After several days of work (sometimes a day, sometimes couple >>>> days), apps on all 4 servers go crazy. It might take some interval (like 20 >>>> minutes) before they're all crazy, so it's not super-synchronous. Now: how >>>> crazy, exactly? >>>> >>>> First of all, this endpoint timeouts. Haproxy requests for a response, >>>> and response times out, so they "hang". >>>> >>>> Secondly, logs are interesting. 
If you look at the code from gist once >>>> again, you can see, that some of CandidateProvider's don't actually require >>>> any networking work, so all they do is actually just logging that they're >>>> working (I added this as part of debugging actually) and return pure data. >>>> So what's weird is that they timeout also! Here's how output of our logs >>>> starts to look like after the bug happens: >>>> >>>> ``` >>>> [2015-04-22 09:56:20] provider: CandidateProvider1 >>>> [2015-04-22 09:56:20] provider: CandidateProvider2 >>>> [2015-04-22 09:56:21] Got timeout while requesting CandidateProvider1 >>>> [2015-04-22 09:56:21] Got timeout while requesting CandidateProvider2 >>>> [2015-04-22 09:56:22] provider: CandidateProvider1 >>>> [2015-04-22 09:56:22] provider: CandidateProvider2 >>>> [2015-04-22 09:56:23] Got timeout while requesting CandidateProvider1 >>>> [2015-04-22 09:56:23] Got timeout while requesting CandidateProvider2 >>>> ... and so on >>>> ``` >>>> >>>> What's also weird is that, even after timeout is logged, the string >>>> ""Got responses!" never gets logged also! So hanging happens somewhere >>>> in-between. >>>> >>>> I have to say I'm sorry that I don't have strace output now, I'll have >>>> to wait until this situation happens once again, but I'll get later to you >>>> with this info. >>>> >>>> So, how is this possible that almost-pure code gets timed-out? And why >>>> does it hang afterwards? >>>> >>>> CPU and other resource usage is quite low, number of open >>>> file-descriptors also (it seems). >>>> >>>> Thanks for all the suggestions in advance! >>>> >>>> _______________________________________________ >>>> Haskell-Cafe mailing list >>>> Haskell-Cafe at haskell.org >>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >>>> >>>> >>> >>> >>> -- >>> Gregory Collins >>> >> >> > > > -- > Gregory Collins > -------------- next part -------------- An HTML attachment was scrubbed... URL: From erik.dominikus71 at gmail.com Thu Apr 23 13:12:46 2015 From: erik.dominikus71 at gmail.com (Erik Dominikus) Date: Thu, 23 Apr 2015 20:12:46 +0700 Subject: [Haskell-cafe] ANN: dns-server: forward DNS queries to StatDNS REST API In-Reply-To: References: Message-ID: That is exactly the reason why I made it. I did this to avoid spending $10/month to rent an Indonesian VPS to run dnsmasq to proxy Google DNS. The present value of that spending would be about $1779 assuming that the policy is forever and that the compound yearly inflation rate is 7%. (No credit card; can't rent US VPS or get AWS free tier.) On Thu, Apr 23, 2015 at 7:32 PM, Kim-Ee Yeoh wrote: > > On Thu, Apr 23, 2015 at 5:46 PM, Brandon Allbery > wrote: >> >> Bad news: you're going to be trusting your ISP's DNS to get there, unless >> they can guarantee their IPv4 and/or IPv6 addresses won't change *and* you >> can remember those addresses *and* they're not using name based virtual >> hosts or other very common modern HTTP features. > > > Erik didn't quite spell out the use-case. My guess is that it's to deal with > national policies that restrict access to certain sites by blanking out at > the ISP DNS level. > > So trusting the ISP to get to statdns.com should be fine, assuming that the > ISP is only doing the barest minimum to obey the law. > > Certainly, given the scenario, there are multiple ways to route around the > firewall. But Erik's is a cost-effective, low-maintenance solution. 
> > -- Kim-Ee From trevor.mcdonell at gmail.com Thu Apr 23 13:57:35 2015 From: trevor.mcdonell at gmail.com (Trevor McDonell) Date: Thu, 23 Apr 2015 13:57:35 +0000 Subject: [Haskell-cafe] CFP: FHPC 2015: Workshop on Functional High-Performance Computing [w/ICFP] Message-ID: Apologies if you receive this message more than once. ====================================================================== CALL FOR PAPERS FHPC 2015 The 4th ACM SIGPLAN Workshop on Functional High-Performance Computing Vancouver, British Columbia, Canada, Canada September 3, 2015 https://sites.google.com/site/fhpcworkshops/ Co-located with the International Conference on Functional Programming (ICFP 2015) Submission Deadline: Friday, 15 May, 2015 (anywhere on earth) ====================================================================== The FHPC workshop aims at bringing together researchers exploring uses of functional (or more generally, declarative or high-level) programming technology in application domains where high performance is essential. The aim of the meeting is to enable sharing of results, experiences, and novel ideas about how high-level, declarative specifications of computationally challenging problems can serve as maintainable and portable code that approaches (or even exceeds) the performance of machine-oriented imperative implementations. All aspects of performance critical programming and parallel programming are in-scope for the workshop, irrespective of hardware target. This includes both traditional large-scale scientific computing (HPC), as well as work targeting single node systems with SMPs, GPUs, FPGAs, or embedded processors. It is becoming apparent that radically new and well founded methodologies for programming such systems are required to address their inherent complexity and to reconcile execution performance with programming productivity. Proceedings: ============ Accepted papers will be published by the ACM and will appear in the ACM Digital Library. * Submissions due: Friday, 15 May, 2015 (anywhere on earth) * Author notification: Friday, 26 June, 2015 * Final copy due: Sunday, 19 July, 2015 Submitted papers must be in portable document format (PDF), formatted according to the ACM SIGPLAN style guidelines (2 column, 9pt format). See http://www.sigplan.org/authorInformation.htm for more information and style files. Typical papers are expected to be 8 pages (but up to four additional pages are permitted). Contributions to FHPC 2015 should be submitted via Easychair, at the following URL: * https://www.easychair.org/conferences/?conf=fhpc15 The submission site is now open. The FHPC workshops adhere to the ACM SIGPLAN policies regarding programme committee contributions and republication. Any paper submitted must adhere to ACM SIGPLAN's republication policy. PC member submissions are welcome, but will be reviewed to a higher standard. http://www.sigplan.org/Resources/Policies/Review http://www.sigplan.org/Resources/Policies/Republication Travel Support: =============== Student attendees with accepted papers can apply for a SIGPLAN PAC grant to help cover travel expenses. PAC also offers other support, such as for child-care expenses during the meeting or for travel costs for companions of SIGPLAN members with physical disabilities, as well as for travel from locations outside of North America and Europe. For details on the PAC programme, see its web page (http://www.sigplan.org/PAC.htm). 
Programme Committee: ==================== Tiark Rompf (co-chair) Purdue University, USA Geoffrey Mainland (co-chair) Drexel University, USA Kevin Brown Stanford University, USA James Cheney University of Edinburgh, UK Albert Cohen INRIA, France David Duke University of Leeds, UK Yukiyoshi Kameyama University of Tsukuba, Japan Gabriele Keller University of New South Wales, Australia Paul H J Kelly Imperial College London, UK Trevor L. Mcdonell Indiana University, USA Greg Michaelson Heriot-Watt University, UK Cosmin E. Oancea University of Copenhagen, Denmark Markus Pueschel ETH Zurich, Switzerland Sukyoung Ryu KAIST, Korea Alexander Slesarenko Huawei, Russia Josef Svenningsson Chalmers University of Technology, Sweden -------------- next part -------------- An HTML attachment was scrubbed... URL: From haskell at bunix.org Thu Apr 23 17:27:08 2015 From: haskell at bunix.org (Martijn Rijkeboer) Date: Thu, 23 Apr 2015 19:27:08 +0200 Subject: [Haskell-cafe] Space leak with recursion Message-ID: <5092e85fdd3575049fdc31a794bc5706.squirrel@secure.bunix.org> Hi, I'm trying to make an observable (code below) that listens to two ZeroMQ sockets and publishes the received messages on a third. Every message that is received gets an incremented sequence number before it is send on the publishing socket. To keep the current sequence number a state object is passed to the polling function and after handling the message, the polling function is called again with the updated state. Unfortunately this recursive calling of the polling function creates a space leak. Any suggestions how to fix this? Note: for brevity I left the incrementing of the sequence number and the sending on the publishing socket out of the code. Kind regards, Martijn Rijkeboer --- Code --- module Observable ( run ) where import Control.Monad (void) import Data.Int (Int64) import System.ZMQ4 data State = State { nextSeqNum :: !Int64 , listenSocket :: !(Socket Pull) , publishSocket :: !(Socket Pub) , snapSocket :: !(Socket Router) } run :: IO () run = do withContext $ \ctx -> withSocket ctx Pull $ \observer -> withSocket ctx Pub $ \publisher -> withSocket ctx Router $ \snapshot -> do setLinger (restrict (0::Int)) observer bind observer "tcp://*:7010" setLinger (restrict (0::Int)) publisher bind publisher "tcp://*:7011" setLinger (restrict (0::Int)) snapshot setSendHighWM (restrict (0::Int)) snapshot bind snapshot "tcp://*:7013" let state = State { nextSeqNum = 0 , listenSocket = observer , publishSocket = publisher , snapSocket = snapshot } pollSockets state pollSockets :: State -> IO () pollSockets state = void $ poll (-1) [ Sock (listenSocket state) [In] (Just $ observerHandleEvts state) , Sock (snapSocket state) [In] (Just $ snapshotHandleEvts state) ] observerHandleEvts :: State -> [Event] -> IO () observerHandleEvts state _ = do void $ receiveMulti $ listenSocket state pollSockets state snapshotHandleEvts :: State -> [Event] -> IO () snapshotHandleEvts state _ = do void $ receiveMulti $ snapSocket state pollSockets state From joehillen at gmail.com Thu Apr 23 17:30:58 2015 From: joehillen at gmail.com (Joe Hillenbrand) Date: Thu, 23 Apr 2015 10:30:58 -0700 Subject: [Haskell-cafe] ANN: dns-server: forward DNS queries to StatDNS REST API In-Reply-To: References: Message-ID: Could you please change the name to something more specific to your use case? I've considered writing an actual fully-featured DNS server in haskell and that is not what you've built. 
On Thu, Apr 23, 2015 at 3:41 AM, Erik Dominikus wrote: > Bad news: ISP is intercepting packets to UDP port 53. > > Good news: There is DNS resolution over HTTP (http://www.statdns.com/api/). > > Bad news: The software bridging DNS clients and that HTTP service is missing. > > Good news: I made (a small but working part of) it. > > I've been using it on my computer. > > The code is here: > > https://github.com/edom/dns-server > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe From erik.dominikus71 at gmail.com Thu Apr 23 18:13:59 2015 From: erik.dominikus71 at gmail.com (Erik Dominikus) Date: Fri, 24 Apr 2015 01:13:59 +0700 Subject: [Haskell-cafe] ANN: dns-server: forward DNS queries to StatDNS REST API In-Reply-To: References: Message-ID: Hi Joe, I picked that name because I'm writing a library for writing DNS servers (where StatDNS is just a backend besides plain text files and SQL databases). The forwarder is just one use case of this more general library. That being said, I don't plan to put it on Hackage. Feel free to upload yours. I'll think of a better name in the meanwhile. Good luck with your project! Best, Erik (Perhaps I made this ANN too early.) On Fri, Apr 24, 2015 at 12:30 AM, Joe Hillenbrand wrote: > Could you please change the name to something more specific to your > use case? I've considered writing an actual fully-featured DNS server > in haskell and that is not what you've built. > > On Thu, Apr 23, 2015 at 3:41 AM, Erik Dominikus > wrote: >> Bad news: ISP is intercepting packets to UDP port 53. >> >> Good news: There is DNS resolution over HTTP (http://www.statdns.com/api/). >> >> Bad news: The software bridging DNS clients and that HTTP service is missing. >> >> Good news: I made (a small but working part of) it. >> >> I've been using it on my computer. >> >> The code is here: >> >> https://github.com/edom/dns-server >> _______________________________________________ >> Haskell-Cafe mailing list >> Haskell-Cafe at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe From mwm at mired.org Thu Apr 23 22:34:03 2015 From: mwm at mired.org (Mike Meyer) Date: Thu, 23 Apr 2015 17:34:03 -0500 Subject: [Haskell-cafe] low-cost matrix rank? Message-ID: Noticing that diagrams 1.3 has moved from vector-space to linear, I decided to check them both for a function to compute the rank of a matrix. Neither seems to have it. While I'm doing quite a bit of work with 2 and 3-element vectors, the only thing I do with matrices is take their rank, as part of verifying that the faces of a polyhedron actually make a polyhedron. So I'm looking for a relatively light-weight way of doing so that will work with a recent (7.8 or 7.10) ghc release. Or maybe getting such a function added to an existing library. Anyone have any suggestions? Thanks, Mike -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From semen at trygub.com Thu Apr 23 22:58:41 2015 From: semen at trygub.com (Semen Trygubenko / =?utf-8?B?0KHQtdC80LXQvSDQotGA0LjQs9GD0LHQtdC9?= =?utf-8?B?0LrQvg==?=) Date: Thu, 23 Apr 2015 23:58:41 +0100 Subject: [Haskell-cafe] Haskell Weekly News: Issue 326 In-Reply-To: <20150409225936.GA1070@inanna.trygub.com> References: <20150326222444.GA91822@inanna.trygub.com> <20150409225936.GA1070@inanna.trygub.com> Message-ID: <20150423225841.GA8001@inanna.trygub.com> New Releases darcs 2.10.0 New version of darcs is out packed with features and resolved issues. http://lists.osuosl.org/pipermail/darcs-users/2015-April/027119.html Stackage CLI This new tool helps to manage cabal files and share sandboxes. https://www.fpcomplete.com/blog/2015/04/announcing-stackage-cli Diagrams 1.3 Diagrams has switched from vector-space to linear for its linear algebra package, the internal representation of Measure has changed, a new Direction type has been added as well as a number of new transform isomorphisms (transformed, translated, movedTo, movedFrom and rotated) and new features, and two new backends ? PGF and HTML5. http://projects.haskell.org/diagrams/ https://wiki.haskell.org/Diagrams/Dev/Migrate1.3 Discussion Improving Hackage security by Duncan Coutts A TUF-based system is being designed and implemented that will significantly improve Hackage security. http://www.well-typed.com/blog/2015/04/improving-hackage-security/ http://theupdateframework.com/ Cartesian Closed Comic #26: IDE Should Haskell have an IDE (and if yes should it be web-based), or does it already have one? http://www.reddit.com/r/haskell/comments/334x2v/cartesian_closed_comic_26_ide/ https://ro-che.info/ccc/26 Two "camps" of Haskell programmers A comment by Tekmo. http://www.reddit.com/r/haskell/comments/33chyv/from_imperative_to_functional_programming_things/cqk34z6 What databases are most Haskellers using? It seems that PostgreSQL with persistent or postgresql-simple, and esqueleto for more complex queries. http://www.reddit.com/r/haskell/comments/33k8zx/what_databases_are_most_haskellers_using/ https://hackage.haskell.org/package/persistent http://hackage.haskell.org/package/esqueleto https://hackage.haskell.org/package/postgresql-simple Podcasts Episode 4: Stephanie Weirich on Zombie and Dependent Haskell "Zombie is a different kind of dependently typed language, eschewing automatic ?-reduction in the type checker for an approach based on explicit equality rewriting, which enables new ways of combining proofs and programs, as well as new forms of proof automation. Meanwhile, as languages designed for dependently typed programming come closer to practical applicability, Haskell is also moving towards full dependent types." http://typetheorypodcast.com/2015/04/episode-4-stephanie-weirich-on-zombie-and-dependent-haskell/ Quotes of the Week "My children are in IT, two of them ? both graduated from MIT. One of them browsed a book and said, ?Here, read this?. It said ?Haskell ? learn you a Haskell for great good?, and one day that will be my retirement reading." (Lee Hsien Loong) http://www.pmo.gov.sg/mediacentre/transcript-speech-prime-minister-lee-hsien-loong-founders-forum-smart-nation-singapore "Of course if darcs got the best of git, it's probably better than git now ;-)" (maxigit) http://www.reddit.com/r/haskell/comments/33646i/darcs_210_is_here_rebase_importexport_to_git/cqipao8 "Yes changesets and snaphosts are isomorphic, therefore being based on changesets can't be a selling point. 
(maxigit) But it can, because the tooling evolves around the philosophy shaped by the underlying structure. Can you get branches for free in git? Sure. Do you? Nope." (kqr) http://www.reddit.com/r/haskell/comments/33646i/darcs_210_is_here_rebase_importexport_to_git/cqiwqze "Leksah, Eclipse FP and various newer attempts are so ignored by everyone that they are not even in this Comic." (hamishmack) http://www.reddit.com/r/haskell/comments/334x2v/cartesian_closed_comic_26_ide/cqhvwuu "I don't actually want an IDE, but if I did, I'd want one that was free as in freedom, not free as in freemium." (get-your-shinebox) http://www.reddit.com/r/haskell/comments/334x2v/cartesian_closed_comic_26_ide/cqhst0l "[Type-level reasoning] is more powerful but entails a hard dependency on a computer. Equational reasoning using abstract algebra is more "portable"; you can easily do it in your head or with pencil and paper." (Tekmo) http://www.reddit.com/r/haskell/comments/334x2v/cartesian_closed_comic_26_ide/cqhst0l "? without strong typing or the sequestering of side effects that Haskell allows you, I felt really lost and confused as to why the hell anyone created a language that wasn't Haskell." (scientia_est_ars) http://www.reddit.com/r/haskell/comments/33chyv/from_imperative_to_functional_programming_things/cqkqps6 "More polymorphism generally restricts the implementation, allowing us to better predict it's behavior. It's a trade-off, like most programming decisions." (bss03) http://www.reddit.com/r/haskell/comments/33chyv/from_imperative_to_functional_programming_things/cqju96h "(cons cat (cons cat nil))" (Dmitry Ignatiev) https://twitter.com/lvsn/status/533685461957349376 -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 181 bytes Desc: not available URL: From agocorona at gmail.com Fri Apr 24 07:23:59 2015 From: agocorona at gmail.com (Alberto G. Corona ) Date: Fri, 24 Apr 2015 09:23:59 +0200 Subject: [Haskell-cafe] [ANN]: the Helium compiler, version 1.8.1 In-Reply-To: <7B6059BA-36E0-47AD-B5AD-617B3F706972@uu.nl> References: <647A7C27-F466-4975-9A51-B866FD417342@uu.nl> <7B6059BA-36E0-47AD-B5AD-617B3F706972@uu.nl> Message-ID: Thanks. I'll take a look. 2015-04-23 9:50 GMT+02:00 Jurriaan Hage : > > On 20Apr, 2015, at 12:26, Alberto G. Corona wrote: > > > Great! > > > Hi Alberto, > > > How the type rules detailed in the "scripting the type inference engine" > paper are implemented? > Euh? I guess you have to consult the implementation of the compiler. Most > of the code you need > to look at is in src/Helium/StaticAnalysis/Directives/ > Essentially, we ``replace?? the original constraints by the explicitly > written down constraints. With these > constraints a function is associated that given the necessary context > information can produce > the domain specific report. The replacement is performed by pattern > matching on the AST. > > > it is possible to script the inference engine with such rules? > Sure > > If so, are there some examples? > Just run > > heliumpath > > Then take the path that ends in > > lib/helium-1.8.1/share > > add > > /lib > > to the path, and then you can find in that directory files that have > extension .type. > Those can serve as examples. > > best, > Jur > > -- Alberto. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From haskell at bunix.org Fri Apr 24 07:45:28 2015 From: haskell at bunix.org (Martijn Rijkeboer) Date: Fri, 24 Apr 2015 09:45:28 +0200 Subject: [Haskell-cafe] Space leak with recursion In-Reply-To: <5092e85fdd3575049fdc31a794bc5706.squirrel@secure.bunix.org> References: <5092e85fdd3575049fdc31a794bc5706.squirrel@secure.bunix.org> Message-ID: Hi, > I'm trying to make an observable (code below) that listens to two > ZeroMQ sockets and publishes the received messages on a third. Every > message that is received gets an incremented sequence number before it > is send on the publishing socket. To keep the current sequence number a > state object is passed to the polling function and after handling the > message, the polling function is called again with the updated state. > > Unfortunately this recursive calling of the polling function creates a > space leak. Any suggestions how to fix this? > > Note: for brevity I left the incrementing of the sequence number and > the sending on the publishing socket out of the code. As David Feuer pointed out in a private mail, I didn't include all necessary information: - OS: Windows 7 (64-bit) - GHC: 7.8.4 (64-bit) (minghc) - Zeromq4-haskell: 0.6.3 (Stackage LTS 2.4) - ZeroMQ DLL: 4.0.4 (64-bit) The memory usage increases with every message that is received and is probably due to the recursive pollSockets call, since when I remove the recursive pollSockets calls in the observerHandleEvts and snapshotHandleEvts functions and use forever in pollSockets the memory usage stays constant. Kind regards, Martijn Rijkeboer > --- Code --- > > module Observable > ( run > ) where > > import Control.Monad (void) > import Data.Int (Int64) > import System.ZMQ4 > > > data State = State > { nextSeqNum :: !Int64 > , listenSocket :: !(Socket Pull) > , publishSocket :: !(Socket Pub) > , snapSocket :: !(Socket Router) > } > > > run :: IO () > run = do > withContext $ \ctx -> > withSocket ctx Pull $ \observer -> > withSocket ctx Pub $ \publisher -> > withSocket ctx Router $ \snapshot -> do > setLinger (restrict (0::Int)) observer > bind observer "tcp://*:7010" > > setLinger (restrict (0::Int)) publisher > bind publisher "tcp://*:7011" > > setLinger (restrict (0::Int)) snapshot > setSendHighWM (restrict (0::Int)) snapshot > bind snapshot "tcp://*:7013" > > let state = State > { nextSeqNum = 0 > , listenSocket = observer > , publishSocket = publisher > , snapSocket = snapshot > } > > pollSockets state > > > pollSockets :: State -> IO () > pollSockets state = > void $ poll (-1) > [ Sock (listenSocket state) [In] (Just $ observerHandleEvts state) > , Sock (snapSocket state) [In] (Just $ snapshotHandleEvts state) > ] > > > observerHandleEvts :: State -> [Event] -> IO () > observerHandleEvts state _ = do > void $ receiveMulti $ listenSocket state > pollSockets state > > > snapshotHandleEvts :: State -> [Event] -> IO () > snapshotHandleEvts state _ = do > void $ receiveMulti $ snapSocket state > pollSockets state From tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk Fri Apr 24 08:06:34 2015 From: tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk (Tom Ellis) Date: Fri, 24 Apr 2015 09:06:34 +0100 Subject: [Haskell-cafe] Space leak with recursion In-Reply-To: <5092e85fdd3575049fdc31a794bc5706.squirrel@secure.bunix.org> References: <5092e85fdd3575049fdc31a794bc5706.squirrel@secure.bunix.org> Message-ID: <20150424080634.GA27512@weber> On Thu, Apr 23, 2015 at 07:27:08PM +0200, Martijn Rijkeboer wrote: > pollSockets :: State -> IO () > pollSockets state = > void $ poll (-1) > [ Sock 
(listenSocket state) [In] (Just $ observerHandleEvts state) > , Sock (snapSocket state) [In] (Just $ snapshotHandleEvts state) > ] > > > observerHandleEvts :: State -> [Event] -> IO () > observerHandleEvts state _ = do > void $ receiveMulti $ listenSocket state > pollSockets state > > > snapshotHandleEvts :: State -> [Event] -> IO () > snapshotHandleEvts state _ = do > void $ receiveMulti $ snapSocket state > pollSockets state What happens here if there is an event waiting on both the listen socket *and* the snap socket? It looks like `observerHandleEvts` will be called and, since it recursively calles `pollSockets`, the `snapshotHandleEvts` handler will not be run, although its continuation will be kept around leaking space. It seems unwise to make a recursive call to the event loop inside a handler. Tom From aruiz at um.es Fri Apr 24 08:13:54 2015 From: aruiz at um.es (Alberto Ruiz) Date: Fri, 24 Apr 2015 10:13:54 +0200 Subject: [Haskell-cafe] low-cost matrix rank? In-Reply-To: References: Message-ID: <5539FB42.8050308@um.es> Hi Mike, If you need a robust numerical computation you can try "rcond" or "rank" from hmatrix. (It is based on the singular values, I don't know if the cost is low enough for your application.) http://en.wikipedia.org/wiki/Rank_%28linear_algebra%29#Computation https://hackage.haskell.org/package/hmatrix-0.16.1.5/docs/Numeric-LinearAlgebra-HMatrix.html#g:10 Alberto On 24/04/15 00:34, Mike Meyer wrote: > Noticing that diagrams 1.3 has moved from vector-space to linear, I > decided to check them both for a function to compute the rank of a > matrix. Neither seems to have it. > > While I'm doing quite a bit of work with 2 and 3-element vectors, the > only thing I do with matrices is take their rank, as part of verifying > that the faces of a polyhedron actually make a polyhedron. > > So I'm looking for a relatively light-weight way of doing so that will > work with a recent (7.8 or 7.10) ghc release. Or maybe getting such a > function added to an existing library. Anyone have any suggestions? > > Thanks, > Mike From haskell at bunix.org Fri Apr 24 08:34:04 2015 From: haskell at bunix.org (Martijn Rijkeboer) Date: Fri, 24 Apr 2015 10:34:04 +0200 Subject: [Haskell-cafe] Space leak with recursion Message-ID: <9818ade761338feddfd0f8ce28e01d68.squirrel@secure.bunix.org> >> pollSockets :: State -> IO () >> pollSockets state = >> void $ poll (-1) >> [ Sock (listenSocket state) [In] (Just $ observerHandleEvts >> state) >> , Sock (snapSocket state) [In] (Just $ snapshotHandleEvts >> state) >> ] >> >> >> observerHandleEvts :: State -> [Event] -> IO () >> observerHandleEvts state _ = do >> void $ receiveMulti $ listenSocket state >> pollSockets state >> >> >> snapshotHandleEvts :: State -> [Event] -> IO () >> snapshotHandleEvts state _ = do >> void $ receiveMulti $ snapSocket state >> pollSockets state > > What happens here if there is an event waiting on both the listen > socket *and* the snap socket? It looks like `observerHandleEvts` > will be called and, since it recursively calles `pollSockets`, > the `snapshotHandleEvts` handler will not be run, although its > continuation will be kept around leaking space. This could be an issue, but during my testing there were no messages sent to the snapshot socket (I haven't implemented the snapshot socket in the client yet). > It seems unwise to make a recursive call to the event loop inside a > handler. How would I update my state and ensure that the next invocation of a handler gets the updated state? 
With the forever function my state updates are not propagated. Kind regards, Martijn Rijkeboer From tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk Fri Apr 24 08:36:28 2015 From: tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk (Tom Ellis) Date: Fri, 24 Apr 2015 09:36:28 +0100 Subject: [Haskell-cafe] Space leak with recursion In-Reply-To: <9818ade761338feddfd0f8ce28e01d68.squirrel@secure.bunix.org> References: <9818ade761338feddfd0f8ce28e01d68.squirrel@secure.bunix.org> Message-ID: <20150424083628.GB27512@weber> On Fri, Apr 24, 2015 at 10:34:04AM +0200, Martijn Rijkeboer wrote: > >> pollSockets :: State -> IO () > >> pollSockets state = > >> void $ poll (-1) > >> [ Sock (listenSocket state) [In] (Just $ observerHandleEvts > >> state) > >> , Sock (snapSocket state) [In] (Just $ snapshotHandleEvts > >> state) > >> ] > >> > >> > >> observerHandleEvts :: State -> [Event] -> IO () > >> observerHandleEvts state _ = do > >> void $ receiveMulti $ listenSocket state > >> pollSockets state > >> > >> > >> snapshotHandleEvts :: State -> [Event] -> IO () > >> snapshotHandleEvts state _ = do > >> void $ receiveMulti $ snapSocket state > >> pollSockets state > > > > What happens here if there is an event waiting on both the listen > > socket *and* the snap socket? It looks like `observerHandleEvts` > > will be called and, since it recursively calles `pollSockets`, > > the `snapshotHandleEvts` handler will not be run, although its > > continuation will be kept around leaking space. > > This could be an issue, but during my testing there were no messages sent > to the snapshot socket (I haven't implemented the snapshot socket in the > client yet). Then I suggest the first thing to try is to run a test without snapshot socket functionality at all, and see if you still get a space leak. > > It seems unwise to make a recursive call to the event loop inside a > > handler. > > How would I update my state and ensure that the next invocation of a > handler gets the updated state? With the forever function my state updates > are not propagated. I don't see where you are updating any state. Tom From haskell at bunix.org Fri Apr 24 08:41:30 2015 From: haskell at bunix.org (Martijn Rijkeboer) Date: Fri, 24 Apr 2015 10:41:30 +0200 Subject: [Haskell-cafe] Space leak with recursion In-Reply-To: <20150424083628.GB27512@weber> References: <9818ade761338feddfd0f8ce28e01d68.squirrel@secure.bunix.org> <20150424083628.GB27512@weber> Message-ID: >> >> pollSockets :: State -> IO () >> >> pollSockets state = >> >> void $ poll (-1) >> >> [ Sock (listenSocket state) [In] (Just $ observerHandleEvts >> >> state) >> >> , Sock (snapSocket state) [In] (Just $ snapshotHandleEvts >> >> state) >> >> ] >> >> >> >> >> >> observerHandleEvts :: State -> [Event] -> IO () >> >> observerHandleEvts state _ = do >> >> void $ receiveMulti $ listenSocket state >> >> pollSockets state >> >> >> >> >> >> snapshotHandleEvts :: State -> [Event] -> IO () >> >> snapshotHandleEvts state _ = do >> >> void $ receiveMulti $ snapSocket state >> >> pollSockets state >> > >> > What happens here if there is an event waiting on both the listen >> > socket *and* the snap socket? It looks like `observerHandleEvts` >> > will be called and, since it recursively calles `pollSockets`, >> > the `snapshotHandleEvts` handler will not be run, although its >> > continuation will be kept around leaking space. >> >> This could be an issue, but during my testing there were no messages >> sent to the snapshot socket (I haven't implemented the snapshot socket >> in the client yet). 
> > Then I suggest the first thing to try is to run a test without snapshot > socket functionality at all, and see if you still get a space leak. Will do. >> > It seems unwise to make a recursive call to the event loop inside a >> > handler. >> >> How would I update my state and ensure that the next invocation of a >> handler gets the updated state? With the forever function my state >> updates are not propagated. > > I don't see where you are updating any state. The state contains a sequence number that needs to be incremented, but I left that out for brevity... Kind regards, Martijn Rijkeboer From tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk Fri Apr 24 08:47:27 2015 From: tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk (Tom Ellis) Date: Fri, 24 Apr 2015 09:47:27 +0100 Subject: [Haskell-cafe] Space leak with recursion In-Reply-To: References: <9818ade761338feddfd0f8ce28e01d68.squirrel@secure.bunix.org> <20150424083628.GB27512@weber> Message-ID: <20150424084727.GC27512@weber> On Fri, Apr 24, 2015 at 10:41:30AM +0200, Martijn Rijkeboer wrote: > > I don't see where you are updating any state. > > The state contains a sequence number that needs to be incremented, > but I left that out for brevity... It's important to see that, because that kind of thing is exactly where space leaks can hide. Tom From haskell at bunix.org Fri Apr 24 08:57:41 2015 From: haskell at bunix.org (Martijn Rijkeboer) Date: Fri, 24 Apr 2015 10:57:41 +0200 Subject: [Haskell-cafe] Space leak with recursion In-Reply-To: <20150424084727.GC27512@weber> References: <9818ade761338feddfd0f8ce28e01d68.squirrel@secure.bunix.org> <20150424083628.GB27512@weber> <20150424084727.GC27512@weber> Message-ID: > On Fri, Apr 24, 2015 at 10:41:30AM +0200, Martijn Rijkeboer wrote: >> > I don't see where you are updating any state. >> >> The state contains a sequence number that needs to be incremented, >> but I left that out for brevity... > > It's important to see that, because that kind of thing is exactly where > space leaks can hide. Sorry for the confusion. The version that has the space leak is the version that I included with my initial mail and doesn't increment the sequence number. Once thee space leak is fixed I will need to add code to increment the sequence number (not yet implemented). I tried to make a minimal version that reproduces the problem, but I need to be able to update the state in the "real" version. Kind regards, Martijn Rijkeboer From tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk Fri Apr 24 09:04:06 2015 From: tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk (Tom Ellis) Date: Fri, 24 Apr 2015 10:04:06 +0100 Subject: [Haskell-cafe] Space leak with recursion In-Reply-To: References: <9818ade761338feddfd0f8ce28e01d68.squirrel@secure.bunix.org> <20150424083628.GB27512@weber> <20150424084727.GC27512@weber> Message-ID: <20150424090406.GD27512@weber> On Fri, Apr 24, 2015 at 10:57:41AM +0200, Martijn Rijkeboer wrote: > > On Fri, Apr 24, 2015 at 10:41:30AM +0200, Martijn Rijkeboer wrote: > >> > I don't see where you are updating any state. > >> > >> The state contains a sequence number that needs to be incremented, > >> but I left that out for brevity... > > > > It's important to see that, because that kind of thing is exactly where > > space leaks can hide. > > Sorry for the confusion. The version that has the space leak is the > version that I included with my initial mail and doesn't increment > the sequence number. I see. 
> Once thee space leak is fixed I will need to add code to increment the > sequence number (not yet implemented). In that case the most important thing to do is to try to reproduce the space leak without the snapshot socket. > I tried to make a minimal version that reproduces the problem, but I need > to be able to update the state in the "real" version. Very sensible. I was confused for a moment whether the minimal version did actually exhibit the space leak behaviour. To get the state-updating behaviour you want, I suggest you use `StateT IO` rather than `IO`. With handlers in the `StateT IO` monad the state updates will occur as you expect. Tom From tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk Fri Apr 24 09:15:31 2015 From: tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk (Tom Ellis) Date: Fri, 24 Apr 2015 10:15:31 +0100 Subject: [Haskell-cafe] Space leak with recursion In-Reply-To: <20150424090406.GD27512@weber> References: <9818ade761338feddfd0f8ce28e01d68.squirrel@secure.bunix.org> <20150424083628.GB27512@weber> <20150424084727.GC27512@weber> <20150424090406.GD27512@weber> Message-ID: <20150424091531.GE27512@weber> On Fri, Apr 24, 2015 at 10:04:06AM +0100, Tom Ellis wrote: > In that case the most important thing to do is to try to reproduce the space > leak without the snapshot socket. In fact I think we can already deduce the answer from the implementation of `poll`: http://hackage.haskell.org/package/zeromq4-haskell-0.6.3/docs/src/System-ZMQ4.html#poll The key point is that the [Poll s m] list is `mapM`ed over. Each handler is checked for a match and then fired or not fired before the next handler is checked for a match. This means that if you call `pollSockets` in a handler you are leaking the space associated with the unprocessed entries. Tom From haskell at bunix.org Fri Apr 24 09:34:25 2015 From: haskell at bunix.org (Martijn Rijkeboer) Date: Fri, 24 Apr 2015 11:34:25 +0200 Subject: [Haskell-cafe] Space leak with recursion In-Reply-To: <20150424090406.GD27512@weber> References: <9818ade761338feddfd0f8ce28e01d68.squirrel@secure.bunix.org> <20150424083628.GB27512@weber> <20150424084727.GC27512@weber> <20150424090406.GD27512@weber> Message-ID: >> Once thee space leak is fixed I will need to add code to increment the >> sequence number (not yet implemented). > > In that case the most important thing to do is to try to reproduce the > space leak without the snapshot socket. I've just ran the code below (removed the snapshot socket) and the space leak is still there. Since I don't have access to a Windows box at the moment this was tested on the following configuration: - OS: Ubuntu 14.04 (64-bit) - GHC: 7.8.4 (64-bit) - Zeromq4-haskell: 0.6.3 (Stackage LTS 2.4) - ZeroMQ: 4.0.4 (64-bit) > Very sensible. I was confused for a moment whether the minimal version > did actually exhibit the space leak behaviour. > > To get the state-updating behaviour you want, I suggest you use > `StateT IO` rather than `IO`. With handlers in the `StateT IO` monad > the state updates will occur as you expect. Thanks for the suggestion I'll try that, but it will take me some time. 
Kind regards, Martijn Rijkeboer --- code --- module Observable ( run ) where import Control.Monad (void) import Data.Int (Int64) import System.ZMQ4 data State = State { nextSeqNum :: !Int64 , listenSocket :: !(Socket Pull) } run :: IO () run = do withContext $ \ctx -> withSocket ctx Pull $ \observer -> do setLinger (restrict (0::Int)) observer bind observer "tcp://*:7010" let state = State { nextSeqNum = 0 , listenSocket = observer } pollSockets state pollSockets :: State -> IO () pollSockets state = void $ poll (-1) [ Sock (listenSocket state) [In] (Just $ observerHandleEvts state) ] observerHandleEvts :: State -> [Event] -> IO () observerHandleEvts state _ = do void $ receiveMulti $ listenSocket state -- TODO: update state by incrementing the nextSeqNum pollSockets state From haskell at bunix.org Fri Apr 24 09:39:23 2015 From: haskell at bunix.org (Martijn Rijkeboer) Date: Fri, 24 Apr 2015 11:39:23 +0200 Subject: [Haskell-cafe] Space leak with recursion In-Reply-To: <20150424091531.GE27512@weber> References: <9818ade761338feddfd0f8ce28e01d68.squirrel@secure.bunix.org> <20150424083628.GB27512@weber> <20150424084727.GC27512@weber> <20150424090406.GD27512@weber> <20150424091531.GE27512@weber> Message-ID: > In fact I think we can already deduce the answer from the > implementation of `poll`: > > http://hackage.haskell.org/package/zeromq4-haskell-0.6.3/docs/src/System-ZMQ4.html#poll > > The key point is that the [Poll s m] list is `mapM`ed over. Each > handler is checked for a match and then fired or not fired before the > next handler is checked for a match. This means that if you call > `pollSockets` in a handler you are leaking the space associated with > the unprocessed entries. I should have checked the code. Unfortunately the idea to use a recursive call to pollSockets comes from the Haskell examples in the ZeroMQ guide. So I will probably not be the only one that has this issue... Kind regards, Martijn Rijkeboer From tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk Fri Apr 24 09:39:28 2015 From: tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk (Tom Ellis) Date: Fri, 24 Apr 2015 10:39:28 +0100 Subject: [Haskell-cafe] Space leak with recursion In-Reply-To: References: <9818ade761338feddfd0f8ce28e01d68.squirrel@secure.bunix.org> <20150424083628.GB27512@weber> <20150424084727.GC27512@weber> <20150424090406.GD27512@weber> Message-ID: <20150424093928.GF27512@weber> On Fri, Apr 24, 2015 at 11:34:25AM +0200, Martijn Rijkeboer wrote: > >> Once thee space leak is fixed I will need to add code to increment the > >> sequence number (not yet implemented). > > > > In that case the most important thing to do is to try to reproduce the > > space leak without the snapshot socket. > > I've just ran the code below (removed the snapshot socket) and the > space leak is still there. Since I don't have access to a Windows > box at the moment this was tested on the following configuration: > - OS: Ubuntu 14.04 (64-bit) > - GHC: 7.8.4 (64-bit) > - Zeromq4-haskell: 0.6.3 (Stackage LTS 2.4) > - ZeroMQ: 4.0.4 (64-bit) This behaviour is consistent with my understanding of how `poll` behaves. `poll` has trivial work to do when the handler returns, and recording this work consumes space that won't be freed. Avoid calling `pollSockets` in your handlers and you'll be fine. 
Tom From tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk Fri Apr 24 09:39:54 2015 From: tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk (Tom Ellis) Date: Fri, 24 Apr 2015 10:39:54 +0100 Subject: [Haskell-cafe] Space leak with recursion In-Reply-To: References: <9818ade761338feddfd0f8ce28e01d68.squirrel@secure.bunix.org> <20150424083628.GB27512@weber> <20150424084727.GC27512@weber> <20150424090406.GD27512@weber> <20150424091531.GE27512@weber> Message-ID: <20150424093954.GG27512@weber> On Fri, Apr 24, 2015 at 11:39:23AM +0200, Martijn Rijkeboer wrote: > > In fact I think we can already deduce the answer from the > > implementation of `poll`: > > > > http://hackage.haskell.org/package/zeromq4-haskell-0.6.3/docs/src/System-ZMQ4.html#poll > > > > The key point is that the [Poll s m] list is `mapM`ed over. Each > > handler is checked for a match and then fired or not fired before the > > next handler is checked for a match. This means that if you call > > `pollSockets` in a handler you are leaking the space associated with > > the unprocessed entries. > > I should have checked the code. Unfortunately the idea to use a > recursive call to pollSockets comes from the Haskell examples in > the ZeroMQ guide. So I will probably not be the only one that has > this issue... Strange. Can you link me to that? From haskell at bunix.org Fri Apr 24 09:43:44 2015 From: haskell at bunix.org (Martijn Rijkeboer) Date: Fri, 24 Apr 2015 11:43:44 +0200 Subject: [Haskell-cafe] Space leak with recursion In-Reply-To: <20150424093954.GG27512@weber> References: <9818ade761338feddfd0f8ce28e01d68.squirrel@secure.bunix.org> <20150424083628.GB27512@weber> <20150424084727.GC27512@weber> <20150424090406.GD27512@weber> <20150424091531.GE27512@weber> <20150424093954.GG27512@weber> Message-ID: <3b874401b166d0cb9034de71e436af82.squirrel@secure.bunix.org> >> I should have checked the code. Unfortunately the idea to use a >> recursive call to pollSockets comes from the Haskell examples in >> the ZeroMQ guide. So I will probably not be the only one that has >> this issue... > > Strange. Can you link me to that? An example that uses the pollSockets idea (pollServer): - https://github.com/imatix/zguide/blob/master/examples/Haskell/lpclient.hs All Haskell examples from the ZeroMQ guide: - https://github.com/imatix/zguide/tree/master/examples/Haskell Kind regards, Martijn Rijkeboer From tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk Fri Apr 24 09:47:43 2015 From: tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk (Tom Ellis) Date: Fri, 24 Apr 2015 10:47:43 +0100 Subject: [Haskell-cafe] Space leak with recursion In-Reply-To: <3b874401b166d0cb9034de71e436af82.squirrel@secure.bunix.org> References: <9818ade761338feddfd0f8ce28e01d68.squirrel@secure.bunix.org> <20150424083628.GB27512@weber> <20150424084727.GC27512@weber> <20150424090406.GD27512@weber> <20150424091531.GE27512@weber> <20150424093954.GG27512@weber> <3b874401b166d0cb9034de71e436af82.squirrel@secure.bunix.org> Message-ID: <20150424094743.GH27512@weber> On Fri, Apr 24, 2015 at 11:43:44AM +0200, Martijn Rijkeboer wrote: > >> I should have checked the code. Unfortunately the idea to use a > >> recursive call to pollSockets comes from the Haskell examples in > >> the ZeroMQ guide. So I will probably not be the only one that has > >> this issue... > > > > Strange. Can you link me to that? > > An example that uses the pollSockets idea (pollServer): > - https://github.com/imatix/zguide/blob/master/examples/Haskell/lpclient.hs That's quite different. `sendServer` is not being called by `poll`. 
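To make the distinction concrete — recursing on the polling function after `poll` has returned is fine, recursing inside one of `poll`'s handlers is what leaks — here is a minimal sketch. It is not code from this thread: the `IORef` counter is an assumed stand-in for the sequence-number state, and the socket setup just mirrors the code posted earlier. 

--- sketch (assumptions noted above) ---

module Main (main) where

import Control.Monad (forever, void)
import Data.Int (Int64)
import Data.IORef (modifyIORef', newIORef)
import System.ZMQ4

main :: IO ()
main =
  withContext $ \ctx ->
    withSocket ctx Pull $ \observer -> do
      setLinger (restrict (0 :: Int)) observer
      bind observer "tcp://*:7010"
      seqRef <- newIORef (0 :: Int64)
      let handleObserver _events = do
            void $ receiveMulti observer   -- handle the message
            modifyIORef' seqRef (+ 1)      -- bump the sequence number and return
      -- The looping happens here, after poll has finished its traversal of
      -- the handler list -- never from inside one of the handlers.
      forever $
        void $ poll (-1) [Sock observer [In] (Just handleObserver)]

The handler only does the per-message work and returns, so each call to `poll` can complete before the next iteration starts; the StateT version that follows later in the thread achieves the same thing while keeping the state explicit.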
From haskell at bunix.org Fri Apr 24 09:53:50 2015 From: haskell at bunix.org (Martijn Rijkeboer) Date: Fri, 24 Apr 2015 11:53:50 +0200 Subject: [Haskell-cafe] Space leak with recursion In-Reply-To: <20150424094743.GH27512@weber> References: <9818ade761338feddfd0f8ce28e01d68.squirrel@secure.bunix.org> <20150424083628.GB27512@weber> <20150424084727.GC27512@weber> <20150424090406.GD27512@weber> <20150424091531.GE27512@weber> <20150424093954.GG27512@weber> <3b874401b166d0cb9034de71e436af82.squirrel@secure.bunix.org> <20150424094743.GH27512@weber> Message-ID: > On Fri, Apr 24, 2015 at 11:43:44AM +0200, Martijn Rijkeboer wrote: >> >> I should have checked the code. Unfortunately the idea to use a >> >> recursive call to pollSockets comes from the Haskell examples in >> >> the ZeroMQ guide. So I will probably not be the only one that has >> >> this issue... >> > >> > Strange. Can you link me to that? >> >> An example that uses the pollSockets idea (pollServer): >> - >> https://github.com/imatix/zguide/blob/master/examples/Haskell/lpclient.hs > > That's quite different. `sendServer` is not being called by `poll`. Maybe I don't understand, but inside the pollServer function a call to poll is done and on line 47 and 56 of that same function, pollServer is called again (recursive). Kind regards, Martijn Rijkeboer From tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk Fri Apr 24 09:59:05 2015 From: tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk (Tom Ellis) Date: Fri, 24 Apr 2015 10:59:05 +0100 Subject: [Haskell-cafe] Space leak with recursion In-Reply-To: References: <20150424084727.GC27512@weber> <20150424090406.GD27512@weber> <20150424091531.GE27512@weber> <20150424093954.GG27512@weber> <3b874401b166d0cb9034de71e436af82.squirrel@secure.bunix.org> <20150424094743.GH27512@weber> Message-ID: <20150424095905.GI27512@weber> On Fri, Apr 24, 2015 at 11:53:50AM +0200, Martijn Rijkeboer wrote: > > On Fri, Apr 24, 2015 at 11:43:44AM +0200, Martijn Rijkeboer wrote: > >> >> I should have checked the code. Unfortunately the idea to use a > >> >> recursive call to pollSockets comes from the Haskell examples in > >> >> the ZeroMQ guide. So I will probably not be the only one that has > >> >> this issue... > >> > > >> > Strange. Can you link me to that? > >> > >> An example that uses the pollSockets idea (pollServer): > >> - > >> https://github.com/imatix/zguide/blob/master/examples/Haskell/lpclient.hs > > > > That's quite different. `sendServer` is not being called by `poll`. > > Maybe I don't understand, but inside the pollServer function a call to > poll is done and on line 47 and 56 of that same function, pollServer is > called again (recursive). It's fine to call `pollServer` recursively, but it's not fine to call it recursively from a handler, i.e. something that occurs in the second argument to `poll`. 
Tom From haskell at bunix.org Fri Apr 24 10:16:55 2015 From: haskell at bunix.org (Martijn Rijkeboer) Date: Fri, 24 Apr 2015 12:16:55 +0200 Subject: [Haskell-cafe] Space leak with recursion In-Reply-To: <20150424095905.GI27512@weber> References: <20150424084727.GC27512@weber> <20150424090406.GD27512@weber> <20150424091531.GE27512@weber> <20150424093954.GG27512@weber> <3b874401b166d0cb9034de71e436af82.squirrel@secure.bunix.org> <20150424094743.GH27512@weber> <20150424095905.GI27512@weber> Message-ID: <103088f9bb672fed8ac3c04854c3f5d0.squirrel@secure.bunix.org> >> Maybe I don't understand, but inside the pollServer function a call to >> poll is done and on line 47 and 56 of that same function, pollServer is >> called again (recursive). > > It's fine to call `pollServer` recursively, but it's not fine to > call it recursively from a handler, i.e. something that occurs in the > second argument to `poll`. Clear, my bad. I've just implemented a version with StateT (code below) and it doesn't leak space as you already expected. Thank you very much for your help. Kind regards, Martijn Rijkeboer --- code --- module Observable ( run ) where import Control.Monad (forever, void) import Control.Monad.State (StateT, get, liftIO, put, runStateT) import Data.Int (Int64) import System.ZMQ4 data State = State { nextSeqNum :: !Int64 , listenSocket :: !(Socket Pull) } run :: IO () run = do withContext $ \ctx -> withSocket ctx Pull $ \observer -> do setLinger (restrict (0::Int)) observer bind observer "tcp://*:7010" let state = State { nextSeqNum = 0 , listenSocket = observer } void $ runStateT pollSockets state return () pollSockets :: StateT State IO () pollSockets = do state <- get forever $ void $ poll (-1) [Sock (listenSocket state) [In] (Just observerHandleEvts)] observerHandleEvts :: [Event] -> StateT State IO () observerHandleEvts _ = do state <- get liftIO $ void $ receiveMulti $ listenSocket state liftIO $ printSeqNum state put $ incrSeqNum state printSeqNum :: State -> IO () printSeqNum state = putStrLn $ show $ nextSeqNum state incrSeqNum :: State -> State incrSeqNum state = state{nextSeqNum = currSeqNum + 1} where currSeqNum = nextSeqNum state From tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk Fri Apr 24 10:26:01 2015 From: tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk (Tom Ellis) Date: Fri, 24 Apr 2015 11:26:01 +0100 Subject: [Haskell-cafe] Space leak with recursion In-Reply-To: <103088f9bb672fed8ac3c04854c3f5d0.squirrel@secure.bunix.org> References: <20150424090406.GD27512@weber> <20150424091531.GE27512@weber> <20150424093954.GG27512@weber> <3b874401b166d0cb9034de71e436af82.squirrel@secure.bunix.org> <20150424094743.GH27512@weber> <20150424095905.GI27512@weber> <103088f9bb672fed8ac3c04854c3f5d0.squirrel@secure.bunix.org> Message-ID: <20150424102601.GJ27512@weber> On Fri, Apr 24, 2015 at 12:16:55PM +0200, Martijn Rijkeboer wrote: > >> Maybe I don't understand, but inside the pollServer function a call to > >> poll is done and on line 47 and 56 of that same function, pollServer is > >> called again (recursive). > > > > It's fine to call `pollServer` recursively, but it's not fine to > > call it recursively from a handler, i.e. something that occurs in the > > second argument to `poll`. > > Clear, my bad. I've just implemented a version with StateT (code below) > and it doesn't leak space as you already expected. Thank you very much > for your help. Looks good! You're welcome Martijn. 
Tom From mwm at mired.org Fri Apr 24 11:50:20 2015 From: mwm at mired.org (Mike Meyer) Date: Fri, 24 Apr 2015 06:50:20 -0500 Subject: [Haskell-cafe] low-cost matrix rank? In-Reply-To: <5539FB42.8050308@um.es> References: <5539FB42.8050308@um.es> Message-ID: My apologies,but my use of "low-cost" was ambiguous. I meant the cost of having it available - installation, size of the package, extra packages brought in, etc. I don't the rank calculation to be fast, or even cheap to compute, as it's not used very often, and not for very large matrices. I'd rather not have the size of the software multiplied by integers in order to get that one function. hmatrix is highly optimized for performance and parallelization, built on top of a large C libraries with lots of functionality. Nice to have if you're doing any serious work with matrices, but massive overkill for what I need. On Fri, Apr 24, 2015 at 3:13 AM, Alberto Ruiz wrote: > Hi Mike, > > If you need a robust numerical computation you can try "rcond" or "rank" > from hmatrix. (It is based on the singular values, I don't know if the cost > is low enough for your application.) > > http://en.wikipedia.org/wiki/Rank_%28linear_algebra%29#Computation > > > https://hackage.haskell.org/package/hmatrix-0.16.1.5/docs/Numeric-LinearAlgebra-HMatrix.html#g:10 > > Alberto > > On 24/04/15 00:34, Mike Meyer wrote: > >> Noticing that diagrams 1.3 has moved from vector-space to linear, I >> decided to check them both for a function to compute the rank of a >> matrix. Neither seems to have it. >> >> While I'm doing quite a bit of work with 2 and 3-element vectors, the >> only thing I do with matrices is take their rank, as part of verifying >> that the faces of a polyhedron actually make a polyhedron. >> >> So I'm looking for a relatively light-weight way of doing so that will >> work with a recent (7.8 or 7.10) ghc release. Or maybe getting such a >> function added to an existing library. Anyone have any suggestions? >> >> Thanks, >> Mike >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ivan.miljenovic at gmail.com Fri Apr 24 13:06:07 2015 From: ivan.miljenovic at gmail.com (Ivan Lazar Miljenovic) Date: Fri, 24 Apr 2015 23:06:07 +1000 Subject: [Haskell-cafe] Ord for partially ordered sets Message-ID: What is the validity of defining an Ord instance for types for which mathematically the `compare` function is partially ordered? Specifically, I have a pull request for fgl [1] to add Ord instances for the graph types (based upon the Ord instances for Data.Map and Data.IntMap, which I believe are themselves partially ordered), and I'm torn as to the soundness of adding these instances. It might be useful in Haskell code (the example given is to use graphs as keys in a Map) but mathematically-speaking it is not possible to compare two arbitrary graphs. What are people's thoughts on this? What's more important: potential usefulness/practicality or mathematical correctness? 
(Of course, the correct answer is to have a function of type a -> a -> Maybe Ordering :p) [1]: https://github.com/haskell/fgl/pull/11 -- Ivan Lazar Miljenovic Ivan.Miljenovic at gmail.com http://IvanMiljenovic.wordpress.com From tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk Fri Apr 24 13:17:04 2015 From: tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk (Tom Ellis) Date: Fri, 24 Apr 2015 14:17:04 +0100 Subject: [Haskell-cafe] Ord for partially ordered sets In-Reply-To: References: Message-ID: <20150424131704.GK27512@weber> On Fri, Apr 24, 2015 at 11:06:07PM +1000, Ivan Lazar Miljenovic wrote: > What is the validity of defining an Ord instance for types for which > mathematically the `compare` function is partially ordered? I'm confused. What is supposed to be the result of `g1 <= g2` when `g1` and `g2` are not comparable according to the partial order? From dennis at deathbytape.com Fri Apr 24 13:27:12 2015 From: dennis at deathbytape.com (Dennis J. McWherter, Jr.) Date: Fri, 24 Apr 2015 08:27:12 -0500 Subject: [Haskell-cafe] low-cost matrix rank? In-Reply-To: References: <5539FB42.8050308@um.es> Message-ID: I am not aware of any small library which does just this, but you could easily roll your own. Though not the most efficient method, implementing gaussian elimination is a straightforward task (you can even find the backtracking algorithm on google) and then you can find the rank from there. Dennis On Fri, Apr 24, 2015 at 6:50 AM, Mike Meyer wrote: > My apologies,but my use of "low-cost" was ambiguous. > > I meant the cost of having it available - installation, size of the > package, extra packages brought in, etc. I don't the rank calculation to be > fast, or even cheap to compute, as it's not used very often, and not for > very large matrices. I'd rather not have the size of the software > multiplied by integers in order to get that one function. > > hmatrix is highly optimized for performance and parallelization, built on > top of a large C libraries with lots of functionality. Nice to have if > you're doing any serious work with matrices, but massive overkill for what > I need. > > On Fri, Apr 24, 2015 at 3:13 AM, Alberto Ruiz wrote: > >> Hi Mike, >> >> If you need a robust numerical computation you can try "rcond" or "rank" >> from hmatrix. (It is based on the singular values, I don't know if the cost >> is low enough for your application.) >> >> http://en.wikipedia.org/wiki/Rank_%28linear_algebra%29#Computation >> >> >> https://hackage.haskell.org/package/hmatrix-0.16.1.5/docs/Numeric-LinearAlgebra-HMatrix.html#g:10 >> >> Alberto >> >> On 24/04/15 00:34, Mike Meyer wrote: >> >>> Noticing that diagrams 1.3 has moved from vector-space to linear, I >>> decided to check them both for a function to compute the rank of a >>> matrix. Neither seems to have it. >>> >>> While I'm doing quite a bit of work with 2 and 3-element vectors, the >>> only thing I do with matrices is take their rank, as part of verifying >>> that the faces of a polyhedron actually make a polyhedron. >>> >>> So I'm looking for a relatively light-weight way of doing so that will >>> work with a recent (7.8 or 7.10) ghc release. Or maybe getting such a >>> function added to an existing library. Anyone have any suggestions? >>> >>> Thanks, >>> Mike >>> >> > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > -------------- next part -------------- An HTML attachment was scrubbed... 
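Picking up the "roll your own Gaussian elimination" suggestion above, here is a minimal, dependency-free sketch. Nothing in it is an existing library function; the row-list representation and the eps tolerance are assumptions, and there is no pivoting beyond skipping numerically zero leading entries, so it is a lightweight option for occasional checks rather than a replacement for an SVD-based rank.

--- sketch (assumptions noted above) ---

module Rank (rank) where

-- Rank of a small dense matrix, given as a list of rows, by Gaussian
-- elimination.  eps is an assumed tolerance for treating a leading entry
-- as zero; tune it to the scale of your data.
eps :: Double
eps = 1e-9

rank :: [[Double]] -> Int
rank rows
  | null rows || any null rows = 0
  | otherwise =
      case break (\r -> abs (head r) > eps) rows of
        -- the first column is numerically zero: drop it and continue
        (_, []) -> rank (map tail rows)
        -- found a pivot row p: eliminate column 0 from the other rows
        (before, p : after) ->
          let eliminate r =
                zipWith (\pj rj -> rj - (head r / head p) * pj)
                        (tail p) (tail r)
          in 1 + rank (map eliminate (before ++ after))

For example, rank [[1,2,3],[2,4,6],[0,1,1]] evaluates to 2.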
URL: From ivan.miljenovic at gmail.com Fri Apr 24 13:27:46 2015 From: ivan.miljenovic at gmail.com (Ivan Lazar Miljenovic) Date: Fri, 24 Apr 2015 23:27:46 +1000 Subject: [Haskell-cafe] Ord for partially ordered sets In-Reply-To: <20150424131704.GK27512@weber> References: <20150424131704.GK27512@weber> Message-ID: On 24 April 2015 at 23:17, Tom Ellis wrote: > On Fri, Apr 24, 2015 at 11:06:07PM +1000, Ivan Lazar Miljenovic wrote: >> What is the validity of defining an Ord instance for types for which >> mathematically the `compare` function is partially ordered? > > I'm confused. What is supposed to be the result of `g1 <= g2` when `g1` and > `g2` are not comparable according to the partial order? With the proposed patch, it's the result of <= on the underlying [Int]Maps. Does the definition of Ord on Data.Map make sense? e.g. what should be the result of (fromList [(1,'a'), (2,'b'), (3, 'c')]) <= (fromList [(1,'a'), (4,'d')])? What about (fromList [(1,'a'), (2,'b'), (3, 'c')]) <= (fromList [(1,'a'), (2,'e')])? -- Ivan Lazar Miljenovic Ivan.Miljenovic at gmail.com http://IvanMiljenovic.wordpress.com From mwm at mired.org Fri Apr 24 13:30:23 2015 From: mwm at mired.org (Mike Meyer) Date: Fri, 24 Apr 2015 08:30:23 -0500 Subject: [Haskell-cafe] low-cost matrix rank? In-Reply-To: References: <5539FB42.8050308@um.es> Message-ID: The bed-and-breakfast isn't to bad, except for needing TH. But it's apparently not being maintained. I've started the process of replacing the maintainer, but may roll my own instead. Thanks, Mike On Fri, Apr 24, 2015 at 8:27 AM, Dennis J. McWherter, Jr. < dennis at deathbytape.com> wrote: > I am not aware of any small library which does just this, but you could > easily roll your own. Though not the most efficient method, implementing > gaussian elimination is a straightforward task (you can even find the > backtracking algorithm on google) and then you can find the rank from there. > > Dennis > > On Fri, Apr 24, 2015 at 6:50 AM, Mike Meyer wrote: > >> My apologies,but my use of "low-cost" was ambiguous. >> >> I meant the cost of having it available - installation, size of the >> package, extra packages brought in, etc. I don't the rank calculation to be >> fast, or even cheap to compute, as it's not used very often, and not for >> very large matrices. I'd rather not have the size of the software >> multiplied by integers in order to get that one function. >> >> hmatrix is highly optimized for performance and parallelization, built on >> top of a large C libraries with lots of functionality. Nice to have if >> you're doing any serious work with matrices, but massive overkill for what >> I need. >> >> On Fri, Apr 24, 2015 at 3:13 AM, Alberto Ruiz wrote: >> >>> Hi Mike, >>> >>> If you need a robust numerical computation you can try "rcond" or "rank" >>> from hmatrix. (It is based on the singular values, I don't know if the cost >>> is low enough for your application.) >>> >>> http://en.wikipedia.org/wiki/Rank_%28linear_algebra%29#Computation >>> >>> >>> https://hackage.haskell.org/package/hmatrix-0.16.1.5/docs/Numeric-LinearAlgebra-HMatrix.html#g:10 >>> >>> Alberto >>> >>> On 24/04/15 00:34, Mike Meyer wrote: >>> >>>> Noticing that diagrams 1.3 has moved from vector-space to linear, I >>>> decided to check them both for a function to compute the rank of a >>>> matrix. Neither seems to have it. 
>>>> >>>> While I'm doing quite a bit of work with 2 and 3-element vectors, the >>>> only thing I do with matrices is take their rank, as part of verifying >>>> that the faces of a polyhedron actually make a polyhedron. >>>> >>>> So I'm looking for a relatively light-weight way of doing so that will >>>> work with a recent (7.8 or 7.10) ghc release. Or maybe getting such a >>>> function added to an existing library. Anyone have any suggestions? >>>> >>>> Thanks, >>>> Mike >>>> >>> >> >> _______________________________________________ >> Haskell-Cafe mailing list >> Haskell-Cafe at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk Fri Apr 24 13:59:37 2015 From: tom-lists-haskell-cafe-2013 at jaguarpaw.co.uk (Tom Ellis) Date: Fri, 24 Apr 2015 14:59:37 +0100 Subject: [Haskell-cafe] Ord for partially ordered sets In-Reply-To: References: <20150424131704.GK27512@weber> Message-ID: <20150424135937.GL27512@weber> On Fri, Apr 24, 2015 at 11:27:46PM +1000, Ivan Lazar Miljenovic wrote: > On 24 April 2015 at 23:17, Tom Ellis > wrote: > > On Fri, Apr 24, 2015 at 11:06:07PM +1000, Ivan Lazar Miljenovic wrote: > >> What is the validity of defining an Ord instance for types for which > >> mathematically the `compare` function is partially ordered? > > > > I'm confused. What is supposed to be the result of `g1 <= g2` when `g1` and > > `g2` are not comparable according to the partial order? > > With the proposed patch, it's the result of <= on the underlying [Int]Maps. Ah, so it's a case of adding a valid Ord instance that isn't a natural one for the particular datatype. If you really need something like that, for example to add your graphs to a Data.Set, then I would suggest a newtype might be appropriate. Tom From audunskaugen at gmail.com Fri Apr 24 14:01:24 2015 From: audunskaugen at gmail.com (Audun Skaugen) Date: Fri, 24 Apr 2015 16:01:24 +0200 Subject: [Haskell-cafe] low-cost matrix rank? In-Reply-To: References: <5539FB42.8050308@um.es> Message-ID: What about dumping the matrix into a C array using the storable instance of the linear package's matrices, and then use a foreign-imported svd call from lapack? I don't know whether you can count on lapack being available in your systems. The lapack call is very clumsy, requiring lots of pointer inputs, but it should be doable in a few lines of code. The rank is then the number of nonzero singular values, for some accuracy-dependent definition of "nonzero". P? Fri, 24 Apr 2015 15:30:23 +0200, skrev Mike Meyer : > The bed-and-breakfast isn't to bad, except for needing TH. But it's > apparently not being maintained. I've started the process of >replacing > the maintainer, but may roll my own instead. > > Thanks, > Mike > > On Fri, Apr 24, 2015 at 8:27 AM, Dennis J. McWherter, Jr. > wrote: >> I am not aware of any small library which does just this, but you could >> easily roll your own. Though not the most efficient >>method, >> implementing gaussian elimination is a straightforward task (you can >> even find the backtracking algorithm on google) and >>then you can find >> the rank from there. >> >> Dennis >> >> On Fri, Apr 24, 2015 at 6:50 AM, Mike Meyer wrote: >>> My apologies,but my use of "low-cost" was ambiguous. >>> >>> I meant the cost of having it available - installation, size of the >>> package, extra packages brought in, etc. 
I don't the rank >>> >>>calculation to be fast, or even cheap to compute, as it's not used >>> very often, and not for very large matrices. I'd rather not >>>have >>> the size of the software multiplied by integers in order to get that >>> one function. >>> >>> hmatrix is highly optimized for performance and parallelization, built >>> on top of a large C libraries with lots of >>>functionality. Nice to >>> have if you're doing any serious work with matrices, but massive >>> overkill for what I need. >>> >>> On Fri, Apr 24, 2015 at 3:13 AM, Alberto Ruiz wrote: >>>> Hi Mike, >>>> >>>> If you need a robust numerical computation you can try "rcond" or >>>> "rank" from hmatrix. (It is based on the singular values, >>>>I don't >>>> know if the cost is low enough for your application.) >>>> >>>> http://en.wikipedia.org/wiki/Rank_%28linear_algebra%29#Computation >>>> >>>> https://hackage.haskell.org/package/hmatrix-0.16.1.5/docs/Numeric-LinearAlgebra-HMatrix.html#g:10 >>>> >>>> Alberto >>>> >>>> On 24/04/15 00:34, Mike Meyer wrote: >>>>> Noticing that diagrams 1.3 has moved from vector-space to linear, I >>>>> decided to check them both for a function to compute the rank of a >>>>> matrix. Neither seems to have it. >>>>> >>>>> While I'm doing quite a bit of work with 2 and 3-element vectors, the >>>>> only thing I do with matrices is take their rank, as part of >>>>> verifying >>>>> that the faces of a polyhedron actually make a polyhedron. >>>>> >>>>> So I'm looking for a relatively light-weight way of doing so that >>>>> will >>>>> work with a recent (7.8 or 7.10) ghc release. Or maybe getting such a >>>>> function added to an existing library. Anyone have any suggestions? >>>>> >>>>> Thanks, >>>>> Mike >>> >>> >>> _______________________________________________ >>> Haskell-Cafe mailing list >>> Haskell-Cafe at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >>> >> > -- Audun Skaugen -------------- next part -------------- An HTML attachment was scrubbed... URL: From lemming at henning-thielemann.de Fri Apr 24 14:01:31 2015 From: lemming at henning-thielemann.de (Henning Thielemann) Date: Fri, 24 Apr 2015 16:01:31 +0200 (CEST) Subject: [Haskell-cafe] Ord for partially ordered sets In-Reply-To: References: Message-ID: On Fri, 24 Apr 2015, Ivan Lazar Miljenovic wrote: > Specifically, I have a pull request for fgl [1] to add Ord instances for > the graph types (based upon the Ord instances for Data.Map and > Data.IntMap, which I believe are themselves partially ordered), and I'm > torn as to the soundness of adding these instances. In an application we needed to do some combinatorics of graphs and thus needed Set Graph. Nonetheless, I think that graph0 < graph1 should be a type error. We can still have a set of Graphs using a newtype. From ivan.miljenovic at gmail.com Fri Apr 24 14:23:42 2015 From: ivan.miljenovic at gmail.com (Ivan Lazar Miljenovic) Date: Sat, 25 Apr 2015 00:23:42 +1000 Subject: [Haskell-cafe] Ord for partially ordered sets In-Reply-To: References: Message-ID: On 25 April 2015 at 00:01, Henning Thielemann wrote: > > On Fri, 24 Apr 2015, Ivan Lazar Miljenovic wrote: > >> Specifically, I have a pull request for fgl [1] to add Ord instances for >> the graph types (based upon the Ord instances for Data.Map and Data.IntMap, >> which I believe are themselves partially ordered), and I'm torn as to the >> soundness of adding these instances. > > > In an application we needed to do some combinatorics of graphs and thus > needed Set Graph. 
> > Nonetheless, I think that graph0 < graph1 should be a type error. We can > still have a set of Graphs using a newtype. This could work; the possible problem would be one of efficiency: if it's done directly on the graph datatypes they can use the underlying (ordered) data structure; going purely by the Graph API, there's no guarantees of ordering and thus it would be needed to call sort, even though in practice it's redundant. -- Ivan Lazar Miljenovic Ivan.Miljenovic at gmail.com http://IvanMiljenovic.wordpress.com From andreas.abel at ifi.lmu.de Fri Apr 24 14:47:50 2015 From: andreas.abel at ifi.lmu.de (Andreas Abel) Date: Fri, 24 Apr 2015 16:47:50 +0200 Subject: [Haskell-cafe] Ord for partially ordered sets In-Reply-To: References: Message-ID: <553A5796.2020804@ifi.lmu.de> On 04/24/2015 03:06 PM, Ivan Lazar Miljenovic wrote: > What is the validity of defining an Ord instance for types for which > mathematically the `compare` function is partially ordered? I'd say this is harmful, as functions like min and max (and others) rely on the totality of the ordering. Partial orderings are useful in itself, I implemented my own library https://hackage.haskell.org/package/Agda-2.4.2/docs/Agda-Utils-PartialOrd.html mainly to use it for maintaining sets of incomparable elements: https://hackage.haskell.org/package/Agda-2.4.2/docs/Agda-Utils-Favorites.html > Specifically, I have a pull request for fgl [1] to add Ord instances > for the graph types (based upon the Ord instances for Data.Map and > Data.IntMap, which I believe are themselves partially ordered), and > I'm torn as to the soundness of adding these instances. It might be > useful in Haskell code (the example given is to use graphs as keys in > a Map) but mathematically-speaking it is not possible to compare two > arbitrary graphs. > > What are people's thoughts on this? What's more important: potential > usefulness/practicality or mathematical correctness? > > (Of course, the correct answer is to have a function of type a -> a -> > Maybe Ordering :p) > > [1]: https://github.com/haskell/fgl/pull/11 > -- Andreas Abel <>< Du bist der geliebte Mensch. Department of Computer Science and Engineering Chalmers and Gothenburg University, Sweden andreas.abel at gu.se http://www2.tcs.ifi.lmu.de/~abel/ From audunskaugen at gmail.com Fri Apr 24 15:17:15 2015 From: audunskaugen at gmail.com (Audun Skaugen) Date: Fri, 24 Apr 2015 17:17:15 +0200 Subject: [Haskell-cafe] low-cost matrix rank? In-Reply-To: References: <5539FB42.8050308@um.es> Message-ID: I found it a fun challenge, so I coded up a small demonstration in the attached file :) P? Fri, 24 Apr 2015 16:01:24 +0200, skrev Audun Skaugen : > What about dumping the matrix into a C array using the storable instance > of the linear package's matrices, and then use a foreign->imported svd > call from lapack? I don't know whether you can count on lapack being > available in your systems. The lapack call is >very clumsy, requiring > lots of pointer inputs, but it should be doable in a few lines of code. > > The rank is then the number of nonzero singular values, for some > accuracy-dependent definition of "nonzero". > > P? Fri, 24 Apr 2015 15:30:23 +0200, skrev Mike Meyer : > >> The bed-and-breakfast isn't to bad, except for needing TH. But it's >> apparently not being maintained. I've started the process of >> >>replacing the maintainer, but may roll my own instead. >> >> Thanks, >> Mike >> >> On Fri, Apr 24, 2015 at 8:27 AM, Dennis J. McWherter, Jr. 
>> wrote: >>> I am not aware of any small library which does just this, but you >>> could easily roll your own. Though not the most efficient >>>method, >>> implementing gaussian elimination is a straightforward task (you can >>> even find the backtracking algorithm on google) >>>and then you can >>> find the rank from there. >>> >>> Dennis >>> >>> On Fri, Apr 24, 2015 at 6:50 AM, Mike Meyer wrote: >>>> My apologies,but my use of "low-cost" was ambiguous. >>>> >>>> I meant the cost of having it available - installation, size of the >>>> package, extra packages brought in, etc. I don't the rank >>>> >>>>calculation to be fast, or even cheap to compute, as it's not >>>> used very often, and not for very large matrices. I'd rather >>>>not >>>> have the size of the software multiplied by integers in order to get >>>> that one function. >>>> >>>> hmatrix is highly optimized for performance and parallelization, >>>> built on top of a large C libraries with lots of >>>>functionality. >>>> Nice to have if you're doing any serious work with matrices, but >>>> massive overkill for what I need. >>>> >>>> On Fri, Apr 24, 2015 at 3:13 AM, Alberto Ruiz wrote: >>>>> Hi Mike, >>>>> >>>>> If you need a robust numerical computation you can try "rcond" or >>>>> "rank" from hmatrix. (It is based on the singular values, >>>>>I >>>>> don't know if the cost is low enough for your application.) >>>>> >>>>> http://en.wikipedia.org/wiki/Rank_%28linear_algebra%29#Computation >>>>> >>>>> https://hackage.haskell.org/package/hmatrix-0.16.1.5/docs/Numeric-LinearAlgebra-HMatrix.html#g:10 >>>>> >>>>> Alberto >>>>> >>>>> On 24/04/15 00:34, Mike Meyer wrote: >>>>>> Noticing that diagrams 1.3 has moved from vector-space to linear, I >>>>>> decided to check them both for a function to compute the rank of a >>>>>> matrix. Neither seems to have it. >>>>>> >>>>>> While I'm doing quite a bit of work with 2 and 3-element vectors, >>>>>> the >>>>>> only thing I do with matrices is take their rank, as part of >>>>>> verifying >>>>>> that the faces of a polyhedron actually make a polyhedron. >>>>>> >>>>>> So I'm looking for a relatively light-weight way of doing so that >>>>>> will >>>>>> work with a recent (7.8 or 7.10) ghc release. Or maybe getting such >>>>>> a >>>>>> function added to an existing library. Anyone have any suggestions? >>>>>> >>>>>> Thanks, >>>>>> Mike >>>> >>>> >>>> _______________________________________________ >>>> Haskell-Cafe mailing list >>>> Haskell-Cafe at haskell.org >>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >>>> >>> >> > > > > --Audun Skaugen -- Audun Skaugen -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: rank.hs Type: application/octet-stream Size: 1353 bytes Desc: not available URL: From tikhon at jelv.is Fri Apr 24 17:26:42 2015 From: tikhon at jelv.is (Tikhon Jelvis) Date: Fri, 24 Apr 2015 10:26:42 -0700 Subject: [Haskell-cafe] Ord for partially ordered sets In-Reply-To: <553A5796.2020804@ifi.lmu.de> References: <553A5796.2020804@ifi.lmu.de> Message-ID: I would be hesitant about adding an Ord instance normally, because there's no clear semantics for it. If we just pass it through to the underlying data structure, it might behave differently depending on how you implement the graph, which is something fgl should ideally abstract over. Maybe you could provide them in a newtype yourself, in the library? 
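[A sketch of the newtype suggestion above, using the GrKey name proposed just below and a made-up edge-list type in place of fgl's actual graph types; only the shape of the idea is intended:]

```
import qualified Data.Set as Set

-- Stand-in for an fgl graph type; deliberately has no Ord instance.
newtype Gr = Gr [(Int, Int)]
  deriving (Eq, Show)

-- The wrapper carries the "arbitrary but lawful" ordering (here simply the
-- ordering of the underlying representation), so plain graphs are never
-- compared by accident.
newtype GrKey = GrKey Gr
  deriving (Eq, Show)

instance Ord GrKey where
  compare (GrKey (Gr xs)) (GrKey (Gr ys)) = compare xs ys

graphSet :: [Gr] -> Set.Set GrKey
graphSet = Set.fromList . map GrKey
```

[Only GrKey ever appears as a Set or Map key, so ordinary graph code never sees the representation-dependent ordering.]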
You could call it something like GrKey to make it clear that it has an Ord instance for practical reasons rather than because graphs are meaningfully orderable. This just forces people who need the capability to be a bit more explicit about it. On Fri, Apr 24, 2015 at 7:47 AM, Andreas Abel wrote: > On 04/24/2015 03:06 PM, Ivan Lazar Miljenovic wrote: > >> What is the validity of defining an Ord instance for types for which >> mathematically the `compare` function is partially ordered? >> > > I'd say this is harmful, as functions like min and max (and others) rely > on the totality of the ordering. > > Partial orderings are useful in itself, I implemented my own library > > > > https://hackage.haskell.org/package/Agda-2.4.2/docs/Agda-Utils-PartialOrd.html > > mainly to use it for maintaining sets of incomparable elements: > > > > https://hackage.haskell.org/package/Agda-2.4.2/docs/Agda-Utils-Favorites.html > > Specifically, I have a pull request for fgl [1] to add Ord instances >> for the graph types (based upon the Ord instances for Data.Map and >> Data.IntMap, which I believe are themselves partially ordered), and >> I'm torn as to the soundness of adding these instances. It might be >> useful in Haskell code (the example given is to use graphs as keys in >> a Map) but mathematically-speaking it is not possible to compare two >> arbitrary graphs. >> >> What are people's thoughts on this? What's more important: potential >> usefulness/practicality or mathematical correctness? >> >> (Of course, the correct answer is to have a function of type a -> a -> >> Maybe Ordering :p) >> >> [1]: https://github.com/haskell/fgl/pull/11 >> >> > > -- > Andreas Abel <>< Du bist der geliebte Mensch. > > Department of Computer Science and Engineering > Chalmers and Gothenburg University, Sweden > > andreas.abel at gu.se > http://www2.tcs.ifi.lmu.de/~abel/ > > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex.solla at gmail.com Fri Apr 24 17:47:15 2015 From: alex.solla at gmail.com (Alexander Solla) Date: Fri, 24 Apr 2015 10:47:15 -0700 Subject: [Haskell-cafe] Ord for partially ordered sets In-Reply-To: References: Message-ID: I see it as "morally wrong". It's like a Monad instance that doesn't obey the monad laws. The kind of ticking timebomb strong typing is supposed to protect us against. But that only works if we do our part and don't make non-sense instances. Can your consumer get away with using a Hashable instance? (I.e., for use in unordered-containers). This would be morally correct -- the graph could presumably have a valid Hashable instance. On Fri, Apr 24, 2015 at 6:06 AM, Ivan Lazar Miljenovic < ivan.miljenovic at gmail.com> wrote: > What is the validity of defining an Ord instance for types for which > mathematically the `compare` function is partially ordered? > > Specifically, I have a pull request for fgl [1] to add Ord instances > for the graph types (based upon the Ord instances for Data.Map and > Data.IntMap, which I believe are themselves partially ordered), and > I'm torn as to the soundness of adding these instances. It might be > useful in Haskell code (the example given is to use graphs as keys in > a Map) but mathematically-speaking it is not possible to compare two > arbitrary graphs. > > What are people's thoughts on this? 
What's more important: potential > usefulness/practicality or mathematical correctness? > > (Of course, the correct answer is to have a function of type a -> a -> > Maybe Ordering :p) > > [1]: https://github.com/haskell/fgl/pull/11 > > -- > Ivan Lazar Miljenovic > Ivan.Miljenovic at gmail.com > http://IvanMiljenovic.wordpress.com > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > -------------- next part -------------- An HTML attachment was scrubbed... URL: From trebla at vex.net Fri Apr 24 18:55:14 2015 From: trebla at vex.net (Albert Y. C. Lai) Date: Fri, 24 Apr 2015 14:55:14 -0400 Subject: [Haskell-cafe] Is there a name for this algebraic structure? In-Reply-To: References: <5234BE03-8D7E-4EF9-92BE-7F134C17E165@cs.otago.ac.nz> Message-ID: <553A9192.5080601@vex.net> On 2015-04-21 05:18 AM, Gleb Peregud wrote: > I think I need to think a bit more about this to find a proper > definitions and laws. Remember to specify accessors, not just constructors. Last time, there was no accessor, so S = () would fit the bill. From jays at panix.com Fri Apr 24 19:19:10 2015 From: jays at panix.com (Jay Sulzberger) Date: Fri, 24 Apr 2015 15:19:10 -0400 (EDT) Subject: [Haskell-cafe] Ord for partially ordered sets In-Reply-To: References: Message-ID: On Fri, 24 Apr 2015, Ivan Lazar Miljenovic wrote: > What is the validity of defining an Ord instance for types for which > mathematically the `compare` function is partially ordered? > > Specifically, I have a pull request for fgl [1] to add Ord instances > for the graph types (based upon the Ord instances for Data.Map and > Data.IntMap, which I believe are themselves partially ordered), and > I'm torn as to the soundness of adding these instances. It might be > useful in Haskell code (the example given is to use graphs as keys in > a Map) but mathematically-speaking it is not possible to compare two > arbitrary graphs. > > What are people's thoughts on this? What's more important: potential > usefulness/practicality or mathematical correctness? > > (Of course, the correct answer is to have a function of type a -> a -> > Maybe Ordering :p) > > [1]: https://github.com/haskell/fgl/pull/11 > > -- > Ivan Lazar Miljenovic Of course these type-classes (I hope I am using the word correctly) should be standard: 1. Ord, which is the class of all totally ordered set-like things 2. PoSet, which is the class of all partially ordered set-like things 3. NonStrictPoSet, which is the class of all partially ordered set-like things, but without the requirement that a <= b and b <= a implies a Equal b. 4. Things like above, but with the requirement of a Zero, with the requirement of a One, and the requirement fo both a Zero and a One. oo--JS. > Ivan.Miljenovic at gmail.com > http://IvanMiljenovic.wordpress.com > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > From k-bx at k-bx.com Fri Apr 24 19:25:54 2015 From: k-bx at k-bx.com (Kostiantyn Rybnikov) Date: Fri, 24 Apr 2015 22:25:54 +0300 Subject: [Haskell-cafe] Timeout on pure code In-Reply-To: References: Message-ID: An update for everyone interested (and not). Turned out it's neither GHC RTS, Snap or networking issues, it's hslogger being very slow. 
I thought it's slow when used concurrently, but just did a test when it writes 2000 5kb messages sequentially and that finishes in 111 seconds (while minimal program that writes same 2000 messages finishes in 0.12s). I hope I'll have a chance to investigate why hslogger is so slow in future, but meanwhile will just remove logging. On Thu, Apr 23, 2015 at 4:08 PM, Kostiantyn Rybnikov wrote: > All right, good news! > > After adding ekg, gathering its data via bosun and seeing nothing useful I > actually figured out that I could try harder to reproduce issue by myself > instead of waiting for users to do that. And I succeeded! :) > > So, after launching 20 infinite curl loops to that handler's url I was > quickly able to reproduce the issue, so the task seems clear now: keep > reducing the code, reproduce locally, possibly without external services > etc. I'll write up after I get to something. > > Thanks. > > On Wed, Apr 22, 2015 at 11:09 PM, Gregory Collins > wrote: > >> Maybe but it would be helpful to rule the scenario out. Johan's ekg >> library is also useful, it exports a webserver on a different port that you >> can use to track metrics like gc times, etc. >> >> Other options for further debugging include gathering strace logs from >> the binary. You'll have to do some data gathering to narrow down the cause >> unfortunately -- http client? your code? Snap server? GHC event manager >> (System.timeout is implemented here)? GC? etc >> >> G >> >> On Wed, Apr 22, 2015 at 10:14 AM, Kostiantyn Rybnikov >> wrote: >> >>> Gregory, >>> >>> Servers are far from being highly-overloaded, since they're currently >>> under a much less load they used to be. Memory consumption is stable and >>> low, and there's a lot of free RAM also. >>> >>> Would you say that given these factors this scenario is unlikely? >>> >>> On Wed, Apr 22, 2015 at 7:56 PM, Gregory Collins < >>> greg at gregorycollins.net> wrote: >>> >>>> Given your gist, the timeout on your requests is set to a half-second >>>> so it's conceivable that a highly-loaded server might have GC pause times >>>> approaching that long. Smells to me like a classic Haskell memory leak >>>> (that's why the problem occurs after the server has been up for a while): >>>> run your program with the heap profiler, and audit any shared >>>> tables/IORefs/MVars to make sure you are not building up thunks there. >>>> >>>> Greg >>>> >>>> On Wed, Apr 22, 2015 at 9:14 AM, Kostiantyn Rybnikov >>>> wrote: >>>> >>>>> Hi! >>>>> >>>>> Our company's main commercial product is a Snap-based web app which we >>>>> compile with GHC 7.8.4. It works on four app-servers currently >>>>> load-balanced behind Haproxy. >>>>> >>>>> I recently implemented a new piece of functionality, which led to >>>>> weird behavior which I have no idea how to debug, so I'm asking here for >>>>> help and ideas! >>>>> >>>>> The new functionality is this: on specific url-handler, we need to >>>>> query n external services concurrently with a timeout, gather and render >>>>> results. Easy (in Haskell)! 
>>>>> >>>>> The implementation looks, as you might imagine, something like this >>>>> (sorry for almost-real-haskell, I'm sure I forgot tons of imports and other >>>>> things, but I hope everything is clear as-is, if not -- I'll be glad to >>>>> update gist to make things more specific): >>>>> >>>>> https://gist.github.com/k-bx/0cf7035aaf1ad6306e76 >>>>> >>>>> Now, this works wonderful for some time, and in logs I can see both, >>>>> successful fetches of external-content, and also lots of timeouts from our >>>>> external providers. Life is good. >>>>> >>>>> But! After several days of work (sometimes a day, sometimes couple >>>>> days), apps on all 4 servers go crazy. It might take some interval (like 20 >>>>> minutes) before they're all crazy, so it's not super-synchronous. Now: how >>>>> crazy, exactly? >>>>> >>>>> First of all, this endpoint timeouts. Haproxy requests for a response, >>>>> and response times out, so they "hang". >>>>> >>>>> Secondly, logs are interesting. If you look at the code from gist once >>>>> again, you can see, that some of CandidateProvider's don't actually require >>>>> any networking work, so all they do is actually just logging that they're >>>>> working (I added this as part of debugging actually) and return pure data. >>>>> So what's weird is that they timeout also! Here's how output of our logs >>>>> starts to look like after the bug happens: >>>>> >>>>> ``` >>>>> [2015-04-22 09:56:20] provider: CandidateProvider1 >>>>> [2015-04-22 09:56:20] provider: CandidateProvider2 >>>>> [2015-04-22 09:56:21] Got timeout while requesting CandidateProvider1 >>>>> [2015-04-22 09:56:21] Got timeout while requesting CandidateProvider2 >>>>> [2015-04-22 09:56:22] provider: CandidateProvider1 >>>>> [2015-04-22 09:56:22] provider: CandidateProvider2 >>>>> [2015-04-22 09:56:23] Got timeout while requesting CandidateProvider1 >>>>> [2015-04-22 09:56:23] Got timeout while requesting CandidateProvider2 >>>>> ... and so on >>>>> ``` >>>>> >>>>> What's also weird is that, even after timeout is logged, the string >>>>> ""Got responses!" never gets logged also! So hanging happens somewhere >>>>> in-between. >>>>> >>>>> I have to say I'm sorry that I don't have strace output now, I'll have >>>>> to wait until this situation happens once again, but I'll get later to you >>>>> with this info. >>>>> >>>>> So, how is this possible that almost-pure code gets timed-out? And why >>>>> does it hang afterwards? >>>>> >>>>> CPU and other resource usage is quite low, number of open >>>>> file-descriptors also (it seems). >>>>> >>>>> Thanks for all the suggestions in advance! >>>>> >>>>> _______________________________________________ >>>>> Haskell-Cafe mailing list >>>>> Haskell-Cafe at haskell.org >>>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >>>>> >>>>> >>>> >>>> >>>> -- >>>> Gregory Collins >>>> >>> >>> >> >> >> -- >> Gregory Collins >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From the.dead.shall.rise at gmail.com Fri Apr 24 19:29:52 2015 From: the.dead.shall.rise at gmail.com (Mikhail Glushenkov) Date: Fri, 24 Apr 2015 21:29:52 +0200 Subject: [Haskell-cafe] Timeout on pure code In-Reply-To: References: Message-ID: Hi, On 24 April 2015 at 21:25, Kostiantyn Rybnikov wrote: > I hope I'll have a chance to investigate why hslogger is so slow in future, > but meanwhile will just remove logging. Have you considered using fast-logger? 
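[An aside on the logging benchmark above: a minimal sketch of the same 2000-message loop written against fast-logger, as Mikhail suggests. The function names are fast-logger's public API; the message count is the figure from the thread, and the 5 kB payload is elided.]

```
import Control.Monad (forM_)
import System.Log.FastLogger (defaultBufSize, flushLogStr, newStdoutLoggerSet,
                              pushLogStrLn, toLogStr)

main :: IO ()
main = do
  logger <- newStdoutLoggerSet defaultBufSize          -- buffered, stdout-backed logger
  forM_ [1 :: Int .. 2000] $ \i ->
    pushLogStrLn logger (toLogStr ("message number " ++ show i))
  flushLogStr logger                                   -- drain the buffer before exit
```

[pushLogStrLn only appends to an in-process buffer; the final flushLogStr matters if the program exits immediately afterwards.]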
From k-bx at k-bx.com Fri Apr 24 19:38:43 2015 From: k-bx at k-bx.com (Kostiantyn Rybnikov) Date: Fri, 24 Apr 2015 22:38:43 +0300 Subject: [Haskell-cafe] Timeout on pure code In-Reply-To: References: Message-ID: Well I do now :) I'll also check out new but interesting haskell-logger [0] I just think that it would be a good idea to investigate hslogger because it seems that a lot of people are using it already, so providing an update that would upgrade their performance with no additional work would make them happy. [0]: https://github.com/wdanilo/haskell-logger On Fri, Apr 24, 2015 at 10:29 PM, Mikhail Glushenkov < the.dead.shall.rise at gmail.com> wrote: > Hi, > > On 24 April 2015 at 21:25, Kostiantyn Rybnikov wrote: > > I hope I'll have a chance to investigate why hslogger is so slow in > future, > > but meanwhile will just remove logging. > > Have you considered using fast-logger? > -------------- next part -------------- An HTML attachment was scrubbed... URL: From barak at cs.nuim.ie Fri Apr 24 20:22:38 2015 From: barak at cs.nuim.ie (Barak A. Pearlmutter) Date: Fri, 24 Apr 2015 21:22:38 +0100 Subject: [Haskell-cafe] Ord for partially ordered sets Message-ID: > I'm confused. What is supposed to be the result of `g1 <= g2` when `g1` and > `g2` are not comparable according to the partial order? Surely it would make things less confusing to simply follow existing precedent already in the standard prelude? $ ghci Prelude> let nan=0/0::Double Prelude> nan==nan False Prelude> compare 0 nan GT Prelude> compare nan 0 GT Prelude> compare nan nan GT Prelude> 0<=nan False Prelude> nan<=0 False Prelude> nan<=nan False Prelude> let infty=1/0::Double Prelude> infty <= nan False Prelude> nan <= infty False Perhaps not. From ertesx at gmx.de Fri Apr 24 20:26:20 2015 From: ertesx at gmx.de (Ertugrul =?utf-8?Q?S=C3=B6ylemez?=) Date: Fri, 24 Apr 2015 22:26:20 +0200 Subject: [Haskell-cafe] Class-like features for explicit arguments (was: Ord for partially ordered sets) In-Reply-To: References: Message-ID: > 3. NonStrictPoSet, which is the class of all partially ordered > set-like things, but without the requirement that a <= b and b <= a > implies a Equal b. Those are preorders. An antisymmetric preorder is a non-strict poset. Also it's difficult to capture all of those various order types in Haskell's class system. A type can have many orders and many underlying equivalence relations in the case of partial and total orders, and there are different ways to combine them. For example equality is a partial order, modular equivalence is a preorder, etc. Those denote bags and groups more than sequences or trees. Perhaps it's time to add a type class-like system to Haskell, but for explicitly passed arguments: record Relation a where related :: a -> a -> Bool unrelated :: a -> a -> Bool unrelated x y = not (related x y) func1 :: Relation A -> A -> A -> A func1 _ x y = ... related x y ... func2 :: Relation A -> Relation A -> A -> A -> A func2 r1 r2 x y = ... r1.related x y ... r2.unrelated x y ... In a lot of cases this is much more appropriate than a type class, and it would turn many things that are currently types into regular functions, thus making them a lot more composable: down :: Ord a -> Ord a down o = Ord { compare x y = o.compare y x } -- The remaining Ord functions are defaulted. Perhaps all we need is to generalise default definitions to data types and add module-like dot syntax for records (mainly to preserve infix operators). 
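[The explicitly-passed-dictionary half of this proposal can already be written with ordinary records. A minimal sketch of the `down` example above, using a made-up OrdDict record in place of the proposed `record` syntax:]

```
import Data.List (sortBy)

-- An explicit "Ord dictionary" as a plain record.
newtype OrdDict a = OrdDict { cmp :: a -> a -> Ordering }

-- The `down` combinator from the proposal, written against the record.
down :: OrdDict a -> OrdDict a
down o = OrdDict (\x y -> cmp o y x)

sortWith :: OrdDict a -> [a] -> [a]
sortWith o = sortBy (cmp o)

descending :: [Int]
descending = sortWith (down (OrdDict compare)) [3, 1, 2]   -- [3,2,1]
```

[What the proposal adds on top of this is the defaulting of omitted fields, the dot syntax for fields, and the class-like type-level features discussed next.]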
Formally speaking there is also little that prevents us from having associated types in those records that can be used on the type level. For actual record types (i.e. single-constructor types) we could even have specialisation and get a nice performance boost that way, if we ask for it: {-# SPECIALISE down someOrder :: Ord SomeType #-} This would be extremely useful. > 4. Things like above, but with the requirement of a Zero, with the > requirement of a One, and the requirement of both a Zero and a One. Zero and one as in minBound and maxBound or rather as in Monoid and a hypothetical Semiring? In the latter case I believe they don't really belong into an additional class, unless you have some ordering-related laws for the zeroes and ones. If not, you can always simply use an Ord+Semiring constraint. There may be some motivation to make Bounded a subclass of Ord though. Greets, Ertugrul -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From ertesx at gmx.de Fri Apr 24 20:32:00 2015 From: ertesx at gmx.de (Ertugrul =?utf-8?Q?S=C3=B6ylemez?=) Date: Fri, 24 Apr 2015 22:32:00 +0200 Subject: [Haskell-cafe] Ord for partially ordered sets In-Reply-To: <20150424131704.GK27512@weber> References: <20150424131704.GK27512@weber> Message-ID: > I'm confused. What is supposed to be the result of `g1 <= g2` when `g1` and > `g2` are not comparable according to the partial order? False. Greets, Ertugrul -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From mike at izbicki.me Fri Apr 24 23:39:20 2015 From: mike at izbicki.me (Mike Izbicki) Date: Fri, 24 Apr 2015 16:39:20 -0700 Subject: [Haskell-cafe] Ord for partially ordered sets In-Reply-To: References: <20150424131704.GK27512@weber> Message-ID: >> I'm confused. What is supposed to be the result of `g1 <= g2` when `g1` and >> `g2` are not comparable according to the partial order? > >False. The operators aren't a problem for this reason. The real problem is what does `compare` return? On Fri, Apr 24, 2015 at 1:32 PM, Ertugrul Söylemez wrote: >> I'm confused. What is supposed to be the result of `g1 <= g2` when `g1` and >> `g2` are not comparable according to the partial order? > > False. > > > Greets, > Ertugrul > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > From carter.schonwald at gmail.com Sat Apr 25 01:56:44 2015 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Fri, 24 Apr 2015 21:56:44 -0400 Subject: [Haskell-cafe] Class-like features for explicit arguments (was: Ord for partially ordered sets) In-Reply-To: References: Message-ID: hrm, wouldn't your proposed extension be largely accomplished by using Record pun and Record WildCards? eg {-# LANGUAGE RecordWildCards #-} {-# LANGUAGE RecordPuns #-} module Foo where data Relation a = Rel { related :: a -> a -> Bool, unrelated :: a -> a -> Bool } foo :: Relation A -> A -> A -> Bool foo Rel{..} x y = related x y ------ or am I overlooking something? I do realize this may not quite be what you're suggesting, and if so, could you help me understand better? :) On Fri, Apr 24, 2015 at 4:26 PM, Ertugrul Söylemez wrote: > > 3.
NonStrictPoSet, which is the class of all partially ordered > > set-like things, but without the requirement that a <= b and b <= a > > implies a Equal b. > > Those are preorders. An antisymmetric preorder is a non-strict poset. > > Also it's difficult to capture all of those various order types in > Haskell's class system. A type can have many orders and many underlying > equivalence relations in the case of partial and total orders, and there > are different ways to combine them. For example equality is a partial > order, modular equivalence is a preorder, etc. Those denote bags and > groups more than sequences or trees. > > Perhaps it's time to add a type class-like system to Haskell, but for > explicitly passed arguments: > > record Relation a where > related :: a -> a -> Bool > > unrelated :: a -> a -> Bool > unrelated x y = not (related x y) > > func1 :: Relation A -> A -> A -> A > func1 _ x y = ... related x y ... > > func2 :: Relation A -> Relation A -> A -> A -> A > func2 r1 r2 x y = ... r1.related x y ... r2.unrelated x y ... > > In a lot of cases this is much more appropriate than a type class, and > it would turn many things that are currently types into regular > functions, thus making them a lot more composable: > > down :: Ord a -> Ord a > down o = > Ord { compare x y = o.compare y x } > -- The remaining Ord functions are defaulted. > > Perhaps all we need is to generalise default definitions to data types > and add module-like dot syntax for records (mainly to preserve infix > operators). Formally speaking there is also little that prevents us > From having associated types in those records that can be used on the > type level. > > For actual record types (i.e. single-constructor types) we could even > have specialisation and get a nice performance boost that way, if we ask > for it: > > {-# SPECIALISE down someOrder :: Ord SomeType #-} > > This would be extremely useful. > > > > 4. Things like above, but with the requirement of a Zero, with the > > requirement of a One, and the requirement fo both a Zero and a One. > > Zero and one as in minBound and maxBound or rather as in Monoid and a > hypothetical Semiring? In the latter case I believe they don't really > belong into an additional class, unless you have some ordering-related > laws for the zeroes and ones. If not, you can always simply use an > Ord+Semiring constraint. > > There may be some motivation to make Bounded a subclass of Ord though. > > > Greets, > Ertugrul > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dvde at gmx.net Sat Apr 25 12:21:01 2015 From: dvde at gmx.net (Daniel van den Eijkel) Date: Sat, 25 Apr 2015 14:21:01 +0200 Subject: [Haskell-cafe] GHCi shows result of (IO a) only if (a) is in class Show Message-ID: <553B86AD.5030707@gmx.net> I wrote a parser and it took me a while to realize why GHCi suddenly did not show any result nor an error message anymore. My parsing function has type (IO Expression), Expression is in class Show. After changing the parser to (IO Declaration), it did not show anything anymore, because Declaration was not in class Show. When I typed (parseFile "input.txt" >>= print), I got the error message and understood what was going on. But for I while I was really confused what's happening. Just wanted to share this. 
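[An illustration of the behaviour Daniel describes, as a GHCi session; the error text is abbreviated and its exact wording varies between GHC versions:]

```
Prelude> data T = T
Prelude> return T :: IO T
Prelude> (return T :: IO T) >>= print

<interactive>:
    No instance for (Show T) arising from a use of 'print'
```

[The second command runs the action but prints nothing, because GHCi only shows the result of an IO action when its result type is in Show; piping the result through print, as Daniel did with parseFile, surfaces the missing instance as an error.]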
Best, Daniel From ertesx at gmx.de Sat Apr 25 12:51:13 2015 From: ertesx at gmx.de (Ertugrul =?utf-8?Q?S=C3=B6ylemez?=) Date: Sat, 25 Apr 2015 14:51:13 +0200 Subject: [Haskell-cafe] Class-like features for explicit arguments (was: Ord for partially ordered sets) In-Reply-To: References: Message-ID: > hrm, wouldn't your proposed extension be largely accomplished by using > Record pun and Record WildCards? A part of it would, but it wouldn't preserve operators. For example instead of `x r.<> y` you would have to write `(<>) r x y`. Also defaults are not available. Other class features are not accessible, most notably type-level features like associated types. The idea is that a record would be completely equivalent to a class with the only difference being that you define values instead of instances, that there are no constraints on which values can exist and that those values must be passed explicitly to functions as regular arguments. >> Perhaps it's time to add a type class-like system to Haskell, but for >> explicitly passed arguments: >> >> record Relation a where >> related :: a -> a -> Bool >> >> unrelated :: a -> a -> Bool >> unrelated x y = not (related x y) >> >> func1 :: Relation A -> A -> A -> A >> func1 _ x y = ... related x y ... >> >> func2 :: Relation A -> Relation A -> A -> A -> A >> func2 r1 r2 x y = ... r1.related x y ... r2.unrelated x y ... >> >> In a lot of cases this is much more appropriate than a type class, and >> it would turn many things that are currently types into regular >> functions, thus making them a lot more composable: >> >> down :: Ord a -> Ord a >> down o = >> Ord { compare x y = o.compare y x } >> -- The remaining Ord functions are defaulted. >> >> Perhaps all we need is to generalise default definitions to data types >> and add module-like dot syntax for records (mainly to preserve infix >> operators). Formally speaking there is also little that prevents us >> From having associated types in those records that can be used on the >> type level. >> >> For actual record types (i.e. single-constructor types) we could even >> have specialisation and get a nice performance boost that way, if we ask >> for it: >> >> {-# SPECIALISE down someOrder :: Ord SomeType #-} >> >> This would be extremely useful. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From mwm at mired.org Sat Apr 25 13:53:44 2015 From: mwm at mired.org (Mike Meyer) Date: Sat, 25 Apr 2015 08:53:44 -0500 Subject: [Haskell-cafe] Coplanarity or Colinearity [Was: low-cost matrix rank?] Message-ID: Well, none of the suggested solutions for computing the rank of a matrix really suited my needs, as dragging in something like BLAS introduce more cost than just integrating the bed-and-breakfast library into my own library. So let me try a different track. My real problem is that I've got a list of points in R3 and want to decide if they determine a plane, meaning they are coplanar but not colinear. Similarly, given a list of points in R2, I want to verify that they aren't colinear. Both of these can be done by converting the list of points to a matrix and finding the rank of the matrix, but I only use the rank function in the definitions of colinear and coplanar. Maybe there's an easier way to tackle the underlying problems. Anyone got a suggestion for such? -------------- next part -------------- An HTML attachment was scrubbed... 
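[For reference, the "low-cost matrix rank" sub-thread above asked for a rank function without heavyweight dependencies. A rough sketch of the Gaussian-elimination route Dennis suggested, on plain lists; the eps tolerance is an assumption and the code is not tuned for numerical robustness:]

```
import Data.List (maximumBy)
import Data.Ord (comparing)

-- Rank by Gaussian elimination with partial pivoting on plain lists.
rank :: [[Double]] -> Int
rank m
  | null m || any null m    = 0                        -- no rows or no columns left
  | abs (head pivot) <= eps = rank (map tail m)        -- first column is (near) zero
  | otherwise               = 1 + rank [ map tail (eliminate r) | r <- rest ]
  where
    eps         = 1e-9
    i           = fst (maximumBy (comparing (abs . head . snd)) (zip [0 :: Int ..] m))
    pivot       = m !! i
    rest        = [ r | (j, r) <- zip [0 :: Int ..] m, j /= i ]
    -- subtract the right multiple of the pivot row, zeroing r's leading entry
    eliminate r = zipWith (\p x -> x - (head r / head pivot) * p) pivot r
```

[For example, rank [[1,2,3],[2,4,6],[0,1,1]] evaluates to 2.]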
URL: From ertesx at gmx.de Sat Apr 25 14:12:55 2015 From: ertesx at gmx.de (Ertugrul =?utf-8?Q?S=C3=B6ylemez?=) Date: Sat, 25 Apr 2015 16:12:55 +0200 Subject: [Haskell-cafe] Coplanarity or Colinearity [Was: low-cost matrix rank?] In-Reply-To: References: Message-ID: > My real problem is that I've got a list of points in R3 and want to > decide if they determine a plane, meaning they are coplanar but not > colinear. Similarly, given a list of points in R2, I want to verify > that they aren't colinear. Both of these can be done by converting the > list of points to a matrix and finding the rank of the matrix, but I > only use the rank function in the definitions of colinear and > coplanar. > > Maybe there's an easier way to tackle the underlying problems. Anyone > got a suggestion for such? I have written an experimental [implementation] of a Gauss-Jordan solver and matrix inverter. You might find some use for it. It does work and is reasonably fast, though not as fast as hmatrix. One advantage is that you can feed the points incrementally, and it will tell you immediately when there is no solution. It will also quickly reject redundant points, even in the presence of rounding errors. Since I need it often enough I'm going to write a library for Gauss-Jordan at some point. [implementation]: http://hub.darcs.net/ertes-m/solvers/browse/Solver/Matrix.hs Greets, Ertugrul -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From carter.schonwald at gmail.com Sat Apr 25 14:16:09 2015 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Sat, 25 Apr 2015 10:16:09 -0400 Subject: [Haskell-cafe] Coplanarity or Colinearity [Was: low-cost matrix rank?] In-Reply-To: References: Message-ID: Yes, solving it directly is probably a better tact. I believe There's quite a bit of research literature on this out there in the computational geometry literature. Have you looked at the CGal c++ lib to check if they have any specialized code for low dimensional geoemtry? CGal or something like it is very likely to have what you want. Perhaps more importantly: what are your precision needs? Cause some of these questions have very real precision trade offs depending on your goals On Saturday, April 25, 2015, Mike Meyer wrote: > Well, none of the suggested solutions for computing the rank of a matrix > really suited my needs, as dragging in something like BLAS introduce more > cost than just integrating the bed-and-breakfast library into my own > library. So let me try a different track. > > My real problem is that I've got a list of points in R3 and want to > decide if they determine a plane, meaning they are coplanar but not > colinear. Similarly, given a list of points in R2, I want to verify that > they aren't colinear. Both of these can be done by converting the list of > points to a matrix and finding the rank of the matrix, but I only use the > rank function in the definitions of colinear and coplanar. > > Maybe there's an easier way to tackle the underlying problems. Anyone got > a suggestion for such? > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ertesx at gmx.de Sat Apr 25 14:20:37 2015 From: ertesx at gmx.de (Ertugrul =?utf-8?Q?S=C3=B6ylemez?=) Date: Sat, 25 Apr 2015 16:20:37 +0200 Subject: [Haskell-cafe] Coplanarity or Colinearity [Was: low-cost matrix rank?] 
In-Reply-To: References: Message-ID: >> My real problem is that I've got a list of points in R3 and want to >> decide if they determine a plane, meaning they are coplanar but not >> colinear. Similarly, given a list of points in R2, I want to verify >> that they aren't colinear. Both of these can be done by converting the >> list of points to a matrix and finding the rank of the matrix, but I >> only use the rank function in the definitions of colinear and >> coplanar. >> >> Maybe there's an easier way to tackle the underlying problems. Anyone >> got a suggestion for such? > > I have written an experimental [implementation] of a Gauss-Jordan solver > and matrix inverter. You might find some use for it. It does work and > is reasonably fast, though not as fast as hmatrix. One advantage is > that you can feed the points incrementally, and it will tell you > immediately when there is no solution. It will also quickly reject > redundant points, even in the presence of rounding errors. I should note: The `solve` function isn't yet written, but it also doesn't do much. Once you have fed enough relations, the matrix will already have been reduced to the identity, so you can simply extract the solutions from the relations. Greets, Ertugrul -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From roma at ro-che.info Sat Apr 25 14:24:37 2015 From: roma at ro-che.info (Roman Cheplyaka) Date: Sat, 25 Apr 2015 17:24:37 +0300 Subject: [Haskell-cafe] Class-like features for explicit arguments In-Reply-To: References: Message-ID: <553BA3A5.6060608@ro-che.info> On 25/04/15 15:51, Ertugrul S?ylemez wrote: >> hrm, wouldn't your proposed extension be largely accomplished by using >> Record pun and Record WildCards? > > A part of it would, but it wouldn't preserve operators. For example > instead of `x r.<> y` you would have to write `(<>) r x y`. Not at all. {-# LANGUAGE RecordWildCards #-} import Prelude hiding (sum) data Monoid a = Monoid { empty :: a, (<>) :: a -> a -> a } sum :: Num a => Monoid a sum = Monoid 0 (+) three :: Integer three = let Monoid{..} = sum in 1 <> 2 > Other class features are not accessible, > most notably type-level features like associated types. Associated types become additional type variables of the record type. A class class C a where type T a is essentially equivalent to class C a t | a -> t But the functional dependency is not enforceable on the value level (isn't the whole point of this discussion not to restrict what "instances" can be defined), so you end up with class C a t, a simple MPTC. > Also defaults are not available. Now this is a good point. > The idea is that a record would be completely equivalent to a class with > the only difference being that you define values instead of instances, > that there are no constraints on which values can exist and that those > values must be passed explicitly to functions as regular arguments. Except we already have regular records (aka data types) which satisfy 90% of the requirements, and adding another language construct to satisfy those remaining 10% feels wrong to me. I'd rather improve the existing construct. Roman -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From carter.schonwald at gmail.com Sat Apr 25 15:00:01 2015 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Sat, 25 Apr 2015 11:00:01 -0400 Subject: [Haskell-cafe] Coplanarity or Colinearity [Was: low-cost matrix rank?] In-Reply-To: References: Message-ID: Shewchuk has a good number of writings in this topic including this random one i found. page 9 appears to be the releavant one? http://www.cs.berkeley.edu/~jrs/meshpapers/robnotes.pdf On Sat, Apr 25, 2015 at 10:20 AM, Ertugrul S?ylemez wrote: > >> My real problem is that I've got a list of points in R3 and want to > >> decide if they determine a plane, meaning they are coplanar but not > >> colinear. Similarly, given a list of points in R2, I want to verify > >> that they aren't colinear. Both of these can be done by converting the > >> list of points to a matrix and finding the rank of the matrix, but I > >> only use the rank function in the definitions of colinear and > >> coplanar. > >> > >> Maybe there's an easier way to tackle the underlying problems. Anyone > >> got a suggestion for such? > > > > I have written an experimental [implementation] of a Gauss-Jordan solver > > and matrix inverter. You might find some use for it. It does work and > > is reasonably fast, though not as fast as hmatrix. One advantage is > > that you can feed the points incrementally, and it will tell you > > immediately when there is no solution. It will also quickly reject > > redundant points, even in the presence of rounding errors. > > I should note: The `solve` function isn't yet written, but it also > doesn't do much. Once you have fed enough relations, the matrix will > already have been reduced to the identity, so you can simply extract the > solutions from the relations. > > > Greets, > Ertugrul > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ertesx at gmx.de Sat Apr 25 15:12:47 2015 From: ertesx at gmx.de (Ertugrul =?utf-8?Q?S=C3=B6ylemez?=) Date: Sat, 25 Apr 2015 17:12:47 +0200 Subject: [Haskell-cafe] Class-like features for explicit arguments In-Reply-To: <553BA3A5.6060608@ro-che.info> References: <553BA3A5.6060608@ro-che.info> Message-ID: >>> hrm, wouldn't your proposed extension be largely accomplished by >>> using Record pun and Record WildCards? >> >> A part of it would, but it wouldn't preserve operators. For example >> instead of `x r.<> y` you would have to write `(<>) r x y`. > > Not at all. > > three :: Integer > three = > let Monoid{..} = sum in > 1 <> 2 Puns become tedious and error-prone as soon as you need to refer to multiple records, when operators are involved. But it's not that important actually. I can live with the current record syntax. The most useful features would be defaults, a more suitable syntax for defining record types and potentially the following: >> Other class features are not accessible, most notably type-level >> features like associated types. > > Associated types become additional type variables of the record type. Indeed. However, when the type follows from other type arguments, it would often be convenient not to spell it out and instead bring an associated type constructor into scope. This is especially true when the AT refers to a type that isn't used very often. 
record Extractor a where type Elem a extract :: a -> Maybe (Elem a, a) extractTwo :: (e1 : Extractor a) -> (e2 : Extractor a) -> a -> Maybe (e1.Elem a, e2.Elem a, a) extractTwo e1 e2 xs0 = do (x1, xs1) <- e1.extract xs0 (x2, xs2) <- e1.extract xs1 return (x1, x2, xs2) > But the functional dependency is not enforceable on the value level > (isn't the whole point of this discussion not to restrict what > "instances" can be defined), so you end up with > > class C a t, > > a simple MPTC. I don't see a reason to enforce a dependency, since there is no equivalent to instance resolution. Regular unification should cover any ambiguities, and if it doesn't you need ScopedTypeVariables. >> The idea is that a record would be completely equivalent to a class >> with the only difference being that you define values instead of >> instances, that there are no constraints on which values can exist >> and that those values must be passed explicitly to functions as >> regular arguments. > > Except we already have regular records (aka data types) which satisfy > 90% of the requirements, and adding another language construct to > satisfy those remaining 10% feels wrong to me. I'd rather improve the > existing construct. That's actually what I'm proposing. The record syntax would simply be syntactic sugar for single-constructor data types that is more suitable for records, especially when defaults and other class-like features are involved. Most notably it would support layout. There is no reason why you shouldn't be able to use `data` to achieve the same thing, except with a clumsier syntax and the option to have multiple constructors. Greets, Ertugrul -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From carter.schonwald at gmail.com Sat Apr 25 15:32:43 2015 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Sat, 25 Apr 2015 11:32:43 -0400 Subject: [Haskell-cafe] Class-like features for explicit arguments In-Reply-To: References: <553BA3A5.6060608@ro-che.info> Message-ID: Isn't your associated type here more like a dependent record field/ existential that we can kinda expose? This does seem to veer into first class module territory. Especially wrt needing first class types in a fashion. Have you had a chance to peruse the Andreas Rossberg 1ml paper on embedding first class modules into f omega that has been circulating? Perhaps there are ideas There that could be adapted. Especially since core is an augmented f omega On Saturday, April 25, 2015, Ertugrul S?ylemez wrote: > >>> hrm, wouldn't your proposed extension be largely accomplished by > >>> using Record pun and Record WildCards? > >> > >> A part of it would, but it wouldn't preserve operators. For example > >> instead of `x r.<> y` you would have to write `(<>) r x y`. > > > > Not at all. > > > > three :: Integer > > three = > > let Monoid{..} = sum in > > 1 <> 2 > > Puns become tedious and error-prone as soon as you need to refer to > multiple records, when operators are involved. But it's not that > important actually. I can live with the current record syntax. > > The most useful features would be defaults, a more suitable syntax for > defining record types and potentially the following: > > > >> Other class features are not accessible, most notably type-level > >> features like associated types. > > > > Associated types become additional type variables of the record type. > > Indeed. 
However, when the type follows from other type arguments, it > would often be convenient not to spell it out and instead bring an > associated type constructor into scope. This is especially true when > the AT refers to a type that isn't used very often. > > record Extractor a where > type Elem a > extract :: a -> Maybe (Elem a, a) > > extractTwo > :: (e1 : Extractor a) > -> (e2 : Extractor a) > -> a > -> Maybe (e1.Elem a, e2.Elem a, a) > extractTwo e1 e2 xs0 = do > (x1, xs1) <- e1.extract xs0 > (x2, xs2) <- e1.extract xs1 > return (x1, x2, xs2) > > > > But the functional dependency is not enforceable on the value level > > (isn't the whole point of this discussion not to restrict what > > "instances" can be defined), so you end up with > > > > class C a t, > > > > a simple MPTC. > > I don't see a reason to enforce a dependency, since there is no > equivalent to instance resolution. Regular unification should cover any > ambiguities, and if it doesn't you need ScopedTypeVariables. > > > >> The idea is that a record would be completely equivalent to a class > >> with the only difference being that you define values instead of > >> instances, that there are no constraints on which values can exist > >> and that those values must be passed explicitly to functions as > >> regular arguments. > > > > Except we already have regular records (aka data types) which satisfy > > 90% of the requirements, and adding another language construct to > > satisfy those remaining 10% feels wrong to me. I'd rather improve the > > existing construct. > > That's actually what I'm proposing. The record syntax would simply be > syntactic sugar for single-constructor data types that is more suitable > for records, especially when defaults and other class-like features are > involved. Most notably it would support layout. There is no reason why > you shouldn't be able to use `data` to achieve the same thing, except > with a clumsier syntax and the option to have multiple constructors. > > > Greets, > Ertugrul > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ertesx at gmx.de Sat Apr 25 15:43:03 2015 From: ertesx at gmx.de (Ertugrul =?utf-8?Q?S=C3=B6ylemez?=) Date: Sat, 25 Apr 2015 17:43:03 +0200 Subject: [Haskell-cafe] Class-like features for explicit arguments In-Reply-To: References: <553BA3A5.6060608@ro-che.info> Message-ID: > Isn't your associated type here more like a dependent record field/ > existential that we can kinda expose? Not quite. There is still a clear distinction between type and value level. You cannot refer to an AT on the value level or to a member value on the type level. > This does seem to veer into first class module territory. Especially > wrt needing first class types in a fashion. I think formally there is little difference between a powerful record system and a first-class module system. However, even in a non-dependent language a first class module could still expect a value argument. A record type couldn't do this without essentially making the language dependent on the way. > Have you had a chance to peruse the Andreas Rossberg 1ml paper on > embedding first class modules into f omega that has been circulating? > Perhaps there are ideas There that could be adapted. Especially since > core is an augmented f omega I haven't read it, sorry. My proposal should conform to the current core language, as it's mostly just a syntax transformation. The only new semantics would be defaults. 
-------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From jerzy.karczmarczuk at unicaen.fr Sat Apr 25 15:49:25 2015 From: jerzy.karczmarczuk at unicaen.fr (Jerzy Karczmarczuk) Date: Sat, 25 Apr 2015 17:49:25 +0200 Subject: [Haskell-cafe] Coplanarity or Colinearity [Was: low-cost matrix rank?] In-Reply-To: References: Message-ID: <553BB785.1060901@unicaen.fr> Many people help Mike Meyer: > My real problem is that I've got a list of points in R3 and want to > decide if they determine a plane, meaning they are coplanar but not > colinear. Similarly, given a list of points in R2, I want to verify > that they aren't colinear. Both of these can be done by converting the > list of points to a matrix and finding the rank of the matrix, but I > only use the rank function in the definitions of colinear and coplanar. > > Maybe there's an easier way to tackle the underlying problems. Anyone > got a suggestion for such? I didn't follow this discussion, so I might have missed some essential issues, I apologize then. But if THIS is the problem... All these powerful universal libraries, with several hundreds of lines of code are important and useful, but if the problem is to find whether a list of pairs (x,y) is collinear or not, I presume that such program as below could do. I am ashamed showing something like that... *colin ((x,y):l) = all (\(c,d)->abs(px*d-py*c) From mwm at mired.org Sat Apr 25 16:26:50 2015 From: mwm at mired.org (Mike Meyer) Date: Sat, 25 Apr 2015 11:26:50 -0500 Subject: [Haskell-cafe] Coplanarity or Colinearity [Was: low-cost matrix rank?] In-Reply-To: <553BB785.1060901@unicaen.fr> References: <553BB785.1060901@unicaen.fr> Message-ID: On Sat, Apr 25, 2015 at 10:49 AM, Jerzy Karczmarczuk < jerzy.karczmarczuk at unicaen.fr> wrote: > Many people help Mike Meyer: > And I do appreciate it. > My real problem is that I've got a list of points in R3 and want to > decide if they determine a plane, meaning they are coplanar but not > colinear. Similarly, given a list of points in R2, I want to verify that > they aren't colinear. Both of these can be done by converting the list of > points to a matrix and finding the rank of the matrix, but I only use the > rank function in the definitions of colinear and coplanar. > > Maybe there's an easier way to tackle the underlying problems. Anyone > got a suggestion for such? > > > I didn't follow this discussion, so I might have missed some essential > issues, I apologize then. But if THIS is the problem... > > All these powerful universal libraries, with several hundreds of lines of > code are important and useful, but if the problem is to find whether a list > of pairs (x,y) is collinear or not, I presume that such program as below > could do. I am ashamed showing something like that... > > > > *colin ((x,y):l) = all (\(c,d)->abs(px*d-py*c) [(ax-x,ay-y) | (ax,ay) <- l] * > [The iterated subtraction puts the first vector at the origin. eps is the > precision; better avoid ==0] > That's not far from what I wound up with, except I generalized it to work for both 2d and 3d vectors. And yeah, I clearly got off on the wrong foot when I turned up "test the rank of the matrix" for finding collinearity and coplanarity. This pretty much solves my problem. Thanks to all who helped. -------------- next part -------------- An HTML attachment was scrubbed... 
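[Jerzy's snippet, and its quote in Mike's reply, were partly eaten by the list's HTML scrubbing, so the code above is incomplete. What follows is a best-guess reconstruction of the same idea plus the 3-D test Mike says he generalised it to; the eps threshold and the assumption that the leading points are distinct (respectively non-colinear) are mine:]

```
eps :: Double
eps = 1e-9

-- 2-D: translate so the first point is the origin; the points are colinear
-- when every remaining vector has (near-)zero cross product with the first.
colin :: [(Double, Double)] -> Bool
colin ((x, y) : l) =
  case [ (ax - x, ay - y) | (ax, ay) <- l ] of
    (px, py) : q -> all (\(c, d) -> abs (px * d - py * c) < eps) q
    _            -> True
colin _ = True

cross :: (Double, Double, Double) -> (Double, Double, Double) -> (Double, Double, Double)
cross (a, b, c) (d, e, f) = (b * f - c * e, c * d - a * f, a * e - b * d)

dot :: (Double, Double, Double) -> (Double, Double, Double) -> Double
dot (a, b, c) (d, e, f) = a * d + b * e + c * f

-- 3-D: the points are coplanar when every translated vector is orthogonal
-- to the normal of the first two (assumed linearly independent) vectors.
coplanar :: [(Double, Double, Double)] -> Bool
coplanar ((x, y, z) : l) =
  case [ (ax - x, ay - y, az - z) | (ax, ay, az) <- l ] of
    u : v : rest -> all (\w -> abs (dot (cross u v) w) < eps) rest
    _            -> True
coplanar _ = True
```

[As in Jerzy's note, the initial subtraction puts the first point at the origin; for a full "determines a plane" check one would combine coplanar with the negation of the corresponding colinearity test.]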
URL: From carter.schonwald at gmail.com Sat Apr 25 16:39:13 2015 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Sat, 25 Apr 2015 12:39:13 -0400 Subject: [Haskell-cafe] Class-like features for explicit arguments In-Reply-To: References: <553BA3A5.6060608@ro-che.info> Message-ID: You miss apprehend. I'm saying that the first class modules encoding in that paper is expressible using a subset of ghc core. There IS the subtle issue that type class dictionaries / type classes have different sharing and strictness semantics than normal userland records. There's a space Of designs that could provide what you're asking for, and I do agree that it's worth exploring. I guess The main question is whether or not it turns into its own research engineering project. Likewise, to what extent aside from syntactic nicety does implicit parameters and the punning extensions not suffice? On Saturday, April 25, 2015, Ertugrul S?ylemez wrote: > > Isn't your associated type here more like a dependent record field/ > > existential that we can kinda expose? > > Not quite. There is still a clear distinction between type and value > level. You cannot refer to an AT on the value level or to a member > value on the type level. > > > > This does seem to veer into first class module territory. Especially > > wrt needing first class types in a fashion. > > I think formally there is little difference between a powerful record > system and a first-class module system. However, even in a > non-dependent language a first class module could still expect a value > argument. A record type couldn't do this without essentially making the > language dependent on the way. > > > > Have you had a chance to peruse the Andreas Rossberg 1ml paper on > > embedding first class modules into f omega that has been circulating? > > Perhaps there are ideas There that could be adapted. Especially since > > core is an augmented f omega > > I haven't read it, sorry. My proposal should conform to the current > core language, as it's mostly just a syntax transformation. The only > new semantics would be defaults. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ertesx at gmx.de Sat Apr 25 18:12:22 2015 From: ertesx at gmx.de (Ertugrul =?utf-8?Q?S=C3=B6ylemez?=) Date: Sat, 25 Apr 2015 20:12:22 +0200 Subject: [Haskell-cafe] Class-like features for explicit arguments In-Reply-To: References: <553BA3A5.6060608@ro-che.info> Message-ID: > Likewise, to what extent aside from syntactic nicety does implicit > parameters and the punning extensions not suffice? At the definition and instantiation sites I mostly miss defaults. At the application sites I would love to have specialisation for certain arguments. For example I would like to be able to tell GHC that I would like to have a version of my function `f` with a certain argument inlined. Note that I don't want to inline `f` itself. Rather I'd like to preapply certain arguments: f :: X -> Y -> Z {-# SPECIALISE f SomeX #-} {-# SPECIALISE f SomeOtherX #-} This would generate two specialised versions of `f` with exactly the given arguments inlined. That way I can get a very efficient `f` without having to inline it at the application sites. And as long as `f` is INLINABLE I can put those pragmas pretty much everywhere. I believe this is exactly what happens for type class dictionaries. This can (and probably should) be a separate feature though. 
For some of my applications I need to inline a huge chunk of code multiple times to compensate for the lack of this feature. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From carter.schonwald at gmail.com Sat Apr 25 18:21:20 2015 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Sat, 25 Apr 2015 14:21:20 -0400 Subject: [Haskell-cafe] Class-like features for explicit arguments In-Reply-To: References: <553BA3A5.6060608@ro-che.info> Message-ID: Could that specialization be accomplished today using eds reflection pkg? I guess not quite in terms of that pre apply pattern. This is interesting. And it's a good example of a larger problem of not enough support for composable specialization with good sharing across the use sites that doesn't require egregious Inlining. At least for code that isn't Type class driven. On Saturday, April 25, 2015, Ertugrul S?ylemez wrote: > > Likewise, to what extent aside from syntactic nicety does implicit > > parameters and the punning extensions not suffice? > > At the definition and instantiation sites I mostly miss defaults. At > the application sites I would love to have specialisation for certain > arguments. For example I would like to be able to tell GHC that I would > like to have a version of my function `f` with a certain argument > inlined. Note that I don't want to inline `f` itself. Rather I'd like > to preapply certain arguments: > > f :: X -> Y -> Z > > {-# SPECIALISE f SomeX #-} > {-# SPECIALISE f SomeOtherX #-} > > This would generate two specialised versions of `f` with exactly the > given arguments inlined. That way I can get a very efficient `f` > without having to inline it at the application sites. And as long as > `f` is INLINABLE I can put those pragmas pretty much everywhere. I > believe this is exactly what happens for type class dictionaries. > > This can (and probably should) be a separate feature though. For some > of my applications I need to inline a huge chunk of code multiple times > to compensate for the lack of this feature. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ertesx at gmx.de Sat Apr 25 18:43:23 2015 From: ertesx at gmx.de (Ertugrul =?utf-8?Q?S=C3=B6ylemez?=) Date: Sat, 25 Apr 2015 20:43:23 +0200 Subject: [Haskell-cafe] Class-like features for explicit arguments In-Reply-To: References: <553BA3A5.6060608@ro-che.info> Message-ID: > Could that specialization be accomplished today using eds reflection > pkg? I guess not quite in terms of that pre apply pattern. I'm not familiar with that package and couldn't find it on Hackage by a quick search. But I believe that it can only be done with compiler support, although with enough hackery you can probably get an ugly version of it using TH. > This is interesting. And it's a good example of a larger problem of > not enough support for composable specialization with good sharing > across the use sites that doesn't require egregious Inlining. At > least for code that isn't Type class driven. Indeed. Specialisation is a really good way to get very fast code without making your executable size explode. I believe that support for more fine-grained specialisation should and will improve. I'm not sure how to make it more composable though. >> At the definition and instantiation sites I mostly miss defaults. At >> the application sites I would love to have specialisation for certain >> arguments. 
For example I would like to be able to tell GHC that I >> would like to have a version of my function `f` with a certain >> argument inlined. Note that I don't want to inline `f` itself. >> Rather I'd like to preapply certain arguments: >> >> f :: X -> Y -> Z >> >> {-# SPECIALISE f SomeX #-} >> {-# SPECIALISE f SomeOtherX #-} >> >> This would generate two specialised versions of `f` with exactly the >> given arguments inlined. That way I can get a very efficient `f` >> without having to inline it at the application sites. And as long as >> `f` is INLINABLE I can put those pragmas pretty much everywhere. I >> believe this is exactly what happens for type class dictionaries. >> >> This can (and probably should) be a separate feature though. For some >> of my applications I need to inline a huge chunk of code multiple times >> to compensate for the lack of this feature. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From carter.schonwald at gmail.com Sat Apr 25 19:23:04 2015 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Sat, 25 Apr 2015 15:23:04 -0400 Subject: [Haskell-cafe] Class-like features for explicit arguments In-Reply-To: References: <553BA3A5.6060608@ro-che.info> Message-ID: Here yah go https://hackage.haskell.org/package/reflection It exploits how dictionary passing works in a pretty robust way that ghc is likely to at some point codify officially. On Saturday, April 25, 2015, Ertugrul S?ylemez wrote: > > Could that specialization be accomplished today using eds reflection > > pkg? I guess not quite in terms of that pre apply pattern. > > I'm not familiar with that package and couldn't find it on Hackage by a > quick search. But I believe that it can only be done with compiler > support, although with enough hackery you can probably get an ugly > version of it using TH. > > > > This is interesting. And it's a good example of a larger problem of > > not enough support for composable specialization with good sharing > > across the use sites that doesn't require egregious Inlining. At > > least for code that isn't Type class driven. > > Indeed. Specialisation is a really good way to get very fast code > without making your executable size explode. I believe that support for > more fine-grained specialisation should and will improve. I'm not sure > how to make it more composable though. > > > >> At the definition and instantiation sites I mostly miss defaults. At > >> the application sites I would love to have specialisation for certain > >> arguments. For example I would like to be able to tell GHC that I > >> would like to have a version of my function `f` with a certain > >> argument inlined. Note that I don't want to inline `f` itself. > >> Rather I'd like to preapply certain arguments: > >> > >> f :: X -> Y -> Z > >> > >> {-# SPECIALISE f SomeX #-} > >> {-# SPECIALISE f SomeOtherX #-} > >> > >> This would generate two specialised versions of `f` with exactly the > >> given arguments inlined. That way I can get a very efficient `f` > >> without having to inline it at the application sites. And as long as > >> `f` is INLINABLE I can put those pragmas pretty much everywhere. I > >> believe this is exactly what happens for type class dictionaries. > >> > >> This can (and probably should) be a separate feature though. For some > >> of my applications I need to inline a huge chunk of code multiple times > >> to compensate for the lack of this feature. 
> -------------- next part -------------- An HTML attachment was scrubbed... URL: From ertesx at gmx.de Sat Apr 25 20:06:02 2015 From: ertesx at gmx.de (Ertugrul =?utf-8?Q?S=C3=B6ylemez?=) Date: Sat, 25 Apr 2015 22:06:02 +0200 Subject: [Haskell-cafe] Class-like features for explicit arguments In-Reply-To: References: <553BA3A5.6060608@ro-che.info> Message-ID: > Here yah go https://hackage.haskell.org/package/reflection > It exploits how dictionary passing works in a pretty robust way that > ghc is likely to at some point codify officially. Oh, that one. Of course I'm familiar with it, but it's less an optimisation than a clever way to implement implicit configurations. >> > Could that specialization be accomplished today using eds reflection >> > pkg? I guess not quite in terms of that pre apply pattern. >> >> I'm not familiar with that package and couldn't find it on Hackage by a >> quick search. But I believe that it can only be done with compiler >> support, although with enough hackery you can probably get an ugly >> version of it using TH. >> >> >> > This is interesting. And it's a good example of a larger problem of >> > not enough support for composable specialization with good sharing >> > across the use sites that doesn't require egregious Inlining. At >> > least for code that isn't Type class driven. >> >> Indeed. Specialisation is a really good way to get very fast code >> without making your executable size explode. I believe that support for >> more fine-grained specialisation should and will improve. I'm not sure >> how to make it more composable though. >> >> >> >> At the definition and instantiation sites I mostly miss defaults. At >> >> the application sites I would love to have specialisation for certain >> >> arguments. For example I would like to be able to tell GHC that I >> >> would like to have a version of my function `f` with a certain >> >> argument inlined. Note that I don't want to inline `f` itself. >> >> Rather I'd like to preapply certain arguments: >> >> >> >> f :: X -> Y -> Z >> >> >> >> {-# SPECIALISE f SomeX #-} >> >> {-# SPECIALISE f SomeOtherX #-} >> >> >> >> This would generate two specialised versions of `f` with exactly the >> >> given arguments inlined. That way I can get a very efficient `f` >> >> without having to inline it at the application sites. And as long as >> >> `f` is INLINABLE I can put those pragmas pretty much everywhere. I >> >> believe this is exactly what happens for type class dictionaries. >> >> >> >> This can (and probably should) be a separate feature though. For some >> >> of my applications I need to inline a huge chunk of code multiple times >> >> to compensate for the lack of this feature. >> -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From carter.schonwald at gmail.com Sat Apr 25 20:12:56 2015 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Sat, 25 Apr 2015 16:12:56 -0400 Subject: [Haskell-cafe] Class-like features for explicit arguments In-Reply-To: References: <553BA3A5.6060608@ro-che.info> Message-ID: are you sure you've evaluated how it interacts with this sort of optimization? I think it actually gets you pretty far! On Sat, Apr 25, 2015 at 4:06 PM, Ertugrul S?ylemez wrote: > > Here yah go https://hackage.haskell.org/package/reflection > > It exploits how dictionary passing works in a pretty robust way that > > ghc is likely to at some point codify officially. > > Oh, that one. 
Of course I'm familiar with it, but it's less an > optimisation than a clever way to implement implicit configurations. > > > >> > Could that specialization be accomplished today using eds reflection > >> > pkg? I guess not quite in terms of that pre apply pattern. > >> > >> I'm not familiar with that package and couldn't find it on Hackage by a > >> quick search. But I believe that it can only be done with compiler > >> support, although with enough hackery you can probably get an ugly > >> version of it using TH. > >> > >> > >> > This is interesting. And it's a good example of a larger problem of > >> > not enough support for composable specialization with good sharing > >> > across the use sites that doesn't require egregious Inlining. At > >> > least for code that isn't Type class driven. > >> > >> Indeed. Specialisation is a really good way to get very fast code > >> without making your executable size explode. I believe that support for > >> more fine-grained specialisation should and will improve. I'm not sure > >> how to make it more composable though. > >> > >> > >> >> At the definition and instantiation sites I mostly miss defaults. At > >> >> the application sites I would love to have specialisation for certain > >> >> arguments. For example I would like to be able to tell GHC that I > >> >> would like to have a version of my function `f` with a certain > >> >> argument inlined. Note that I don't want to inline `f` itself. > >> >> Rather I'd like to preapply certain arguments: > >> >> > >> >> f :: X -> Y -> Z > >> >> > >> >> {-# SPECIALISE f SomeX #-} > >> >> {-# SPECIALISE f SomeOtherX #-} > >> >> > >> >> This would generate two specialised versions of `f` with exactly the > >> >> given arguments inlined. That way I can get a very efficient `f` > >> >> without having to inline it at the application sites. And as long as > >> >> `f` is INLINABLE I can put those pragmas pretty much everywhere. I > >> >> believe this is exactly what happens for type class dictionaries. > >> >> > >> >> This can (and probably should) be a separate feature though. For > some > >> >> of my applications I need to inline a huge chunk of code multiple > times > >> >> to compensate for the lack of this feature. > >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dct25-561bs at mythic-beasts.com Sat Apr 25 21:08:54 2015 From: dct25-561bs at mythic-beasts.com (David Turner) Date: Sat, 25 Apr 2015 22:08:54 +0100 Subject: [Haskell-cafe] Timeout on pure code In-Reply-To: References: Message-ID: Hi, I've had a look at this as we use hslogger too, so I'm keen to avoid this kind of performance issue. I threw a quick Criterion benchmark together: https://gist.github.com/DaveCTurner/f977123b4498c4c64569 The headline result on my test machine are that each log call takes ~540us, so 2000 should take about a second. Would be interested if you could run the same benchmark on your setup as it's possible that there's something else downstream that's causing you a problem. A couple of things that might be worth bearing in mind: if you're talking to syslog over /dev/log then that can block if the log daemon falls behind: unix datagram sockets don't drop datagrams when they're congested. If the /dev/log test is slow but the UDP test is fast then it could be that your syslog can't handle the load. 
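The linked gist is not reproduced here, but a benchmark of roughly this shape measures the per-call figure quoted above. For self-containment it writes to a file handler rather than to syslog; the log file name, logger name and message are placeholders.

```haskell
module Main (main) where

import Criterion.Main (bench, defaultMain, whnfIO)
import System.Log (Priority (INFO))
import System.Log.Handler.Simple (fileHandler)
import System.Log.Logger (infoM, rootLoggerName, setHandlers, setLevel, updateGlobalLogger)

main :: IO ()
main = do
  -- Route the root logger to a single explicit handler so the benchmark is
  -- self-contained; the gist above benchmarks syslog transports instead.
  h <- fileHandler "bench.log" INFO
  updateGlobalLogger rootLoggerName (setLevel INFO . setHandlers [h])
  defaultMain
    [ bench "hslogger infoM" (whnfIO (infoM "Main" "a typical log message")) ]
```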
I'm using rsyslogd and have enabled the feature that combines identical messages, so this test doesn't generate much disk IO and it keeps up easily, so the UDP and /dev/log tests run about equally fast for me. Is your syslog writing out every message? It may be flushing to disk after every message too, which would be terribly slow. If you're not logging to syslog, what's your hslogger config? Cheers, David On 24 April 2015 at 20:25, Kostiantyn Rybnikov wrote: > An update for everyone interested (and not). Turned out it's neither GHC > RTS, Snap or networking issues, it's hslogger being very slow. I thought > it's slow when used concurrently, but just did a test when it writes 2000 > 5kb messages sequentially and that finishes in 111 seconds (while minimal > program that writes same 2000 messages finishes in 0.12s). > > I hope I'll have a chance to investigate why hslogger is so slow in future, > but meanwhile will just remove logging. > > On Thu, Apr 23, 2015 at 4:08 PM, Kostiantyn Rybnikov wrote: >> >> All right, good news! >> >> After adding ekg, gathering its data via bosun and seeing nothing useful I >> actually figured out that I could try harder to reproduce issue by myself >> instead of waiting for users to do that. And I succeeded! :) >> >> So, after launching 20 infinite curl loops to that handler's url I was >> quickly able to reproduce the issue, so the task seems clear now: keep >> reducing the code, reproduce locally, possibly without external services >> etc. I'll write up after I get to something. >> >> Thanks. >> >> On Wed, Apr 22, 2015 at 11:09 PM, Gregory Collins >> wrote: >>> >>> Maybe but it would be helpful to rule the scenario out. Johan's ekg >>> library is also useful, it exports a webserver on a different port that you >>> can use to track metrics like gc times, etc. >>> >>> Other options for further debugging include gathering strace logs from >>> the binary. You'll have to do some data gathering to narrow down the cause >>> unfortunately -- http client? your code? Snap server? GHC event manager >>> (System.timeout is implemented here)? GC? etc >>> >>> G >>> >>> On Wed, Apr 22, 2015 at 10:14 AM, Kostiantyn Rybnikov >>> wrote: >>>> >>>> Gregory, >>>> >>>> Servers are far from being highly-overloaded, since they're currently >>>> under a much less load they used to be. Memory consumption is stable and >>>> low, and there's a lot of free RAM also. >>>> >>>> Would you say that given these factors this scenario is unlikely? >>>> >>>> On Wed, Apr 22, 2015 at 7:56 PM, Gregory Collins >>>> wrote: >>>>> >>>>> Given your gist, the timeout on your requests is set to a half-second >>>>> so it's conceivable that a highly-loaded server might have GC pause times >>>>> approaching that long. Smells to me like a classic Haskell memory leak >>>>> (that's why the problem occurs after the server has been up for a while): >>>>> run your program with the heap profiler, and audit any shared >>>>> tables/IORefs/MVars to make sure you are not building up thunks there. >>>>> >>>>> Greg >>>>> >>>>> On Wed, Apr 22, 2015 at 9:14 AM, Kostiantyn Rybnikov >>>>> wrote: >>>>>> >>>>>> Hi! >>>>>> >>>>>> Our company's main commercial product is a Snap-based web app which we >>>>>> compile with GHC 7.8.4. It works on four app-servers currently load-balanced >>>>>> behind Haproxy. >>>>>> >>>>>> I recently implemented a new piece of functionality, which led to >>>>>> weird behavior which I have no idea how to debug, so I'm asking here for >>>>>> help and ideas! 
>>>>>> >>>>>> The new functionality is this: on specific url-handler, we need to >>>>>> query n external services concurrently with a timeout, gather and render >>>>>> results. Easy (in Haskell)! >>>>>> >>>>>> The implementation looks, as you might imagine, something like this >>>>>> (sorry for almost-real-haskell, I'm sure I forgot tons of imports and other >>>>>> things, but I hope everything is clear as-is, if not -- I'll be glad to >>>>>> update gist to make things more specific): >>>>>> >>>>>> https://gist.github.com/k-bx/0cf7035aaf1ad6306e76 >>>>>> >>>>>> Now, this works wonderful for some time, and in logs I can see both, >>>>>> successful fetches of external-content, and also lots of timeouts from our >>>>>> external providers. Life is good. >>>>>> >>>>>> But! After several days of work (sometimes a day, sometimes couple >>>>>> days), apps on all 4 servers go crazy. It might take some interval (like 20 >>>>>> minutes) before they're all crazy, so it's not super-synchronous. Now: how >>>>>> crazy, exactly? >>>>>> >>>>>> First of all, this endpoint timeouts. Haproxy requests for a response, >>>>>> and response times out, so they "hang". >>>>>> >>>>>> Secondly, logs are interesting. If you look at the code from gist once >>>>>> again, you can see, that some of CandidateProvider's don't actually require >>>>>> any networking work, so all they do is actually just logging that they're >>>>>> working (I added this as part of debugging actually) and return pure data. >>>>>> So what's weird is that they timeout also! Here's how output of our logs >>>>>> starts to look like after the bug happens: >>>>>> >>>>>> ``` >>>>>> [2015-04-22 09:56:20] provider: CandidateProvider1 >>>>>> [2015-04-22 09:56:20] provider: CandidateProvider2 >>>>>> [2015-04-22 09:56:21] Got timeout while requesting CandidateProvider1 >>>>>> [2015-04-22 09:56:21] Got timeout while requesting CandidateProvider2 >>>>>> [2015-04-22 09:56:22] provider: CandidateProvider1 >>>>>> [2015-04-22 09:56:22] provider: CandidateProvider2 >>>>>> [2015-04-22 09:56:23] Got timeout while requesting CandidateProvider1 >>>>>> [2015-04-22 09:56:23] Got timeout while requesting CandidateProvider2 >>>>>> ... and so on >>>>>> ``` >>>>>> >>>>>> What's also weird is that, even after timeout is logged, the string >>>>>> ""Got responses!" never gets logged also! So hanging happens somewhere >>>>>> in-between. >>>>>> >>>>>> I have to say I'm sorry that I don't have strace output now, I'll have >>>>>> to wait until this situation happens once again, but I'll get later to you >>>>>> with this info. >>>>>> >>>>>> So, how is this possible that almost-pure code gets timed-out? And why >>>>>> does it hang afterwards? >>>>>> >>>>>> CPU and other resource usage is quite low, number of open >>>>>> file-descriptors also (it seems). >>>>>> >>>>>> Thanks for all the suggestions in advance! 
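The pattern described in the quoted message, querying each provider concurrently with each request under its own timeout, looks roughly like this. The Provider type, the half-second figure and the lack of error handling are simplifications; the real code is in the gist linked above.

```haskell
import Control.Concurrent.Async (mapConcurrently)
import System.Timeout (timeout)

-- A provider is just a name plus an action that fetches its content.
data Provider = Provider { providerName :: String, fetch :: IO String }

-- Run all fetches concurrently; Nothing marks a provider that timed out.
queryAll :: [Provider] -> IO [Maybe String]
queryAll = mapConcurrently $ \p ->
  timeout 500000 (fetch p)   -- 500 ms per provider, in microseconds
```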
>>>>>> >>>>>> _______________________________________________ >>>>>> Haskell-Cafe mailing list >>>>>> Haskell-Cafe at haskell.org >>>>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >>>>>> >>>>> >>>>> >>>>> >>>>> -- >>>>> Gregory Collins >>>> >>>> >>> >>> >>> >>> -- >>> Gregory Collins >> >> > > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > From ertesx at gmx.de Sat Apr 25 21:58:18 2015 From: ertesx at gmx.de (Ertugrul =?utf-8?Q?S=C3=B6ylemez?=) Date: Sat, 25 Apr 2015 23:58:18 +0200 Subject: [Haskell-cafe] Class-like features for explicit arguments In-Reply-To: References: <553BA3A5.6060608@ro-che.info> Message-ID: > are you sure you've evaluated how it interacts with this sort of > optimization? I think it actually gets you pretty far! Most of the magic occurs in reification. The trouble is that it expects a polymorphic function of rank 2 with a nontrivial context (Reifies), a function that by construction cannot be specialised, unless you specialise the receiving function (reify) for every application case. The beauty of reflection is that it's free by virtue of sharing. Unfortunately sharing is the exact opposite of inlining. According to a benchmark I've done a few months ago it behaves exactly as if the reflected value was just a shared argument with no inlining performed; an expected and reasonable result. What I'm really after is a sort of controlled inlining. That's pretty much what instance-based specialisation currently does for dictionaries. Technically a dictionary is just another argument, so there is no fundamental reason why we shouldn't have a more general specialiser. >> > Here yah go https://hackage.haskell.org/package/reflection >> > It exploits how dictionary passing works in a pretty robust way that >> > ghc is likely to at some point codify officially. >> >> Oh, that one. Of course I'm familiar with it, but it's less an >> optimisation than a clever way to implement implicit configurations. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From lambda.fairy at gmail.com Sat Apr 25 22:08:26 2015 From: lambda.fairy at gmail.com (Chris Wong) Date: Sun, 26 Apr 2015 10:08:26 +1200 Subject: [Haskell-cafe] How to compile git-annex? In-Reply-To: References: Message-ID: On Apr 26, 2015 10:01 AM, "Peng Yu" wrote: > > I clean up all Haskell related things in my home directory. > > rm -rf ~/.cabal > rm -rf ~/Library/Haskell > rm -rf ~.ghc/ Is this a typo? You should have removed ~/.ghc, not ~.ghc. > cabal update > cabal install cabal-install > > > Then I try to install git-annex again, which shows me the following. > What I should do next? > > ~$ cabal install git-annex > Resolving dependencies... 
> In order, the following would be installed: > HUnit-1.2.5.2 (new package) > SafeSemaphore-0.10.1 (new package) > ansi-terminal-0.6.2.1 (new package) > ansi-wl-pprint-0.6.7.2 (new package) > appar-0.1.4 (new package) > async-2.0.2 (new package) > auto-update-0.1.2.1 (new package) > base16-bytestring-0.1.1.6 (new package) > base64-bytestring-1.0.0.1 (new package) > blaze-builder-0.4.0.1 (new package) > blaze-markup-0.7.0.2 (new package) > blaze-html-0.8.0.2 (new package) > bloomfilter-2.0.0.0 (new package) > byteable-0.1.1 (new package) > byteorder-1.0.4 (new package) > bytestring-builder-0.10.6.0.0 (new package) > cereal-0.4.1.1 (new package) > clock-0.4.5.0 (new package) > cryptohash-0.11.6 (new package) > data-default-class-0.0.1 (new package) > data-default-instances-base-0.0.1 (new package) > data-default-instances-containers-0.0.1 (new package) > data-default-instances-old-locale-0.0.1 (new package) > dataenc-0.14.0.7 (new package) > dlist-0.7.1.1 (new package) > data-default-instances-dlist-0.0.1 (new package) > data-default-0.5.3 (new package) > cookie-0.4.1.4 (new package) > easy-file-0.2.0 (new package) > edit-distance-0.2.1.2 (new package) > entropy-0.3.6 (new package) > fast-logger-2.3.1 (new package) > file-embed-0.0.8.2 (new package) > gnuidn-0.2.1 (new package) > hashable-1.2.3.2 (new package) > case-insensitive-1.2.0.4 (new package) > hourglass-0.2.9 (new package) > asn1-types-0.3.0 (new package) > http-types-0.8.6 (new package) > iproute-1.4.0 (new package) > mime-mail-0.4.8.2 (new package) > mime-types-0.1.0.6 (new package) > monad-loops-0.4.2.1 (new package) > nats-1 (new package) > network-info-0.2.0.5 (new package) > network-multicast-0.0.11 (new package) > parallel-3.2.0.6 (new package) > path-pieces-0.2.0 (new package) > prelude-extras-0.4 (new package) > reflection-1.5.1.2 (new package) > safe-0.3.8 (new package) > scientific-0.3.3.8 (new package) > attoparsec-0.12.1.6 (new package) > css-text-0.1.2.1 (new package) > email-validate-2.1.1 (new package) > http-date-0.0.6 (new package) > securemem-0.1.7 (new package) > crypto-cipher-types-0.0.9 (new package) > cipher-aes-0.2.10 (new package) > cipher-des-0.0.6 (new package) > cipher-rc4-0.1.4 (new package) > setenv-0.1.1.3 (new package) > silently-1.2.4.1 (new package) > simple-sendfile-0.2.18 (new package) > socks-0.5.4 (new package) > stm-chans-3.0.0.3 (new package) > stringsearch-0.3.6.6 (new package) > system-filepath-0.4.13.3 (new package) > system-fileio-0.3.16.2 (new package) > tagged-0.8.0.1 (new package) > tagsoup-0.13.3 (new package) > transformers-0.4.3.0 (new version) > StateVar-1.1.0.0 (new package) > crypto-api-0.13.2 (new package) > gsasl-0.3.5 (new package) > mmorph-1.0.4 (new package) > monads-tf-0.1.0.2 (new package) > gnutls-0.1.5 (new package) > mtl-2.2.1 (new version) > IfElse-0.85 (new package) > asn1-encoding-0.9.0 (new package) > asn1-parse-0.9.0 (new package) > crypto-pubkey-types-0.4.3 (new package) > hfsevents-0.1.5 (new package) > hslogger-1.2.8 (new package) > parsec-3.1.9 (reinstall) changes: mtl-2.1.3.1 -> 2.2.1 > bencode-0.5.0.1 (new package) > json-0.9.1 (new package) > network-uri-2.6.0.1 (reinstall) > pem-0.2.2 (new package) > primitive-0.6 (new package) > regex-base-0.93.2 (new package) > regex-posix-0.95.2 (new package) > regex-compat-0.95.1 (new package) > MissingH-1.3.0.1 (new package) > regex-tdfa-1.2.0 (new package) > skein-1.0.9.3 (new package) > streaming-commons-0.1.12 (new package) > torrent-10000.0.0 (new package) > transformers-compat-0.4.0.4 (new package) > 
MonadRandom-0.3.0.2 (new package) > distributive-0.4.4 (new package) > exceptions-0.8.0.2 (new package) > optparse-applicative-0.11.0.2 (new package) > transformers-base-0.4.4 (new package) > monad-control-1.0.0.4 (new package) > lifted-base-0.2.3.6 (new package) > enclosed-exceptions-1.0.1.1 (new package) > resourcet-1.1.4.1 (new package) > unix-compat-0.4.1.4 (new package) > unix-time-0.3.5 (new package) > unordered-containers-0.2.5.1 (new package) > semigroups-0.16.2.2 (new package) > utf8-string-1 (new package) > language-javascript-0.5.13.3 (new package) > hjsmin-0.1.4.7 (new package) > publicsuffixlist-0.1 (new package) > http-client-0.4.11.1 (new package) > uuid-types-1.0.1 (new package) > uuid-1.3.10 (new package) > vault-0.3.0.4 (new package) > vector-0.10.12.3 (new package) > aeson-0.8.0.2 +old-locale (new package) > crypto-random-0.0.9 (new package) > cprng-aes-0.6.1 (new package) > clientsession-0.9.1.1 (new package) > crypto-numbers-0.2.7 (new package) > crypto-pubkey-0.2.8 (new package) > mwc-random-0.13.3.2 (new package) > resource-pool-0.2.3.2 (new package) > shakespeare-2.0.4.1 (new package) > hamlet-1.2.0 (new package) > void-0.7 (new package) > conduit-1.2.4 (new package) > conduit-extra-1.1.7.2 (new package) > contravariant-1.3.1 (new package) > comonad-4.2.5 (new package) > cryptohash-conduit-0.1.1 (new package) > dns-1.4.5 (new package) > monad-logger-0.3.13.1 (new package) > persistent-2.1.3 (new package) > esqueleto-2.1.3 (new package) > persistent-sqlite-2.1.4.1 (new package) > persistent-template-2.1.3 (new package) > semigroupoids-4.3 (new package) > bifunctors-4.2.1 (new package) > profunctors-4.4.1 (new package) > free-4.11 (new package) > adjunctions-4.2 (new package) > either-4.3.3.2 (new package) > errors-1.4.7 (new package) > kan-extensions-4.2.1 (new package) > lens-4.9.1 (new package) > wai-3.0.2.3 (new package) > wai-logger-2.2.4 (new package) > warp-3.0.12.1 (new package) > word8-0.1.2 (new package) > wai-extra-3.0.7.1 (new package) > wai-app-static-3.0.1 (new package) > x509-1.5.0.1 (new package) > x509-store-1.5.0 (new package) > x509-system-1.5.0 (new package) > x509-validation-1.5.1 (new package) > tls-1.2.17 (new package) > connection-0.2.4 (new package) > http-client-tls-0.2.2 (new package) > http-conduit-2.1.5 (new package) > warp-tls-3.0.3 (new package) > xml-types-0.3.4 (new package) > libxml-sax-0.7.5 (new package) > network-protocol-xmpp-0.4.6 (new package) > xml-conduit-1.2.4 (new package) > aws-0.11.4 (new package) > tagstream-conduit-0.5.5.3 (new package) > authenticate-1.3.2.11 (new package) > xml-hamlet-0.4.0.10 (new package) > DAV-1.0.4 (new package) > xss-sanitize-0.3.5.5 (new package) > yaml-0.8.11 (new package) > yesod-core-1.4.9.1 (new package) > yesod-default-1.2.0 (new package) > yesod-persistent-1.4.0.2 (new package) > yesod-form-1.4.4.1 (new package) > yesod-auth-1.4.4 (new package) > yesod-1.4.1.5 (new package) > yesod-static-1.4.0.4 (new package) > git-annex-5.20150420 -testsuite -feed (new package) > cabal: The following packages are likely to be broken by the reinstalls: > HTTP-4000.2.19 > Use --force-reinstalls if you want to install anyway. > > > On Sat, Apr 25, 2015 at 4:18 PM, Brandon Allbery wrote: > > > > On Sat, Apr 25, 2015 at 3:45 PM, Peng Yu wrote: > >> > >> Why cabal install print so many irrelevant messages? Is it better to > >> follow Unix "Rule of Silence" to only print usage error messages? 
> > > > > > This is not an irrelevant message: > > > >> asn1-parse-0.9.0 (reinstall) changes: text-1.2.0.4 added > > > > > > It's an indication that things are about to go very wrong, as indeed they > > did. In fact, the errors you got indicate quite a lot of problems with your > > Haskell installation; you apparently have a bunch of broken packages. > > > > -- > > brandon s allbery kf8nh sine nomine associates > > allbery.b at gmail.com ballbery at sinenomine.net > > unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net > > > > -- > Regards, > Peng > _______________________________________________ > cabal-devel mailing list > cabal-devel at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/cabal-devel -- Chris Wong https://lambda.xyz -------------- next part -------------- An HTML attachment was scrubbed... URL: From carter.schonwald at gmail.com Sat Apr 25 22:17:44 2015 From: carter.schonwald at gmail.com (Carter Schonwald) Date: Sat, 25 Apr 2015 18:17:44 -0400 Subject: [Haskell-cafe] Class-like features for explicit arguments In-Reply-To: References: <553BA3A5.6060608@ro-che.info> Message-ID: Ok cool. Sounds like we agree. Luite and I talked about something similar a few months ago, I'll try to dig up my notes and maybe we can turn this into a ghc feature request or patch On Saturday, April 25, 2015, Ertugrul S?ylemez wrote: > > are you sure you've evaluated how it interacts with this sort of > > optimization? I think it actually gets you pretty far! > > Most of the magic occurs in reification. The trouble is that it expects > a polymorphic function of rank 2 with a nontrivial context (Reifies), a > function that by construction cannot be specialised, unless you > specialise the receiving function (reify) for every application case. > The beauty of reflection is that it's free by virtue of sharing. > Unfortunately sharing is the exact opposite of inlining. > > According to a benchmark I've done a few months ago it behaves exactly > as if the reflected value was just a shared argument with no inlining > performed; an expected and reasonable result. > > What I'm really after is a sort of controlled inlining. That's pretty > much what instance-based specialisation currently does for dictionaries. > Technically a dictionary is just another argument, so there is no > fundamental reason why we shouldn't have a more general specialiser. > > > >> > Here yah go https://hackage.haskell.org/package/reflection > >> > It exploits how dictionary passing works in a pretty robust way that > >> > ghc is likely to at some point codify officially. > >> > >> Oh, that one. Of course I'm familiar with it, but it's less an > >> optimisation than a clever way to implement implicit configurations. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From k-bx at k-bx.com Sat Apr 25 22:44:21 2015 From: k-bx at k-bx.com (Kostiantyn Rybnikov) Date: Sun, 26 Apr 2015 01:44:21 +0300 Subject: [Haskell-cafe] Timeout on pure code In-Reply-To: References: Message-ID: Hi David. I planned to create a detailed bug-report at hslogger's issues to start investigation there (as a better place) on Monday, but since I have code prepared already, it's easy to share it right now: https://gist.github.com/k-bx/ccf6fd1c73680c8a4345 I'm launching it as: time ./dist/build/seq/seq &> /dev/null We don't use syslog driver, instead we have a separate file-to-syslog worker to decouple these components. 
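A program of roughly this shape reproduces the sequential test described above, 2000 messages of about 5 kB each; it is not the linked gist, and the logger name and message size are illustrative.

```haskell
module Main (main) where

import Control.Monad (replicateM_)
import System.Log (Priority (INFO))
import System.Log.Logger (infoM, rootLoggerName, setLevel, updateGlobalLogger)

main :: IO ()
main = do
  -- Lower the root logger level so INFO messages reach the default handler.
  updateGlobalLogger rootLoggerName (setLevel INFO)
  let msg = replicate 5000 'x'          -- roughly a 5 kB payload
  replicateM_ 2000 (infoM "Main" msg)   -- 2000 messages, written sequentially
```

Run it under `time` with output discarded, as in the command above, to compare against a plain putStrLn loop of the same size.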
On Sun, Apr 26, 2015 at 12:08 AM, David Turner < dct25-561bs at mythic-beasts.com> wrote: > Hi, > > I've had a look at this as we use hslogger too, so I'm keen to avoid > this kind of performance issue. I threw a quick Criterion benchmark > together: > > https://gist.github.com/DaveCTurner/f977123b4498c4c64569 > > The headline result on my test machine are that each log call takes > ~540us, so 2000 should take about a second. Would be interested if you > could run the same benchmark on your setup as it's possible that > there's something else downstream that's causing you a problem. > > A couple of things that might be worth bearing in mind: if you're > talking to syslog over /dev/log then that can block if the log daemon > falls behind: unix datagram sockets don't drop datagrams when they're > congested. If the /dev/log test is slow but the UDP test is fast then > it could be that your syslog can't handle the load. > > I'm using rsyslogd and have enabled the feature that combines > identical messages, so this test doesn't generate much disk IO and it > keeps up easily, so the UDP and /dev/log tests run about equally fast > for me. Is your syslog writing out every message? It may be flushing > to disk after every message too, which would be terribly slow. > > If you're not logging to syslog, what's your hslogger config? > > Cheers, > > David > > > On 24 April 2015 at 20:25, Kostiantyn Rybnikov wrote: > > An update for everyone interested (and not). Turned out it's neither GHC > > RTS, Snap or networking issues, it's hslogger being very slow. I thought > > it's slow when used concurrently, but just did a test when it writes 2000 > > 5kb messages sequentially and that finishes in 111 seconds (while minimal > > program that writes same 2000 messages finishes in 0.12s). > > > > I hope I'll have a chance to investigate why hslogger is so slow in > future, > > but meanwhile will just remove logging. > > > > On Thu, Apr 23, 2015 at 4:08 PM, Kostiantyn Rybnikov > wrote: > >> > >> All right, good news! > >> > >> After adding ekg, gathering its data via bosun and seeing nothing > useful I > >> actually figured out that I could try harder to reproduce issue by > myself > >> instead of waiting for users to do that. And I succeeded! :) > >> > >> So, after launching 20 infinite curl loops to that handler's url I was > >> quickly able to reproduce the issue, so the task seems clear now: keep > >> reducing the code, reproduce locally, possibly without external services > >> etc. I'll write up after I get to something. > >> > >> Thanks. > >> > >> On Wed, Apr 22, 2015 at 11:09 PM, Gregory Collins > >> wrote: > >>> > >>> Maybe but it would be helpful to rule the scenario out. Johan's ekg > >>> library is also useful, it exports a webserver on a different port > that you > >>> can use to track metrics like gc times, etc. > >>> > >>> Other options for further debugging include gathering strace logs from > >>> the binary. You'll have to do some data gathering to narrow down the > cause > >>> unfortunately -- http client? your code? Snap server? GHC event manager > >>> (System.timeout is implemented here)? GC? etc > >>> > >>> G > >>> > >>> On Wed, Apr 22, 2015 at 10:14 AM, Kostiantyn Rybnikov > >>> wrote: > >>>> > >>>> Gregory, > >>>> > >>>> Servers are far from being highly-overloaded, since they're currently > >>>> under a much less load they used to be. Memory consumption is stable > and > >>>> low, and there's a lot of free RAM also. 
> >>>> > >>>> Would you say that given these factors this scenario is unlikely? > >>>> > >>>> On Wed, Apr 22, 2015 at 7:56 PM, Gregory Collins > >>>> wrote: > >>>>> > >>>>> Given your gist, the timeout on your requests is set to a half-second > >>>>> so it's conceivable that a highly-loaded server might have GC pause > times > >>>>> approaching that long. Smells to me like a classic Haskell memory > leak > >>>>> (that's why the problem occurs after the server has been up for a > while): > >>>>> run your program with the heap profiler, and audit any shared > >>>>> tables/IORefs/MVars to make sure you are not building up thunks > there. > >>>>> > >>>>> Greg > >>>>> > >>>>> On Wed, Apr 22, 2015 at 9:14 AM, Kostiantyn Rybnikov > >>>>> wrote: > >>>>>> > >>>>>> Hi! > >>>>>> > >>>>>> Our company's main commercial product is a Snap-based web app which > we > >>>>>> compile with GHC 7.8.4. It works on four app-servers currently > load-balanced > >>>>>> behind Haproxy. > >>>>>> > >>>>>> I recently implemented a new piece of functionality, which led to > >>>>>> weird behavior which I have no idea how to debug, so I'm asking > here for > >>>>>> help and ideas! > >>>>>> > >>>>>> The new functionality is this: on specific url-handler, we need to > >>>>>> query n external services concurrently with a timeout, gather and > render > >>>>>> results. Easy (in Haskell)! > >>>>>> > >>>>>> The implementation looks, as you might imagine, something like this > >>>>>> (sorry for almost-real-haskell, I'm sure I forgot tons of imports > and other > >>>>>> things, but I hope everything is clear as-is, if not -- I'll be > glad to > >>>>>> update gist to make things more specific): > >>>>>> > >>>>>> https://gist.github.com/k-bx/0cf7035aaf1ad6306e76 > >>>>>> > >>>>>> Now, this works wonderful for some time, and in logs I can see both, > >>>>>> successful fetches of external-content, and also lots of timeouts > from our > >>>>>> external providers. Life is good. > >>>>>> > >>>>>> But! After several days of work (sometimes a day, sometimes couple > >>>>>> days), apps on all 4 servers go crazy. It might take some interval > (like 20 > >>>>>> minutes) before they're all crazy, so it's not super-synchronous. > Now: how > >>>>>> crazy, exactly? > >>>>>> > >>>>>> First of all, this endpoint timeouts. Haproxy requests for a > response, > >>>>>> and response times out, so they "hang". > >>>>>> > >>>>>> Secondly, logs are interesting. If you look at the code from gist > once > >>>>>> again, you can see, that some of CandidateProvider's don't actually > require > >>>>>> any networking work, so all they do is actually just logging that > they're > >>>>>> working (I added this as part of debugging actually) and return > pure data. > >>>>>> So what's weird is that they timeout also! Here's how output of our > logs > >>>>>> starts to look like after the bug happens: > >>>>>> > >>>>>> ``` > >>>>>> [2015-04-22 09:56:20] provider: CandidateProvider1 > >>>>>> [2015-04-22 09:56:20] provider: CandidateProvider2 > >>>>>> [2015-04-22 09:56:21] Got timeout while requesting > CandidateProvider1 > >>>>>> [2015-04-22 09:56:21] Got timeout while requesting > CandidateProvider2 > >>>>>> [2015-04-22 09:56:22] provider: CandidateProvider1 > >>>>>> [2015-04-22 09:56:22] provider: CandidateProvider2 > >>>>>> [2015-04-22 09:56:23] Got timeout while requesting > CandidateProvider1 > >>>>>> [2015-04-22 09:56:23] Got timeout while requesting > CandidateProvider2 > >>>>>> ... 
and so on > >>>>>> ``` > >>>>>> > >>>>>> What's also weird is that, even after timeout is logged, the string > >>>>>> ""Got responses!" never gets logged also! So hanging happens > somewhere > >>>>>> in-between. > >>>>>> > >>>>>> I have to say I'm sorry that I don't have strace output now, I'll > have > >>>>>> to wait until this situation happens once again, but I'll get later > to you > >>>>>> with this info. > >>>>>> > >>>>>> So, how is this possible that almost-pure code gets timed-out? And > why > >>>>>> does it hang afterwards? > >>>>>> > >>>>>> CPU and other resource usage is quite low, number of open > >>>>>> file-descriptors also (it seems). > >>>>>> > >>>>>> Thanks for all the suggestions in advance! > >>>>>> > >>>>>> _______________________________________________ > >>>>>> Haskell-Cafe mailing list > >>>>>> Haskell-Cafe at haskell.org > >>>>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > >>>>>> > >>>>> > >>>>> > >>>>> > >>>>> -- > >>>>> Gregory Collins > >>>> > >>>> > >>> > >>> > >>> > >>> -- > >>> Gregory Collins > >> > >> > > > > > > _______________________________________________ > > Haskell-Cafe mailing list > > Haskell-Cafe at haskell.org > > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sumit.sahrawat.apm13 at iitbhu.ac.in Sat Apr 25 23:48:47 2015 From: sumit.sahrawat.apm13 at iitbhu.ac.in (Sumit Sahrawat, Maths & Computing, IIT (BHU)) Date: Sun, 26 Apr 2015 05:18:47 +0530 Subject: [Haskell-cafe] GHCi shows result of (IO a) only if (a) is in class Show In-Reply-To: <553B86AD.5030707@gmx.net> References: <553B86AD.5030707@gmx.net> Message-ID: If something can't be shown (converted to a string), then it can't be printed (as a string). On 25 April 2015 at 17:51, Daniel van den Eijkel wrote: > I wrote a parser and it took me a while to realize why GHCi suddenly did > not show any result nor an error message anymore. > > My parsing function has type (IO Expression), Expression is in class Show. > > After changing the parser to (IO Declaration), it did not show anything > anymore, because Declaration was not in class Show. > > When I typed (parseFile "input.txt" >>= print), I got the error message > and understood what was going on. But for I while I was really confused > what's happening. > > Just wanted to share this. > > Best, > Daniel > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > -- Regards Sumit Sahrawat -------------- next part -------------- An HTML attachment was scrubbed... URL: From winterkoninkje at gmail.com Sun Apr 26 01:10:15 2015 From: winterkoninkje at gmail.com (wren romano) Date: Sat, 25 Apr 2015 21:10:15 -0400 Subject: [Haskell-cafe] Ord for partially ordered sets In-Reply-To: References: Message-ID: On Fri, Apr 24, 2015 at 9:06 AM, Ivan Lazar Miljenovic wrote: > What is the validity of defining an Ord instance for types for which > mathematically the `compare` function is partially ordered? Defining Ord instances for types which are not totally ordered is *wrong*. For example, due to the existence of NaN values, Double/Float are not totally ordered and therefore their Ord instances are buggy. In my logfloat package I have to explicitly add checks to work around the issues introduced by the buggy Ord Double instance. This is why I introduced the PartialOrd class, and I'm not the first one to create such a class. 
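A sketch of what such a class could look like, using a dedicated PartialOrdering type, which is one of the two encodings weighed in the next sentence. This is illustrative only, not the logfloat API.

```haskell
-- A four-valued result type instead of Maybe Ordering.
data PartialOrdering = PLT | PEQ | PGT | Incomparable
  deriving (Eq, Show)

class PartialOrd a where
  pcompare :: a -> a -> PartialOrdering

-- Example instance: IEEE Doubles, where NaN is incomparable to everything.
instance PartialOrd Double where
  pcompare x y
    | isNaN x || isNaN y = Incomparable
    | x < y              = PLT
    | x > y              = PGT
    | otherwise          = PEQ
```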
We really ought to have an official PartialOrd class as part of base/Prelude. The only question is whether to use Maybe Ordering or a specially defined PartialOrdering type (the latter optimizing for space and pointer indirection; the former optimizing for reducing code duplication for manipulating the Ordering/PartialOrdering types). -- Live well, ~wren From sumit.sahrawat.apm13 at iitbhu.ac.in Sun Apr 26 02:44:51 2015 From: sumit.sahrawat.apm13 at iitbhu.ac.in (Sumit Sahrawat, Maths & Computing, IIT (BHU)) Date: Sun, 26 Apr 2015 08:14:51 +0530 Subject: [Haskell-cafe] GHCi shows result of (IO a) only if (a) is in class Show In-Reply-To: <2c9be2bd-2531-4ef8-9bc5-85d0fb9fbbee@googlegroups.com> References: <553B86AD.5030707@gmx.net> <2c9be2bd-2531-4ef8-9bc5-85d0fb9fbbee@googlegroups.com> Message-ID: On 26 April 2015 at 06:10, Alexey Vagarenko wrote: > Yes, but ghci shows an error if it can't print a value, except when the > value is in IO monad. Compare: > Prelude> id > :6:1: > No instance for (Show (a0 -> a0)) > (maybe you haven't applied enough arguments to a function?) > arising from a use of `print' > In a stmt of an interactive GHCi command: print it > Prelude> > > and > > Prelude> return id > Prelude> > > This behavior is necessary. For example, if we used any function with result of type IO (), such as writeFile, we don't want an error as we are interested in the side-effects only. > > > ???????????, 26 ?????? 2015 ?., 5:48:55 UTC+6 ???????????? Sumit Sahrawat, > Maths & Computing, IIT (BHU) ???????: >> >> If something can't be shown (converted to a string), then it can't be >> printed (as a string). >> >> On 25 April 2015 at 17:51, Daniel van den Eijkel wrote: >> >>> I wrote a parser and it took me a while to realize why GHCi suddenly did >>> not show any result nor an error message anymore. >>> >>> My parsing function has type (IO Expression), Expression is in class >>> Show. >>> >>> After changing the parser to (IO Declaration), it did not show anything >>> anymore, because Declaration was not in class Show. >>> >>> When I typed (parseFile "input.txt" >>= print), I got the error message >>> and understood what was going on. But for I while I was really confused >>> what's happening. >>> >>> Just wanted to share this. >>> >>> Best, >>> Daniel >>> _______________________________________________ >>> Haskell-Cafe mailing list >>> Haskel... at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >>> >> >> >> >> -- >> Regards >> >> Sumit Sahrawat >> > -- Regards Sumit Sahrawat -------------- next part -------------- An HTML attachment was scrubbed... URL: From dct25-561bs at mythic-beasts.com Sun Apr 26 02:52:12 2015 From: dct25-561bs at mythic-beasts.com (David Turner) Date: Sun, 26 Apr 2015 03:52:12 +0100 Subject: [Haskell-cafe] Timeout on pure code In-Reply-To: References: Message-ID: I see. The issue seems to be the default handler which writes the log to stderr. Replacing 'addHandler h' with 'setHandlers [h]' makes it run in a reasonable time as 'setHandlers' starts afresh; conversely it's still slow even if you just use the default handler on its own, i.e. removing the call to 'addHandler'. I was a bit suspicious about the 'hFlush' in System.Log.Handler.Simple but removing that didn't help. However, suspecting that the issue may be too-much-flushing I eventually found https://ghc.haskell.org/trac/ghc/ticket/7418 which says that writing to stderr is slow because its default buffering mode is NoBuffering. 
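Concretely, the handler change mentioned above and the buffering change described in the next sentence amount to something like the following; the file path and handler choice are illustrative, not the poster's configuration.

```haskell
import System.IO (BufferMode (LineBuffering), hSetBuffering, stderr)
import System.Log (Priority (INFO))
import System.Log.Handler.Simple (fileHandler)
import System.Log.Logger (infoM, rootLoggerName, setHandlers, updateGlobalLogger)

main :: IO ()
main = do
  -- Fix one: stderr is unbuffered by default (see the GHC ticket above), so
  -- line-buffer it in case the default stderr handler stays in use.
  hSetBuffering stderr LineBuffering
  -- Fix two: setHandlers replaces the default stderr handler outright,
  -- whereas addHandler leaves it in place alongside the new one.
  h <- fileHandler "app.log" INFO
  updateGlobalLogger rootLoggerName (setHandlers [h])
  infoM "Main" "log lines now go to app.log only"
```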
I added 'hSetBuffering stderr LineBuffering' and boom, it runs at a sensible speed. Recommend you get rid of writing to stderr and just log to the file unless you've a good reason to send the output both ways, in which case switch to LineBuffering as above. If you're using rsyslog you also may be interested to read http://www.rsyslog.com/doc/queues.html - this describes how it can decouple the final log output from the input using a combination of in-memory and on-disk buffers, and even discard lower-priority messages from the queue if the going gets really tough. We judged it a lot of effort to implement our own on-disk spool and were particularly worried about it growing without bound if the downstream was too slow, so this feature of rsyslog was just what we needed. Hope that helps, David On 25 April 2015 at 23:44, Kostiantyn Rybnikov wrote: > Hi David. > > I planned to create a detailed bug-report at hslogger's issues to start > investigation there (as a better place) on Monday, but since I have code > prepared already, it's easy to share it right now: > https://gist.github.com/k-bx/ccf6fd1c73680c8a4345 > > I'm launching it as: > > time ./dist/build/seq/seq &> /dev/null > > We don't use syslog driver, instead we have a separate file-to-syslog worker > to decouple these components. > > On Sun, Apr 26, 2015 at 12:08 AM, David Turner > wrote: >> >> Hi, >> >> I've had a look at this as we use hslogger too, so I'm keen to avoid >> this kind of performance issue. I threw a quick Criterion benchmark >> together: >> >> https://gist.github.com/DaveCTurner/f977123b4498c4c64569 >> >> The headline result on my test machine are that each log call takes >> ~540us, so 2000 should take about a second. Would be interested if you >> could run the same benchmark on your setup as it's possible that >> there's something else downstream that's causing you a problem. >> >> A couple of things that might be worth bearing in mind: if you're >> talking to syslog over /dev/log then that can block if the log daemon >> falls behind: unix datagram sockets don't drop datagrams when they're >> congested. If the /dev/log test is slow but the UDP test is fast then >> it could be that your syslog can't handle the load. >> >> I'm using rsyslogd and have enabled the feature that combines >> identical messages, so this test doesn't generate much disk IO and it >> keeps up easily, so the UDP and /dev/log tests run about equally fast >> for me. Is your syslog writing out every message? It may be flushing >> to disk after every message too, which would be terribly slow. >> >> If you're not logging to syslog, what's your hslogger config? >> >> Cheers, >> >> David >> >> >> On 24 April 2015 at 20:25, Kostiantyn Rybnikov wrote: >> > An update for everyone interested (and not). Turned out it's neither GHC >> > RTS, Snap or networking issues, it's hslogger being very slow. I thought >> > it's slow when used concurrently, but just did a test when it writes >> > 2000 >> > 5kb messages sequentially and that finishes in 111 seconds (while >> > minimal >> > program that writes same 2000 messages finishes in 0.12s). >> > >> > I hope I'll have a chance to investigate why hslogger is so slow in >> > future, >> > but meanwhile will just remove logging. >> > >> > On Thu, Apr 23, 2015 at 4:08 PM, Kostiantyn Rybnikov >> > wrote: >> >> >> >> All right, good news! 
>> >> >> >> After adding ekg, gathering its data via bosun and seeing nothing >> >> useful I >> >> actually figured out that I could try harder to reproduce issue by >> >> myself >> >> instead of waiting for users to do that. And I succeeded! :) >> >> >> >> So, after launching 20 infinite curl loops to that handler's url I was >> >> quickly able to reproduce the issue, so the task seems clear now: keep >> >> reducing the code, reproduce locally, possibly without external >> >> services >> >> etc. I'll write up after I get to something. >> >> >> >> Thanks. >> >> >> >> On Wed, Apr 22, 2015 at 11:09 PM, Gregory Collins >> >> wrote: >> >>> >> >>> Maybe but it would be helpful to rule the scenario out. Johan's ekg >> >>> library is also useful, it exports a webserver on a different port >> >>> that you >> >>> can use to track metrics like gc times, etc. >> >>> >> >>> Other options for further debugging include gathering strace logs from >> >>> the binary. You'll have to do some data gathering to narrow down the >> >>> cause >> >>> unfortunately -- http client? your code? Snap server? GHC event >> >>> manager >> >>> (System.timeout is implemented here)? GC? etc >> >>> >> >>> G >> >>> >> >>> On Wed, Apr 22, 2015 at 10:14 AM, Kostiantyn Rybnikov >> >>> wrote: >> >>>> >> >>>> Gregory, >> >>>> >> >>>> Servers are far from being highly-overloaded, since they're currently >> >>>> under a much less load they used to be. Memory consumption is stable >> >>>> and >> >>>> low, and there's a lot of free RAM also. >> >>>> >> >>>> Would you say that given these factors this scenario is unlikely? >> >>>> >> >>>> On Wed, Apr 22, 2015 at 7:56 PM, Gregory Collins >> >>>> wrote: >> >>>>> >> >>>>> Given your gist, the timeout on your requests is set to a >> >>>>> half-second >> >>>>> so it's conceivable that a highly-loaded server might have GC pause >> >>>>> times >> >>>>> approaching that long. Smells to me like a classic Haskell memory >> >>>>> leak >> >>>>> (that's why the problem occurs after the server has been up for a >> >>>>> while): >> >>>>> run your program with the heap profiler, and audit any shared >> >>>>> tables/IORefs/MVars to make sure you are not building up thunks >> >>>>> there. >> >>>>> >> >>>>> Greg >> >>>>> >> >>>>> On Wed, Apr 22, 2015 at 9:14 AM, Kostiantyn Rybnikov >> >>>>> wrote: >> >>>>>> >> >>>>>> Hi! >> >>>>>> >> >>>>>> Our company's main commercial product is a Snap-based web app which >> >>>>>> we >> >>>>>> compile with GHC 7.8.4. It works on four app-servers currently >> >>>>>> load-balanced >> >>>>>> behind Haproxy. >> >>>>>> >> >>>>>> I recently implemented a new piece of functionality, which led to >> >>>>>> weird behavior which I have no idea how to debug, so I'm asking >> >>>>>> here for >> >>>>>> help and ideas! >> >>>>>> >> >>>>>> The new functionality is this: on specific url-handler, we need to >> >>>>>> query n external services concurrently with a timeout, gather and >> >>>>>> render >> >>>>>> results. Easy (in Haskell)! 
>> >>>>>> >> >>>>>> The implementation looks, as you might imagine, something like this >> >>>>>> (sorry for almost-real-haskell, I'm sure I forgot tons of imports >> >>>>>> and other >> >>>>>> things, but I hope everything is clear as-is, if not -- I'll be >> >>>>>> glad to >> >>>>>> update gist to make things more specific): >> >>>>>> >> >>>>>> https://gist.github.com/k-bx/0cf7035aaf1ad6306e76 >> >>>>>> >> >>>>>> Now, this works wonderful for some time, and in logs I can see >> >>>>>> both, >> >>>>>> successful fetches of external-content, and also lots of timeouts >> >>>>>> from our >> >>>>>> external providers. Life is good. >> >>>>>> >> >>>>>> But! After several days of work (sometimes a day, sometimes couple >> >>>>>> days), apps on all 4 servers go crazy. It might take some interval >> >>>>>> (like 20 >> >>>>>> minutes) before they're all crazy, so it's not super-synchronous. >> >>>>>> Now: how >> >>>>>> crazy, exactly? >> >>>>>> >> >>>>>> First of all, this endpoint timeouts. Haproxy requests for a >> >>>>>> response, >> >>>>>> and response times out, so they "hang". >> >>>>>> >> >>>>>> Secondly, logs are interesting. If you look at the code from gist >> >>>>>> once >> >>>>>> again, you can see, that some of CandidateProvider's don't actually >> >>>>>> require >> >>>>>> any networking work, so all they do is actually just logging that >> >>>>>> they're >> >>>>>> working (I added this as part of debugging actually) and return >> >>>>>> pure data. >> >>>>>> So what's weird is that they timeout also! Here's how output of our >> >>>>>> logs >> >>>>>> starts to look like after the bug happens: >> >>>>>> >> >>>>>> ``` >> >>>>>> [2015-04-22 09:56:20] provider: CandidateProvider1 >> >>>>>> [2015-04-22 09:56:20] provider: CandidateProvider2 >> >>>>>> [2015-04-22 09:56:21] Got timeout while requesting >> >>>>>> CandidateProvider1 >> >>>>>> [2015-04-22 09:56:21] Got timeout while requesting >> >>>>>> CandidateProvider2 >> >>>>>> [2015-04-22 09:56:22] provider: CandidateProvider1 >> >>>>>> [2015-04-22 09:56:22] provider: CandidateProvider2 >> >>>>>> [2015-04-22 09:56:23] Got timeout while requesting >> >>>>>> CandidateProvider1 >> >>>>>> [2015-04-22 09:56:23] Got timeout while requesting >> >>>>>> CandidateProvider2 >> >>>>>> ... and so on >> >>>>>> ``` >> >>>>>> >> >>>>>> What's also weird is that, even after timeout is logged, the string >> >>>>>> ""Got responses!" never gets logged also! So hanging happens >> >>>>>> somewhere >> >>>>>> in-between. >> >>>>>> >> >>>>>> I have to say I'm sorry that I don't have strace output now, I'll >> >>>>>> have >> >>>>>> to wait until this situation happens once again, but I'll get later >> >>>>>> to you >> >>>>>> with this info. >> >>>>>> >> >>>>>> So, how is this possible that almost-pure code gets timed-out? And >> >>>>>> why >> >>>>>> does it hang afterwards? >> >>>>>> >> >>>>>> CPU and other resource usage is quite low, number of open >> >>>>>> file-descriptors also (it seems). >> >>>>>> >> >>>>>> Thanks for all the suggestions in advance! 
>> >>>>>> >> >>>>>> _______________________________________________ >> >>>>>> Haskell-Cafe mailing list >> >>>>>> Haskell-Cafe at haskell.org >> >>>>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> >>>>>> >> >>>>> >> >>>>> >> >>>>> >> >>>>> -- >> >>>>> Gregory Collins >> >>>> >> >>>> >> >>> >> >>> >> >>> >> >>> -- >> >>> Gregory Collins >> >> >> >> >> > >> > >> > _______________________________________________ >> > Haskell-Cafe mailing list >> > Haskell-Cafe at haskell.org >> > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> > > > From ertesx at gmx.de Sun Apr 26 03:37:59 2015 From: ertesx at gmx.de (Ertugrul =?utf-8?Q?S=C3=B6ylemez?=) Date: Sun, 26 Apr 2015 05:37:59 +0200 Subject: [Haskell-cafe] Class-like features for explicit arguments In-Reply-To: References: <553BA3A5.6060608@ro-che.info> Message-ID: > Ok cool. Sounds like we agree. > Luite and I talked about something similar a few months ago, I'll try > to dig up my notes and maybe we can turn this into a ghc feature > request or patch Great, I would definitely help with that! >> > are you sure you've evaluated how it interacts with this sort of >> > optimization? I think it actually gets you pretty far! >> >> Most of the magic occurs in reification. The trouble is that it expects >> a polymorphic function of rank 2 with a nontrivial context (Reifies), a >> function that by construction cannot be specialised, unless you >> specialise the receiving function (reify) for every application case. >> The beauty of reflection is that it's free by virtue of sharing. >> Unfortunately sharing is the exact opposite of inlining. >> >> According to a benchmark I've done a few months ago it behaves exactly >> as if the reflected value was just a shared argument with no inlining >> performed; an expected and reasonable result. >> >> What I'm really after is a sort of controlled inlining. That's pretty >> much what instance-based specialisation currently does for dictionaries. >> Technically a dictionary is just another argument, so there is no >> fundamental reason why we shouldn't have a more general specialiser. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From sumit.sahrawat.apm13 at iitbhu.ac.in Sun Apr 26 04:26:11 2015 From: sumit.sahrawat.apm13 at iitbhu.ac.in (Sumit Sahrawat, Maths & Computing, IIT (BHU)) Date: Sun, 26 Apr 2015 09:56:11 +0530 Subject: [Haskell-cafe] GHCi shows result of (IO a) only if (a) is in class Show In-Reply-To: <2c9be2bd-2531-4ef8-9bc5-85d0fb9fbbee@googlegroups.com> References: <553B86AD.5030707@gmx.net> <2c9be2bd-2531-4ef8-9bc5-85d0fb9fbbee@googlegroups.com> Message-ID: On 26 April 2015 at 06:10, Alexey Vagarenko wrote: > Yes, but ghci shows an error if it can't print a value, except when the > value is in IO monad. Compare: > Prelude> id > :6:1: > No instance for (Show (a0 -> a0)) > (maybe you haven't applied enough arguments to a function?) > arising from a use of `print' > In a stmt of an interactive GHCi command: print it > Prelude> > > and > > Prelude> return id > Prelude> > This behavior is necessary. For example, if we used any function with result of type IO (), such as writeFile, we don't want an error as we are interested in the side-effects only. > > ???????????, 26 ?????? 2015 ?., 5:48:55 UTC+6 ???????????? 
Sumit Sahrawat, > Maths & Computing, IIT (BHU) ???????: >> >> If something can't be shown (converted to a string), then it can't be >> printed (as a string). >> >> On 25 April 2015 at 17:51, Daniel van den Eijkel wrote: >> >>> I wrote a parser and it took me a while to realize why GHCi suddenly did >>> not show any result nor an error message anymore. >>> >>> My parsing function has type (IO Expression), Expression is in class >>> Show. >>> >>> After changing the parser to (IO Declaration), it did not show anything >>> anymore, because Declaration was not in class Show. >>> >>> When I typed (parseFile "input.txt" >>= print), I got the error message >>> and understood what was going on. But for I while I was really confused >>> what's happening. >>> >>> Just wanted to share this. >>> >>> Best, >>> Daniel >>> _______________________________________________ >>> Haskell-Cafe mailing list >>> Haskel... at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >>> >> >> >> >> -- >> Regards >> >> Sumit Sahrawat >> > -- Regards Sumit Sahrawat -------------- next part -------------- An HTML attachment was scrubbed... URL: From sumit.sahrawat.apm13 at iitbhu.ac.in Sun Apr 26 04:30:50 2015 From: sumit.sahrawat.apm13 at iitbhu.ac.in (Sumit Sahrawat, Maths & Computing, IIT (BHU)) Date: Sun, 26 Apr 2015 10:00:50 +0530 Subject: [Haskell-cafe] GHCi shows result of (IO a) only if (a) is in class Show In-Reply-To: References: <553B86AD.5030707@gmx.net> <2c9be2bd-2531-4ef8-9bc5-85d0fb9fbbee@googlegroups.com> Message-ID: Sorry for double posting, the highlighted code example seems to be a problem for mailer-daemon at google.com. -- Regards Sumit Sahrawat -------------- next part -------------- An HTML attachment was scrubbed... URL: From magnus at therning.org Sun Apr 26 06:55:20 2015 From: magnus at therning.org (Magnus Therning) Date: Sun, 26 Apr 2015 08:55:20 +0200 Subject: [Haskell-cafe] 503 on upload to Hackage? Message-ID: <20150426065520.GA5404@tatooine> Yesterday and today I made a few releases of a package to Hackage. On each upload I get a 503 error. This is the text: ~~~ Error 503 backend read error backend read error Guru Mediation: Details: cache-fra1229-FRA 1430030130 1668038750 Varnish cache server ~~~ I'm guessing this isn't expected behaviour. Has anyone else seen this? /M -- Magnus Therning OpenPGP: 0xAB4DFBA4 email: magnus at therning.org jabber: magnus at therning.org twitter: magthe http://therning.org/magnus Finagle's Second Law: Always keep a record of data -- it indicates you've been working. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 163 bytes Desc: not available URL: From michael at snoyman.com Sun Apr 26 06:56:56 2015 From: michael at snoyman.com (Michael Snoyman) Date: Sun, 26 Apr 2015 06:56:56 +0000 Subject: [Haskell-cafe] 503 on upload to Hackage? In-Reply-To: <20150426065520.GA5404@tatooine> References: <20150426065520.GA5404@tatooine> Message-ID: I see it regularly on my uploads, but it doesn't seem to prevent the upload from succeeding. On Sun, Apr 26, 2015 at 9:55 AM Magnus Therning wrote: > Yesterday and today I made a few releases of a package to Hackage. On > each upload I get a 503 error. This is the text: > > ~~~ > Error 503 backend read error > > backend read error > Guru Mediation: > > Details: cache-fra1229-FRA 1430030130 1668038750 > > Varnish cache server > ~~~ > > I'm guessing this isn't expected behaviour. Has anyone else seen > this? 
> > /M > > -- > Magnus Therning OpenPGP: 0xAB4DFBA4 > email: magnus at therning.org jabber: magnus at therning.org > twitter: magthe http://therning.org/magnus > > Finagle's Second Law: > Always keep a record of data -- it indicates you've been working. > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > -------------- next part -------------- An HTML attachment was scrubbed... URL: From magnus at therning.org Sun Apr 26 07:05:14 2015 From: magnus at therning.org (Magnus Therning) Date: Sun, 26 Apr 2015 09:05:14 +0200 Subject: [Haskell-cafe] 503 on upload to Hackage? In-Reply-To: References: <20150426065520.GA5404@tatooine> Message-ID: <20150426070514.GB5404@tatooine> On Sun, Apr 26, 2015 at 06:56:56AM +0000, Michael Snoyman wrote: > I see it regularly on my uploads, but it doesn't seem to prevent the > upload from succeeding. Indeed, I should have added that all my uploads have arrived as expected on Hackage. /M -- Magnus Therning OpenPGP: 0xAB4DFBA4 email: magnus at therning.org jabber: magnus at therning.org twitter: magthe http://therning.org/magnus The results point out the fragility of programmer expertise: advanced programmers have strong expectations about what programs should look like, and when those expectations are violated--in seemingly innocuous ways--their performance drops drastically. -- Elliot Soloway and Kate Ehrlich -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 163 bytes Desc: not available URL: From martin.drautzburg at web.de Sun Apr 26 13:20:29 2015 From: martin.drautzburg at web.de (martin) Date: Sun, 26 Apr 2015 15:20:29 +0200 Subject: [Haskell-cafe] Function hanging in infinite input Message-ID: <553CE61D.7020905@web.de> Hello all, I was trying to implement >>= (tBind) on my "Temporal" data type and found that it hangs on an operation like takeInitialPart $ infiniteTemporal >>= (\x -> finiteTemporal) I am pretty sure the result is well defined and by no means infinite. Also the code works on finite Temporals. How does one address such problems? I attach the relevant pieces of code, in case someone would be so kind and inspect or run it. Feel free to point out any flaws as I might be completely off-track. If you call ex10 in GHCI, you get no result. The line marked with "< here" is executed over and over, but apparently without contributing to the result. The debugger shows, that "hd" does have a correct, finite value each time and that "tpr" is consumed as expected. 
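For reference, the kind of hang described here can be reproduced in a few lines when an append pattern-matches both of its arguments before producing anything. The sketch below is self-contained and separate from the Temporal code that follows; it only illustrates that failure mode.

```
-- A wrapper around a (possibly infinite) list.
data Wrap a = Wrap [a]

unWrap :: Wrap a -> [a]
unWrap (Wrap xs) = xs

-- Strict version: matching the second Wrap forces that argument to WHNF
-- before a single element is available.
appStrict :: Wrap a -> Wrap a -> Wrap a
appStrict (Wrap as) (Wrap bs) = Wrap (as ++ bs)

-- Lazy version: the irrefutable pattern leaves the second argument
-- untouched until (++) actually needs it.
appLazy :: Wrap a -> Wrap a -> Wrap a
appLazy (Wrap as) ~(Wrap bs) = Wrap (as ++ bs)

onesGood, onesBad :: Wrap Int
onesGood = appLazy   (Wrap [1]) onesGood  -- take 3 (unWrap onesGood) == [1,1,1]
onesBad  = appStrict (Wrap [1]) onesBad   -- forcing this never yields an
                                          -- element; GHC typically reports <<loop>>
```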
-- -- -- -- -- -- -- -- -- examples -- -- -- -- -- -- -- ex1 = Temporal [(DPast, 1), (T 3,3), (T 7, 7)] :: Temporal Int ex10 = tUntil (T 5) $ outer `tBind` \_ -> ex1 :: Temporal Int where outer = Temporal $ (DPast,0):[(T (fromIntegral t), t)| t <- [5,10 ..]] -- -- -- -- -- -- -- -- -- tBind -- -- -- -- -- -- -- -- -- Changed (Temporal a) to (Temporal Int) for debugging tBind :: (Temporal Int) -> (Int -> Temporal Int) -> Temporal Int tBind tpr f -- tpr is infinite in this example, let's forget these cases -- | tNull tpr = error "empty Temporal" -- | tNull (tTail tpr) = laties | otherwise = let hd = (tUntil (tTt tpr) laties) in hd `tAppend` (tTail tpr `tBind` f) -- < here where laties = switchAt (tTh tpr) ( f (tVh tpr)) tTail (Temporal xs) = Temporal (tail xs) tAppend (Temporal as) (Temporal bs) = Temporal (as ++ bs) switchAt t tpx | tNull (tTail tpx) = Temporal (tot tpx) | between t (tTh tpx) (tTt tpx) = Temporal (tot tpx) | otherwise = switchAt t (tTail tpx) where tot (Temporal ((ty,vy):xs)) = ((max t ty, vy):xs) between t x y = t >= x && t < y -- -- -- -- -- -- -- -- -- helpers -- -- -- -- -- -- -- -- data Time = DPast | T Integer deriving (Eq, Show) -- DPast is "distant past" instance Ord Time where compare DPast DPast = EQ compare DPast _ = LT compare _ DPast = GT compare (T t1) (T t2) = compare t1 t2 -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- data Temporal a = Temporal [(Time, a)] deriving (Eq, Show) tVh :: Temporal a -> a tVh = snd . head . toList -- value head tTt, tTh :: Temporal a -> Time tTt = fst . head . tail . toList -- time tail tTh = fst . head . toList -- time head tNull = null . toList tUntil :: Time -> Temporal a -> Temporal a tUntil t (Temporal xs) = Temporal $ (takeWhile (\(tx, vx) -> tx > t)) xs toList :: Temporal a -> [(Time, a)] toList (Temporal xs) = xs From jerzy.karczmarczuk at unicaen.fr Sun Apr 26 15:28:49 2015 From: jerzy.karczmarczuk at unicaen.fr (Jerzy Karczmarczuk) Date: Sun, 26 Apr 2015 17:28:49 +0200 Subject: [Haskell-cafe] Function hanging in infinite input In-Reply-To: <553CE61D.7020905@web.de> References: <553CE61D.7020905@web.de> Message-ID: <553D0431.4090403@unicaen.fr> Le 26/04/2015 15:20, martin a ?crit : > I was trying to implement >>= (tBind) on my "Temporal" data type and found that it hangs on an operation like > > takeInitialPart $ infiniteTemporal >>= (\x -> finiteTemporal) > > I am pretty sure the result is well defined and by no means infinite./*Also the code works on finite Temporals.*/ I am not sure about this... I replaced [5,10 ..] by [5,10 .. 100], so outer becomes Temporal [(DPast,0),(T 5,5),(T 10,10),(T 15,15),(T 20,20),(T 25,25),(T 30,30),(T 35,35),(T 40,40),(T 45,45),(T 50,50),(T 55,55),(T 60,60),(T 65,65),(T 70,70),(T 75,75),(T 80,80),(T 85,85),(T 90,90),(T 95,95),(T 100,100)] and GHCi says: *Main> ex10 *** Exception: Prelude.head: empty list == Jerzy Karczmarczuk -------------- next part -------------- An HTML attachment was scrubbed... URL: From lsp at informatik.uni-kiel.de Sun Apr 26 16:08:32 2015 From: lsp at informatik.uni-kiel.de (lennart spitzner) Date: Sun, 26 Apr 2015 18:08:32 +0200 Subject: [Haskell-cafe] ANN: exference: a different djinn Message-ID: <553D0D80.9070902@informatik.uni-kiel.de> Hello folks, Exference[1] is a Haskell tool for generating expressions from a type, e.g. 
Input: (Show b) => (a -> b) -> [a] -> [String] Output: \ b -> fmap (\ g -> show (b g)) In contrast to Djinn, the well known tool with the same general purpose, Exference supports a larger subset of the haskell type system - most prominently type classes. (Djinn's environment many contain type classes, but using them in queries will not really work.) This comes at a cost, however: Exference makes no promise regarding termination. Where Djinn tells you "there are no solutions", Exference will keep trying, sometimes aborting a non-ending search with "i could not find any solutions". See [2] for a report about the implementation, capabilities and limitations. [1] https://github.com/lspitzner/exference [2] https://github.com/lspitzner/exference/raw/master/exference.pdf Lennart From martin.drautzburg at web.de Sun Apr 26 16:55:23 2015 From: martin.drautzburg at web.de (martin) Date: Sun, 26 Apr 2015 18:55:23 +0200 Subject: [Haskell-cafe] Function hanging in infinite input In-Reply-To: <553D0431.4090403@unicaen.fr> References: <553CE61D.7020905@web.de> <553D0431.4090403@unicaen.fr> Message-ID: <553D187B.6050201@web.de> Am 04/26/2015 um 05:28 PM schrieb Jerzy Karczmarczuk: > > Le 26/04/2015 15:20, martin a ?crit : >> I was trying to implement >>= (tBind) on my "Temporal" data type and found that it hangs on an operation like >> >> takeInitialPart $ infiniteTemporal >>= (\x -> finiteTemporal) >> >> I am pretty sure the result is well defined and by no means infinite. /*Also the code works on finite Temporals.*/ > I am not sure about this... > I replaced [5,10 ..] by [5,10 .. 100], so outer becomes > > Temporal [(DPast,0),(T 5,5),(T 10,10),(T 15,15),(T 20,20),(T 25,25),(T 30,30),(T 35,35),(T 40,40),(T 45,45),(T 50,50),(T > 55,55),(T 60,60),(T 65,65),(T 70,70),(T 75,75),(T 80,80),(T 85,85),(T 90,90),(T 95,95),(T 100,100)] > > and GHCi says: > > *Main> ex10 > *** Exception: Prelude.head: empty list This is because I have commented out two corner cases in tBind, to make sure it's not them. Remove the comments and it'll work From jerzy.karczmarczuk at unicaen.fr Sun Apr 26 17:05:01 2015 From: jerzy.karczmarczuk at unicaen.fr (Jerzy Karczmarczuk) Date: Sun, 26 Apr 2015 19:05:01 +0200 Subject: [Haskell-cafe] Function hanging in infinite input In-Reply-To: <553D187B.6050201@web.de> References: <553CE61D.7020905@web.de> <553D0431.4090403@unicaen.fr> <553D187B.6050201@web.de> Message-ID: <553D1ABD.1010602@unicaen.fr> Martin reacts to my non-answer: > Am 04/26/2015 um 05:28 PM schrieb Jerzy Karczmarczuk: > > ... > and GHCi says: > > *Main> ex10 > *** Exception: Prelude.head: empty list > This is because I have commented out two corner cases in tBind, to make sure it's not them. Remove the comments and > it'll work Martin I sent you a private follow-up. I repeat it here. I uncommented those lines. Your program goes until the end of the list, and returns the /*last*/ element (modified). The form *tBind tpr f ... (tTail tpr `tBind` f) * loops until ... Now, I know about laziness... It seems that it doesn't help. Most probably your hd is simply empty, and the tail gets stuck in an idle loop. Jerzy Karczmarczuk -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From andrew.gibiansky at gmail.com Sun Apr 26 17:38:15 2015 From: andrew.gibiansky at gmail.com (Andrew Gibiansky) Date: Sun, 26 Apr 2015 10:38:15 -0700 Subject: [Haskell-cafe] ANN: exference: a different djinn In-Reply-To: <553D0D80.9070902@informatik.uni-kiel.de> References: <553D0D80.9070902@informatik.uni-kiel.de> Message-ID: Could this somehow be used with the GHC API, so that it could be embedded as autocomplete in an interpreter / code editor? I am imagining using this with GHC 7.10 Type Holes in IHaskell. To autocomplete, you'd insert a hole ("_"), GHC would be able to tell you the type of the hole, you'd be able to pass it to exference, and then your autocomplete suggestions would be full expressions. Does this sound plausible? Sounds like a very cool tool! Andrew On Sun, Apr 26, 2015 at 9:08 AM, lennart spitzner < lsp at informatik.uni-kiel.de> wrote: > Hello folks, > > Exference[1] is a Haskell tool for generating expressions from a type, > e.g. > > Input: (Show b) => (a -> b) -> [a] -> [String] > > Output: \ b -> fmap (\ g -> show (b g)) > > In contrast to Djinn, the well known tool with the same general > purpose, Exference supports a larger subset of the haskell type system > - most prominently type classes. (Djinn's environment many contain > type classes, but using them in queries will not really work.) This > comes at a cost, however: Exference makes no promise regarding > termination. Where Djinn tells you "there are no solutions", Exference > will keep trying, sometimes aborting a non-ending search with "i could > not find any solutions". > > See [2] for a report about the implementation, capabilities and > limitations. > > [1] https://github.com/lspitzner/exference > [2] https://github.com/lspitzner/exference/raw/master/exference.pdf > > Lennart > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > -------------- next part -------------- An HTML attachment was scrubbed... URL: From martin.drautzburg at web.de Sun Apr 26 18:22:06 2015 From: martin.drautzburg at web.de (martin) Date: Sun, 26 Apr 2015 20:22:06 +0200 Subject: [Haskell-cafe] Function hanging in infinite input In-Reply-To: <553D1ABD.1010602@unicaen.fr> References: <553CE61D.7020905@web.de> <553D0431.4090403@unicaen.fr> <553D187B.6050201@web.de> <553D1ABD.1010602@unicaen.fr> Message-ID: <553D2CCE.9000709@web.de> Am 04/26/2015 um 07:05 PM schrieb Jerzy Karczmarczuk: > Martin reacts to my non-answer: > Martin I sent you a private follow-up. I repeat it here. > > I uncommented those lines. > Your program goes until the end of the list, and returns the /*last*/ element (modified). The form > > *tBind tpr f > ... > (tTail tpr `tBind` f) * > > loops until ... > > Now, I know about laziness... It seems that it doesn't help. Most probably your hd is simply empty, and the tail gets > stuck in an idle loop. Thanks a lot for taking the time to look into my code. I had made a mistake when I stripped down my code. In tUntil the inequality is wrong. I updated the example and put it here: https://www.dropbox.com/s/836hykwhhsb0n55/Function_hanging_in_infinite_input.hs?dl=0 The strange thing is: I can set an upper limit to "outer" and I get the result Temporal [(DPast,1),(T 3,3)] When I push the upper limit to later times, the result doesn't change. This is as expected, because I am only taking everything until (T 5). 
It looks like Haskell doesn't know that and believes that later recursions might contribute to the result. But I don't see why. tUntil is basically an innocent takeWhile. From mjkorson at gmail.com Sun Apr 26 18:38:01 2015 From: mjkorson at gmail.com (Matthew Korson) Date: Sun, 26 Apr 2015 11:38:01 -0700 Subject: [Haskell-cafe] Function hanging in infinite input In-Reply-To: <553D2CCE.9000709@web.de> References: <553CE61D.7020905@web.de> <553D0431.4090403@unicaen.fr> <553D187B.6050201@web.de> <553D1ABD.1010602@unicaen.fr> <553D2CCE.9000709@web.de> Message-ID: The problem is that tAppend is too strict; it evaluates both its arguments before producing anything. This is because you are pattern matching on those arguments. You could use lazy patterns or make Temporal a newtype to avoid that. Or you could rewrite to something like tAppend as bs = Temporal $ toList as ++ toList bs On Sun, Apr 26, 2015 at 11:22 AM, martin wrote: > Am 04/26/2015 um 07:05 PM schrieb Jerzy Karczmarczuk: > > Martin reacts to my non-answer: > > > Martin I sent you a private follow-up. I repeat it here. > > > > I uncommented those lines. > > Your program goes until the end of the list, and returns the /*last*/ > element (modified). The form > > > > *tBind tpr f > > ... > > (tTail tpr `tBind` f) * > > > > loops until ... > > > > Now, I know about laziness... It seems that it doesn't help. Most > probably your hd is simply empty, and the tail gets > > stuck in an idle loop. > > Thanks a lot for taking the time to look into my code. > > I had made a mistake when I stripped down my code. In tUntil the > inequality is wrong. I updated the example and put it here: > > > https://www.dropbox.com/s/836hykwhhsb0n55/Function_hanging_in_infinite_input.hs?dl=0 > > The strange thing is: I can set an upper limit to "outer" and I get the > result > > Temporal [(DPast,1),(T 3,3)] > > When I push the upper limit to later times, the result doesn't change. > This is as expected, because I am only taking > everything until (T 5). It looks like Haskell doesn't know that and > believes that later recursions might contribute to > the result. But I don't see why. tUntil is basically an innocent takeWhile. > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > -- Matthew Korson mjkorson at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From lsp at informatik.uni-kiel.de Sun Apr 26 19:10:35 2015 From: lsp at informatik.uni-kiel.de (lennart spitzner) Date: Sun, 26 Apr 2015 21:10:35 +0200 Subject: [Haskell-cafe] ANN: exference: a different djinn In-Reply-To: References: <553D0D80.9070902@informatik.uni-kiel.de> Message-ID: <553D382B.4090707@informatik.uni-kiel.de> On 26/04/15 19:38, Andrew Gibiansky wrote: > Could this somehow be used with the GHC API, so that it could be embedded > as autocomplete in an interpreter / code editor? > > I am imagining using this with GHC 7.10 Type Holes in IHaskell. To > autocomplete, you'd insert a hole ("_"), GHC would be able to tell you the > type of the hole, you'd be able to pass it to exference, and then your > autocomplete suggestions would be full expressions. Does this sound > plausible? This certainly counts as a long-term goal for Exference. As it stands, there are two issues: a) ideally, Exference would take into account all locally defined (or even all visible) functions. 
This is problematic, as one generally has to be careful adding too many / the wrong functions to the environment because it can blow up the search space. There might be some heuristics to determine what can safely be added; alternatively the user could selectively add functions. b) performance in general is not as good as i would like, both memory and run-time. (to give you an idea: running my test-cases needs something in the range of 3GB memory, the default even is at -M4G.) There certainly is room for optimizations; also i have not really tested what types of queries can be solved when you give tight limits to memory/run-time. Lennart From martin.drautzburg at web.de Sun Apr 26 19:23:56 2015 From: martin.drautzburg at web.de (martin) Date: Sun, 26 Apr 2015 21:23:56 +0200 Subject: [Haskell-cafe] Function hanging in infinite input In-Reply-To: References: <553CE61D.7020905@web.de> <553D0431.4090403@unicaen.fr> <553D187B.6050201@web.de> <553D1ABD.1010602@unicaen.fr> <553D2CCE.9000709@web.de> Message-ID: <553D3B4C.3040003@web.de> Am 04/26/2015 um 08:38 PM schrieb Matthew Korson: > The problem is that tAppend is too strict; it evaluates both its arguments before producing anything. This is because > you are pattern matching on those arguments. You could use lazy patterns or make Temporal a newtype to avoid that. Or > you could rewrite to something like > > tAppend as bs = Temporal $ toList as ++ toList bs Mathew, you made my day! At least things work now as expected. But could you please elaborate on the difference between tAppend (Temporal as) (Temporal bs) = Temporal (as ++ bs) vs tAppend as bs = Temporal $ (toList as) ++ (toList bs) toList (Temporal xs) = xs Why is the first one more strict than the second? From jerzy.karczmarczuk at unicaen.fr Sun Apr 26 19:41:08 2015 From: jerzy.karczmarczuk at unicaen.fr (Jerzy Karczmarczuk) Date: Sun, 26 Apr 2015 21:41:08 +0200 Subject: [Haskell-cafe] Function hanging in infinite input In-Reply-To: <553D3B4C.3040003@web.de> References: <553CE61D.7020905@web.de> <553D0431.4090403@unicaen.fr> <553D187B.6050201@web.de> <553D1ABD.1010602@unicaen.fr> <553D2CCE.9000709@web.de> <553D3B4C.3040003@web.de> Message-ID: <553D3F54.7050601@unicaen.fr> Am 04/26/2015 um 08:38 PM schrieb Matthew Korson: >> The problem is that tAppend is too strict; it evaluates both its arguments before producing anything. That's it. Define *tAppend (Temporal as) ~(Temporal bs) = Temporal (as ++ bs)* No need (I think) to pass through the toList, which is a redundant ping-pong. Jerzy K. -------------- next part -------------- An HTML attachment was scrubbed... URL: From allbery.b at gmail.com Sun Apr 26 19:43:35 2015 From: allbery.b at gmail.com (Brandon Allbery) Date: Sun, 26 Apr 2015 15:43:35 -0400 Subject: [Haskell-cafe] Function hanging in infinite input In-Reply-To: <553D3B4C.3040003@web.de> References: <553CE61D.7020905@web.de> <553D0431.4090403@unicaen.fr> <553D187B.6050201@web.de> <553D1ABD.1010602@unicaen.fr> <553D2CCE.9000709@web.de> <553D3B4C.3040003@web.de> Message-ID: On Sun, Apr 26, 2015 at 3:23 PM, martin wrote: > At least things work now as expected. But could you please elaborate on > the difference between > > tAppend (Temporal as) (Temporal bs) = Temporal (as ++ bs) > > vs > > tAppend as bs = Temporal $ (toList as) ++ (toList bs) > toList (Temporal xs) = xs > > Why is the first one more strict than the second? > Because the first one pattern matches both parameters immediately to ensure that the constructor is the one named (Temporal). 
The second defers it, since the toList call is not forced and therefore won't be invoked (along with its strict pattern match) until its value is needed. -- brandon s allbery kf8nh sine nomine associates allbery.b at gmail.com ballbery at sinenomine.net unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net -------------- next part -------------- An HTML attachment was scrubbed... URL: From florian.gillard at gmail.com Sun Apr 26 19:44:06 2015 From: florian.gillard at gmail.com (Florian Gillard) Date: Sun, 26 Apr 2015 21:44:06 +0200 Subject: [Haskell-cafe] Haskell Data.Vector, huge memory leak Message-ID: Hi, I am trying to make a basic 2D engine with haskell and the SDL1.2 bindings (for fun, I am just learning). Ideally the world is to be procedurally generated, chunk by chunk, allowing free exploration. Right now my chunk is composed of 200*200 tiles which I represent using a type: Mat [Tile] = Vec.Vector (Vec.Vector [Tile]) and these functions: fromMat :: [[a]] -> Mat a fromMat xs = Vec.fromList [Vec.fromList xs' | xs' <- xs] (?) :: Mat a -> (Int, Int) -> a v ? (r, c) = (v Vec.! r) Vec.! c I am using cyclic list of tiles in order to allow for sprite animation, and later for dynamic behaviour. Each frame of the game loop, the program reads the part of the vector relevant to the current camera position, display the corresponding tiles and return a new vector in which every of these cyclic lists has been replaced by it's tail. Here is the code responsible for this: applyTileMat :: Chunk -> SDL.Surface -> SDL.Surface -> IO Chunk applyTileMat ch src dest = let m = chLand $! ch (x,y) = chPos ch wid = Vec.length (m Vec.! 0) - 1 hei = (Vec.length m) - 1 (canW,canH) = canvasSize ch in do sequence $ [ applyTile (head (m ? (i,j))) (32*(j-x), 32*(i-y)) src dest | i <- [y..(y+canH)], j <- [x..(x+canW)]] m' <-sequence $ [sequence [(return $! tail (m ? (i,j))) | j <- [0..wid]] | i <- [0..hei]] --weird :P return ch { chLand = fromMat m' } the first sequence does the display part, the second one returns the new vector m'. At first I was using the following comprehension to get m' let !m' = [id $! [(tail $! (m ? (i,j))) | j <- [0..wid]] | i <- [0..hei]] but doing so results in ever increasing memory usage. I think it has to do with lazy evaluation preventing the data to be properly garbage collected, but I don't really understand why. In this particular case, it doesn't really mater since I have to look at the whole vector. But I don't know how I should do if I wanted to only "update" part of my chunk each frame, thus making a new chunk with only part of the data from the previous one. I am probably not using Data.Vector the way it's intended, but it's the simplest data structure I found with O(n) random access. The whole code is there: https://github.com/eniac314/wizzard/blob/master/tiler.hs -------------- next part -------------- An HTML attachment was scrubbed... URL: From ertesx at gmx.de Mon Apr 27 01:33:20 2015 From: ertesx at gmx.de (Ertugrul =?utf-8?Q?S=C3=B6ylemez?=) Date: Mon, 27 Apr 2015 03:33:20 +0200 Subject: [Haskell-cafe] Haskell Data.Vector, huge memory leak In-Reply-To: References: Message-ID: > I am probably not using Data.Vector the way it's intended, but it's the > simplest data structure I found with O(n) random access. Remember that vectors from Data.Vector are lazy arrays. Technically speaking they are arrays of pointers to lazily evaluated values. What you really want for graphics is most likely Data.Vector.Storable. 
A vector of that type is always fully evaluated and dense: import qualified Data.Vector as V import qualified Data.Vector.Storable as Vs import qualified Data.Vector.Unboxed as Vu import Data.Word v :: V.Vector Word64 v = V.fromList [1..1000] vs :: Vs.Vector Word64 vs = Vs.fromList [1..1000] vu :: Vu.Vector Word64 vu = Vu.fromList [1..1000] You would expect that a 1000-element array of `Word64` values takes exactly 8000 bytes of memory. This is true for `vs` and `vu`, but not for `v`, because it is a lazy array. The difference between storable and unboxed vectors is that the former has a certain address in memory that is not moved around. This is useful for example when you need to interface with OpenGL or SDL. Unboxed vectors can be faster in certain cases, but the difference is almost always negligible and would not be possible anyway if you were to interface with a non-Haskell library. Greets, Ertugrul -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From soenkehahn at gmail.com Mon Apr 27 04:28:11 2015 From: soenkehahn at gmail.com (shahn) Date: Sun, 26 Apr 2015 21:28:11 -0700 (PDT) Subject: [Haskell-cafe] ANN: getopt-generics - easily create command line parsers by declaring a data type Message-ID: Hi all, getopt-generics is an experimental library that uses generic programming to automatically create command line parsers given a data type. For an example please refer to the README: [1]. You can also browse the API on hackage: [2]. The primary use-case for getopt-generics is to allow to create proper command line interfaces with as little code as possible. Any feedback appreciated! Cheers, S?nke [1] https://github.com/zalora/getopt-generics#getopt-generics [2] http://hackage.haskell.org/package/getopt-generics/docs/System-Console-GetOpt-Generics.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From florian.gillard at gmail.com Mon Apr 27 06:13:24 2015 From: florian.gillard at gmail.com (fayong) Date: Mon, 27 Apr 2015 08:13:24 +0200 Subject: [Haskell-cafe] Haskell Data.Vector, huge memory leak In-Reply-To: References: Message-ID: <20150427061324.GA2352@ThinkPad-Edge-E130> Ok I see the problem now, I will take a look in Data.Vector.Storable, Thank you! From k-bx at k-bx.com Mon Apr 27 08:30:32 2015 From: k-bx at k-bx.com (Kostiantyn Rybnikov) Date: Mon, 27 Apr 2015 11:30:32 +0300 Subject: [Haskell-cafe] Ord for partially ordered sets In-Reply-To: References: Message-ID: While this is a bit off-topic, I'd like to add my 5 cents that often adding instances for common type-classes might be "bad" even when it's totally defined for all values, one example is a Monoid instance for HashMap. So, I'd say that if you might be in doubt -- it's better to not add instance at all, since your users have no ability to remove it from their projects (or redefine). 26 ????. 2015 04:10 "wren romano" ????: > On Fri, Apr 24, 2015 at 9:06 AM, Ivan Lazar Miljenovic > wrote: > > What is the validity of defining an Ord instance for types for which > > mathematically the `compare` function is partially ordered? > > Defining Ord instances for types which are not totally ordered is *wrong*. > > For example, due to the existence of NaN values, Double/Float are not > totally ordered and therefore their Ord instances are buggy. In my > logfloat package I have to explicitly add checks to work around the > issues introduced by the buggy Ord Double instance. 
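The NaN point is easy to check directly. With standard IEEE semantics every ordering comparison involving NaN is False, so no law-abiding total order exists; this is an illustrative check, not code from logfloat:

```
nan :: Double
nan = 0 / 0

-- Neither direction of (<=) holds, so totality already fails.
totalityFails :: Bool
totalityFails = not (nan <= 1 || 1 <= nan)   -- True

-- Even (==) is not reflexive at NaN.
reflexivityFails :: Bool
reflexivityFails = not (nan == nan)          -- True
```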
This is why I > introduced the PartialOrd class, and I'm not the first one to create > such a class. We really ought to have an official PartialOrd class as > part of base/Prelude. The only question is whether to use Maybe > Ordering or a specially defined PartialOrdering type (the latter > optimizing for space and pointer indirection; the former optimizing > for reducing code duplication for manipulating the > Ordering/PartialOrdering types). > > -- > Live well, > ~wren > _______________________________________________ > Libraries mailing list > Libraries at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dominic.p.mulligan at googlemail.com Mon Apr 27 09:23:11 2015 From: dominic.p.mulligan at googlemail.com (Dominic Mulligan) Date: Mon, 27 Apr 2015 10:23:11 +0100 Subject: [Haskell-cafe] ANN: exference: a different djinn In-Reply-To: <553D382B.4090707@informatik.uni-kiel.de> References: <553D0D80.9070902@informatik.uni-kiel.de> <553D382B.4090707@informatik.uni-kiel.de> Message-ID: > a) ideally, Exference would take into account all locally defined (or > even all visible) functions. This is problematic, as one generally has > to be careful adding too many / the wrong functions to the environment > because it can blow up the search space. There might be some > heuristics to determine what can safely be added; alternatively the > user could selectively add functions. Is this a major problem? As far as I can tell Agda's automated proof search does not take into account any globally defined functions or constants and yet is still (moderately) useful. Have you seen the recent announcement of Mote, a Vim plugin trying to replicate something akin to Agda-mode for Haskell? https://github.com/imeckler/mote It seems that marrying these two projects, extending Mote with a search facility based on exference, would be a good idea? From chneukirchen at gmail.com Mon Apr 27 10:55:40 2015 From: chneukirchen at gmail.com (Christian Neukirchen) Date: Mon, 27 Apr 2015 12:55:40 +0200 Subject: [Haskell-cafe] Munich Haskell Meeting, 2015-04-30 @ 19:30 Message-ID: <87383lq1qr.fsf@gmail.com> Dear all, This week, our monthly Munich Haskell Meeting will take place again on Thursday, April 30 at Cafe Puck at 19h30. For details see here: http://www.haskell-munich.de/dates If you plan to join, please add yourself to this dudle so we can reserve enough seats! It is OK to add yourself to the dudle anonymously or pseudonymously. https://dudle.inf.tu-dresden.de/haskell-munich-apr-2015/ Everybody is welcome! cu, -- Christian Neukirchen http://chneukirchen.org From oleg at okmij.org Mon Apr 27 11:02:03 2015 From: oleg at okmij.org (oleg at okmij.org) Date: Mon, 27 Apr 2015 07:02:03 -0400 (EDT) Subject: [Haskell-cafe] [ANN] FLOPS CFP 2016 Message-ID: <20150427110203.25328C3856@www1.g3.pair.com> FLOPS 2016: 13th International Symposium on Functional and Logic Programming March 3-6, 2016, Kochi, Japan Call For Papers http://www.info.kochi-tech.ac.jp/FLOPS2016/ Writing down detailed computational steps is not the only way of programming. The alternative, being used increasingly in practice, is to start by writing down the desired properties of the result. The computational steps are then (semi-)automatically derived from these higher-level specifications. 
Examples of this declarative style include functional and logic programming, program transformation and re-writing, and extracting programs from proofs of their correctness. FLOPS aims to bring together practitioners, researchers and implementors of the declarative programming, to discuss mutually interesting results and common problems: theoretical advances, their implementations in language systems and tools, and applications of these systems in practice. The scope includes all aspects of the design, semantics, theory, applications, implementations, and teaching of declarative programming. FLOPS specifically aims to promote cross-fertilization between theory and practice and among different styles of declarative programming. Scope FLOPS solicits original papers in all areas of the declarative programming: * functional, logic, functional-logic programming, re-writing systems, formal methods and model checking, program transformations and program refinements, developing programs with the help of theorem provers or SAT/SMT solvers; * foundations, language design, implementation issues (compilation techniques, memory management, run-time systems), applications and case studies. FLOPS promotes cross-fertilization among different styles of declarative programming. Therefore, submissions must be written to be understandable by the wide audience of declarative programmers and researchers. Submission of system descriptions and declarative pearls are especially encouraged. Submissions should fall into one of the following categories: * Regular research papers: they should describe new results and will be judged on originality, correctness, and significance. * System descriptions: they should contain a link to a working system and will be judged on originality, usefulness, and design. * Declarative pearls: new and excellent declarative programs or theories with illustrative applications. System descriptions and declarative pearls must be explicitly marked as such in the title. Submissions must be unpublished and not submitted for publication elsewhere. Work that already appeared in unpublished or informally published workshops proceedings may be submitted. See also ACM SIGPLAN Republication Policy. The proceedings will be published by Springer International Publishing in the Lecture Notes in Computer Science (LNCS) series, as a printed volume as well as online in the digital library SpringerLink. Post-proceedings: The authors of 4-7 best papers will be invited to submit the extended version of their FLOPS paper to a special issue of the journal Science of Computer Programming (SCP). Important dates Monday, September 14, 2015 (any time zone): Submission deadline Monday, November 16, 2015: Author notification March 3-6, 2016: FLOPS Symposium March 7-9, 2016: PPL Workshop Submission Submissions must be written in English and can be up to 15 pages long including references, though pearls are typically shorter. The formatting has to conform to Springer's guidelines. Regular research papers should be supported by proofs and/or experimental results. In case of lack of space, this supporting information should be made accessible otherwise (e.g., a link to a Web page, or an appendix). 
Papers should be submitted electronically at https://easychair.org/conferences/?conf=flops2016 Program Committee Andreas Abel Gothenburg University, Sweden Lindsay Errington USA Makoto Hamana Gunma University, Japan Michael Hanus CAU Kiel, Germany Jacob Howe City University London, UK Makoto Kanazawa National Institute of Informatics, Japan Andy King University of Kent, UK (PC Co-Chair) Oleg Kiselyov Tohoku University, Japan (PC Co-Chair) Hsiang-Shang Ko National Institute of Informatics, Japan Julia Lawall Inria-Whisper, France Andres L?h Well-Typed LLP, UK Anil Madhavapeddy Cambridge University, UK Jeff Polakow PivotCloud, USA Marc Pouzet ?cole normale sup?rieure, France V?tor Santos Costa Universidade do Porto, Portugal Tom Schrijvers KU Leuven, Belgium Zoltan Somogyi Australia Alwen Tiu Nanyang Technological University, Singapore Sam Tobin-Hochstadt Indiana University, USA Hongwei Xi Boston University, USA Neng-Fa Zhou CUNY Brooklyn College and Graduate Center, USA Organizers Andy King University of Kent, UK (PC Co-Chair) Oleg Kiselyov Tohoku University, Japan (PC Co-Chair) Yukiyoshi Kameyama University of Tsukuba, Japan (General Chair) Kiminori Matsuzaki Kochi University of Technology, Japan (Local Chair) flops2016 at logic.cs.tsukuba.ac dot jp From nicholls.mark at vimn.com Mon Apr 27 11:52:53 2015 From: nicholls.mark at vimn.com (Nicholls, Mark) Date: Mon, 27 Apr 2015 11:52:53 +0000 Subject: [Haskell-cafe] working through "Part I: Dependent Types in Haskell" Message-ID: Hello, working through https://www.fpcomplete.com/user/konn/prove-your-haskell-for-great-safety/dependent-types-in-haskell but a bit stuck...with an error... > {-# LANGUAGE DataKinds, TypeFamilies, TypeOperators, UndecidableInstances, GADTs, StandaloneDeriving #-} > data Nat = Z | S Nat > data Vector a n where > Nil :: Vector a Z > (:-) :: a -> Vector a n -> Vector a (S n) > infixr 5 :- I assume init...is a bit like tail but take n - 1 elements from the front....but... > init' :: Vector a ('S n) -> Vector a n > init' (x1 :- Nil) = Nil > init' (x :- xs) = x :- (init' xs) gives...(I could do with working out what haskell is tryign to tell me). Could not deduce (n1 ~ 'S n0) from the context ('S n ~ 'S n1) bound by a pattern with constructor :- :: forall a (n :: Nat). a -> Vector a n -> Vector a ('S n), in an equation for 'init'' at cafe.lhs:13:10-16 'n1' is a rigid type variable bound by a pattern with constructor :- :: forall a (n :: Nat). a -> Vector a n -> Vector a ('S n), in an equation for 'init'' at cafe.lhs:13:10 Expected type: Vector a n Actual type: Vector a ('S n0) Relevant bindings include xs :: Vector a n1 (bound at cafe.lhs:13:15) In the expression: x :- (init' xs) In an equation for 'init'': init' (x :- xs) = x :- (init' xs) so... the ":-" in "init' (x :- xs)" has type forall a (n :: Nat). a -> Vector a n -> Vector a ('S n) yep that makes sense.... if it knew "n ~ n1" then it knows "n1 ~ 'S n0" so it would know "n ~ 'S n0" but it only knows "'S n ~ 'S n1" hmmmm...surely from the def of Nat, thats "obvious" CONFIDENTIALITY NOTICE This e-mail (and any attached files) is confidential and protected by copyright (and other intellectual property rights). If you are not the intended recipient please e-mail the sender and then delete the email and any attached files immediately. Any further use or dissemination is prohibited. 
While MTV Networks Europe has taken steps to ensure that this email and any attachments are virus free, it is your responsibility to ensure that this message and any attachments are virus free and do not affect your systems / data. Communicating by email is not 100% secure and carries risks such as delay, data corruption, non-delivery, wrongful interception and unauthorised amendment. If you communicate with us by e-mail, you acknowledge and assume these risks, and you agree to take appropriate measures to minimise these risks when e-mailing us. MTV Networks International, MTV Networks UK & Ireland, Greenhouse, Nickelodeon Viacom Consumer Products, VBSi, Viacom Brand Solutions International, Be Viacom, Viacom International Media Networks and VIMN and Comedy Central are all trading names of MTV Networks Europe. MTV Networks Europe is a partnership between MTV Networks Europe Inc. and Viacom Networks Europe Inc. Address for service in Great Britain is 17-29 Hawley Crescent, London, NW1 8TT. -------------- next part -------------- An HTML attachment was scrubbed... URL: From michael at snoyman.com Mon Apr 27 12:03:15 2015 From: michael at snoyman.com (Michael Snoyman) Date: Mon, 27 Apr 2015 12:03:15 +0000 Subject: [Haskell-cafe] Improved Hackage Security document Message-ID: To try and clarify a number of the points brought up in discussion around Hackage security in the past few weeks, Mathieu and I have put some time into trying to organize the information around this a bit. The result is the following page: https://github.com/commercialhaskell/commercialhaskell/blob/master/proposal/improved-hackage-security.md Contributions by others are very welcome. If you send a pull request, odds are you'll end up with commit access too. -------------- next part -------------- An HTML attachment was scrubbed... URL: From oleg at okmij.org Mon Apr 27 12:06:13 2015 From: oleg at okmij.org (oleg at okmij.org) Date: Mon, 27 Apr 2015 08:06:13 -0400 (EDT) Subject: [Haskell-cafe] Prime ``sieve'' and Haskell demo Message-ID: <20150427120613.E9FD2C384E@www1.g3.pair.com> Ertugrul So:ylemez wrote: > How about simply changing `sieve` to `trialDiv`? It's not that I > don't like the given example, because it gives a very small use case > for laziness that is difficult enough to reproduce in an eagerly > evaluated language. Is it really so difficult to reproduce in a strict language? Here is that Haskell example in OCaml let primes = let rec trialDiv (Cons (p,xs)) = Cons (p, lazy (trialDiv @@ filter (fun x -> x mod p <> 0) @@ Lazy.force xs)) in trialDiv @@ iota 2 roughly the same number of lines, and mainly, exactly the same method. Some people prefer to write it as let primes = let rec trialDiv (Cons (p,xs)) = Cons (p, lazy (Lazy.force xs |> filter (fun x -> x mod p <> 0) |> trialDiv)) in iota 2 |> trialDiv The algorithm is the same, it is just as clearly stated. Now it becomes clearer when what is evaluated. OCaml has its own streams (and even a special syntax for them, via a syntactic extension), but it is not difficult to introduce them from scratch type 'a stream = Cons of 'a * 'a stream Lazy.t let rec iota n = Cons (n,lazy (iota (n+1))) let rec filter pred (Cons (p,xs)) = if pred p then Cons (p,lazy (filter pred (Lazy.force xs))) else filter pred (Lazy.force xs) let rec take n (Cons (p,xs)) = if n <= 0 then [] else p :: take (n-1) (Lazy.force xs) That's really all there is to it. I should stress that the typechecker won't lets us forget about lazy and force! 
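For comparison, the Haskell formulation that the OCaml code above mirrors is usually written along the following lines. This is a sketch of the standard lazy trial-division definition, not necessarily the exact code from the message being replied to:

```
-- The classic lazy trial-division "sieve", using ordinary lazy lists
-- instead of an explicit stream type.
primes :: [Integer]
primes = trialDiv [2 ..]
  where
    trialDiv (p : xs) = p : trialDiv (filter (\x -> x `mod` p /= 0) xs)
    trialDiv []       = []

-- take 10 primes == [2,3,5,7,11,13,17,19,23,29]
```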
The stress on laziness in Haskell is difficult to understand given how easy it is to use laziness in essentially any language if needed. Incidentally, given below is a real sieve of Eratosthenes, written as a *very* concurrent program, where all the concurrency primitives (including Haskell-like mvars) are implemented with delimcc. (The example is an elaboration of the code kindly sent by Christophe Deleuze, July 18, 2012). The full code is part of the delimcc distribution. open Lwc (* Send a stream of integers [m..n] on the channel out *) (* It is a task and hence creates a thunk *) let iota : int mvar -> int -> int -> task = fun out m n () -> for i = m to n do put_mvar out i done (* A task to print the values read from the stream *) let output : int mvar -> task = fun inp () -> while true do let v = take_mvar inp in Printf.printf "%i " v done (* The key step in the Eratosthenes sieve: copy inp to out but replace every n-th element with 0 *) let filter : int -> int mvar -> int mvar -> task = fun n inp out () -> let rec loop i = let v = take_mvar inp in if i <= 1 then (put_mvar out 0; loop n) else (put_mvar out v; loop (i-1)) in loop n (* The main sieving task: move prime numbers from inp to out by composing filters *) let rec sift : int mvar -> int mvar -> task = fun inp out () -> let n = take_mvar inp in if n = 0 then sift inp out () else begin put_mvar out n; let mid = make_mvar () in spawn (filter n inp mid); sift mid out () end (* Start up the task of the sieving job, with n being the upper limit *) let sieve : int -> task = fun n () -> let mi = make_mvar () in let mo = make_mvar () in spawn (iota mi 2 n); spawn (sift mi mo); spawn (output mo) From lsp at informatik.uni-kiel.de Mon Apr 27 12:14:15 2015 From: lsp at informatik.uni-kiel.de (lennart spitzner) Date: Mon, 27 Apr 2015 14:14:15 +0200 Subject: [Haskell-cafe] ANN: exference: a different djinn In-Reply-To: References: <553D0D80.9070902@informatik.uni-kiel.de> <553D382B.4090707@informatik.uni-kiel.de> Message-ID: <553E2817.9000607@informatik.uni-kiel.de> On 27/04/15 11:23, Dominic Mulligan wrote: > Is this a major problem? As far as I can tell Agda's automated proof > search does not take into account any globally defined functions or > constants and yet is still (moderately) useful. No, i just have high expectations :) Even with just the current functionality, if there is a typed hole in an expression like `f = _ x y` if the user knows that the implementation will involve some specific global function bar, she could write `f = _ bar x y`. > Have you seen the recent announcement of Mote, a Vim plugin trying to > replicate something akin to Agda-mode for Haskell? > It seems that marrying these two projects, extending Mote with a > search facility based on exference, would be a good idea? In a way it even inspired me to (finally) make the announcement. I have not, but will try to find time to play with Mote in the next days. From andres at well-typed.com Mon Apr 27 12:31:28 2015 From: andres at well-typed.com (=?UTF-8?Q?Andres_L=C3=B6h?=) Date: Mon, 27 Apr 2015 14:31:28 +0200 Subject: [Haskell-cafe] working through "Part I: Dependent Types in Haskell" In-Reply-To: References: Message-ID: Hi. > init' :: Vector a ('S n) -> Vector a n > init' (x1 :- Nil) = Nil > init' (x :- xs) = x :- (init' xs) You need to pattern match further on `xs` in order to make clear that you require it to not be `Nil`. The type checker doesn't look at the first case in order to conclude that you've already ruled that out. So e.g. 
init' (x :- xs @ (_ :- _)) = x :- (init' xs) works. Cheers, Andres -- Andres L?h, Haskell Consultant Well-Typed LLP, http://www.well-typed.com Registered in England & Wales, OC335890 250 Ice Wharf, 17 New Wharf Road, London N1 9RF, England From takenobu.hs at gmail.com Mon Apr 27 12:43:47 2015 From: takenobu.hs at gmail.com (Takenobu Tani) Date: Mon, 27 Apr 2015 21:43:47 +0900 Subject: [Haskell-cafe] WebSocket on Haskell? Message-ID: Dear cafe, Would you tell me reference sources for WebSocket(or Socket.io) on Haskell? Once I wrote a toy program with WebSocket[1]: Node.js(backend) + JavaScript(frontend) + WebSocket(communication) I want to port it to Haskell backend for my exercise: Haskell(backend) + JavaScript(frontend) + WebSocket(communication) I'm glad if there are such references: * broadcast to multi-client by WebSocket(or Socket.io) * serve a simple top HTML page * deploy to Heroku or public server [1] https://github.com/takenobu-hs/social-drawing-old-js Thank you :-), Takenobu -------------- next part -------------- An HTML attachment was scrubbed... URL: From heraldhoi at gmail.com Mon Apr 27 13:11:32 2015 From: heraldhoi at gmail.com (Geraldus) Date: Mon, 27 Apr 2015 13:11:32 +0000 Subject: [Haskell-cafe] WebSocket on Haskell? In-Reply-To: References: Message-ID: Hello! Take a look at this package http://hackage.haskell.org/package/websockets 5:43 ????? ???????, ??, 27.04.2015, Takenobu Tani : > Dear cafe, > > Would you tell me reference sources for WebSocket(or Socket.io) on Haskell? > > Once I wrote a toy program with WebSocket[1]: > Node.js(backend) + JavaScript(frontend) + WebSocket(communication) > > I want to port it to Haskell backend for my exercise: > Haskell(backend) + JavaScript(frontend) + WebSocket(communication) > > > I'm glad if there are such references: > * broadcast to multi-client by WebSocket(or Socket.io) > * serve a simple top HTML page > * deploy to Heroku or public server > > > [1] https://github.com/takenobu-hs/social-drawing-old-js > > Thank you :-), > Takenobu > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > -------------- next part -------------- An HTML attachment was scrubbed... URL: From heraldhoi at gmail.com Mon Apr 27 13:17:41 2015 From: heraldhoi at gmail.com (Geraldus) Date: Mon, 27 Apr 2015 13:17:41 +0000 Subject: [Haskell-cafe] WebSocket on Haskell? In-Reply-To: References: Message-ID: Also there are also packages which allow to ease integration this library with popular web frameworks: websockets-snap yesod-websockets wai-websockets ??, 27 ???. 2015 ?. ? 18:11, Geraldus : > Hello! Take a look at this package > http://hackage.haskell.org/package/websockets > > 5:43 ????? ???????, ??, 27.04.2015, Takenobu Tani : > > Dear cafe, >> >> Would you tell me reference sources for WebSocket(or Socket.io) on >> Haskell? 
>> >> Once I wrote a toy program with WebSocket[1]: >> Node.js(backend) + JavaScript(frontend) + WebSocket(communication) >> >> I want to port it to Haskell backend for my exercise: >> Haskell(backend) + JavaScript(frontend) + WebSocket(communication) >> >> >> I'm glad if there are such references: >> * broadcast to multi-client by WebSocket(or Socket.io) >> * serve a simple top HTML page >> * deploy to Heroku or public server >> >> >> [1] https://github.com/takenobu-hs/social-drawing-old-js >> >> Thank you :-), >> Takenobu >> >> _______________________________________________ >> Haskell-Cafe mailing list >> Haskell-Cafe at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From k-bx at k-bx.com Mon Apr 27 13:30:55 2015 From: k-bx at k-bx.com (Kostiantyn Rybnikov) Date: Mon, 27 Apr 2015 16:30:55 +0300 Subject: [Haskell-cafe] Timeout on pure code In-Reply-To: References: Message-ID: David, thank you very much. I confirm everything you wrote, both doing LineBuffering and removing output to stderr resolves the problem. I will create a ticket for hslogger in order to propose somehow putting LineBuffering to stderr in case logging there is enabled. That should help people like myself who don't know they have the problem, but may suffer from it from time to time (at least every write to stderr slows down response-time quite significantly). Thanks for the "queues" rsyslog article, I'll take a look. At the time of having task to "centralize logs" I didn't want to add rsyslog as one more required service, I also had an impression that all current "syslog" implementations are in a somewhat outdated stage with multiple badly supported forks, so I did minimal work of adding decoupled forwarding logs from files to central rsyslog, but maybe it's time to migrate everything to rsyslogs and get rid of forwarder services. Anyway, thank you once again. On Sun, Apr 26, 2015 at 5:52 AM, David Turner wrote: > I see. > > The issue seems to be the default handler which writes the log to > stderr. Replacing 'addHandler h' with 'setHandlers [h]' makes it run > in a reasonable time as 'setHandlers' starts afresh; conversely it's > still slow even if you just use the default handler on its own, i.e. > removing the call to 'addHandler'. > > I was a bit suspicious about the 'hFlush' in System.Log.Handler.Simple > but removing that didn't help. However, suspecting that the issue may > be too-much-flushing I eventually found > https://ghc.haskell.org/trac/ghc/ticket/7418 which says that writing > to stderr is slow because its default buffering mode is NoBuffering. I > added 'hSetBuffering stderr LineBuffering' and boom, it runs at a > sensible speed. > > Recommend you get rid of writing to stderr and just log to the file > unless you've a good reason to send the output both ways, in which > case switch to LineBuffering as above. > > If you're using rsyslog you also may be interested to read > http://www.rsyslog.com/doc/queues.html - this describes how it can > decouple the final log output from the input using a combination of > in-memory and on-disk buffers, and even discard lower-priority > messages from the queue if the going gets really tough. We judged it a > lot of effort to implement our own on-disk spool and were particularly > worried about it growing without bound if the downstream was too slow, > so this feature of rsyslog was just what we needed. 
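Concretely, the buffering change described earlier in this message is one line at program start-up. A minimal sketch; where it goes in the real code base is up to the application:

```
import System.IO (BufferMode (LineBuffering), hSetBuffering, stderr)

main :: IO ()
main = do
  -- stderr defaults to NoBuffering, so per-message writes to it are slow;
  -- switch to line buffering before any logging is configured.
  hSetBuffering stderr LineBuffering
  -- ... set up hslogger handlers and start the application here ...
  return ()
```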
> > Hope that helps, > > David > > > On 25 April 2015 at 23:44, Kostiantyn Rybnikov wrote: > > Hi David. > > > > I planned to create a detailed bug-report at hslogger's issues to start > > investigation there (as a better place) on Monday, but since I have code > > prepared already, it's easy to share it right now: > > https://gist.github.com/k-bx/ccf6fd1c73680c8a4345 > > > > I'm launching it as: > > > > time ./dist/build/seq/seq &> /dev/null > > > > We don't use syslog driver, instead we have a separate file-to-syslog > worker > > to decouple these components. > > > > On Sun, Apr 26, 2015 at 12:08 AM, David Turner > > wrote: > >> > >> Hi, > >> > >> I've had a look at this as we use hslogger too, so I'm keen to avoid > >> this kind of performance issue. I threw a quick Criterion benchmark > >> together: > >> > >> https://gist.github.com/DaveCTurner/f977123b4498c4c64569 > >> > >> The headline result on my test machine are that each log call takes > >> ~540us, so 2000 should take about a second. Would be interested if you > >> could run the same benchmark on your setup as it's possible that > >> there's something else downstream that's causing you a problem. > >> > >> A couple of things that might be worth bearing in mind: if you're > >> talking to syslog over /dev/log then that can block if the log daemon > >> falls behind: unix datagram sockets don't drop datagrams when they're > >> congested. If the /dev/log test is slow but the UDP test is fast then > >> it could be that your syslog can't handle the load. > >> > >> I'm using rsyslogd and have enabled the feature that combines > >> identical messages, so this test doesn't generate much disk IO and it > >> keeps up easily, so the UDP and /dev/log tests run about equally fast > >> for me. Is your syslog writing out every message? It may be flushing > >> to disk after every message too, which would be terribly slow. > >> > >> If you're not logging to syslog, what's your hslogger config? > >> > >> Cheers, > >> > >> David > >> > >> > >> On 24 April 2015 at 20:25, Kostiantyn Rybnikov wrote: > >> > An update for everyone interested (and not). Turned out it's neither > GHC > >> > RTS, Snap or networking issues, it's hslogger being very slow. I > thought > >> > it's slow when used concurrently, but just did a test when it writes > >> > 2000 > >> > 5kb messages sequentially and that finishes in 111 seconds (while > >> > minimal > >> > program that writes same 2000 messages finishes in 0.12s). > >> > > >> > I hope I'll have a chance to investigate why hslogger is so slow in > >> > future, > >> > but meanwhile will just remove logging. > >> > > >> > On Thu, Apr 23, 2015 at 4:08 PM, Kostiantyn Rybnikov > >> > wrote: > >> >> > >> >> All right, good news! > >> >> > >> >> After adding ekg, gathering its data via bosun and seeing nothing > >> >> useful I > >> >> actually figured out that I could try harder to reproduce issue by > >> >> myself > >> >> instead of waiting for users to do that. And I succeeded! :) > >> >> > >> >> So, after launching 20 infinite curl loops to that handler's url I > was > >> >> quickly able to reproduce the issue, so the task seems clear now: > keep > >> >> reducing the code, reproduce locally, possibly without external > >> >> services > >> >> etc. I'll write up after I get to something. > >> >> > >> >> Thanks. > >> >> > >> >> On Wed, Apr 22, 2015 at 11:09 PM, Gregory Collins > >> >> wrote: > >> >>> > >> >>> Maybe but it would be helpful to rule the scenario out. 
Johan's ekg > >> >>> library is also useful, it exports a webserver on a different port > >> >>> that you > >> >>> can use to track metrics like gc times, etc. > >> >>> > >> >>> Other options for further debugging include gathering strace logs > from > >> >>> the binary. You'll have to do some data gathering to narrow down the > >> >>> cause > >> >>> unfortunately -- http client? your code? Snap server? GHC event > >> >>> manager > >> >>> (System.timeout is implemented here)? GC? etc > >> >>> > >> >>> G > >> >>> > >> >>> On Wed, Apr 22, 2015 at 10:14 AM, Kostiantyn Rybnikov < > k-bx at k-bx.com> > >> >>> wrote: > >> >>>> > >> >>>> Gregory, > >> >>>> > >> >>>> Servers are far from being highly-overloaded, since they're > currently > >> >>>> under a much less load they used to be. Memory consumption is > stable > >> >>>> and > >> >>>> low, and there's a lot of free RAM also. > >> >>>> > >> >>>> Would you say that given these factors this scenario is unlikely? > >> >>>> > >> >>>> On Wed, Apr 22, 2015 at 7:56 PM, Gregory Collins > >> >>>> wrote: > >> >>>>> > >> >>>>> Given your gist, the timeout on your requests is set to a > >> >>>>> half-second > >> >>>>> so it's conceivable that a highly-loaded server might have GC > pause > >> >>>>> times > >> >>>>> approaching that long. Smells to me like a classic Haskell memory > >> >>>>> leak > >> >>>>> (that's why the problem occurs after the server has been up for a > >> >>>>> while): > >> >>>>> run your program with the heap profiler, and audit any shared > >> >>>>> tables/IORefs/MVars to make sure you are not building up thunks > >> >>>>> there. > >> >>>>> > >> >>>>> Greg > >> >>>>> > >> >>>>> On Wed, Apr 22, 2015 at 9:14 AM, Kostiantyn Rybnikov < > k-bx at k-bx.com> > >> >>>>> wrote: > >> >>>>>> > >> >>>>>> Hi! > >> >>>>>> > >> >>>>>> Our company's main commercial product is a Snap-based web app > which > >> >>>>>> we > >> >>>>>> compile with GHC 7.8.4. It works on four app-servers currently > >> >>>>>> load-balanced > >> >>>>>> behind Haproxy. > >> >>>>>> > >> >>>>>> I recently implemented a new piece of functionality, which led to > >> >>>>>> weird behavior which I have no idea how to debug, so I'm asking > >> >>>>>> here for > >> >>>>>> help and ideas! > >> >>>>>> > >> >>>>>> The new functionality is this: on specific url-handler, we need > to > >> >>>>>> query n external services concurrently with a timeout, gather and > >> >>>>>> render > >> >>>>>> results. Easy (in Haskell)! > >> >>>>>> > >> >>>>>> The implementation looks, as you might imagine, something like > this > >> >>>>>> (sorry for almost-real-haskell, I'm sure I forgot tons of imports > >> >>>>>> and other > >> >>>>>> things, but I hope everything is clear as-is, if not -- I'll be > >> >>>>>> glad to > >> >>>>>> update gist to make things more specific): > >> >>>>>> > >> >>>>>> https://gist.github.com/k-bx/0cf7035aaf1ad6306e76 > >> >>>>>> > >> >>>>>> Now, this works wonderful for some time, and in logs I can see > >> >>>>>> both, > >> >>>>>> successful fetches of external-content, and also lots of timeouts > >> >>>>>> from our > >> >>>>>> external providers. Life is good. > >> >>>>>> > >> >>>>>> But! After several days of work (sometimes a day, sometimes > couple > >> >>>>>> days), apps on all 4 servers go crazy. It might take some > interval > >> >>>>>> (like 20 > >> >>>>>> minutes) before they're all crazy, so it's not super-synchronous. > >> >>>>>> Now: how > >> >>>>>> crazy, exactly? > >> >>>>>> > >> >>>>>> First of all, this endpoint timeouts. 
Haproxy requests for a > >> >>>>>> response, > >> >>>>>> and response times out, so they "hang". > >> >>>>>> > >> >>>>>> Secondly, logs are interesting. If you look at the code from gist > >> >>>>>> once > >> >>>>>> again, you can see, that some of CandidateProvider's don't > actually > >> >>>>>> require > >> >>>>>> any networking work, so all they do is actually just logging that > >> >>>>>> they're > >> >>>>>> working (I added this as part of debugging actually) and return > >> >>>>>> pure data. > >> >>>>>> So what's weird is that they timeout also! Here's how output of > our > >> >>>>>> logs > >> >>>>>> starts to look like after the bug happens: > >> >>>>>> > >> >>>>>> ``` > >> >>>>>> [2015-04-22 09:56:20] provider: CandidateProvider1 > >> >>>>>> [2015-04-22 09:56:20] provider: CandidateProvider2 > >> >>>>>> [2015-04-22 09:56:21] Got timeout while requesting > >> >>>>>> CandidateProvider1 > >> >>>>>> [2015-04-22 09:56:21] Got timeout while requesting > >> >>>>>> CandidateProvider2 > >> >>>>>> [2015-04-22 09:56:22] provider: CandidateProvider1 > >> >>>>>> [2015-04-22 09:56:22] provider: CandidateProvider2 > >> >>>>>> [2015-04-22 09:56:23] Got timeout while requesting > >> >>>>>> CandidateProvider1 > >> >>>>>> [2015-04-22 09:56:23] Got timeout while requesting > >> >>>>>> CandidateProvider2 > >> >>>>>> ... and so on > >> >>>>>> ``` > >> >>>>>> > >> >>>>>> What's also weird is that, even after timeout is logged, the > string > >> >>>>>> ""Got responses!" never gets logged also! So hanging happens > >> >>>>>> somewhere > >> >>>>>> in-between. > >> >>>>>> > >> >>>>>> I have to say I'm sorry that I don't have strace output now, I'll > >> >>>>>> have > >> >>>>>> to wait until this situation happens once again, but I'll get > later > >> >>>>>> to you > >> >>>>>> with this info. > >> >>>>>> > >> >>>>>> So, how is this possible that almost-pure code gets timed-out? > And > >> >>>>>> why > >> >>>>>> does it hang afterwards? > >> >>>>>> > >> >>>>>> CPU and other resource usage is quite low, number of open > >> >>>>>> file-descriptors also (it seems). > >> >>>>>> > >> >>>>>> Thanks for all the suggestions in advance! > >> >>>>>> > >> >>>>>> _______________________________________________ > >> >>>>>> Haskell-Cafe mailing list > >> >>>>>> Haskell-Cafe at haskell.org > >> >>>>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > >> >>>>>> > >> >>>>> > >> >>>>> > >> >>>>> > >> >>>>> -- > >> >>>>> Gregory Collins > >> >>>> > >> >>>> > >> >>> > >> >>> > >> >>> > >> >>> -- > >> >>> Gregory Collins > >> >> > >> >> > >> > > >> > > >> > _______________________________________________ > >> > Haskell-Cafe mailing list > >> > Haskell-Cafe at haskell.org > >> > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > >> > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ertesx at gmx.de Mon Apr 27 13:57:28 2015 From: ertesx at gmx.de (Ertugrul =?utf-8?Q?S=C3=B6ylemez?=) Date: Mon, 27 Apr 2015 15:57:28 +0200 Subject: [Haskell-cafe] Prime ``sieve'' and Haskell demo In-Reply-To: <20150427120613.E9FD2C384E@www1.g3.pair.com> References: <20150427120613.E9FD2C384E@www1.g3.pair.com> Message-ID: >> How about simply changing `sieve` to `trialDiv`? It's not that I >> don't like the given example, because it gives a very small use case >> for laziness that is difficult enough to reproduce in an eagerly >> evaluated language. > > Is it really so difficult to reproduce in a strict language? 
Here is > that Haskell example in OCaml > > let primes = > let rec trialDiv (Cons (p,xs)) = > Cons (p, lazy (trialDiv @@ filter (fun x -> x mod p <> 0) @@ Lazy.force xs)) > in trialDiv @@ iota 2 > > [...] OCaml, given `lazy`, is not eagerly evaluated. You are using lazy evaluation. In many modern languages laziness can be reproduced by abusing first class functions. However, you are using another important feature unconsciously: sharing. That one is not as easy to reproduce in a language that doesn't give you built-in laziness like Haskell or (apparently) OCaml. You would need to write a wrapper type with destructive update first. > That's really all there is to it. I should stress that the typechecker > won't let us forget about lazy and force! I have very little experience with OCaml, but that's probably because the language does not consider lazily evaluated values to be semantically equivalent to their eager counterparts. If you believe that `lazy x` and `x` represent the same value semantically, then it's fine not to be explicit about forcing. This gives you the advantage that non-strict functions are lazily evaluated by default and you can "strictify" them. The other direction is not possible. > The stress on laziness in Haskell is difficult to understand given how > easy it is to use laziness in essentially any language if needed. It depends on your paradigms and idioms. After almost 7 years of Haskell I'm clearly in the lazy-by-default mindset so much that I find it difficult to program in an eager-by-default language. If I'd write OCaml code it would probably be cluttered with lazy wrappers. > Incidentally, given below is a real sieve of Eratosthenes, written as > a *very* concurrent program, where all the concurrency primitives > (including Haskell-like mvars) are implemented with delimcc. (The > example is an elaboration of the code kindly sent by Christophe > Deleuze, July 18, 2012). The full code is part of the delimcc > distribution. > > [...] Interesting program. Something (semantically) similar can be implemented using laziness, but the result is very slow compared to a sieve implemented by bit operations. I can't judge the efficiency of your program without trying it, but I expect it to be similar. If you want the stream to be infinite, partial sieves are almost as fast as a regular sieve (and probably faster due to better cache behaviour) and run in constant memory. Greets, Ertugrul -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 472 bytes Desc: not available URL: From takenobu.hs at gmail.com Mon Apr 27 14:26:03 2015 From: takenobu.hs at gmail.com (Takenobu Tani) Date: Mon, 27 Apr 2015 23:26:03 +0900 Subject: [Haskell-cafe] WebSocket on Haskell? In-Reply-To: References: Message-ID: Hi Geraldus, Thank you for good references!!! These are nice examples for me. I'll learn from them:-) Thank you, Takenobu 2015-04-27 22:17 GMT+09:00 Geraldus : > There are also packages which make it easy to integrate this library > with popular web frameworks: > > websockets-snap > yesod-websockets > wai-websockets > > Mon, 27 Apr 2015 at 18:11, Geraldus : > > Hello! Take a look at this package >> http://hackage.haskell.org/package/websockets >> >> 5:43 PM, Mon, 27.04.2015, Takenobu Tani > >: >> >> Dear cafe, >>> >>> Would you tell me reference sources for WebSocket(or Socket.io) on >>> Haskell?
>>> >>> Once I wrote a toy program with WebSocket[1]: >>> Node.js(backend) + JavaScript(frontend) + WebSocket(communication) >>> >>> I want to port it to Haskell backend for my exercise: >>> Haskell(backend) + JavaScript(frontend) + WebSocket(communication) >>> >>> >>> I'm glad if there are such references: >>> * broadcast to multi-client by WebSocket(or Socket.io) >>> * serve a simple top HTML page >>> * deploy to Heroku or public server >>> >>> >>> [1] https://github.com/takenobu-hs/social-drawing-old-js >>> >>> Thank you :-), >>> Takenobu >>> >>> _______________________________________________ >>> Haskell-Cafe mailing list >>> Haskell-Cafe at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From ollie at ocharles.org.uk Mon Apr 27 15:07:13 2015 From: ollie at ocharles.org.uk (Oliver Charles) Date: Mon, 27 Apr 2015 16:07:13 +0100 Subject: [Haskell-cafe] WebSocket on Haskell? In-Reply-To: References: Message-ID: If you want to carry on using socket-io, you might be interested in https://hackage.haskell.org/package/socket-io. On Mon, Apr 27, 2015 at 1:43 PM, Takenobu Tani wrote: > Dear cafe, > > Would you tell me reference sources for WebSocket(or Socket.io) on Haskell? > > Once I wrote a toy program with WebSocket[1]: > Node.js(backend) + JavaScript(frontend) + WebSocket(communication) > > I want to port it to Haskell backend for my exercise: > Haskell(backend) + JavaScript(frontend) + WebSocket(communication) > > > I'm glad if there are such references: > * broadcast to multi-client by WebSocket(or Socket.io) > * serve a simple top HTML page > * deploy to Heroku or public server > > > [1] https://github.com/takenobu-hs/social-drawing-old-js > > Thank you :-), > Takenobu > > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gershomb at gmail.com Mon Apr 27 17:20:14 2015 From: gershomb at gmail.com (Gershom B) Date: Mon, 27 Apr 2015 13:20:14 -0400 Subject: [Haskell-cafe] ANN: New Haskell.org Committee Members Message-ID: Following the self-nomination period and discussion, the Haskell.org committee has selected new members: * Edward Kmett (reappointment) * Ryan Trinkle * John Wiegley As per the rules of the committee, this discussion was held among the current members of the committee, and the outgoing members of the committee who were not seeking reappointment. Thank you to all candidates who submitted a self-nomination. All of the nominations we received were very strong, and we would encourage all those who nominated themselves to consider self-nominating again in the future. We would also like to thank our two outgoing members, Jason Dagit and Brent Yorgey, for their service. Cheers, Gershom -------------- next part -------------- An HTML attachment was scrubbed... URL: From erkokl at gmail.com Mon Apr 27 20:26:11 2015 From: erkokl at gmail.com (Levent Erkok) Date: Mon, 27 Apr 2015 13:26:11 -0700 Subject: [Haskell-cafe] [Job] Intel is hiring: FP/FV oriented folks most welcome.. Message-ID: The group I work for at Intel has an opening for a recent graduate (BS/MS/PhD) of a US institution. We work within the product team responsible for the Xeon Phi many-core processor which is used to build supercomputers. 
See: http://en.wikipedia.org/wiki/Xeon_Phi Our team focuses mainly on the floating-point arithmetic verification using an internal STE based model checking tool called Forte, which is based on the reFLect language, which itself is a descendant of ML. (With lazy-evaluation default, and a baked-in BDD based equivalence checking engine, which makes it supercool! Imagine Haskell with a built-in symbolic simulation engine.) We also use Haskell for internal purposes as needed. Our group has extensive freedom in the choice of tools we use. We also work on a variety of non-arithmetic verification problems, including cache coherence, ECC (error-detection/correction) algorithms, instruction length decoders, to name a few. As part of our coherence work we developed an open source explicit state distributed model checker called PReach . Our group is based in Portland, OR, with one member in Santa Clara, CA. Please let me know if you are, or you know anyone who might be interested in this position. Feel free to forward this request. -Levent. -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris at moixaenergy.com Tue Apr 28 08:36:16 2015 From: chris at moixaenergy.com (Chris Wright) Date: Tue, 28 Apr 2015 08:36:16 +0000 Subject: [Haskell-cafe] Paid internship at Energy startup Moixa Technology Message-ID: Hi We are looking for a bright sparky developer to join our team in London as a paid intern, with the potential to turn into a full time job. We are based in central London, http://moixatechnology.com/ & www.meetmaslow.com The intern's objective will be creating web interfaces for users of our innovative renewable electricity systems. This opportunity is exciting in many ways. Moixa has won a number of prizes for its innovative products. The enterprise is a cool innovative startup that welcomes open thinking and encourages personal development. London has unarguably the largest UK Haskell community, and developers of most Haskell web frameworks can be met around here. Here is what the intern would have: - good CS background - decent Haskell knowledge: no professional experience, but able to write non-trivial applications - familiarity with web development: knows HTML/CSS and worked with some web frameworks --language mostly unimportant, but hopefully not PHP :) - knowledge of browser-side JavaScript & HTML DOM Contact e-mail: chris at moixaenergy.com Best Regards Chris Chris Wright CTO - Moixa Technology www.meetmaslow.com 0207 734 1511 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mboes at tweag.net Tue Apr 28 10:07:50 2015 From: mboes at tweag.net (Mathieu Boespflug) Date: Tue, 28 Apr 2015 12:07:50 +0200 Subject: [Haskell-cafe] Improvements to package hosting and security In-Reply-To: References: <1429176843.25663.31.camel@dunky.localdomain> <1429179169.25663.59.camel@dunky.localdomain> <1429181874.25663.80.camel@dunky.localdomain> <1429185521.25663.103.camel@dunky.localdomain> <87bninfdkp.fsf@karetnikov.org> Message-ID: Hi all, last week, I found some time to write up a very simple proposal that addresses the following goals simultaneously: - maintain a difficult to forge public audit log of Hackage updates; - make downloads from Hackage mirrors just as trustworthy as downloading from Hackage itself; - guarantee that `cabal update` is always pulling the freshest package index (called "snapshots" in the proposal), and detect when this might not be the case; - implement the first half of TUF (namely the index signing part discussed in Duncan's blog post, not the author package signing part) with fewer metadata files and in a way that reuses existing tooling; - get low-implementation-cost, straightforward and incremental `cabal update`. After a preliminary review from a few colleagues and friends in the community, here is the proposal, in the form of a Commercial Haskell wiki page: https://github.com/commercialhaskell/commercialhaskell/wiki/Git-backed-Hackage-index-signing-and-distribution The design constraints here are: - stay backwards compatible where the cost for doing so is low. - reuse existing tooling and mechanisms, especially when it comes to key management, snapshot identity, and distributing signatures. - Focus on the above 5 goals only, because they happen to all be solvable by changing a single piece of mechanism. But strive to reuse whatever mechanism others are proposing to solve other goals (e.g. certification of provenance using author package signing, as Chris Done has already proposed). To that effect, the tl;dr is that I'm proposing that we just use Git for maintaining the Hackage package index, that we use Git for synchronizing this locally, and that we use Git commit signatures for implementing the first half of TUF. The Git tooling currently assumes GnuPG keys for signatures, so I'm proposing that we use GnuPG keys for signing, and that we manage key revocation and any trust delegation between keys using GnuPG and its existing infrastructure. I estimate the total effort necessary here to be the equivalent of 5-6 full time days overall. However, I have not pooled the necessary resources to carry that out yet. I'd like to get feedback first before going ahead with this, but in the meantime, ** if there are any volunteers that would like to signal their intent to help with the implementation effort then please add your name at the bottom of the wiki page. ** Best, Mathieu On 18 April 2015 at 20:11, Michael Snoyman wrote: > > > On Sat, Apr 18, 2015 at 12:20 AM Bardur Arantsson > > wrote: >> >> On 17-04-2015 10:17, Michael Snoyman wrote: >> > This is a great idea, thank you both for raising it. I was discussing >> > something similar with others in a text chat earlier this morning. I've >> > gone ahead and put together a page to cover this discussion: >> > >> > >> > https://github.com/commercialhaskell/commercialhaskell/blob/master/proposal/improved-hackage-security.md >> > >> > The document definitely needs more work, this is just meant to get the >> > ball >> > rolling.
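On the client side, the proposal above amounts to this: `cabal update` becomes a fast-forward fetch of the index repository plus a GnuPG signature check on the new tip. A rough sketch of that step, shelling out to git (the local path is an illustrative assumption, and the maintainers' public keys are assumed to already be in the local GnuPG keyring):

```
import System.Process (callProcess)

-- Hypothetical local clone of the Hackage index repository.
indexDir :: FilePath
indexDir = "/home/user/.cabal/hackage-index"

-- Fetch the latest index, refusing non-fast-forward updates, then verify
-- the signature on the tip commit.  callProcess throws if either command
-- fails, so a bad signature aborts the update.
updateIndex :: IO ()
updateIndex = do
  callProcess "git" ["-C", indexDir, "pull", "--ff-only"]
  callProcess "git" ["-C", indexDir, "verify-commit", "HEAD"]
```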
As usual with the commercialhaskell repo, if anyone wants edit >> > access, just request it on the issue tracker. Or most likely, send a PR >> > and >> > you'll get a commit bit almost magically ;) >> >> Thank you. Just to make sure that I understand -- is this page only >> meant to cover the original "strawman proposal" at the start of this >> thread, or...? >> >> Maybe you intend for this to be extended in a detailed way under the >> "Long-term solutions" heading? >> >> I was imagining a wiki page which could perhaps start out by collecting >> all the currently identified possible threats in a table, and then all >> "participants" could perhaps fill in how their suggestion addresses >> those threats (or tell us why we shouldn't care about this particular >> threat). Of course other relevent non-threat considerations might be >> relevant to add to such a table, such as: how prevalent is the >> software/idea we're basing this on? does this have any prior >> implementation (e.g. the append-to-tar and expect that web servers will >> behave sanely thing)? etc. >> >> (I realize that I'm asking for a lot of work, but I think it's going to >> be necessary, at least if there's going to be consensus and not just a >> de-facto "winner".) >> >> > > Hi Bardur, > > > I don't think I have any different intention for this page than you've > identified. In fact, I thought that I had clearly said exactly what you > described when I said: > >> There are various ideas at play already. The bullets are not intended to >> be full representations of the proposals, but rather high level summaries. >> We should continue to expand this page with more details going forward. > > If this is unclear somehow, please tell me. But my intention absolutely is > that many people can edit this page to add their ideas and we can flesh out > a complete solution. > > Michael > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > From takenobu.hs at gmail.com Tue Apr 28 12:44:35 2015 From: takenobu.hs at gmail.com (Takenobu Tani) Date: Tue, 28 Apr 2015 21:44:35 +0900 Subject: [Haskell-cafe] WebSocket on Haskell? In-Reply-To: References: Message-ID: Hi Oliver, Wow, nice library! It's very useful for broadcast, command handling and easy connection. And I like your JSON APIs. Thank you for your pretty work:-), Takenobu 2015-04-28 0:07 GMT+09:00 Oliver Charles : > If you want to carry on using socket-io, you might be interested in > https://hackage.haskell.org/package/socket-io. > > On Mon, Apr 27, 2015 at 1:43 PM, Takenobu Tani > wrote: > >> Dear cafe, >> >> Would you tell me reference sources for WebSocket(or Socket.io) on >> Haskell? 
>> >> Once I wrote a toy program with WebSocket[1]: >> Node.js(backend) + JavaScript(frontend) + WebSocket(communication) >> >> I want to port it to Haskell backend for my exercise: >> Haskell(backend) + JavaScript(frontend) + WebSocket(communication) >> >> >> I'm glad if there are such references: >> * broadcast to multi-client by WebSocket(or Socket.io) >> * serve a simple top HTML page >> * deploy to Heroku or public server >> >> >> [1] https://github.com/takenobu-hs/social-drawing-old-js >> >> Thank you :-), >> Takenobu >> >> >> _______________________________________________ >> Haskell-Cafe mailing list >> Haskell-Cafe at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicholls.mark at vimn.com Tue Apr 28 14:45:54 2015 From: nicholls.mark at vimn.com (Nicholls, Mark) Date: Tue, 28 Apr 2015 14:45:54 +0000 Subject: [Haskell-cafe] dependent types, singleton types.... Message-ID: Can someone check my answer (no I'm not doing an assessment...I'm actually learning stuff out of interest!) working through https://www.fpcomplete.com/user/konn/prove-your-haskell-for-great-safety/dependent-types-in-haskell still there is a section about singleton types and the exercise is "Exercise: Define the binary tree type and implement its singleton type." Ok, I think I'm probably wrong....a binary tree is something like... > data BTree a = Leaf | Branch a (BTree a) (BTree a) With DataKind My logic goes... Leaf is an uninhabited type, so I need a value isomorphic to it.... Easy? > data SBTree a where > SLeaf :: SBTree Leaf Things like Branch Integer Leaf (Branch String Leaf Leaf) Are uninhabited...so I need to add > SBranch :: (a :: *) -> (SBTree (b :: BTree *)) -> (SBTree (c :: BTree *)) -> SBTree (Branch a b c) ? It compiles...but....is it actually correct? Things like > y = SBranch (SS (SS SZ)) SLeaf SLeaf > z = SBranch (SS (SS SZ)) (SBranch SZ SLeaf SLeaf) SLeaf Seem to make sense ish. From: Nicholls, Mark Sent: 28 April 2015 9:33 AM To: Nicholls, Mark Subject: sds Hello, working through https://www.fpcomplete.com/user/konn/prove-your-haskell-for-great-safety/dependent-types-in-haskell but a bit stuck...with an error... > {-# LANGUAGE DataKinds, TypeFamilies, TypeOperators, UndecidableInstances, GADTs, StandaloneDeriving #-} > data Nat = Z | S Nat > data Vector a n where > Nil :: Vector a Z > (:-) :: a -> Vector a n -> Vector a (S n) > infixr 5 :- I assume init...is a bit like tail but take n - 1 elements from the front....but... > init' :: Vector a ('S n) -> Vector a n > init' (x :- Nil) = Nil > init' (x :- xs@(_ :- _)) = x :- (init' xs) > zipWithSame :: (a -> b -> c) -> Vector a n -> Vector b n -> Vector c n > zipWithSame f Nil Nil = Nil > zipWithSame f (x :- xs) (y :- xs@(_ :- _)) = Nil Mark Nicholls | Senior Technical Director, Programmes & Development - Viacom International Media Networks A: 17-29 Hawley Crescent London NW1 8TT | e: Nicholls.Mark at vimn.com T: +44 (0)203 580 2223 [Description: cid:image001.png at 01CD488D.9204D030] CONFIDENTIALITY NOTICE This e-mail (and any attached files) is confidential and protected by copyright (and other intellectual property rights). If you are not the intended recipient please e-mail the sender and then delete the email and any attached files immediately. Any further use or dissemination is prohibited. 
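On the singleton exercise above: one common formulation makes every field of the singleton constructor itself a singleton, so that the type pins down the elements as well as the shape of the tree; and the zipWithSame draft has, among other things, the name xs bound twice in its second clause. Below is a sketch of both, restricted to trees of Nats for simplicity (the singletons package generalises this with a kind-indexed Sing family); it is an illustrative sketch, not an authoritative answer to the question.

```
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

data Nat = Z | S Nat

data SNat (n :: Nat) where
  SZ :: SNat 'Z
  SS :: SNat n -> SNat ('S n)

data BTree a = Leaf | Branch a (BTree a) (BTree a)

-- Singleton for a promoted BTree of Nats: each index component is
-- mirrored by a singleton argument.
data SBTree (t :: BTree Nat) where
  SLeaf   :: SBTree 'Leaf
  SBranch :: SNat a -> SBTree l -> SBTree r -> SBTree ('Branch a l r)

data Vector a (n :: Nat) where
  Nil  :: Vector a 'Z
  (:-) :: a -> Vector a n -> Vector a ('S n)
infixr 5 :-

-- Conventional zipWithSame: distinct names for the two tails.
zipWithSame :: (a -> b -> c) -> Vector a n -> Vector b n -> Vector c n
zipWithSame _ Nil       Nil       = Nil
zipWithSame f (x :- xs) (y :- ys) = f x y :- zipWithSame f xs ys
```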
While MTV Networks Europe has taken steps to ensure that this email and any attachments are virus free, it is your responsibility to ensure that this message and any attachments are virus free and do not affect your systems / data. Communicating by email is not 100% secure and carries risks such as delay, data corruption, non-delivery, wrongful interception and unauthorised amendment. If you communicate with us by e-mail, you acknowledge and assume these risks, and you agree to take appropriate measures to minimise these risks when e-mailing us. MTV Networks International, MTV Networks UK & Ireland, Greenhouse, Nickelodeon Viacom Consumer Products, VBSi, Viacom Brand Solutions International, Be Viacom, Viacom International Media Networks and VIMN and Comedy Central are all trading names of MTV Networks Europe. MTV Networks Europe is a partnership between MTV Networks Europe Inc. and Viacom Networks Europe Inc. Address for service in Great Britain is 17-29 Hawley Crescent, London, NW1 8TT. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 22096 bytes Desc: image001.png URL: From mboes at tweag.net Tue Apr 28 21:07:21 2015 From: mboes at tweag.net (Mathieu Boespflug) Date: Tue, 28 Apr 2015 23:07:21 +0200 Subject: [Haskell-cafe] Improvements to package hosting and security In-Reply-To: References: <1429176843.25663.31.camel@dunky.localdomain> <1429179169.25663.59.camel@dunky.localdomain> <1429181874.25663.80.camel@dunky.localdomain> <1429185521.25663.103.camel@dunky.localdomain> <87bninfdkp.fsf@karetnikov.org> Message-ID: This is a valid concern. One that I should have addressed explicitly in the proposal. Git is fairly well supported on Windows these days and installs easily. It could conceivably be included as part of MinGHC. There are many alternatives, but I doubt we'll need them: statically linking a C implementation (libgit2 or another), or a simple native implementation of the git protocol (the protocol is quite straightforward and is documented) and basic disk format. The same is true about GnuPG, via gpg4win, though note that under this proposal GnuPG wouldn't be a requirement for `cabal update` to work. Just an additional optional dependency which you'll want to have installed if you want to protect yourself from the attacks listed in the proposal. By the way, one side note about this Git proposal: it sidesteps the discussion around how to add SSL support to cabal-install entirely. Git understands (among others) HTTPS natively, so we can outsource our support for that to Git. In any case SSL is no longer a necessity for protecting against MITM (the commit signing takes care of that), only a nice-to-have for privacy. On 28 April 2015 at 19:46, wrote: > To me, the elephant in the room is how the dependency on Git will be > handled. I'm not a Windows user, but how much more painful will it be to set > up a Haskell environment on Windows with a new dependency on Git? Will users > need to install it separately, or do you suggest embedding Git into the > relevant tools? Should the Haskell Platform bundle it? What about MinGHC? > Oh, and I guess the same question can be asked about GnuPG. > > I personally use a Mac and Homebrew, so it's pretty easy for me to install > those dependencies, and I'm sure the same is true on Linux.
But also, not > everyone uses Homebrew (in fact, I'm sure most programmers on Macs don't use > it), so it's also worth considering whether the requisite tools should be > embedded in the "GHC for Mac OS X" distribution. > > On Linux this probably isn't an issue because pretty much everyone has a > decent dependency-tracking package manager. > > I don't know if you care personally about these issues, but I think any > proposal which introduces new dependencies to the core development > environment of Haskell should take it into consideration. Very few people > have Git and GPG already installed, and I think the new-user experience > should be considered, and I'm surprised nobody has mentioned it in this > entire thread (unless I missed it). > > -- radix (Christopher Armstrong) > > P.S. I'm very excited to see this work, including the emphasis on using the > well-researched TUF. Thanks to you and other people working on this. :) > > On Tuesday, April 28, 2015 at 5:07:56 AM UTC-5, Mathieu Boespflug wrote: >> >> Hi all, >> >> last week, I found some time to write up a very simple proposal that >> addresses the following goals simultaneously: >> >> - maintain a difficult to forge public audit log of Hackage updates; >> - make downloads from Hackage mirrors just as trustworthy as >> downloading from Hackage itself; >> - guarantee that `cabal update` is always pulling the freshest package >> index (called "snapshots" in the proposal), and detect when this might >> not be the case; >> - implement the first half of TUF (namely the index signing part >> discussed in Duncan's blog post, not the author package signing part) >> with fewer metadata files and in a way that reuses existing tooling; >> - get low-implementation-cost, straightforward and incremental `cabal >> update`. >> >> After a preliminary review from a few colleagues and friends in the >> community, here is the proposal, in the form of Commercial Haskell >> wiki page: >> >> >> https://github.com/commercialhaskell/commercialhaskell/wiki/Git-backed-Hackage-index-signing-and-distribution >> >> The design constraints here are: >> >> - stay backwards compatible where the cost for doing so is low. >> - reuse existing tooling and mechanisms, especially when it comes to >> key management, snapshot identity, and distributing signatures. >> - Focus on the above 5 goals only, because they happen to all be >> solvable by changing a single piece of mechanism. But strive to reuse >> whatever mechanism others are proposing to solve other goals (e.g. >> certification of provenance using author package signing, as Chris >> Done has already proposed). >> >> To that effect, the tl;dr is that I'm proposing that we just use Git >> for maintaining the Hackage package index, that we use Git for >> synchronizing this locally, and that we use Git commit signatures for >> implementing the first half of TUF. The Git tooling currently assumes >> GnuPG keys for signatures, so I'm proposing that we use GnuPG keys for >> signing, and that we manage key revocation and any trust delegation >> between keys using GnuPG and its existing infrasture. >> >> I estimate the total effort necessary here to be the equivalent of 5-6 >> full time days overall. However, I have not pooled the necessary >> resources to carry that out yet. 
I'd like to get feedback first before >> going ahead with this, but in meantime, >> >> ** if there are any volunteers that would like to signal their intent >> to help with the implementation effort then please add your name at >> the bottom of the wiki page. ** >> >> Best, >> >> Mathieu >> >> On 18 April 2015 at 20:11, Michael Snoyman wrote: >> > >> > >> > On Sat, Apr 18, 2015 at 12:20 AM Bardur Arantsson >> > >> > wrote: >> >> >> >> On 17-04-2015 10:17, Michael Snoyman wrote: >> >> > This is a great idea, thank you both for raising it. I was discussing >> >> > something similar with others in a text chat earlier this morning. >> >> > I've >> >> > gone ahead and put together a page to cover this discussion: >> >> > >> >> > >> >> > >> >> > https://github.com/commercialhaskell/commercialhaskell/blob/master/proposal/improved-hackage-security.md >> >> > >> >> > The document definitely needs more work, this is just meant to get >> >> > the >> >> > ball >> >> > rolling. As usual with the commercialhaskell repo, if anyone wants >> >> > edit >> >> > access, just request it on the issue tracker. Or most likely, send a >> >> > PR >> >> > and >> >> > you'll get a commit bit almost magically ;) >> >> >> >> Thank you. Just to make sure that I understand -- is this page only >> >> meant to cover the original "strawman proposal" at the start of this >> >> thread, or...? >> >> >> >> Maybe you intend for this to be extended in a detailed way under the >> >> "Long-term solutions" heading? >> >> >> >> I was imagining a wiki page which could perhaps start out by collecting >> >> all the currently identified possible threats in a table, and then all >> >> "participants" could perhaps fill in how their suggestion addresses >> >> those threats (or tell us why we shouldn't care about this particular >> >> threat). Of course other relevent non-threat considerations might be >> >> relevant to add to such a table, such as: how prevalent is the >> >> software/idea we're basing this on? does this have any prior >> >> implementation (e.g. the append-to-tar and expect that web servers will >> >> behave sanely thing)? etc. >> >> >> >> (I realize that I'm asking for a lot of work, but I think it's going to >> >> be necessary, at least if there's going to be consensus and not just a >> >> de-facto "winner".) >> >> >> >> >> > >> > Hi Bardur, >> > >> > >> > I don't think I have any different intention for this page than you've >> > identified. In fact, I thought that I had clearly said exactly what you >> > described when I said: >> > >> >> There are various ideas at play already. The bullets are not intended >> >> to >> >> be full representations of the proposals, but rather high level >> >> summaries. >> >> We should continue to expand this page with more details going forward. >> > >> > If this is unclear somehow, please tell me. But my intention absolutely >> > is >> > that many people can edit this page to add their ideas and we can flesh >> > out >> > a complete solution. >> > >> > Michael >> > >> > _______________________________________________ >> > Haskell-Cafe mailing list >> > Haskel... at haskell.org >> > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >> > >> _______________________________________________ >> Haskell-Cafe mailing list >> Haskel... 
at haskell.org >> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe From mboes at tweag.net Tue Apr 28 21:09:21 2015 From: mboes at tweag.net (Mathieu Boespflug) Date: Tue, 28 Apr 2015 23:09:21 +0200 Subject: [Haskell-cafe] Improvements to package hosting and security In-Reply-To: References: <1429176843.25663.31.camel@dunky.localdomain> <1429179169.25663.59.camel@dunky.localdomain> <1429181874.25663.80.camel@dunky.localdomain> <1429185521.25663.103.camel@dunky.localdomain> <87bninfdkp.fsf@karetnikov.org> Message-ID: [removing erroneous haskell-cafe at googlegroups.com from To list.] On 28 April 2015 at 23:07, Mathieu Boespflug wrote: > This is a valid concern. One that I should have addressed explicitly > in the proposal. Git is fairly well supported on Windows these days > and installs easily. It could conceivably be included as part of > MinGHC. There are many alternatives, but I doubt we'll need them: > statically linking a C implementation (libgit2 or another), or a > simple native implementation of the git protocol (the protocol is > quite straightforward and is documented) and basic disk format. > > The same is true about GnuPG, via gpg4win, though note that under this > proposal GnuPG wouldn't be a requirement for `cabal update` to work. > Just an additional optional dependency which you'll want to have > installed if you want to protect yourself from the attacks listed in > the proposal. > > By the way, one side note about this Git proposal: it sides steps the > discussion around how to add SSL support to cabal-install entirely. > Since Git understands (among others) HTTPS natively, so we can > outsource our support for that to Git. In any case SSL no longer > becomes a necessity for protecting against MITM (the commit signing > takes care of that), only a nice-to-have for privacy. > > On 28 April 2015 at 19:46, wrote: >> To me, the elephant in the room is how the dependency on Git will be >> handled. I'm not a Windows user, but how much more painful will it be to set >> up a Haskell environment on Windows with a new dependency on Git? Will users >> need to install it separately, or do you suggest embedding Git into the >> relevant tools? Should the Haskell Platform bundle it? What about MinGHC? >> Oh, and I guess the same question can be asked about GnuPG. >> >> I personally use a Mac and Homebrew, so it's pretty easy for me to install >> those dependencies, and I'm sure the same is true on Linux. But also, not >> everyone uses Homebrew (in fact, I'm sure most programmers on Macs don't use >> it), so it's also worth considering whether the requisite tools should be >> embedded in the "GHC for Mac OS X" distribution. >> >> On Linux this probably isn't an issue because pretty much everyone has a >> decent dependency-tracking package manager. >> >> I don't know if you care personally about these issues, but I think any >> proposal which introduces new dependencies to the core development >> environment of Haskell should take it into consideration. Very few people >> have Git and GPG already installed, and I think the new-user experience >> should be considered, and I'm surprised nobody has mentioned it in this >> entire thread (unless I missed it). >> >> -- radix (Christopher Armstrong) >> >> P.S. I'm very excited to see this work, including the emphasis on using the >> well-researched TUF. Thanks to you and other people working on this. 
:) >> >> On Tuesday, April 28, 2015 at 5:07:56 AM UTC-5, Mathieu Boespflug wrote: >>> >>> Hi all, >>> >>> last week, I found some time to write up a very simple proposal that >>> addresses the following goals simultaneously: >>> >>> - maintain a difficult to forge public audit log of Hackage updates; >>> - make downloads from Hackage mirrors just as trustworthy as >>> downloading from Hackage itself; >>> - guarantee that `cabal update` is always pulling the freshest package >>> index (called "snapshots" in the proposal), and detect when this might >>> not be the case; >>> - implement the first half of TUF (namely the index signing part >>> discussed in Duncan's blog post, not the author package signing part) >>> with fewer metadata files and in a way that reuses existing tooling; >>> - get low-implementation-cost, straightforward and incremental `cabal >>> update`. >>> >>> After a preliminary review from a few colleagues and friends in the >>> community, here is the proposal, in the form of Commercial Haskell >>> wiki page: >>> >>> >>> https://github.com/commercialhaskell/commercialhaskell/wiki/Git-backed-Hackage-index-signing-and-distribution >>> >>> The design constraints here are: >>> >>> - stay backwards compatible where the cost for doing so is low. >>> - reuse existing tooling and mechanisms, especially when it comes to >>> key management, snapshot identity, and distributing signatures. >>> - Focus on the above 5 goals only, because they happen to all be >>> solvable by changing a single piece of mechanism. But strive to reuse >>> whatever mechanism others are proposing to solve other goals (e.g. >>> certification of provenance using author package signing, as Chris >>> Done has already proposed). >>> >>> To that effect, the tl;dr is that I'm proposing that we just use Git >>> for maintaining the Hackage package index, that we use Git for >>> synchronizing this locally, and that we use Git commit signatures for >>> implementing the first half of TUF. The Git tooling currently assumes >>> GnuPG keys for signatures, so I'm proposing that we use GnuPG keys for >>> signing, and that we manage key revocation and any trust delegation >>> between keys using GnuPG and its existing infrasture. >>> >>> I estimate the total effort necessary here to be the equivalent of 5-6 >>> full time days overall. However, I have not pooled the necessary >>> resources to carry that out yet. I'd like to get feedback first before >>> going ahead with this, but in meantime, >>> >>> ** if there are any volunteers that would like to signal their intent >>> to help with the implementation effort then please add your name at >>> the bottom of the wiki page. ** >>> >>> Best, >>> >>> Mathieu >>> >>> On 18 April 2015 at 20:11, Michael Snoyman wrote: >>> > >>> > >>> > On Sat, Apr 18, 2015 at 12:20 AM Bardur Arantsson >>> > >>> > wrote: >>> >> >>> >> On 17-04-2015 10:17, Michael Snoyman wrote: >>> >> > This is a great idea, thank you both for raising it. I was discussing >>> >> > something similar with others in a text chat earlier this morning. >>> >> > I've >>> >> > gone ahead and put together a page to cover this discussion: >>> >> > >>> >> > >>> >> > >>> >> > https://github.com/commercialhaskell/commercialhaskell/blob/master/proposal/improved-hackage-security.md >>> >> > >>> >> > The document definitely needs more work, this is just meant to get >>> >> > the >>> >> > ball >>> >> > rolling. As usual with the commercialhaskell repo, if anyone wants >>> >> > edit >>> >> > access, just request it on the issue tracker. 
Or most likely, send a >>> >> > PR >>> >> > and >>> >> > you'll get a commit bit almost magically ;) >>> >> >>> >> Thank you. Just to make sure that I understand -- is this page only >>> >> meant to cover the original "strawman proposal" at the start of this >>> >> thread, or...? >>> >> >>> >> Maybe you intend for this to be extended in a detailed way under the >>> >> "Long-term solutions" heading? >>> >> >>> >> I was imagining a wiki page which could perhaps start out by collecting >>> >> all the currently identified possible threats in a table, and then all >>> >> "participants" could perhaps fill in how their suggestion addresses >>> >> those threats (or tell us why we shouldn't care about this particular >>> >> threat). Of course other relevent non-threat considerations might be >>> >> relevant to add to such a table, such as: how prevalent is the >>> >> software/idea we're basing this on? does this have any prior >>> >> implementation (e.g. the append-to-tar and expect that web servers will >>> >> behave sanely thing)? etc. >>> >> >>> >> (I realize that I'm asking for a lot of work, but I think it's going to >>> >> be necessary, at least if there's going to be consensus and not just a >>> >> de-facto "winner".) >>> >> >>> >> >>> > >>> > Hi Bardur, >>> > >>> > >>> > I don't think I have any different intention for this page than you've >>> > identified. In fact, I thought that I had clearly said exactly what you >>> > described when I said: >>> > >>> >> There are various ideas at play already. The bullets are not intended >>> >> to >>> >> be full representations of the proposals, but rather high level >>> >> summaries. >>> >> We should continue to expand this page with more details going forward. >>> > >>> > If this is unclear somehow, please tell me. But my intention absolutely >>> > is >>> > that many people can edit this page to add their ideas and we can flesh >>> > out >>> > a complete solution. >>> > >>> > Michael >>> > >>> > _______________________________________________ >>> > Haskell-Cafe mailing list >>> > Haskel... at haskell.org >>> > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe >>> > >>> _______________________________________________ >>> Haskell-Cafe mailing list >>> Haskel... at haskell.org >>> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe From onepoint at starurchin.org Tue Apr 28 21:48:21 2015 From: onepoint at starurchin.org (Jeremy Henty) Date: Tue, 28 Apr 2015 22:48:21 +0100 Subject: [Haskell-cafe] Cannot run GHC-7.10.1 tests Message-ID: <20150428214821.GE13478@omphalos.singularity> I successfully built GHC-7.10.1 but I cannot run the tests (at least not when I follow the instructions at https://ghc.haskell.org/trac/ghc/wiki/Building/RunningTests). "make fast" fails like this: ===--- building phase 0 make -r --no-print-directory -f ghc.mk phase=0 phase_0_builds make[1]: Nothing to be done for `phase_0_builds'. ===--- building phase 1 make -r --no-print-directory -f ghc.mk phase=1 phase_1_builds make[1]: Nothing to be done for `phase_1_builds'. ===--- building final phase make -r --no-print-directory -f ghc.mk phase=final fast make[1]: *** No rule to make target `fast'. Stop. make: *** [fast] Error 2 "make testfast" fails in exactly the same way, except that the broken make target is "testfast" instead of fast. 
"make test" and "make fulltest" fail like this: make -C testsuite/tests CLEANUP=1 OUTPUT_SUMMARY=../../testsuite_summary.txt fast make[1]: Entering directory `/data/build.d/6.8/ghc-7.10.1/testsuite/tests' ../mk/boilerplate.mk:168: ../mk/ghcconfig_data_build.d_6.8_ghc-7.10.1_inplace_bin_ghc-stage2.mk: No such file or directory ../mk/ghc-config "/data/build.d/6.8/ghc-7.10.1/inplace/bin/ghc-stage2" >"../mk/ghcconfig_data_build.d_6.8_ghc-7.10.1_inplace_bin_ghc-stage2.mk"; if [ $? != 0 ]; then rm -f "../mk/ghcconfig_data_build.d_6.8_ghc-7.10.1_inplace_bin_ghc-stage2.mk"; exit 1; fi /bin/sh: ../mk/ghc-config: cannot execute binary file make[1]: *** [../mk/ghcconfig_data_build.d_6.8_ghc-7.10.1_inplace_bin_ghc-stage2.mk] Error 1 make[1]: Leaving directory `/data/build.d/6.8/ghc-7.10.1/testsuite/tests' make: *** [test] Error 2 Interestingly, for GHC-7.8.4 "make fast" and "make testfast" fail in exactly the same way but "make test" and "make fulltest" both work. So, how do I test GHC-7.10.1 ? Regards, Jeremy Henty From christiaan.baaij at gmail.com Tue Apr 28 22:14:00 2015 From: christiaan.baaij at gmail.com (Christiaan Baaij) Date: Wed, 29 Apr 2015 00:14:00 +0200 Subject: [Haskell-cafe] Cannot run GHC-7.10.1 tests In-Reply-To: <20150428214821.GE13478@omphalos.singularity> References: <20150428214821.GE13478@omphalos.singularity> Message-ID: You are running these commands from _within_ the 'testsuite' directory, right? On 28 April 2015 at 23:48, Jeremy Henty wrote: > > I successfully built GHC-7.10.1 but I cannot run the tests (at least > not when I follow the instructions at > https://ghc.haskell.org/trac/ghc/wiki/Building/RunningTests). > > "make fast" fails like this: > > ===--- building phase 0 > make -r --no-print-directory -f ghc.mk phase=0 phase_0_builds > make[1]: Nothing to be done for `phase_0_builds'. > ===--- building phase 1 > make -r --no-print-directory -f ghc.mk phase=1 phase_1_builds > make[1]: Nothing to be done for `phase_1_builds'. > ===--- building final phase > make -r --no-print-directory -f ghc.mk phase=final fast > make[1]: *** No rule to make target `fast'. Stop. > make: *** [fast] Error 2 > > "make testfast" fails in exactly the same way, except that the broken > make target is "testfast" instead of fast. > > "make test" and "make fulltest" fail like this: > > make -C testsuite/tests CLEANUP=1 > OUTPUT_SUMMARY=../../testsuite_summary.txt fast > make[1]: Entering directory > `/data/build.d/6.8/ghc-7.10.1/testsuite/tests' > ../mk/boilerplate.mk:168: ../mk/ > ghcconfig_data_build.d_6.8_ghc-7.10.1_inplace_bin_ghc-stage2.mk: No such > file or directory > ../mk/ghc-config "/data/build.d/6.8/ghc-7.10.1/inplace/bin/ghc-stage2" > >"../mk/ghcconfig_data_build.d_6.8_ghc-7.10.1_inplace_bin_ghc-stage2.mk"; > if [ $? != 0 ]; then rm -f "../mk/ > ghcconfig_data_build.d_6.8_ghc-7.10.1_inplace_bin_ghc-stage2.mk"; exit 1; > fi > /bin/sh: ../mk/ghc-config: cannot execute binary file > make[1]: *** [../mk/ > ghcconfig_data_build.d_6.8_ghc-7.10.1_inplace_bin_ghc-stage2.mk] Error 1 > make[1]: Leaving directory > `/data/build.d/6.8/ghc-7.10.1/testsuite/tests' > make: *** [test] Error 2 > > Interestingly, for GHC-7.8.4 "make fast" and "make testfast" fail in > exactly the same way but "make test" and "make fulltest" both work. > > So, how do I test GHC-7.10.1 ? 
> > Regards, > > Jeremy Henty > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at cs.dartmouth.edu Wed Apr 29 00:40:25 2015 From: doug at cs.dartmouth.edu (Doug McIlroy) Date: Tue, 28 Apr 2015 20:40:25 -0400 Subject: [Haskell-cafe] Prime sieve'' and Haskell demo Message-ID: <201504290040.t3T0ePcl019332@coolidge.cs.dartmouth.edu> >From vicki.smith at hanovernh.org Tue Apr 28 16:15:03 2015 X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on mail.cs.dartmouth.edu X-Spam-Level: X-Spam-Status: No, score=-1.9 required=5.0 tests=BAYES_00,HTML_MESSAGE autolearn=ham autolearn_force=no version=3.4.0 Received: from mailhub27.dartmouth.edu (mailhub27.dartmouth.edu [129.170.204.251]) by mail.cs.dartmouth.edu (8.14.8/8.14.8) with ESMTP id t3SKF17I005566 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=NO) for ; Tue, 28 Apr 2015 16:15:01 -0400 Received: from na01-by2-obe.outbound.protection.outlook.com (mail-by2on0119.outbound.protection.outlook.com [207.46.100.119]) by mailhub27.dartmouth.edu (8.13.5/DND2.0/8.13.5) with ESMTP id t3SKE8BW008313 (version=TLSv1/SSLv3 cipher=AES256-SHA256 bits=256 verify=FAIL); Tue, 28 Apr 2015 16:14:16 -0400 Received: from BN3PR02MB1224.namprd02.prod.outlook.com (25.162.168.26) by BN1PR0201MB0835.namprd02.prod.outlook.com (25.160.170.155) with Microsoft SMTP Server (TLS) id 15.1.148.16; Tue, 28 Apr 2015 20:14:06 +0000 Received: from BN3PR02MB1224.namprd02.prod.outlook.com ([25.162.168.26]) by BN3PR02MB1224.namprd02.prod.outlook.com ([25.162.168.26]) with mapi id 15.01.0148.008; Tue, 28 Apr 2015 20:14:05 +0000 From: Vicki Smith To: Beth Rivard , Betsy Smith , "Russ Rohloff (Russ.Rohloff at pathwaysconsult.com)" , "Seale, Perry (perry.seale at hypertherm.com) (perry.seale at hypertherm.com)" , "(James.Kennedy at valley.net)" , "Douglas McIlroy (mcilroy at dartmouth.edu)" , Ed Chamberlain New , "Edwin Chamberlain (edwin.chamberlain at valley.net)" , Hugh Mellert , "John Trummel (trummel at valley.net)" , "Michael Mayor (michael.b.mayor at hitchcock.org)" , "Peter Christie (ptrchristie at gmail.com)" , Whit Spaulding Subject: Planning Board Agenda for next week Thread-Topic: Planning Board Agenda for next week Thread-Index: AdCB7+4fJecpOFT+QOuBUNwg5Qn5fg== Date: Tue, 28 Apr 2015 20:14:05 +0000 Message-ID: Accept-Language: en-US Content-Language: en-US X-MS-Has-Attach: yes X-MS-TNEF-Correlator: authentication-results: hanovernh.org; dkim=none (message not signed) header.d=none; x-originating-ip: [216.177.11.102] x-microsoft-antispam: UriScan:;BCL:0;PCL:0;RULEID:;SRVR:BN1PR0201MB0835; x-forefront-antispam-report: BMV:1;SFV:NSPM;SFS:(10019020)(6009001)(74316001)(86362001)(19625215002)(16236675004)(99936001)(588024002)(229853001)(50986999)(46102003)(19300405004)(19580395003)(40100003)(33656002)(558084003)(2900100001)(102836002)(66066001)(15975445007)(77156002)(99286002)(76576001)(2171001)(5001920100001)(122556002)(87936001)(5001770100001)(19609705001)(92566002)(107886001)(2656002)(54356999)(62966003)(921003)(1121003);DIR:OUT;SFP:1102;SCL:1;SRVR:BN1PR0201MB0835;H:BN3PR02MB1224.namprd02.prod.outlook.com;FPR:;SPF:None;MLV:sfv;LANG:en; x-microsoft-antispam-prvs: x-exchange-antispam-report-test: UriScan:; x-exchange-antispam-report-cfa-test: 
Here is the agenda for next week.

Vicki
-------------- next part --------------
A non-text attachment was scrubbed...
Name: 05-05-2015.doc
Type: application/msword
Size: 50688 bytes
Desc: Planning Board agenda for Tuesday, May 5, 2015
//////////////////////////////////////////////////////////////////////////// //////////////////////////////////////////////////////////////////////////// //////////////////////////////////////////////////////////////////////////// //////////////////////88YjpTb3VyY2VzIFNlbGVjdGVkU3R5bGU9IlxBUEEuWFNMIiBTdHls ZU5hbWU9IkFQQSIgeG1sbnM6Yj0iaHR0cDovL3NjaGVtYXMub3BlbnhtbGZvcm1hdHMub3JnL29m ZmljZURvY3VtZW50LzIwMDYvYmlibGlvZ3JhcGh5IiB4bWxucz0iaHR0cDovL3NjaGVtYXMub3Bl bnhtbGZvcm1hdHMub3JnL29mZmljZURvY3VtZW50LzIwMDYvYmlibGlvZ3JhcGh5Ij48L2I6U291 cmNlcz4NCgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAPD94bWwgdmVyc2lv bj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiIHN0YW5kYWxvbmU9Im5vIj8+DQo8ZHM6ZGF0YXN0b3Jl SXRlbSBkczppdGVtSUQ9IntFRTAxRTRBNi1DMUI1LTRDMUYtODNCMC05OEVBRDYxNzM1MjZ9IiB4 bWxuczpkcz0iaHR0cDovL3NjaGVtYXMub3BlbnhtbGZvcm1hdHMub3JnL29mZmljZURvY3VtZW50 LzIwMDYvY3VzdG9tWG1sIj48ZHM6c2NoZW1hUmVmcz48ZHM6c2NoZW1hUmVmIGRzOnVyaT0iaHR0 cDovL3NjaGVtYXMub3BlbnhtbGZvcm1hdHMub3JnL29mZmljZURvY3VtZW50LzIwMDYvYmlibGlv Z3JhcGh5Ii8+PC9kczpzY2hlbWFSZWZzPjwvZHM6ZGF0YXN0b3JlSXRlbT4AAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQD+/wMKAAD/////BgkCAAAAAADAAAAAAAAA RiAAAABNaWNyb3NvZnQgV29yZCA5Ny0yMDAzIERvY3VtZW50AAoAAABNU1dvcmREb2MAEAAAAFdv cmQuRG9jdW1lbnQuOAD0ObJxAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAA --_004_BN3PR02MB1224A7F725FB8D0626CBD179FAE80BN3PR02MB1224namp_-- From ok at cs.otago.ac.nz Wed Apr 29 01:14:36 2015 From: ok at cs.otago.ac.nz (Richard A. O'Keefe) Date: Wed, 29 Apr 2015 13:14:36 +1200 Subject: [Haskell-cafe] Coplanarity or Colinearity [Was: low-cost matrix rank?] In-Reply-To: References: Message-ID: <28B3FCD1-5BE8-4769-8BA9-A20D8426F4C2@cs.otago.ac.nz> On 26/04/2015, at 1:53 am, Mike Meyer wrote: > My real problem is that I've got a list of points in R3 and want to decide if they determine a plane, meaning they are coplanar but not colinear. Similarly, given a list of points in R2, I want to verify that they aren't colinear. Both of these can be done by converting the list of points to a matrix and finding the rank of the matrix, but I only use the rank function in the definitions of colinear and coplanar. To compute the rank of a matrix, perform elementary row operations until the matrix is left in echelon form; the number of nonzero rows remaining in the reduced matrix is the rank. (http://www.cliffsnotes.com/math/algebra/linear-algebra/real-euclidean-vector-spaces/the-rank-of-a-matrix) A matrix is in row echelon form when it satisfies the following conditions: * The first non-zero element in each row, called the leading entry, is 1 * Each leading entry is in a column to the right of the leading entry in the previous row * Rows with all zero elements, if any, are below rows having a non-zero element. (http://stattrek.com/matrix-algebra/echelon-transform.aspx) Row echelon forms aren't unique, but for determining the rank of a matrix, that doesn't matter. Code working on a list of points left as an exercise for the reader. 
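For anyone who wants the exercise worked in Haskell, here is a rough sketch of the recipe above: rank by elementary row operations on a list-of-lists matrix, then collinearity/coplanarity tests on lists of points. The names (rank, dimensionOf, determinePlane) are made up for illustration, the pivoting is naive, and the epsilon is a crude tolerance for Double arithmetic, so treat it as a starting point rather than robust numerical code.

```
-- Sketch only: rank via elementary row operations (Gaussian elimination),
-- then collinearity/coplanarity tests on lists of points.  Plain lists,
-- a fixed epsilon, no pivot selection for numerical stability.
module RankSketch where

type Point  = [Double]   -- a point in R^n
type Matrix = [[Double]] -- a list of rows

eps :: Double
eps = 1e-9

-- Rank by elimination: if the first column is (numerically) all zero, drop
-- it; otherwise pick a pivot row, eliminate the first column from the other
-- rows, and recurse on the reduced submatrix.
rank :: Matrix -> Int
rank []   = 0
rank rows =
  case break (\r -> not (null r) && abs (head r) > eps) rows of
    (_, [])              -> rank (map tail (filter (not . null) rows))
    (above, pivot:below) ->
      let reduce r = zipWith (-) (tail r)
                                 (map (* (head r / head pivot)) (tail pivot))
      in  1 + rank (map reduce (above ++ below))

-- Dimension of the affine span: the rank of the difference vectors.
dimensionOf :: [Point] -> Int
dimensionOf []     = 0
dimensionOf (p:ps) = rank [ zipWith (-) q p | q <- ps ]

colinear :: [Point] -> Bool
colinear = (<= 1) . dimensionOf

-- "Determine a plane" in Mike's sense: coplanar but not colinear.
determinePlane :: [Point] -> Bool
determinePlane ps = dimensionOf ps == 2
```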
From doug at cs.dartmouth.edu Wed Apr 29 01:36:52 2015
From: doug at cs.dartmouth.edu (Doug McIlroy)
Date: Tue, 28 Apr 2015 21:36:52 -0400
Subject: [Haskell-cafe] Prime sieve and Haskell demo
Message-ID: <201504290136.t3T1aqI1020799@coolidge.cs.dartmouth.edu>

With deep apologies for sending the wrong file, I try again.

Doug

>> How about simply changing `sieve` to `trialDiv`? It's not that I
>> don't like the given example, because it gives a very small use case
>> for laziness that is difficult enough to reproduce in an eagerly
>> evaluated language.
>
> Is it really so difficult to reproduce in a strict language? Here is
> that Haskell example in OCaml
>
> let primes =
>   let rec trialDiv (Cons (p,xs)) =
>     Cons (p, lazy (trialDiv @@ filter (fun x -> x mod p <> 0) @@ Lazy.force xs))
>   in trialDiv @@ iota 2

I'm afraid I don't understand why the program isn't a sieve. Is
the concern that the sequence of integers is thinned by dropping
composites rather than by merely marking them and counting across
them? Or is it that a trace of lazy evaluation will show that all
the divisibility tests on a single integer are clustered together
in time? Or something I haven't thought of?

Of course the program can be written in any Turing-complete language,
but the effort is likely to cause beads of sweat, like "lazy", "force",
or "spawn" to be shed on the algorithmic pearl. The sieve can even be
written succinctly as a bash shell script (below), which exhibits warts
(e.g. five flavors of parentheses) but no sweat. Though both the OCaml
and the shell code are compact, neither dulls the luster that lazy
evaluation imparts to the Haskell.

sift() {
    while true; do
        read p
        if (( $p % $1 != 0 )); then echo $p; fi
    done
}

sink() {
    read p; echo $p; sift $p | sink
}

seq 2 1000000 | sink

From ky3 at atamo.com Wed Apr 29 02:42:03 2015
From: ky3 at atamo.com (Kim-Ee Yeoh)
Date: Wed, 29 Apr 2015 09:42:03 +0700
Subject: [Haskell-cafe] Prime sieve and Haskell demo
In-Reply-To: <201504290136.t3T1aqI1020799@coolidge.cs.dartmouth.edu>
References: <201504290136.t3T1aqI1020799@coolidge.cs.dartmouth.edu>
Message-ID:

On Wed, Apr 29, 2015 at 8:36 AM, Doug McIlroy wrote:

> I'm afraid I don't understand why the program isn't a sieve. Is
> the concern that the sequence of integers is thinned by dropping
> composites rather than by merely marking them and counting across
> them? Or is it that a trace of lazy evaluation will show that all
> the divisibility tests on a single integer are clustered together
> in time? Or something I haven't thought of?

When I reread Ertugrul's original email, I see that he's alerting to the
danger of derision. There will be people who will mock Haskell for having
an un-performant and un-Eratosthenian non-sieve on its front page. As in,
Haskell people don't even know their basic math, ha ha.

It used to be fibonaccis. That's too inviting of derision. Primes are more
noble, so the thinking goes.

That very small space on the face of Haskell must perform incredible
duties. Among them, it has to showcase beautiful syntax, see:
https://github.com/haskell-infra/hl/issues/46#issuecomment-72331664

HTH,

-- Kim-Ee
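(For readers coming to the thread cold: the Haskell program being debated, the short trial-division "sieve" that has appeared on the haskell.org front page and that the OCaml above transliterates, is roughly the following; the exact wording on the site may differ.)

```
primes :: [Integer]
primes = sieve [2..]
  where
    sieve (p:xs) = p : sieve [x | x <- xs, x `mod` p /= 0]
    -- or, in the shape the OCaml above mirrors:
    -- sieve (p:xs) = p : sieve (filter (\x -> x `mod` p /= 0) xs)
```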
From fr33domlover at riseup.net Wed Apr 29 06:07:36 2015
From: fr33domlover at riseup.net (fr33domlover)
Date: Wed, 29 Apr 2015 09:07:36 +0300
Subject: [Haskell-cafe] Wiki user
Message-ID:

Hello,

The Haskell Wiki says automatic registration has been disabled, and that I
should send an e-mail. Could you please create a wiki account for me? The
username I'd like to have is: akrasner.

Thanks in advance!

From hjgtuyl at chello.nl Wed Apr 29 09:21:48 2015
From: hjgtuyl at chello.nl (Henk-Jan van Tuyl)
Date: Wed, 29 Apr 2015 11:21:48 +0200
Subject: [Haskell-cafe] Wiki user
In-Reply-To: <20150429060541.8BD7BBCE53@haskell.org>
References: <20150429060541.8BD7BBCE53@haskell.org>
Message-ID:

On Wed, 29 Apr 2015 08:07:36 +0200, fr33domlover wrote:

> should send an e-mail. Could you please create a wiki account for me? The
> username I'd like to have is: akrasner.

Done.

Regards,
Henk-Jan van Tuyl
--
Folding at home
What if you could share your unused computer power to help find a cure? In
just 5 minutes you can join the world's biggest networked computer and get
us closer sooner. Watch the video.
http://folding.stanford.edu/
http://Van.Tuyl.eu/
http://members.chello.nl/hjgtuyl/tourdemonad.html
Haskell programming
--

From fr33domlover at riseup.net Wed Apr 29 11:25:02 2015
From: fr33domlover at riseup.net (fr33domlover)
Date: Wed, 29 Apr 2015 14:25:02 +0300
Subject: Re: [Haskell-cafe] Wiki user
In-Reply-To: References: <20150429060541.8BD7BBCE53@haskell.org>
Message-ID:

Thank you very much!

On Wed, 29 Apr 2015 11:21:48 +0200 "Henk-Jan van Tuyl" wrote:

> On Wed, 29 Apr 2015 08:07:36 +0200, fr33domlover
> wrote:
>
> > should send an e-mail. Could you please create a wiki account for me? The
> > username I'd like to have is: akrasner.
>
> Done.
>
> Regards,
> Henk-Jan van Tuyl

From eir at cis.upenn.edu Wed Apr 29 11:43:45 2015
From: eir at cis.upenn.edu (Richard Eisenberg)
Date: Wed, 29 Apr 2015 07:43:45 -0400
Subject: [Haskell-cafe] dependent types, singleton types....
In-Reply-To: References:
Message-ID: <6CF4D7D9-76A1-4B2A-9ABE-3409739FFD56@cis.upenn.edu>

Hello Mark,

Your suspicion that your singleton tree type is wrong is well-founded. The
problem is that, in my opinion, that exercise is mentioned too early in the
tutorial. To properly implement a singleton type for a parameterized type,
like a binary tree, you will need `data family Sing (a :: k)`, as explained
just a little bit further down in the post. You'll need to rewrite your
definition for singleton numbers and booleans to work with `Sing` as well.

Your code except the definition for SBranch is all correct. The problem with
your definition is that you don't get the right information when
pattern-matching. For example, say you have x with type `SBTree a`. If you
successfully pattern match against `SBranch SZ SLeaf SLeaf`, you would want
to learn `a ~ Branch Z Leaf Leaf`. But that's not what you'll get in your
implementation: you'll get a type error saying that we don't know that `a0`
is an `SNat`, where `a ~ Branch a0 Leaf Leaf`, or something like that. The
type-level information is simply encoded in the wrong place for this to work
out.

Write back and I'll give you the full answer if this isn't enough to get you
moving in the right direction!

Richard

On Apr 28, 2015, at 10:45 AM, "Nicholls, Mark" wrote:

> Can someone check my answer (no I'm not doing an assessment... I'm actually
> learning stuff out of interest!)
>
> working through
>
> https://www.fpcomplete.com/user/konn/prove-your-haskell-for-great-safety/dependent-types-in-haskell
>
> still there is a section about singleton types and the exercise is
>
> "Exercise: Define the binary tree type and implement its singleton type."
>
> Ok, I think I'm probably wrong... a binary tree is something like...
>
> > data BTree a = Leaf | Branch a (BTree a) (BTree a)
>
> With DataKind
>
> My logic goes...
> Leaf is an uninhabited type, so I need a value isomorphic to it...
>
> Easy...
>
> > data SBTree a where
> >   SLeaf :: SBTree Leaf
>
> Things like
> Branch Integer Leaf (Branch String Leaf Leaf)
> Are uninhabited... so I need to add
>
> > SBranch :: (a :: *) -> (SBTree (b :: BTree *)) -> (SBTree (c :: BTree *)) -> SBTree (Branch a b c)
>
> It compiles... but... is it actually correct?
> Things like
>
> > y = SBranch (SS (SS SZ)) SLeaf SLeaf
> > z = SBranch (SS (SS SZ)) (SBranch SZ SLeaf SLeaf) SLeaf
>
> Seem to make sense ish.
>
> From: Nicholls, Mark
> Sent: 28 April 2015 9:33 AM
> To: Nicholls, Mark
> Subject: sds
>
> Hello,
>
> working through
>
> https://www.fpcomplete.com/user/konn/prove-your-haskell-for-great-safety/dependent-types-in-haskell
>
> but a bit stuck...with an error...
>
> > {-# LANGUAGE DataKinds, TypeFamilies, TypeOperators, UndecidableInstances, GADTs, StandaloneDeriving #-}
>
> > data Nat = Z | S Nat
>
> > data Vector a n where
> >   Nil :: Vector a Z
> >   (:-) :: a -> Vector a n -> Vector a (S n)
> > infixr 5 :-
>
> I assume init...is a bit like tail but take n - 1 elements from the front....but...
>
> > init' :: Vector a ('S n) -> Vector a n
> > init' (x :- Nil) = Nil
> > init' (x :- xs@(_ :- _)) = x :- (init' xs)
>
> > zipWithSame :: (a -> b -> c) -> Vector a n -> Vector b n -> Vector c n
> > zipWithSame f Nil Nil = Nil
> > zipWithSame f (x :- xs) (y :- xs@(_ :- _)) = Nil
>
> Mark Nicholls | Senior Technical Director, Programmes & Development - Viacom International Media Networks
> A: 17-29 Hawley Crescent London NW1 8TT | e: Nicholls.Mark at vimn.com T: +44 (0)203 580 2223
>
> CONFIDENTIALITY NOTICE
>
> This e-mail (and any attached files) is confidential and protected by
> copyright (and other intellectual property rights). If you are not the
> intended recipient please e-mail the sender and then delete the email and
> any attached files immediately. Any further use or dissemination is
> prohibited.
>
> While MTV Networks Europe has taken steps to ensure that this email and any
> attachments are virus free, it is your responsibility to ensure that this
> message and any attachments are virus free and do not affect your systems /
> data.
>
> Communicating by email is not 100% secure and carries risks such as delay,
> data corruption, non-delivery, wrongful interception and unauthorised
> amendment. If you communicate with us by e-mail, you acknowledge and assume
> these risks, and you agree to take appropriate measures to minimise these
> risks when e-mailing us.
>
> MTV Networks International, MTV Networks UK & Ireland, Greenhouse,
> Nickelodeon Viacom Consumer Products, VBSi, Viacom Brand Solutions
> International, Be Viacom, Viacom International Media Networks and VIMN and
> Comedy Central are all trading names of MTV Networks Europe. MTV Networks
> Europe is a partnership between MTV Networks Europe Inc. and Viacom Networks
> Europe Inc. Address for service in Great Britain is 17-29 Hawley Crescent,
> London, NW1 8TT.
>
> _______________________________________________
> Haskell-Cafe mailing list
> Haskell-Cafe at haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe
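To make the `data family Sing` suggestion above concrete, here is one shape the answer to the exercise can take. This is only a sketch built from the tutorial's own Nat and BTree definitions; it is not taken from Mark's or Richard's actual code.

```
{-# LANGUAGE DataKinds, PolyKinds, GADTs, TypeFamilies, KindSignatures #-}
module SingSketch where

data Nat     = Z | S Nat
data BTree a = Leaf | Branch a (BTree a) (BTree a)

-- One poly-kinded singleton family covers Nat, Bool, BTree, and so on.
data family Sing (a :: k)

data instance Sing (n :: Nat) where
  SZ :: Sing 'Z
  SS :: Sing n -> Sing ('S n)

data instance Sing (b :: Bool) where
  STrue  :: Sing 'True
  SFalse :: Sing 'False

-- The element singleton sits inside the tree singleton, so matching on
-- SBranch really does reveal the promoted Branch index.
data instance Sing (t :: BTree k) where
  SLeaf   :: Sing 'Leaf
  SBranch :: Sing x -> Sing l -> Sing r -> Sing ('Branch x l r)

-- For example, a singleton for the promoted tree 'Branch ('S 'Z) 'Leaf 'Leaf:
example :: Sing ('Branch ('S 'Z) 'Leaf 'Leaf)
example = SBranch (SS SZ) SLeaf SLeaf
```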
From nicholls.mark at vimn.com Wed Apr 29 12:54:38 2015
From: nicholls.mark at vimn.com (Nicholls, Mark)
Date: Wed, 29 Apr 2015 12:54:38 +0000
Subject: [Haskell-cafe] dependent types, singleton types....
In-Reply-To: <6CF4D7D9-76A1-4B2A-9ABE-3409739FFD56@cis.upenn.edu>
References: , <6CF4D7D9-76A1-4B2A-9ABE-3409739FFD56@cis.upenn.edu>
Message-ID: <7AEF3084-B33E-42B7-97C0-6A1D815A6E58@vimn.com>

Accchhhh

That's another level of pain. I'm not a big haskeller, so I'll have to
wrestle with it to get all the knobs and dials working; leave it with me
for the moment and I'll start wrestling with cabal.

Excuse the spelling, sent from a phone with itty bitty keys; it's like
trying to sew a button on a shirt with a sausage.

On 29 Apr 2015, at 12:43, Richard Eisenberg wrote:

> Hello Mark,
>
> Your suspicion that your singleton tree type is wrong is well-founded. The
> problem is that, in my opinion, that exercise is mentioned too early in the
> tutorial. To properly implement a singleton type for a parameterized type,
> like a binary tree, you will need `data family Sing (a :: k)`, as explained
> just a little bit further down in the post. You'll need to rewrite your
> definition for singleton numbers and booleans to work with `Sing` as well.
>
> Your code except the definition for SBranch is all correct. The problem
> with your definition is that you don't get the right information when
> pattern-matching. For example, say you have x with type `SBTree a`. If you
> successfully pattern match against `SBranch SZ SLeaf SLeaf`, you would want
> to learn `a ~ Branch Z Leaf Leaf`. But that's not what you'll get in your
> implementation: you'll get a type error saying that we don't know that `a0`
> is an `SNat`, where `a ~ Branch a0 Leaf Leaf`, or something like that. The
> type-level information is simply encoded in the wrong place for this to
> work out.
>
> Write back and I'll give you the full answer if this isn't enough to get
> you moving in the right direction!
>
> Richard
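An aside for anyone following along at home: the error in the `zipWithSame` quoted earlier in this thread comes from binding the pattern variable `xs` twice in one clause, and the `Nil` right-hand side cannot have the required length-indexed type either. A version that typechecks against the same `Vector` definition is sketched below; this is just the obvious repair, not necessarily the fix that was eventually used.

```
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}
module ZipSketch where

data Nat = Z | S Nat

data Vector a (n :: Nat) where
  Nil  :: Vector a 'Z
  (:-) :: a -> Vector a n -> Vector a ('S n)
infixr 5 :-

-- Zip two vectors that are known, at the type level, to have equal length.
-- The mixed cases (Nil against :-) are ruled out by the length index.
zipWithSame :: (a -> b -> c) -> Vector a n -> Vector b n -> Vector c n
zipWithSame _ Nil       Nil       = Nil
zipWithSame f (x :- xs) (y :- ys) = f x y :- zipWithSame f xs ys
```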
From scaroo at gmail.com Wed Apr 29 13:07:09 2015
From: scaroo at gmail.com (Alexandre Mazari)
Date: Wed, 29 Apr 2015 15:07:09 +0200
Subject: [Haskell-cafe] HTTP client library supporting Server sent events ?
In-Reply-To: References: Message-ID:

In an effort to build a Haskell client library for the Firebase service [0],
which relies heavily on HTTP event source/server sent events [1], I am
looking for an HTTP client lib supporting this spec.

AFAIK, both WAI and yesod handle the mechanism server-side, but neither
http-client, wreq nor http-streams seems to provide the client counterpart.

Am I looking in the wrong direction?

SSE are basically '\n'-separated yaml messages over an HTTP response stream
that is kept open. I guess a seasoned Haskell dev could build a solution
quite easily, but I couldn't find a way to keep the response stream open.
Ideally a conduit/pipe sink exposing each message could be provided for
further parsing and usage.

I'd be very grateful if someone could help me contribute such handling or
come up with a solution.

Thanks for your time,

Alexandre

[0] https://www.firebase.com/docs/rest/api/
[1] http://www.w3.org/TR/2011/WD-eventsource-20110208/
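For anyone picking this up later: an SSE stream is a sequence of "field: value" lines, with a blank line terminating each event. Gregory Collins' reply below suggests writing an attoparsec parser and lifting it into http-client, io-streams or conduit; a minimal sketch of such an event parser follows. The SseEvent type and every name in it are invented for illustration and do not come from any existing package.

```
{-# LANGUAGE OverloadedStrings #-}
-- Sketch of a server-sent-events (SSE) event parser with attoparsec.
-- Field lines look like "event: put" or "data: {...}"; a blank line ends
-- the event.  Comment lines and other corners of the SSE spec are not
-- handled here.
module SseSketch where

import           Data.Attoparsec.ByteString.Char8
import qualified Data.ByteString.Char8 as B

data SseEvent = SseEvent
  { sseName :: Maybe B.ByteString   -- the "event:" field, if present
  , sseData :: [B.ByteString]       -- one entry per "data:" line
  } deriving Show

-- One "field: value" line.
field :: Parser (B.ByteString, B.ByteString)
field = do
  name  <- takeWhile1 (\c -> c /= ':' && c /= '\r' && c /= '\n')
  _     <- char ':'
  _     <- option ' ' (char ' ')          -- optional single space after ':'
  value <- takeTill (\c -> c == '\r' || c == '\n')
  endOfLine
  return (name, value)

-- A run of field lines followed by the blank line that ends the event.
sseEvent :: Parser SseEvent
sseEvent = do
  fs <- many1 field
  endOfLine
  return (SseEvent { sseName = lookup "event" fs
                   , sseData = [ v | ("data", v) <- fs ] })
```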
From greg at gregorycollins.net Wed Apr 29 13:20:32 2015
From: greg at gregorycollins.net (Gregory Collins)
Date: Wed, 29 Apr 2015 06:20:32 -0700
Subject: [Haskell-cafe] HTTP client library supporting Server sent events ?
In-Reply-To: References: Message-ID:

On Wed, Apr 29, 2015 at 6:07 AM, Alexandre Mazari wrote:

> In an effort to build an Haskell client library for the Firebase service
> [0], which rely heavily on HTTP event source/server sent events [1], I am
> looking for an HTTP client lib supporting this spec.
>
> AFAIK, both WAI and yesod handle the mechanism server-side but nor
> http-client, wreq or http-streams seem to provide the client counterpart.
>
> Am I looking in the wrong direction?

At least http-client and http-streams should support this use case easily,
you'll just have to parse the stream yourself. The easiest way is to write a
parser using attoparsec and lift that into a stream transformer. The
io-streams library has native support for this
(http://hackage.haskell.org/package/io-streams-1.3.0.0/docs/System-IO-Streams-Attoparsec.html#v:parserToInputStream),
conduits supply a similar thing in the conduit-extra package.

Greg
--
Gregory Collins
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From eir at cis.upenn.edu Wed Apr 29 13:42:50 2015
From: eir at cis.upenn.edu (Richard Eisenberg)
Date: Wed, 29 Apr 2015 09:42:50 -0400
Subject: [Haskell-cafe] dependent types, singleton types....
In-Reply-To: <7AEF3084-B33E-42B7-97C0-6A1D815A6E58@vimn.com>
References: , <6CF4D7D9-76A1-4B2A-9ABE-3409739FFD56@cis.upenn.edu> <7AEF3084-B33E-42B7-97C0-6A1D815A6E58@vimn.com>
Message-ID:

On Apr 29, 2015, at 8:54 AM, "Nicholls, Mark" wrote:

> Accchhhh
>
> Thats another level of pain

It is, unfortunately.

> , i'm not a big haskeller so i'll have to wrestle with it to get all the
> knobs and dials working, leave it with me for the moment and i'll start
> wrestling with cabal

You won't need cabal and such. Just say `data family Sing (a :: k)` in your
file and you'll have the definition. There's really nothing more to it than
that!

Richard

From mathieu at fpcomplete.com Wed Apr 29 14:49:34 2015
From: mathieu at fpcomplete.com (Mathieu Boespflug)
Date: Wed, 29 Apr 2015 16:49:34 +0200
Subject: [Haskell-cafe] Improvements to package hosting and security
In-Reply-To: References: <1429185521.25663.103.camel@dunky.localdomain> <87bninfdkp.fsf@karetnikov.org>
Message-ID:

Define "superior"?
As argued in the proposal, the salient features are that by devolving
practically everything to Git+GPG, we end up with less code to maintain in
our tooling, less code to maintain in our infrastructure (namely
hackage-server), a more reliable service, and a smaller chance of buggering
up security related activities (such as signing and managing trust). We're
not introducing dependencies on dynamically linked system libraries that
makes tooling hard to distribute. We're not asking users to install anything
new that isn't already a staple of most developer desktops, and not asking
users, Hackage trustees and Hackage admins to manage new identities with new
key formats that aren't the existing ones they already have (namely GnuPG).
Further, users can still opt-out of signature verification if they want to.

Compared to alternative approaches - there has been a proposal to get
incremental updates à la Git differently by growing (potentially infinitely)
the end of a tar file served by the server via HTTP. This means grabbing the
history for new package revisions cannot be opted out from easily. With Git,
you get this for free, since users can `git clone --depth=1` and still be
able to do a `git pull` later and verify signatures. You further get the
advantage of being able to directly mine the history of changes, using
standard tools, something that can't be done directly on the tar file
without more custom tooling (or post conversion to Git).

On 28 April 2015 at 23:33, Bardur Arantsson wrote:

> On 28-04-2015 23:09, Mathieu Boespflug wrote:
> > [removing erroneous haskell-cafe at googlegroups.com from To list.]
>
> (I'm not the person you're responing to. From the mail-headers, I can't
> see the person(s) you're responding to, but so be it.)
>
> Do you have evidence that your approach is superior, and could you
> please cite it? [Or, alternatively provide negative evidence for
> $OTHER_APPROACH.])
>
> Regards,
>
> --
> You received this message because you are subscribed to the Google Groups
> "Commercial Haskell" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to commercialhaskell+unsubscribe at googlegroups.com.
> To post to this group, send email to commercialhaskell at googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/commercialhaskell/mhouaf%24it2%241%40ger.gmane.org
> .
> For more options, visit https://groups.google.com/d/optout.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From eyeinsky9 at gmail.com Wed Apr 29 18:54:47 2015
From: eyeinsky9 at gmail.com (Carl Eyeinsky)
Date: Wed, 29 Apr 2015 21:54:47 +0300
Subject: Re: [Haskell-cafe] dependent types, singleton types....
In-Reply-To: References: <6CF4D7D9-76A1-4B2A-9ABE-3409739FFD56@cis.upenn.edu> <7AEF3084-B33E-42B7-97C0-6A1D815A6E58@vimn.com>
Message-ID:

As Richard said, you don't need much of cabal for dependent types. BUT, a
scenario that has worked for me is: install ghc and cabal-install from your
package manager, update cabal-install via cabal ("cabal install
cabal-install"), and install all else in a sandbox. Works predictably; if
something fails you can just wipe the sandbox.

Here is something to refer to if one wishes to go deeper :)
http://www.vex.net/~trebla/haskell/sicp.xhtml

On Wed, Apr 29, 2015 at 4:42 PM, Richard Eisenberg wrote:

> On Apr 29, 2015, at 8:54 AM, "Nicholls, Mark" wrote:
>
> > Accchhhh
> >
> > Thats another level of pain
>
> It is, unfortunately.
> > > , i'm not a big haskeller so i'll have to wrestle with it to get all the > knobs and dials working, leave it with me for the moment and i'll start > wrestling with cabal > > You won't need cabal and such. Just say `data family Sing (a :: k)` in > your file and you'll have the definition. There's really nothing more to it > than that! > > Richard > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > -- Carl Eyeinsky -------------- next part -------------- An HTML attachment was scrubbed... URL: From onepoint at starurchin.org Wed Apr 29 19:41:57 2015 From: onepoint at starurchin.org (Jeremy Henty) Date: Wed, 29 Apr 2015 20:41:57 +0100 Subject: [Haskell-cafe] Cannot run GHC-7.10.1 tests In-Reply-To: References: <20150428214821.GE13478@omphalos.singularity> Message-ID: <20150429194157.GA3723@omphalos.singularity> Christiaan Baaij wrote: > You are running these commands from _within_ the 'testsuite' > directory, right? No, because the instructions at https://ghc.haskell.org/trac/ghc/wiki/Building/RunningTests told me to run these commands in "the root of the GHC tree". But since you suggested it, I tried running all the test commands from within the ghc-7.10.1/testsuite directory and they all failed as follows: mk/boilerplate.mk:168: mk/ghcconfig_data_build.d_6.8_ghc-7.10.1_inplace_bin_ghc-stage2.mk: No such file or directory ./mk/ghc-config "/data/build.d/6.8/ghc-7.10.1/inplace/bin/ghc-stage2" >"mk/ghcconfig_data_build.d_6.8_ghc-7.10.1_inplace_bin_ghc-stage2.mk"; if [ $? != 0 ]; then rm -f "mk/ghcconfig_data_build.d_6.8_ghc-7.10.1_inplace_bin_ghc-stage2.mk"; exit 1; fi /bin/sh: ./mk/ghc-config: cannot execute binary file make: *** [mk/ghcconfig_data_build.d_6.8_ghc-7.10.1_inplace_bin_ghc-stage2.mk] Error 1 Something is really broken here. Regards, Jeremy Henty From yalnashi at odu.edu Wed Apr 29 21:15:09 2015 From: yalnashi at odu.edu (Al-Nashif, Youssif B.) Date: Wed, 29 Apr 2015 21:15:09 +0000 Subject: [Haskell-cafe] CFP: IEEE International Conference on Cloud and Autonomic Computing (CAC 2015) Message-ID: <3D163FDE-B37D-4127-B150-C38066ACE959@odu.edu> [ Less than two weeks to deadline. Apologies if you receive multiple copies of this email.] IEEE International Conference on Cloud and Autonomic Computing (CAC 2015) (pending IEEE support) autonomic-conference.org Cambridge, MA, USA September 21-25, 2015 Co-located with the Ninth IEEE International Conference on Self-Adaptive and Self-Organizing System (SASO 2015) and with the 15th IEEE Peer-to-Peer Computing Conference Call for Papers Overview Enterprise-scale cloud platforms and services systems, present common and cross-cutting challenges in maximizing power efficiency and performance while maintaining predictable and reliable behavior, and at the same time responding appropriately to environmental and system changes such as hardware failures and varying workloads. Autonomic computing systems address the challenges in managing these environments by integrating monitoring, decision-processing and actuation capabilities to autonomously manage resources and applications based on high-level policies. Research in cloud and autonomic computing spans a variety of areas, from computer systems, architecture, middleware services, databases and data-stores, and networks to machine learning and control theory. 
The purpose of the 3rd International Conference on Cloud and Autonomic Computing (CAC) is to bring together researchers and practitioners across these disciplines to address the multiple facets of self-management in computing systems and applications. Papers are solicited on a broad array of topics of relevance to cloud and autonomic computing and their intersections, particularly those that bear on connections and relationships among different research areas or report on prototype systems or experiences. The goal is to confirm a premier international forum focused on the latest research, applications, and technologies aimed at making cloud and autonomic computing systems and services easy to design, to deploy and to implement, while achieving the simultaneous goals to be self-manageable, self-regulating and scalable with little involvement of human or system administrators. Topics of interest include, but are not limited to: Autonomic Cloud Computing * Self-managing cloud services * Autonomic resource and energy management in cloud computing * Autonomic cloud applications and services * Autonomic virtual cloud resources and services * Cloud workload characterization and prediction * Monitoring, modeling and analysis of cloud resources and services * Anomaly behavior analysis of autonomic systems and services Autonomics for Extreme Scales * Large scale autonomic systems * Self-optimizing and self-healing at petacomputing scale * Self-managing middleware and tools for extreme scales * Experiences in autonomic systems and applications at extreme scales (peta/exa-computing) Autonomic Computing Foundations and Design Methods * Evaluation, validation and quality and correctness assessment of autonomic loops * Theoretical frameworks for modeling and analyzing autonomic computing systems, control and decision theory * Model-based design, software engineering, formal methods, testing, programming languages and environments support * Knowledge representation and visualization of behavior of autonomic systems and services Autonomic Computing Systems, Tools and Applications * Self-protection techniques of computing systems, networks and applications * Stochastic analysis and prediction of autonomic systems and applications * Benchmarks and tools to evaluate and compare different architectures to implement autonomic cloud systems * High performance autonomic applications * Self-* applications in science and engineering * Self-* Human Machine Interface Paper/Poster Submission and Publication: Full papers (a maximum of 12 pages in length), industrial experience reports (a maximum of 8 pages) and posters (a maximum of 4 pages) are invited on a wide variety of topics relating to cloud and autonomic computing as indicated above. All papers must follow the IEEE proceedings format. All manuscripts will be reviewed and judged on merits including originality, significance, interest, correctness, clarity, and relevance to the broader community. Papers are strongly encouraged to report experiences, measurements, and user studies, and to provide an appropriate quantitative evaluation. Submitted papers must include original work, and may not be under consideration for another conference or journal. They should also not be under review or be submitted to another forum during the CAC 2015 review process. Authors should submit full papers or posters electronically following the instructions from the CAC 2015 conference web site. 
Accepted papers and posters will appear in proceedings distributed at the conference and available electronically. Authors of accepted papers/poster are expected to present their work at the conference. Authors are also encouraged to submit a poster or demo that summarizes and highlights the main points of their paper. Workshops, Demonstrations and Exhibitions: CAC 2015 welcomes proposals for co-located workshops on specific topics of general interest to the cloud and autonomic computing community. Workshops are expected to publish proceedings, and should cover areas that may not be properly addressed in the main scientific program. CAC 2015 will also feature a demonstration and exhibition session consisting of prototypes and technology artifacts such as demonstrating autonomic software or autonomic computing principles. Important dates: * Abstract registration: May 8, 2015 * Papers submission deadline: May 15, 2015 * Authors notification: June 15, 2015 * Camera-ready papers due: July 1, 2015 Organizing Committee General Chair: * Danny Menasce (George Mason University, USA) PC Co-chairs: * Eric Rutten (INRIA, France) * Prashant Shenoy (University of Massachusetts Amherst, USA) PC Committee (preliminary list): * Karl-Erik Arzen, U. Lund, Sweden * Ioana Banicescu, Mississippi State Univ., USA * Thais Vasconcelos Batista, University of Rio Grande do Norte, Brazil * Umesh Bellur, IIT Bombay, India * Nelly Bencomo, Aston University, UK * Junwei Cao, Tsinghua University, China * Franck Cappello, Urbana-Champaign, USA * Giuliano Casale, Imperial College London, UK * Emiliano Casalicchio, University of Rome, Tor Vergata, Italy * Abhishek Chandra, University of Minnesota, USA * Lydia Y. Chen, IBM Research GmbH, Zurich, Switzerland * Fabio Costa, Universidade Federal de Goias, Brazil * Marco Danelutto, University of Pisa * Frederic Desprez, INRIA, France * Yixin Diao, IBM Research, USA * Jim Dowling, Swedish Institute of Computer Science (SICS), Sweden * Laurence Duchien, Universite de Lille/INRIA, France * Erik Elmroth, Umea University and Elastisys, Sweden * Antonio Filieri, University of Stuttgart, Germany * Harry Foxwell, Oracle, USA * Indranil Gupta, UIUC, USA * David Irwin, UMass, USA * Yoonhee Kim, Sookmyung Women's University, Korea * Hector Alejandro Duran Limon, Univ. de Guadalajara, Mexico * Martina Maggio, Lund University, Sweden * Maitreya Natu, TCS India * Marco D. Santambrogio, Politecnico di Milano, Italy * Tallat M. Shafaat, Google, Mountain View, CA, USA * Evgenia Smirni, College of William and Mary, VA, USA * Chris Stewart, Ohio State University, USA * Bhuvan Urgaonkar, Penn State Univ, USA * Timothy Wood, George Washington University, DC, USA * Dongyan Xu, Purdue University, USA Publicity Committee: * Youssif Al-Nashif, chair (Old Dominion University, USA) * Yaser Jararwah, co-chair (Jordan University of Science and Technology, Jordan) * Bharat Madan, co-chair (Old Dominion University, USA) * Ivan Rodero, co-chair (Rutgers University, USA) * Keiichi Shima, co-chair (Research Laboratory, IIJ Innovation Institute, Inc., Japan) * Jinsong Wu, co-chair (Universidad de Chile, Santiago, Chile) Steering Committee: * Simon Dobson (University of St Andrews, Scotland) * Geoffrey Fox (Indiana University Bloomington, USA) * Salim Hariri (University of Arizona, USA) * Soonwook Hwang (Korea Institute of Science and Technology Information, South Korea) * Julie McCann (Imperial College, UK) * Manish Parashar (Rutgers University, USA) * S. 
Masoud Sadjadi (Florida International University, USA) * Alan Sill (Texas Tech University, USA) * Vladimir Vlassov (KTH Royal Institute of Technology, Sweden) From voldermort at hotmail.com Thu Apr 30 07:08:00 2015 From: voldermort at hotmail.com (Jeremy) Date: Thu, 30 Apr 2015 00:08:00 -0700 (MST) Subject: [Haskell-cafe] Improvements to package hosting and security In-Reply-To: References: <87bninfdkp.fsf@karetnikov.org> Message-ID: <1430377680233-5808179.post@n5.nabble.com> Mathieu Boespflug-4 wrote > We're not introducing dependencies on dynamically linked system libraries > that makes tooling hard to distribute. We're not asking users to install > anything new that isn't already a staple of most developer desktops My sole concern with this is that git is often not present on build servers, which may be minimal cloud VMs. Here's what I get when I try to install git on mine: # apt install git --no-install-recommends ... The following NEW packages will be installed: git git-man libcurl3-gnutls liberror-perl libexpat1 libgdbm3 perl perl-modules 0 upgraded, 8 newly installed, 0 to remove and 2 not upgraded. Need to get 10.4 MB of archives. After this operation, 57.2 MB of additional disk space will be used. Not unbearable, but not insignificant either. -- View this message in context: http://haskell.1045720.n5.nabble.com/Improvements-to-package-hosting-and-security-tp5768710p5808179.html Sent from the Haskell - Haskell-Cafe mailing list archive at Nabble.com. From michael at snoyman.com Thu Apr 30 07:21:53 2015 From: michael at snoyman.com (Michael Snoyman) Date: Thu, 30 Apr 2015 07:21:53 +0000 Subject: [Haskell-cafe] Improvements to package hosting and security In-Reply-To: <1430377680233-5808179.post@n5.nabble.com> References: <87bninfdkp.fsf@karetnikov.org> <1430377680233-5808179.post@n5.nabble.com> Message-ID: On Thu, Apr 30, 2015 at 10:08 AM Jeremy wrote: > Mathieu Boespflug-4 wrote > > We're not introducing dependencies on dynamically linked system libraries > > that makes tooling hard to distribute. We're not asking users to install > > anything new that isn't already a staple of most developer desktops > > My sole concern with this is that git is often not present on build > servers, > which may be minimal cloud VMs. Here's what I get when I try to install git > on mine: > > # apt install git --no-install-recommends > ... > The following NEW packages will be installed: > git git-man libcurl3-gnutls liberror-perl libexpat1 libgdbm3 perl > perl-modules > 0 upgraded, 8 newly installed, 0 to remove and 2 not upgraded. > Need to get 10.4 MB of archives. > After this operation, 57.2 MB of additional disk space will be used. > > Not unbearable, but not insignificant either. > > > One possible workflow[1] would be to have a dedicated system that uses Git and GPG to pull the current versions of all packages and verify signatures. That system could then create a snapshot of that information that could simply be downloaded by a build server. In fact, there could even be a public server available providing that functionality, with the caveat that- like today- you'd need to trust that server to not be compromised. I think this is what Mathieu was getting at when he said: > Further, users can still opt-out of signature verification if they want to. Michael [1] And possible may be too weak a word, as I have an implementation pretty close to this already. -------------- next part -------------- An HTML attachment was scrubbed... 
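For readers who want to see the shape of the workflow Michael describes, here is a minimal sketch of the dedicated mirror box: it keeps a git checkout of a package-index repository, runs GPG over whatever detached signatures the index carries, and publishes a plain tarball that a minimal build VM can download without installing git. Michael mentions having an implementation close to this already; the sketch below is not that implementation, and every name in it (the repository URL, the package-index directory, the *.asc signature layout) is an assumption made for the example.

```
module MirrorSnapshot where

import Control.Monad    (forM, unless)
import System.Directory (doesDirectoryExist, getDirectoryContents)
import System.FilePath  ((</>), takeExtension)
import System.Process   (callProcess)

-- Hypothetical location of a git repository holding the package index.
indexRepo :: String
indexRepo = "https://example.org/package-index.git"

-- List every file under a directory (no symlink handling; a sketch only).
listFiles :: FilePath -> IO [FilePath]
listFiles dir = do
  names <- getDirectoryContents dir
  let entries = [ dir </> n | n <- names, n `notElem` [".", ".."] ]
  fmap concat $ forM entries $ \p -> do
    isDir <- doesDirectoryExist p
    if isDir then listFiles p else return [p]

main :: IO ()
main = do
  haveClone <- doesDirectoryExist "package-index"
  unless haveClone $
    callProcess "git" ["clone", indexRepo, "package-index"]
  callProcess "git" ["-C", "package-index", "pull", "--ff-only"]
  -- Verify any detached *.asc signatures in the checkout (layout assumed).
  sigs <- fmap (filter ((== ".asc") . takeExtension)) (listFiles "package-index")
  mapM_ (\sig -> callProcess "gpg" ["--verify", sig]) sigs
  -- Publish a snapshot that build servers can fetch without git installed.
  callProcess "tar" ["czf", "index-snapshot.tar.gz", "package-index"]
```

A build server then only needs to download and unpack the tarball, which corresponds to the opt-out path mentioned at the end of the message.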
URL: From spam at scientician.net Thu Apr 30 08:37:01 2015 From: spam at scientician.net (Bardur Arantsson) Date: Thu, 30 Apr 2015 10:37:01 +0200 Subject: [Haskell-cafe] Improvements to package hosting and security In-Reply-To: References: <87bninfdkp.fsf@karetnikov.org> <1430377680233-5808179.post@n5.nabble.com> Message-ID: On 30-04-2015 09:21, Michael Snoyman wrote: > On Thu, Apr 30, 2015 at 10:08 AM Jeremy wrote: > >> Mathieu Boespflug-4 wrote >>> We're not introducing dependencies on dynamically linked system libraries >>> that makes tooling hard to distribute. We're not asking users to install >>> anything new that isn't already a staple of most developer desktops >> >> My sole concern with this is that git is often not present on build >> servers, >> which may be minimal cloud VMs. Here's what I get when I try to install git >> on mine: >> >> # apt install git --no-install-recommends >> ... >> The following NEW packages will be installed: >> git git-man libcurl3-gnutls liberror-perl libexpat1 libgdbm3 perl >> perl-modules >> 0 upgraded, 8 newly installed, 0 to remove and 2 not upgraded. >> Need to get 10.4 MB of archives. >> After this operation, 57.2 MB of additional disk space will be used. >> >> Not unbearable, but not insignificant either. >> >> >> > One possible workflow[1] would be to have a dedicated system that uses Git > and GPG to pull the current versions of all packages and verify signatures. > That system could then create a snapshot of that information that could > simply be downloaded by a build server. In fact, there could even be a > public server available providing that functionality, with the caveat that- > like today- you'd need to trust that server to not be compromised. > Isn't this just another moving part? (Moving parts are generally considered bad news when you're trying to engineer a secure system.) And has there been any review of git wrt. if it is robust to malicious servers? (E.g. if I do a "git fetch", can the server I happen to be talking to just spew data at the client indefinitely to, for example, fill up its disk or to prevent it from ever progressing?) Regards, From rendel at informatik.uni-tuebingen.de Thu Apr 30 09:43:22 2015 From: rendel at informatik.uni-tuebingen.de (Tillmann Rendel) Date: Thu, 30 Apr 2015 11:43:22 +0200 Subject: [Haskell-cafe] Improvements to package hosting and security In-Reply-To: References: <1429181874.25663.80.camel@dunky.localdomain> <1429185521.25663.103.camel@dunky.localdomain> <87bninfdkp.fsf@karetnikov.org> Message-ID: <5541F93A.3050309@informatik.uni-tuebingen.de> Hi, Mathieu Boespflug wrote: > This is a valid concern. One that I should have addressed explicitly > in the proposal. Git is fairly well supported on Windows these days > and installs easily. It could conceivably be included as part of > MinGHC. There are many alternatives, but I doubt we'll need them: > statically linking a C implementation (libgit2 or another), or a > simple native implementation of the git protocol (the protocol is > quite straightforward and is documented) and basic disk format. I did not read your proposal, but if it entails that new Haskell users on Windows need to manually install git before they can use `cabal install something` for the first time, I think that would be bad. For programming beginners (think B.Sc. students in a field other than computer science that take an "intro to programming" class), every installation that requires manual configuration is a hassle. 
Making the cabal executable find the git executable on the path would potentially require manual configuration to set up the search path. I believe that both ghc and git binary packages for Windows package MSYS (or maybe something similar, not sure), so there is also some potential for a cabal+ghc+git installation to confuse which bundled copy of MSYS to use. Some of these "intro to programming" classes consist mostly of object-oriented programming, with a bit of FP thrown in. If helping the students to set up a Haskell environment on their laptops takes one lab session or one week of office hours, that's a significant cut from the FP learning time. If students fail their homework because they fail to install the Haskell environment, that takes a significant cut of their FP learning motivation. If instructors only teach plain Haskell without ever using cabal, this gives the impression that Haskell only works for classroom problems, because there seem to be no libraries. I'm aware that programming beginners are not the main target for a programming language infrastructure, but we shouldn't forget about their first-use experience completely, either. I'm not even sure whether "statically linking a C implementation" is any better. How would it support `cabal install cabal-install` on Windows, in practice? Tillmann From bh at intevation.de Thu Apr 30 10:55:25 2015 From: bh at intevation.de (Bernhard Herzog) Date: Thu, 30 Apr 2015 12:55:25 +0200 Subject: [Haskell-cafe] Cannot run GHC-7.10.1 tests In-Reply-To: <20150429194157.GA3723@omphalos.singularity> References: <20150428214821.GE13478@omphalos.singularity> <20150429194157.GA3723@omphalos.singularity> Message-ID: <201504301255.26306.bh@intevation.de> On 29.04.2015, Jeremy Henty wrote: > But since you suggested it, I tried running all the test commands from > within the ghc-7.10.1/testsuite directory and they all failed as > follows: > > mk/boilerplate.mk:168: mk/ghcconfig_data_build.d_6.8_ghc-7.10.1_inplace_bin_ghc-stage2.mk: No such file or directory > ./mk/ghc-config "/data/build.d/6.8/ghc-7.10.1/inplace/bin/ghc-stage2" > >"mk/ghcconfig_data_build.d_6.8_ghc-7.10.1_inplace_bin_ghc-stage2.mk"; if [ $? != 0 ]; then rm -f "mk/ghcconfig_data_build.d_6.8_ghc-7.10.1_inplace_bin_ghc-stage2.mk"; exit 1; fi > /bin/sh: ./mk/ghc-config: cannot execute binary file > make: *** [mk/ghcconfig_data_build.d_6.8_ghc-7.10.1_inplace_bin_ghc-stage2.mk] Error 1 Just tried this myself. I get very similar errors. The reason is, AFAICT, that the testsuite tarball contains some build-artifacts, particularly some binaries. A "make clean" in the testsuite directory before running the tests helps. Regards, Bernhard From nicholls.mark at vimn.com Thu Apr 30 13:34:00 2015 From: nicholls.mark at vimn.com (Nicholls, Mark) Date: Thu, 30 Apr 2015 13:34:00 +0000 Subject: [Haskell-cafe] dependent types, singleton types.... In-Reply-To: References: , <6CF4D7D9-76A1-4B2A-9ABE-3409739FFD56@cis.upenn.edu> <7AEF3084-B33E-42B7-97C0-6A1D815A6E58@vimn.com> Message-ID: See questions below... 
> {-# LANGUAGE DataKinds, PolyKinds, TypeFamilies, TypeOperators, UndecidableInstances, GADTs, StandaloneDeriving #-} > data Nat = Z | S Nat > data SNat n where > SZ :: SNat 'Z > SS :: SNat n -> SNat ('S n) > --type family Plus (n :: Nat) (m :: Nat) :: Nat > --type instance Plus Z m = m > --type instance Plus (S n) m = S (Plus n m) > --infixl 6 :+ > --type family (n :: Nat) :+ (m :: Nat) :: Nat > --type instance Z :+ m = m > --type instance (S n) :+ m = S (n :+ m) > --type family (n :: Nat) :* (m :: Nat) :: Nat > --type instance Z :* m = Z > --type instance (S n) :* m = (n :* m) :+ m > data BTree a = Leaf | Branch a (BTree a) (BTree a) > data SBTree a where > SLeaf :: SBTree 'Leaf > SBranch :: (a :: *) -> (SBTree (b :: BTree *)) -> (SBTree (c :: BTree *)) -> SBTree ('Branch a b c) ok so this does work... > y :: SBTree ('Branch (SNat ('S ('S 'Z))) 'Leaf 'Leaf) > y = SBranch (SS (SS SZ)) SLeaf SLeaf > z :: SBTree ('Branch (SNat ('S ('S 'Z))) ('Branch (SNat 'Z) 'Leaf 'Leaf) 'Leaf) > z = SBranch (SS (SS SZ)) (SBranch SZ SLeaf SLeaf) SLeaf but this doesnt. > f :: SBTree a -> Integer > f (SBranch SZ SLeaf SLeaf) = 1 but (for me) the important thing to work out is what's actually wrong....i.e. walk before I run and land on my face I struggle with Haskell errors. it says exerviceOnDependentTypes.lhs:42:14: Could not deduce (a1 ~ SNat t0) from the context (a ~ 'Branch a1 b c) bound by a pattern with constructor SBranch :: forall a (b :: BTree *) (c :: BTree *). a -> SBTree b -> SBTree c -> SBTree ('Branch a b c), in an equation for 'f' at exerviceOnDependentTypes.lhs:42:6-27 'a1' is a rigid type variable bound by a pattern with constructor SBranch :: forall a (b :: BTree *) (c :: BTree *). a -> SBTree b -> SBTree c -> SBTree ('Branch a b c), in an equation for 'f' at exerviceOnDependentTypes.lhs:42:6 In the pattern: SZ In the pattern: SBranch SZ SLeaf SLeaf In an equation for 'f': f (SBranch SZ SLeaf SLeaf) = 1 so its trying to work out what? "a" is valid in "SBTree a" in the context of the function definition "f (SBranch SZ SLeaf SLeaf) = 1" ? so "a" ~ "Branch a1 b c" from the defintion "SBranch :: (a :: *) -> (SBTree (b :: BTree *)) -> (SBTree (c :: BTree *)) -> SBTree ('Branch a b c)"? and it wants to deduce that "a1 ~ SNat t0"? why? Why does it care? (is this my OO type head getting in the way) Mark Nicholls | Senior Technical Director, Programmes & Development - Viacom International Media Networks A: 17-29 Hawley Crescent London NW1 8TT | e: Nicholls.Mark at vimn.com T: +44 (0)203 580 2223 -----Original Message----- From: Richard Eisenberg [mailto:eir at cis.upenn.edu] Sent: 29 April 2015 2:43 PM To: Nicholls, Mark Cc: haskell-cafe at haskell.org Subject: Re: [Haskell-cafe] dependent types, singleton types.... On Apr 29, 2015, at 8:54 AM, "Nicholls, Mark" wrote: > Accchhhh > > Thats another level of pain It is, unfortunately. > , i'm not a big haskeller so i'll have to wrestle with it to get all the knobs and dials working, leave it with me for the moment and i'll start wrestling with cabal You won't need cabal and such. Just say `data family Sing (a :: k)` in your file and you'll have the definition. There's really nothing more to it than that! Richard CONFIDENTIALITY NOTICE This e-mail (and any attached files) is confidential and protected by copyright (and other intellectual property rights). If you are not the intended recipient please e-mail the sender and then delete the email and any attached files immediately. Any further use or dissemination is prohibited. 
While MTV Networks Europe has taken steps to ensure that this email and any attachments are virus free, it is your responsibility to ensure that this message and any attachments are virus free and do not affect your systems / data. Communicating by email is not 100% secure and carries risks such as delay, data corruption, non-delivery, wrongful interception and unauthorised amendment. If you communicate with us by e-mail, you acknowledge and assume these risks, and you agree to take appropriate measures to minimise these risks when e-mailing us. MTV Networks International, MTV Networks UK & Ireland, Greenhouse, Nickelodeon Viacom Consumer Products, VBSi, Viacom Brand Solutions International, Be Viacom, Viacom International Media Networks and VIMN and Comedy Central are all trading names of MTV Networks Europe. MTV Networks Europe is a partnership between MTV Networks Europe Inc. and Viacom Networks Europe Inc. Address for service in Great Britain is 17-29 Hawley Crescent, London, NW1 8TT. From eir at cis.upenn.edu Thu Apr 30 15:41:57 2015 From: eir at cis.upenn.edu (Richard Eisenberg) Date: Thu, 30 Apr 2015 11:41:57 -0400 Subject: [Haskell-cafe] dependent types, singleton types.... In-Reply-To: References: , <6CF4D7D9-76A1-4B2A-9ABE-3409739FFD56@cis.upenn.edu> <7AEF3084-B33E-42B7-97C0-6A1D815A6E58@vimn.com> Message-ID: See below. On Apr 30, 2015, at 9:34 AM, "Nicholls, Mark" wrote: > > but this doesnt. > >> f :: SBTree a -> Integer >> f (SBranch SZ SLeaf SLeaf) = 1 > > but (for me) the important thing to work out is what's actually wrong....i.e. walk before I run and land on my face > I struggle with Haskell errors. > > it says > > exerviceOnDependentTypes.lhs:42:14: > Could not deduce (a1 ~ SNat t0) > from the context (a ~ 'Branch a1 b c) > bound by a pattern with constructor > SBranch :: forall a (b :: BTree *) (c :: BTree *). > a -> SBTree b -> SBTree c -> SBTree ('Branch a b c), > in an equation for 'f' > at exerviceOnDependentTypes.lhs:42:6-27 > 'a1' is a rigid type variable bound by > a pattern with constructor > SBranch :: forall a (b :: BTree *) (c :: BTree *). > a -> SBTree b -> SBTree c -> SBTree ('Branch a b c), > in an equation for 'f' > at exerviceOnDependentTypes.lhs:42:6 > In the pattern: SZ > In the pattern: SBranch SZ SLeaf SLeaf > In an equation for 'f': f (SBranch SZ SLeaf SLeaf) = 1 > > so its trying to work out what? > "a" is valid in "SBTree a" in the context of the function definition "f (SBranch SZ SLeaf SLeaf) = 1" ? > so "a" ~ "Branch a1 b c" from the defintion "SBranch :: (a :: *) -> (SBTree (b :: BTree *)) -> (SBTree (c :: BTree *)) -> SBTree ('Branch a b c)"? > and it wants to deduce that "a1 ~ SNat t0"? > why? GHC is trying to type-check the pattern `SBranch SZ SLeaf SLeaf` against the type `SBTree a`. Once GHC sees the constructor `SBranch`, it knows that `a ~ 'Branch a1 b c` for some `a1`, some `b`, and some `c`. Then, it tries to type-check the pattern `SZ` against the type `a1`. For this to type-check, GHC must be convinced that `a1 ~ SNat t0` for some `t0`. But there's no reason for GHC to believe this, so it reports an error. One may say "Well, GHC should just decide that `a1` must be `SNat t0` and get on with it." But, that is tantamount to arguing that `foo :: a -> a; foo True = False` should type-check, by saying that GHC should decide that `a` is `Bool`. It's the same thing in the `SBTree` case: we just don't know enough to assume that `a1` should be an `SNat`. > Why does it care? 
(is this my OO type head getting in the way) This doesn't look, in particular, like a question from a recovering OO programmer. It's a good question to ask and a hard part to figure out. I hope this helps! Richard > > > Mark Nicholls | Senior Technical Director, Programmes & Development - Viacom International Media Networks > A: 17-29 Hawley Crescent London NW1 8TT | e: Nicholls.Mark at vimn.com T: +44 (0)203 580 2223 > > > > > -----Original Message----- > From: Richard Eisenberg [mailto:eir at cis.upenn.edu] > Sent: 29 April 2015 2:43 PM > To: Nicholls, Mark > Cc: haskell-cafe at haskell.org > Subject: Re: [Haskell-cafe] dependent types, singleton types.... > > On Apr 29, 2015, at 8:54 AM, "Nicholls, Mark" wrote: > >> Accchhhh >> >> Thats another level of pain > > It is, unfortunately. > >> , i'm not a big haskeller so i'll have to wrestle with it to get all the knobs and dials working, leave it with me for the moment and i'll start wrestling with cabal > > You won't need cabal and such. Just say `data family Sing (a :: k)` in your file and you'll have the definition. There's really nothing more to it than that! > > Richard > CONFIDENTIALITY NOTICE > > This e-mail (and any attached files) is confidential and protected by copyright (and other intellectual property rights). If you are not the intended recipient please e-mail the sender and then delete the email and any attached files immediately. Any further use or dissemination is prohibited. > > While MTV Networks Europe has taken steps to ensure that this email and any attachments are virus free, it is your responsibility to ensure that this message and any attachments are virus free and do not affect your systems / data. > > Communicating by email is not 100% secure and carries risks such as delay, data corruption, non-delivery, wrongful interception and unauthorised amendment. If you communicate with us by e-mail, you acknowledge and assume these risks, and you agree to take appropriate measures to minimise these risks when e-mailing us. > > MTV Networks International, MTV Networks UK & Ireland, Greenhouse, Nickelodeon Viacom Consumer Products, VBSi, Viacom Brand Solutions International, Be Viacom, Viacom International Media Networks and VIMN and Comedy Central are all trading names of MTV Networks Europe. MTV Networks Europe is a partnership between MTV Networks Europe Inc. and Viacom Networks Europe Inc. Address for service in Great Britain is 17-29 Hawley Crescent, London, NW1 8TT. > > From s9gf4ult at gmail.com Thu Apr 30 15:45:18 2015 From: s9gf4ult at gmail.com (Alexey Uimanov) Date: Thu, 30 Apr 2015 20:45:18 +0500 Subject: [Haskell-cafe] Is it acceptable if Applicative behave not like a Monad Message-ID: Hello, I have such a question: assume you have some type `T` which has Applicative and Monad instances. Is it ok if code like this: foo :: Int -> T String bar :: Int -> T Int (,) <$> foo 10 <*> bar "20" behaves not like this code: foobar = do x <- foo 10 y <- bar "20" return (x, y) The word "behaves" I mean not just returning value but the effect performed also. -------------- next part -------------- An HTML attachment was scrubbed... URL: From adam at bergmark.nl Thu Apr 30 15:49:57 2015 From: adam at bergmark.nl (Adam Bergmark) Date: Thu, 30 Apr 2015 17:49:57 +0200 Subject: [Haskell-cafe] Is it acceptable if Applicative behave not like a Monad In-Reply-To: References: Message-ID: Yes it's okay and sometimes expected for them not not behave the same, but see e.g. haxl and ApplicativeDo. 
For example, the applicative version may run foo and bar in parallell. But using monad they run sequentially. - Adam On Thu, Apr 30, 2015 at 5:45 PM, Alexey Uimanov wrote: > Hello, I have such a question: assume you have some type `T` which has > Applicative and Monad instances. Is it ok if code like this: > > foo :: Int -> T String > bar :: Int -> T Int > > (,) <$> foo 10 <*> bar "20" > > behaves not like this code: > > foobar = do > x <- foo 10 > y <- bar "20" > return (x, y) > > The word "behaves" I mean not just returning value but the effect > performed also. > > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ky3 at atamo.com Thu Apr 30 16:07:27 2015 From: ky3 at atamo.com (Kim-Ee Yeoh) Date: Thu, 30 Apr 2015 23:07:27 +0700 Subject: [Haskell-cafe] Is it acceptable if Applicative behave not like a Monad In-Reply-To: References: Message-ID: On Thu, Apr 30, 2015 at 10:49 PM, Adam Bergmark wrote: > Yes it's okay and sometimes expected for them not not behave the same, but > see e.g. haxl and ApplicativeDo. > This is a value of okay I've never seen before. > For example, the applicative version may run foo and bar in parallell. But > using monad they run sequentially. > What if pure weren't a perfect synonym for return? What if the methods of the semigroup instance didn't match those of the monoid? The mind boggles at the confusion that would result. Such cases typically call for newtypes to sort out the effect classes. -- Kim-Ee -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicholls.mark at vimn.com Thu Apr 30 16:24:58 2015 From: nicholls.mark at vimn.com (Nicholls, Mark) Date: Thu, 30 Apr 2015 16:24:58 +0000 Subject: [Haskell-cafe] dependent types, singleton types.... In-Reply-To: References: , <6CF4D7D9-76A1-4B2A-9ABE-3409739FFD56@cis.upenn.edu> <7AEF3084-B33E-42B7-97C0-6A1D815A6E58@vimn.com> Message-ID: Hmmm it does..... My oo compiler would say...cannot match True against "a".... or something like.....at some point you go so far out on a (mental) limb...it just snaps. Let me try to fix it all. See below. On Apr 30, 2015, at 9:34 AM, "Nicholls, Mark" wrote: > > but this doesnt. > >> f :: SBTree a -> Integer >> f (SBranch SZ SLeaf SLeaf) = 1 > > but (for me) the important thing to work out is what's actually > wrong....i.e. walk before I run and land on my face I struggle with Haskell errors. > > it says > > exerviceOnDependentTypes.lhs:42:14: > Could not deduce (a1 ~ SNat t0) > from the context (a ~ 'Branch a1 b c) > bound by a pattern with constructor > SBranch :: forall a (b :: BTree *) (c :: BTree *). > a -> SBTree b -> SBTree c -> SBTree ('Branch a b c), > in an equation for 'f' > at exerviceOnDependentTypes.lhs:42:6-27 > 'a1' is a rigid type variable bound by > a pattern with constructor > SBranch :: forall a (b :: BTree *) (c :: BTree *). > a -> SBTree b -> SBTree c -> SBTree ('Branch a b c), > in an equation for 'f' > at exerviceOnDependentTypes.lhs:42:6 > In the pattern: SZ > In the pattern: SBranch SZ SLeaf SLeaf > In an equation for 'f': f (SBranch SZ SLeaf SLeaf) = 1 > > so its trying to work out what? > "a" is valid in "SBTree a" in the context of the function definition "f (SBranch SZ SLeaf SLeaf) = 1" ? 
> so "a" ~ "Branch a1 b c" from the defintion "SBranch :: (a :: *) -> (SBTree (b :: BTree *)) -> (SBTree (c :: BTree *)) -> SBTree ('Branch a b c)"? > and it wants to deduce that "a1 ~ SNat t0"? > why? GHC is trying to type-check the pattern `SBranch SZ SLeaf SLeaf` against the type `SBTree a`. Once GHC sees the constructor `SBranch`, it knows that `a ~ 'Branch a1 b c` for some `a1`, some `b`, and some `c`. Then, it tries to type-check the pattern `SZ` against the type `a1`. For this to type-check, GHC must be convinced that `a1 ~ SNat t0` for some `t0`. But there's no reason for GHC to believe this, so it reports an error. One may say "Well, GHC should just decide that `a1` must be `SNat t0` and get on with it." But, that is tantamount to arguing that `foo :: a -> a; foo True = False` should type-check, by saying that GHC should decide that `a` is `Bool`. It's the same thing in the `SBTree` case: we just don't know enough to assume that `a1` should be an `SNat`. > Why does it care? (is this my OO type head getting in the way) This doesn't look, in particular, like a question from a recovering OO programmer. It's a good question to ask and a hard part to figure out. I hope this helps! Richard > > > Mark Nicholls | Senior Technical Director, Programmes & Development - > Viacom International Media Networks > A: 17-29 Hawley Crescent London NW1 8TT | e: Nicholls.Mark at vimn.com T: > +44 (0)203 580 2223 > > > > > -----Original Message----- > From: Richard Eisenberg [mailto:eir at cis.upenn.edu] > Sent: 29 April 2015 2:43 PM > To: Nicholls, Mark > Cc: haskell-cafe at haskell.org > Subject: Re: [Haskell-cafe] dependent types, singleton types.... > > On Apr 29, 2015, at 8:54 AM, "Nicholls, Mark" wrote: > >> Accchhhh >> >> Thats another level of pain > > It is, unfortunately. > >> , i'm not a big haskeller so i'll have to wrestle with it to get all >> the knobs and dials working, leave it with me for the moment and i'll >> start wrestling with cabal > > You won't need cabal and such. Just say `data family Sing (a :: k)` in your file and you'll have the definition. There's really nothing more to it than that! > > Richard > CONFIDENTIALITY NOTICE > > This e-mail (and any attached files) is confidential and protected by copyright (and other intellectual property rights). If you are not the intended recipient please e-mail the sender and then delete the email and any attached files immediately. Any further use or dissemination is prohibited. > > While MTV Networks Europe has taken steps to ensure that this email and any attachments are virus free, it is your responsibility to ensure that this message and any attachments are virus free and do not affect your systems / data. > > Communicating by email is not 100% secure and carries risks such as delay, data corruption, non-delivery, wrongful interception and unauthorised amendment. If you communicate with us by e-mail, you acknowledge and assume these risks, and you agree to take appropriate measures to minimise these risks when e-mailing us. > > MTV Networks International, MTV Networks UK & Ireland, Greenhouse, Nickelodeon Viacom Consumer Products, VBSi, Viacom Brand Solutions International, Be Viacom, Viacom International Media Networks and VIMN and Comedy Central are all trading names of MTV Networks Europe. MTV Networks Europe is a partnership between MTV Networks Europe Inc. and Viacom Networks Europe Inc. Address for service in Great Britain is 17-29 Hawley Crescent, London, NW1 8TT. 
> > CONFIDENTIALITY NOTICE This e-mail (and any attached files) is confidential and protected by copyright (and other intellectual property rights). If you are not the intended recipient please e-mail the sender and then delete the email and any attached files immediately. Any further use or dissemination is prohibited. While MTV Networks Europe has taken steps to ensure that this email and any attachments are virus free, it is your responsibility to ensure that this message and any attachments are virus free and do not affect your systems / data. Communicating by email is not 100% secure and carries risks such as delay, data corruption, non-delivery, wrongful interception and unauthorised amendment. If you communicate with us by e-mail, you acknowledge and assume these risks, and you agree to take appropriate measures to minimise these risks when e-mailing us. MTV Networks International, MTV Networks UK & Ireland, Greenhouse, Nickelodeon Viacom Consumer Products, VBSi, Viacom Brand Solutions International, Be Viacom, Viacom International Media Networks and VIMN and Comedy Central are all trading names of MTV Networks Europe. MTV Networks Europe is a partnership between MTV Networks Europe Inc. and Viacom Networks Europe Inc. Address for service in Great Britain is 17-29 Hawley Crescent, London, NW1 8TT. From nicholls.mark at vimn.com Thu Apr 30 17:02:57 2015 From: nicholls.mark at vimn.com (Nicholls, Mark) Date: Thu, 30 Apr 2015 17:02:57 +0000 Subject: [Haskell-cafe] dependent types, singleton types.... References: , <6CF4D7D9-76A1-4B2A-9ABE-3409739FFD56@cis.upenn.edu> <7AEF3084-B33E-42B7-97C0-6A1D815A6E58@vimn.com> Message-ID: This reminds me of the thing about the typewriter and the chimpanzees and Shakespeare (though I'm not claiming what follows is Shakespearean). > {-# LANGUAGE DataKinds, PolyKinds, TypeFamilies, TypeOperators, UndecidableInstances, GADTs, StandaloneDeriving #-} yep > data Nat = Z | S Nat Sort of know whats happening here... > data family Sing (a :: k) This is just about believable (though its just copied from the article) > data instance Sing (n :: Nat) where > SZ :: Sing 'Z > SS :: Sing n -> Sing ('S n) shorthand > type SNat (n :: Nat) = Sing n Yep...this at least feels grounded > data BTree a = Leaf | Branch a (BTree a) (BTree a) Here we go....1st 2 lines feel ok ish.... (now the chimps take over...tap tap boom...tap tap tap boom...hmmmm) 3rd line feels a bit lightweight....b & c seemingly can be anything!...but I test this na?ve assumumption later. > data instance Sing (n :: BTree a) where > SLeaf :: Sing ('Leaf) > SBranch :: Sing a -> Sing b -> Sing c -> Sing ('Branch a b c) shorthand > type SBTree (n :: BTree a) = Sing n This works (good) > y :: SBTree ('Branch ('S ('S 'Z)) 'Leaf 'Leaf) > y = SBranch (SS (SS SZ)) SLeaf SLeaf This works (good)! > z :: SBTree ('Branch 'Z 'Leaf 'Leaf) > z = SBranch SZ SLeaf SLeaf Even this works (v good) > f :: SBTree ('Branch 'Z 'Leaf 'Leaf) -> Integer > f (SBranch SZ SLeaf SLeaf) = 1 Now let's do something stupid > a = SBranch SZ SZ SZ BOOOM....(which is a GOOD BOOM). exerviceOnDependentTypes.lhs:34:18: Couldn't match kind 'Nat' with 'BTree Nat' Expected type: Sing b Actual type: Sing 'Z In the second argument of 'SBranch', namely 'SZ' In the expression: SBranch SZ SZ SZ But I'm not convinced the chimp knows why! It all comes down to substituting the "b" in "'Branch a b c" with an SZ... It knows it's wrong. It knows the b in the type "'Branch" is of kind "BTree Nat"... Then chimp's head hurts.... 
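One way to see that the two Sing instances above fit together is to write demotion functions that forget the index and give back ordinary values. The following is a self-contained sketch that restates the definitions from this message (with slightly different variable names); fromSNat and fromSTree are names invented here, not functions from the thread or from any library.

```
{-# LANGUAGE DataKinds, PolyKinds, TypeFamilies, GADTs, KindSignatures #-}
module SingDemote where

data Nat = Z | S Nat
data BTree a = Leaf | Branch a (BTree a) (BTree a)

data family Sing (a :: k)

data instance Sing (n :: Nat) where
  SZ :: Sing 'Z
  SS :: Sing n -> Sing ('S n)

data instance Sing (t :: BTree e) where
  SLeaf   :: Sing 'Leaf
  SBranch :: Sing a -> Sing l -> Sing r -> Sing ('Branch a l r)

-- Demote a singleton Nat to the ordinary Nat it mirrors.
fromSNat :: Sing (n :: Nat) -> Nat
fromSNat SZ     = Z
fromSNat (SS n) = S (fromSNat n)

-- Demote a singleton tree whose elements are singleton Nats.  The
-- "BTree Nat" kind annotation fixes the element kind, so each field
-- of SBranch is known to come from the right Sing instance.
fromSTree :: Sing (t :: BTree Nat) -> BTree Nat
fromSTree SLeaf           = Leaf
fromSTree (SBranch a l r) = Branch (fromSNat a) (fromSTree l) (fromSTree r)
```

Applied to the z defined earlier in this message, fromSTree would give Branch Z Leaf Leaf.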
I may have forgotten what datakinds actually does...its doing more than I'm expecting... Let me know if its horribly wrong, and I'll try and reread the exercise....(I'm doing this in between doing my proper job...which aint Haskell). CONFIDENTIALITY NOTICE This e-mail (and any attached files) is confidential and protected by copyright (and other intellectual property rights). If you are not the intended recipient please e-mail the sender and then delete the email and any attached files immediately. Any further use or dissemination is prohibited. While MTV Networks Europe has taken steps to ensure that this email and any attachments are virus free, it is your responsibility to ensure that this message and any attachments are virus free and do not affect your systems / data. Communicating by email is not 100% secure and carries risks such as delay, data corruption, non-delivery, wrongful interception and unauthorised amendment. If you communicate with us by e-mail, you acknowledge and assume these risks, and you agree to take appropriate measures to minimise these risks when e-mailing us. MTV Networks International, MTV Networks UK & Ireland, Greenhouse, Nickelodeon Viacom Consumer Products, VBSi, Viacom Brand Solutions International, Be Viacom, Viacom International Media Networks and VIMN and Comedy Central are all trading names of MTV Networks Europe. MTV Networks Europe is a partnership between MTV Networks Europe Inc. and Viacom Networks Europe Inc. Address for service in Great Britain is 17-29 Hawley Crescent, London, NW1 8TT. From nikita.y.volkov at mail.ru Thu Apr 30 17:32:54 2015 From: nikita.y.volkov at mail.ru (Nikita Volkov) Date: Thu, 30 Apr 2015 20:32:54 +0300 Subject: [Haskell-cafe] Is it acceptable if Applicative behave not like a Monad In-Reply-To: References: Message-ID: I'm afraid I have to disagree with Adam as well. Recently I've triggered a prolonged discussion on exactly the subject ( https://github.com/ekmett/either/pull/38). Being originally convinced that the instances can behave however it fits, I think I've been over-persuaded in the end. Shortly speaking, while I can't say I like it, the rule seems to be that `<*>` should produce the same side effects as Monad's `ap`. The most convincing argument I've seen so far against the two digressing is that it can cause unexpected behaviour of the "do" notation ( https://github.com/ekmett/either/pull/38#issuecomment-95695814), but you'll find plenty of other arguments in the discussion as well. Best regards, Nikita Volkov 2015-04-30 19:07 GMT+03:00 Kim-Ee Yeoh : > > On Thu, Apr 30, 2015 at 10:49 PM, Adam Bergmark wrote: > >> Yes it's okay and sometimes expected for them not not behave the same, >> but see e.g. haxl and ApplicativeDo. >> > > This is a value of okay I've never seen before. > > >> For example, the applicative version may run foo and bar in parallell. >> But using monad they run sequentially. >> > > What if pure weren't a perfect synonym for return? > > What if the methods of the semigroup instance didn't match those of the > monoid? > > The mind boggles at the confusion that would result. > > Such cases typically call for newtypes to sort out the effect classes. > > -- Kim-Ee > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From eir at cis.upenn.edu Thu Apr 30 17:42:38 2015 From: eir at cis.upenn.edu (Richard Eisenberg) Date: Thu, 30 Apr 2015 13:42:38 -0400 Subject: [Haskell-cafe] dependent types, singleton types.... In-Reply-To: References: , <6CF4D7D9-76A1-4B2A-9ABE-3409739FFD56@cis.upenn.edu> <7AEF3084-B33E-42B7-97C0-6A1D815A6E58@vimn.com> Message-ID: See below. On Apr 30, 2015, at 1:02 PM, "Nicholls, Mark" wrote: > Here we go....1st 2 lines feel ok ish.... > (now the chimps take over...tap tap boom...tap tap tap boom...hmmmm) > 3rd line feels a bit lightweight....b & c seemingly can be anything!...but I test this na?ve assumumption later. > >> data instance Sing (n :: BTree a) where >> SLeaf :: Sing ('Leaf) >> SBranch :: Sing a -> Sing b -> Sing c -> Sing ('Branch a b c) Your comment "b & c seemingly can be anything" is telling: they *can't* be anything. Types in Haskell are classified by /kinds/, in much the same way that terms are classified by types. GHC infers kinds for type variables. Let me rewrite SBranch with more kinds explicit: > SBranch :: forall (a :: k) (b :: BTree k) (c :: BTree k). > Sing a -> Sing b -> Sing c -> Sing ('Branch a b c) How does GHC infer the kinds? From the appearance of the type variables as arguments to `'Branch`. `'Branch` has kind `forall k. k -> BTree k -> BTree k -> BTree k`, where I've used `k`, as is common, to denote a kind variable. (The variable is, of course, `a` in the declaration for `Branch`.) So, when GHC sees you write `('Branch a b c)`, it can infer the correct kinds for the variables. > > > > Now let's do something stupid > >> a = SBranch SZ SZ SZ > > BOOOM....(which is a GOOD BOOM). > > exerviceOnDependentTypes.lhs:34:18: > Couldn't match kind 'Nat' with 'BTree Nat' > Expected type: Sing b > Actual type: Sing 'Z > In the second argument of 'SBranch', namely 'SZ' > In the expression: SBranch SZ SZ SZ > > But I'm not convinced the chimp knows why! This follows from the kinds I was talking about earlier. `SBranch` expects three arguments. The first is `Sing a`, where `a` can have *any* kind `k`. `SZ` has type `Sing Z`, where `Z` has kind `Nat`. All good. The next argument to `SBranch` has type `Sing b`, where `b` has kind `BTree k` for the *same* kind `k`. Thus, because `Z` has kind `Nat`, we know that `k` really is `Nat` in this case. The second argument to `SBranch` then has type `Sing b`, where `b` has kind `BTree Nat`. But, when you provide `SZ` there, you've given something of type `Sing Z`, and `Z` has the wrong kind. Exactly as reported in the error message. (My "exactly" there is not to mean that any of this is obvious to someone who hasn't spent a lot of time thinking about it... just that GHC got lucky here in producing something rather sensible.) > > It all comes down to substituting the "b" in "'Branch a b c" with an SZ... > It knows it's wrong. > It knows the b in the type "'Branch" is of kind "BTree Nat"... Try compiling with `-fprint-explicit-kinds -fprint-explicit-foralls`. Once you become accustomed to the more verbose output, it might be very helpful. > > Then chimp's head hurts.... > > I may have forgotten what datakinds actually does...its doing more than I'm expecting... > > Let me know if its horribly wrong, and I'll try and reread the exercise....(I'm doing this in between doing my proper job...which aint Haskell). Everything so far is perfectly correct. Keep fighting the good fight. You'll be glad you did, in the end. 
:) Richard From david.feuer at gmail.com Thu Apr 30 19:23:15 2015 From: david.feuer at gmail.com (David Feuer) Date: Thu, 30 Apr 2015 15:23:15 -0400 Subject: [Haskell-cafe] Is it acceptable if Applicative behave not like a Monad In-Reply-To: References: Message-ID: As usual, it depends on what aspects of behavior you consider important. As an obvious example, if the two produce (or store) distinct structures that are (==), that should be fine. Similarly, if the effects are different in a way that does not matter to your application (e.g., one may buffer more input than the other, or they may read from the same files in different orders, or they may produce the same images on screen using different graphics operations) then that should be fine too. On Apr 30, 2015 11:45 AM, "Alexey Uimanov" wrote: > Hello, I have such a question: assume you have some type `T` which has > Applicative and Monad instances. Is it ok if code like this: > > foo :: Int -> T String > bar :: Int -> T Int > > (,) <$> foo 10 <*> bar "20" > > behaves not like this code: > > foobar = do > x <- foo 10 > y <- bar "20" > return (x, y) > > The word "behaves" I mean not just returning value but the effect > performed also. > > > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From onepoint at starurchin.org Thu Apr 30 21:33:10 2015 From: onepoint at starurchin.org (Jeremy Henty) Date: Thu, 30 Apr 2015 22:33:10 +0100 Subject: [Haskell-cafe] Cannot run GHC-7.10.1 tests In-Reply-To: <201504301255.26306.bh@intevation.de> References: <20150428214821.GE13478@omphalos.singularity> <20150429194157.GA3723@omphalos.singularity> <201504301255.26306.bh@intevation.de> Message-ID: <20150430213310.GB3723@omphalos.singularity> Bernhard Herzog wrote: > On 29.04.2015, Jeremy Henty wrote: > > But since you suggested it, I tried running all the test commands from > > within the ghc-7.10.1/testsuite directory and they all failed as > > follows: > > > > mk/boilerplate.mk:168: mk/ghcconfig_data_build.d_6.8_ghc-7.10.1_inplace_bin_ghc-stage2.mk: No such file or directory > > ./mk/ghc-config "/data/build.d/6.8/ghc-7.10.1/inplace/bin/ghc-stage2" > > >"mk/ghcconfig_data_build.d_6.8_ghc-7.10.1_inplace_bin_ghc-stage2.mk"; if [ $? != 0 ]; then > rm -f "mk/ghcconfig_data_build.d_6.8_ghc-7.10.1_inplace_bin_ghc-stage2.mk"; exit 1; fi > > /bin/sh: ./mk/ghc-config: cannot execute binary file > > make: *** [mk/ghcconfig_data_build.d_6.8_ghc-7.10.1_inplace_bin_ghc-stage2.mk] Error 1 > > > Just tried this myself. I get very similar errors. The reason is, > AFAICT, that the testsuite tarball contains some build-artifacts, > particularly some binaries. A "make clean" in the testsuite > directory before running the tests helps. Thanks, that worked! I get a few unexpected failures from "make fast". Should I be concerned? Regards, Jeremy Henty From d at davidterei.com Thu Apr 30 22:58:23 2015 From: d at davidterei.com (David Terei) Date: Thu, 30 Apr 2015 15:58:23 -0700 Subject: [Haskell-cafe] Roles, GND, Data.Coerce Message-ID: All, An issue that came up with GHC 7.8 was the design of Roles and Data.Coerce, with it's ability to break module abstractions by default, requiring programmers adopt explicit role annotations to enforce some types of invariants for abstract data types. Because of this issue, as of now, GND & Data.Coerce are still disabled in Safe Haskell. 
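To make the abstraction problem concrete, here is a minimal sketch (not taken from the wiki page) of the kind of invariant at stake. OrdSet keeps its list sorted using the element type's Ord instance; without a role annotation its parameter defaults to representational, so a client module could use coerce to turn an OrdSet Int into, say, an OrdSet (Down Int), even though the two orderings disagree, silently breaking the invariant. The nominal annotation is the explicit opt-in that the paragraph above refers to.

```
{-# LANGUAGE RoleAnnotations #-}
module OrdSet (OrdSet, empty, insert, toList) where

-- Invariant: the list is kept sorted in ascending order
-- (duplicates are allowed to keep the sketch short).
newtype OrdSet a = OrdSet [a]

-- Without this annotation the parameter defaults to representational,
-- and coerce could convert between OrdSet types whose Ord instances
-- order elements differently.
type role OrdSet nominal

empty :: OrdSet a
empty = OrdSet []

insert :: Ord a => a -> OrdSet a -> OrdSet a
insert x (OrdSet xs) = OrdSet (takeWhile (< x) xs ++ x : dropWhile (< x) xs)

toList :: OrdSet a -> [a]
toList (OrdSet xs) = xs
```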
We'd like to change this but need feedback from everyone. I've written up a wiki page with lots of information on the issue:

https://ghc.haskell.org/trac/ghc/wiki/SafeRoles

Please read it; it has information on how Roles work, the problem they raise, and subtleties in the current system. Possible paths forward are also there, but I'll include them below.

Please be aware that this discussion is about Roles in general, not specifically about how they behave when using -XSafe. Ideally we'd come up with a solution that works regardless of whether Safe Haskell is used or not. I personally like option (3), although it isn't clear to me how hairy the implementation would be.

== Possible Paths Forward for Roles ==

1) Do Nothing -- Keep Roles & GND unchanged, keep them unsafe in Safe Haskell.

2) Accept as Safe -- Keep Roles & GND unchanged, accept them as safe in Safe Haskell and warn users that they need nominal role annotations on ADTs.

3) In-scope constructor restriction for lifting instances -- The newtype constructor restriction for unwrapping instances could be extended to both data types and the lifting instances of Data.Coerce. That is, GND & coercing under a type constructor is allowed if (a) all involved constructors are in scope, or (b) the constructors involved have been explicitly declared to allow coercion without them being in scope. I.e., (b) allows library authors to opt into the current GHC behavior. This would require new syntax, probably just an explicit deriving Coercible statement.

4) Change default role to nominal -- This will prioritize safety over GND, and the belief is that it may break a lot of code. Worse, it will be an ongoing tax, as role annotations will be needed to enable GND.

5) Nominal default when constructors aren't exported -- When a module doesn't export all the constructors of a data type, the type parameters of the data type should default to nominal. This heuristic seems to capture somewhat the intention of the user, but given the practice of defining an Internal module that exports everything, it seems of limited use.

6) Nominal default in future -- Add a new extension, SafeNewtypeDeriving, that switches the default role to nominal, but continue to provide a deprecated GND extension to help with the transition. The claims in support of representational roles as the default, though, hold that nominal by default has an ongoing, continuous tax, not just a transition cost. So it isn't clear that any scheme like this satisfies that argument.

7) Safe Haskell Specific -- Many of the above approaches could be adopted in a Safe Haskell specific manner. This isn't ideal as it makes safe-inference harder and Safe Haskell less likely to remain viable going forward. Richard suggests one such idea.

==

The belief by many people seems to be that (4) and (6) would be too much of a burden. I'd like to avoid (7) if possible. It isn't clear to me if (7) ends up better than (2).

I'm going to try to set up some infrastructure for compiling Hackage so that we can measure how impactful these changes would be.

Cheers,
David